Liwaiwai
  • Machine Learning
  • Technology

In Tech We Trust?

  • September 18, 2019
  • admin

Fairness, trust and transparency are qualities we usually associate with organisations or individuals. Today, these attributes might also apply to algorithms. As machine learning systems become more complex and pervasive, Cambridge researchers believe it’s time for new thinking about new technology.

Dr Jat Singh is familiar with breaking new ground and working across disciplines. Even so, he and colleagues were pleasantly surprised by how much enthusiasm has greeted their new Strategic Research Initiative on Trustworthy Technologies, which brings together science, technology and humanities researchers from across the University.

With penalties including fines of up to €20 million, people are realising that they need to take data protection much more seriously

Jat Singh

In fact, Singh, a researcher in Cambridge’s Department of Computer Science and Technology, has been collaborating with lawyers for several years: “A legal perspective is paramount when you’re researching the technical dimensions to compliance, accountability and trust in emerging ICT; although the Computer Lab is not the usual home for lawyers, we have two joining soon.”

Governance and public trust present some of the greatest challenges in technology today. The European General Data Protection Regulation (GDPR), which came into force in 2018, has brought forward debates such as whether individuals have a ‘right to an explanation’ regarding decisions made by machines, and introduces stiff penalties for breaching data protection rules. “With penalties including fines of up to 4% of global turnover or €20 million, people are realising that they need to take data protection much more seriously,” he says.
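
For the most serious infringements, the cap is whichever of the two figures is higher, so for large firms the turnover-based limit dominates. A minimal sketch of that rule (the turnover figures below are made up for illustration):

```python
def gdpr_fine_cap(global_turnover_eur: float) -> float:
    """Upper bound on a GDPR fine for the most serious infringements:
    the higher of EUR 20 million or 4% of worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * global_turnover_eur)

# Hypothetical firm with EUR 100m turnover: 4% is only EUR 4m, so the EUR 20m floor applies.
print(gdpr_fine_cap(100_000_000))    # 20000000.0
# Hypothetical firm with EUR 2bn turnover: 4% (EUR 80m) exceeds the floor.
print(gdpr_fine_cap(2_000_000_000))  # 80000000.0
```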

Singh is particularly interested in how data-driven systems and algorithms – including machine learning – will soon underpin and automate everything from transport networks to council services.

As we work, shop and travel, computers and mobile phones already collect, transmit and process a great deal of data about us; as the ‘Internet of Things’ continues to instrument the physical world, machines will increasingly mediate and influence our lives.


It’s a future that raises profound issues of privacy, security, safety and ultimately trust, says Singh, whose research is funded by an Engineering and Physical Sciences Research Council Fellowship: “We work on mechanisms for better transparency, control and agency in systems, so that, for instance, if I give data to someone or something, there are means for ensuring they’re doing the right things with it. We are also active in policy discussions to help better align the worlds of technology and law.”

What it means to trust machine learning systems also concerns Dr Adrian Weller. Before becoming a senior research fellow in the Department of Engineering and a Turing Fellow at The Alan Turing Institute, he spent many years working in trading for leading investment banks and hedge funds, and has seen first-hand how machine learning is changing the way we live and work.

“Not long ago, many markets were traded on exchanges by people in pits screaming and yelling,” Weller recalls. “Today, most market making and order matching is handled by computers. Automated algorithms can typically provide tighter, more responsive markets – and liquid markets are good for society.”

But cutting humans out of the loop can have unintended consequences, as the flash crash of 2010 shows. During 36 minutes on 6 May, nearly one trillion dollars were wiped off US stock markets as an unusually large sell order produced an emergent coordinated response from automated algorithms. “The flash crash was an important example illustrating that over time, as we have more AI agents operating in the real world, they may interact in ways that are hard to predict,” he says.

Algorithms are also beginning to be involved in critical decisions about our lives and liberty. In medicine, machine learning is helping diagnose diseases such as cancer and diabetic retinopathy; in US courts, algorithms are used to inform decisions about bail, sentencing and parole; and on social media and the web, our personal data and browsing history shape the news stories and advertisements we see.


How much we trust the ‘black box’ of machine learning systems, both as individuals and society, is clearly important. “There are settings, such as criminal justice, where we need to be able to ask why a system arrived at its conclusion – to check that appropriate process was followed, and to enable meaningful challenge,” says Weller. “Equally, to have effective real-world deployment of algorithmic systems, people will have to trust them.”

But even if we can lift the lid on these black boxes, how do we interpret what’s going on inside? “There are many kinds of transparency,” he explains. “A user contesting a decision needs a different kind of transparency to a developer who wants to debug a system. And a third form of transparency might be needed to ensure a system is accountable if something goes wrong, for example an accident involving a driverless car.”
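
One way to make the contrast concrete: for a simple linear scorer, a decision can be explained exactly as per-feature contributions, and the same contributions serve both the contesting user (top reasons) and the debugging developer (full breakdown). This is only an illustrative sketch with made-up weights, not a method attributed to Weller; genuinely black-box models need heavier machinery such as surrogate explanations:

```python
def explain_linear_decision(weights, features, threshold=0.0):
    """For a linear scorer the 'explanation' is exact: each feature's
    contribution is weight * value, and contributions sum to the score."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "decline"
    # Rank by absolute impact: a user-facing explanation shows the top
    # reasons; a developer debugging the model inspects the full list.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, ranked

# Entirely hypothetical weights and applicant features.
weights = {"income": 0.5, "missed_payments": -2.0, "years_at_address": 0.1}
decision, reasons = explain_linear_decision(
    weights, {"income": 3.0, "missed_payments": 1.0, "years_at_address": 5.0})
print(decision, reasons[0])  # approve ('missed_payments', -2.0)
```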

Even if algorithms can be made trustworthy and transparent, how do we ensure that they do not discriminate unfairly against particular groups? While it might be useful for Google to advertise products it ‘thinks’ we are most likely to buy, it is more disquieting to discover the assumptions it makes based on our name or postcode.

When Latanya Sweeney, Professor of Government and Technology in Residence at Harvard University, tried to track down one of her academic papers by Googling her name, she was shocked to be presented with ads suggesting that she had been arrested. After much research, she discovered that “black-sounding” names were 25% more likely to result in the delivery of this kind of advertising.


Like Sweeney, Weller is both disturbed and intrigued by examples of machine-learned discrimination. “It’s a worry,” he acknowledges. “And people sometimes stop there – they assume it’s a case of garbage in, garbage out, end of story. In fact, it’s just the beginning, because we’re developing techniques that can automatically detect and remove some forms of bias.”
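
The article does not name these techniques, but a common first step is a simple statistical check. As a hedged sketch, the demographic parity gap below measures how far apart favourable-outcome rates are between groups, using made-up data shaped like Sweeney's ad-delivery example:

```python
from collections import defaultdict

def positive_rate_by_group(decisions):
    """decisions: (group, outcome) pairs, outcome 1 = favourable.
    Returns each group's favourable-outcome rate."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in favourable-outcome rates between groups.
    0.0 means equal rates; a large gap flags possible bias."""
    rates = positive_rate_by_group(decisions).values()
    return max(rates) - min(rates)

# Made-up ad-delivery outcomes: 1 = neutral ad shown, 0 = arrest-record ad.
sample = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
          ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
print(demographic_parity_gap(sample))  # 0.75 - 0.25 = 0.5
```

Detecting a gap like this is only the diagnosis; the removal techniques Weller alludes to then adjust the data or the model under a chosen fairness constraint.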

Transparency, reliability and trustworthiness are at the core of Weller’s work at the Leverhulme Centre for the Future of Intelligence and The Alan Turing Institute. His projects grapple with making machine-learning decisions interpretable, developing new ways to ensure that AI systems perform well in real-world settings, and examining whether empathy is possible – or desirable – in AI.

Machine learning systems are here to stay. Whether they are a force for good rather than a source of division and discrimination depends partly on researchers such as Singh and Weller. The stakes are high, but so are the opportunities. Universities have a vital role to play, both as critic and conscience of society. Academics can help society imagine what lies ahead and decide what we want from machine learning – and what it would be wise to guard against.

Weller believes the future of work is a huge issue: “Many jobs will be substantially altered if not replaced by machines in coming decades. We need to think about how to deal with these big changes.”

And academics must keep talking as well as thinking. “We’re grappling with pressing and important issues,” he concludes. “As technical experts we need to engage with society and talk about what we’re doing so that policy makers can try to work towards policy that’s technically and legally sensible.”

 

Republished from the University of Cambridge.


Related Topics
  • Algorithms
  • Internet of Things
  • Law
  • Trust