
Push For AI Innovation Can Create Dangerous Products

  • August 5, 2022
  • Ackley Wyndam

This past June, the U.S. National Highway Traffic Safety Administration announced a probe into Tesla’s Autopilot software. Data gathered from 16 crashes raised concerns that Tesla’s AI may be programmed to quit when a crash is imminent. That way, the car’s driver, not the manufacturer, would be legally liable at the moment of impact.

It echoes the revelation that Uber’s self-driving car, which hit and killed a woman, detected her six seconds before impact. But the AI was not programmed to recognize pedestrians outside of designated crosswalks. Why? Because jaywalkers were not legally supposed to be there.

Some believe these stories are proof that our concept of liability needs to change. To them, unimpeded, continuous innovation and the widespread adoption of AI are what our society needs most, which means protecting innovative corporations from lawsuits. But what if, in fact, it’s our understanding of competition that needs to evolve instead?

If AI is central to our future, we need to pay careful attention to the assumptions around harms and benefits programmed into these products. As it stands, there is a perverse incentive to design AI that is artificially innocent.

A better approach would involve a more extensive harm-reduction strategy. Maybe we should be encouraging industry-wide collaboration on certain classes of life-saving algorithms, designing them for optimal performance rather than proprietary advantage.

Every fix creates a new problem

Some of the loudest and most powerful corporate voices want us to trust machines to solve complex societal problems. AI is hailed as a potential solution for the problems of cross-cultural communication, health care and even crime and social unrest.

Corporations want us to forget that AI innovations reflect the biases of the programmer. There is a false belief that as long as the product design pitch passes through internal legal and policy constraints, the resulting technology is unlikely to be harmful. But harms emerge in all sorts of unexpected ways, as Uber’s design team learned when their vehicle encountered a jaywalker for the first time.
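
To see how such an assumption gets baked in, consider a deliberately simplified sketch in Python. This is hypothetical code, not Uber’s actual perception stack; the class, the function and the crosswalk flag are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str           # e.g. "person", "vehicle", "unknown"
    in_crosswalk: bool  # whether the object sits inside a mapped crosswalk

def is_pedestrian(obj: DetectedObject) -> bool:
    """Hypothetical rule: a person only counts as a pedestrian
    where the law expects pedestrians to be."""
    return obj.kind == "person" and obj.in_crosswalk

# A jaywalker is detected well before impact, but the rule discards them,
# so downstream emergency-braking logic never sees a pedestrian.
jaywalker = DetectedObject(kind="person", in_crosswalk=False)
assert is_pedestrian(jaywalker) is False
```

Each line looks reasonable in isolation; the harm lives in the assumption that legal presence and physical presence are the same thing.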

What happens when the nefarious implications of an AI are not immediately recognized? Or when it is too difficult to take the AI offline when necessary? That is what happened when Boeing hesitated to ground the 737 Max jets after a programming glitch was found to cause crashes, and 346 people died as a result.

In 2019, Boeing admitted that its software was the cause of two deadly crashes.

We must constantly reframe technological discussions in moral terms. The work of technology demands discrete, explicit instructions. Wherever there is no specific moral consensus, individuals simply doing their job will make a call, often without taking the time to consider the full consequences of their actions.

Moving beyond liability

At most tech companies, a proposal for a product would be reviewed by an in-house legal team, which would draw attention to the policies the designers need to consider in their programming. These policies might relate to what data is consumed, where the data comes from, what data is stored and how it is used (for example, anonymized, aggregated or filtered). The legal team’s primary concern would be liability, not ethics or social perceptions.
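
As a rough illustration, such constraints often end up encoded as a checklist or policy object that engineers validate against. The sketch below is purely hypothetical, not any company’s actual review process; every field name is an assumption.

```python
# Hypothetical data-handling policy of the kind an in-house legal
# review might impose. Field names are illustrative assumptions.
DATA_POLICY = {
    "sources_allowed": {"first_party_telemetry", "licensed_map_data"},
    "retention_days": 90,
    "storage_form": "aggregated",   # keep aggregates, not raw records
    "pii_handling": "anonymized",   # strip identifiers before storage
}

def may_store(source: str, policy: dict = DATA_POLICY) -> bool:
    """Check a record's provenance against the policy before persisting it.
    Note what the check covers: liability questions such as provenance
    and retention, not whether storing the data is ethical or wise."""
    return source in policy["sources_allowed"]
```

The telling part of the sketch is what is absent: nothing in the policy asks who could be harmed.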

Researchers have called for an approach that considers insurance and indemnity (responsibility for loss compensation) to shift liability and allow stakeholders to negotiate directly with each other. They also propose moving disputes over algorithms to specialized tribunals. But we need bolder thinking to address these challenges.

Instead of liability, a focus on harm reduction would be more helpful. Unfortunately, our current system doesn’t allow companies to easily co-operate or share knowledge, especially when anti-trust concerns might be raised. This has to change.

An investigation by the National Highway Traffic Safety Administration found that Tesla’s Autopilot function turned off in advance of an imminent collision. (Shutterstock)

Re-thinking the limits of competition

These problems demand large-scale, industry-wide efforts. The misguided pressures of competition pushed Tesla, Uber and Boeing to release their AI too soon. They were overly concerned with the costs of legal liability and the risk of lagging behind competitors.

My research proposes the somewhat counter-intuitive idea that the ethical positions a corporation takes should be a source of competitive parity in its industry, not a competitive advantage. In other words, a company should not stand out for finding ethical ways to run its business. Ethical commitments should be the minimum expectation required of all who compete.

Companies should compete on variables like comfort, customer service or product life, not on whose autopilot algorithm is less likely to kill. We need an issues-based exemption to competition, one that is centred around a particular technological challenge, like autonomous driving software, and guided by a shared desire to reduce harm.

What would this look like in practice? The truth is that more than 50 per cent of Fortune 500 companies already use open-source software for mission-critical work. And their ability to compete has not been stifled by giving up on proprietary algorithms.

Imagine if the motivation to reduce harm became a core target function of technology leaders. It would end the incentive individual firms currently have to design AI that is artificially innocent. It would shift their strategic priorities away from always preventing imitation and towards encouraging competitors to reduce harm in similar ways. And it would grow the pie for everyone, as customers and governments would be more trusting of technology-driven revolutions if innovators were seen as putting harm reduction first.

David Weitzner, Assistant professor, Administrative Studies, York University, Canada

This article is republished from The Conversation under a Creative Commons license.
