What Science Fiction Tells Us About Our Trouble With AI

December 18, 2019

AI, as conceived of in popular culture, does not yet exist, even if autonomous and expert systems do. Smartphones might not be supercomputers, but the “smart” in their name is earned by the sophistication of their operating systems. Equally, we are happy to talk about a computer game’s “AI”, but gamers quickly learn to take advantage of its limitations and its inability to “think” creatively. There is an important difference between these systems and what is termed Artificial General Intelligence (AGI) or “strong AI”: an AI with the general intelligence and aptitudes of a human.

Image: Mclek/Shutterstock

Given that the reality of AI may be fast approaching, it’s of the utmost importance that we work out what a future with artificial intelligence might look like. Last year, an open letter with signatories including Stephen Hawking and Nick Bostrom called for AI to be of demonstrable benefit to humanity, or risk the development of something that exceeds our ability to control it.

Both the US and British governments’ explorations of the significance and implications of AI research have focused on potential economic and social impacts. But politicians would do well to consider what science fiction can tell them about public attitudes – arguably one of the biggest issues concerning AI.

Culturally, our understanding of AI is informed by the ways in which it is represented in science fiction, and there is an assumption that AI always means AGI, which it does not. Fictional representations of AI reveal far more about our attitudes to the technology than they do about its reality (even if we sometimes seem to forget this). Science fiction can therefore be a valuable resource for assessing the public view of AI – and correcting it, if need be.

I, Robot: the greater good

In Alex Proyas’s adaptation of Isaac Asimov’s stories, I, Robot (2004), there is a heart-to-heart scene in which we learn the reason for a detective’s mistrust of robots. He recounts a car crash in which two cars ended up in a river, and a robot determined that it was better to save the detective than the child because the detective had a higher percentage chance of survival. The scene serves to demonstrate the inhumanity of AI and the humanity of the detective, who would have opted to save the child. For all its Hollywood gloss, the scene is indicative of the core ethical issues in AI research: it denigrates AI as not being “moral” but merely a pattern of encoded behaviours.

But is the robot in this situation actually wrong? Isn’t it better to save one life than to lose two? Here, emergency triage is not seen as “inhuman” but necessary. “Greater good” arguments have been going on for centuries and, in this situation, the “greater” good – saving the detective or the child – is debatable, especially as the detective later saves humanity from the ravages of VIKI, an AI gone rogue.

The context in which this decision is made – the parameters through which the robot reached its percentage conclusion – could also factor in any number of concerns, albeit limited to those programmed into it. Is the emotional response, if saving the child is a fundamentally emotional choice, the correct one? A minimal sketch of the rule the scene implies is given below.
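To make the robot’s calculus concrete, here is a minimal sketch, in Python, of the kind of expected-survival triage rule the film’s scene implies. The function name, candidate labels and probability figures are illustrative assumptions, not values taken from the film’s script.

```python
# A hedged sketch of an expected-survival triage rule, as implied by
# the I, Robot rescue scene. All names and numbers are hypothetical.

def choose_rescue(candidates: dict[str, float]) -> str:
    """Pick the candidate with the highest estimated survival probability.

    When only one rescue attempt is possible, this maximises the
    expected number of lives saved: attempting a rescue that succeeds
    with probability p yields p expected survivors, so pick the largest p.
    """
    return max(candidates, key=candidates.get)

if __name__ == "__main__":
    # Hypothetical on-scene estimates the robot might have computed.
    odds = {"detective": 0.45, "child": 0.11}
    print(choose_rescue(odds))  # -> detective
```

Note how much of the moral weight hides in the inputs: whoever defines the candidate list and estimates the probabilities has already made most of the decision before the “logic” ever runs.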

One of the problems we face as a society engaging with an AI-future is that machine intelligences might actually demonstrate the contingency of our own moral codes, when we want to believe them to be universally applicable. Is the problem not that the robot was wrong, but that in fact it might be right?

Interacting with AI

The ways in which AI has been represented lead to pretty much the same conclusion: any AI is inhuman(e) and therefore dangerous. Just as VIKI in I, Robot turns against humanity when she finds another “logical” interpretation of Asimov’s three laws (designed to protect humans), there is a plethora of stories and films in which AIs take over the world (Daniel H Wilson’s Robopocalypse and Robogenesis, the Matrix and Terminator franchises). There are many more about how they are insidious and will directly control humanity, or enable factions to take more complete control of society (Daniel Suarez’s Kill Decision, Neal Asher’s Polity stories, the TV series Person of Interest).

But there are relatively few about how they might cooperate with humanity (Asimov got here early and remains one of the few to explore it, although Ann Leckie’s Ancillary trilogy is also of interest). The hypocrisy is that this trend suggests it’s fine for governments to monitor their citizens and for corporations to analyse social media feeds (even using software bots), but an AI shouldn’t. It’s like saying that you’re happy being screwed over, but only by a political system or another mammal, not a computer.

One solution, therefore, is to consider how to limit AIs and teach them human ethics. But if we “train” AIs to behave ethically, who do we trust to train them? To whose ethical standards? Given the issues Microsoft had with Tay (a chatbot that members of the public quickly “taught” to make offensive statements), it is clear that if an AI learns from humanity, what it learns might be precisely that we’re not worth the time it takes to tweet back to us. We don’t trust robots to think for themselves, we don’t trust ourselves to program them or use them ethically, and we can’t trust ourselves to teach them. What’s an AI to do?

Public perceptions of AI will be governed by just this sort of mistrust and suspicion, fostered by such public debacles and by the broadly negative view evident in much science fiction. But what such examples perhaps reveal is that the problem with AI is not that it is “artificial”, nor that it is immoral, nor even its economic or social impact. Perhaps the problem is us.

Will Slocombe, Lecturer in American Literature, University of Liverpool

This article is republished from The Conversation under a Creative Commons license. Read the original article.
