
Artificial Intelligence Safety & Cybersecurity: A Timeline Of AI Failures

Breakthrough after breakthrough, artificial intelligence (AI) continues to challenge the human definition of impossible. Our lives and AI technologies are becoming ever more intertwined. For us, this means a more convenient life aided by the technology. At the same time, it means we are exposing ourselves to the consequences of the errors that AI systems can commit.

Of course, a 100% secure system would be desirable to ensure the safety of the humans interacting with it. However, there is no such thing as a perfect security system.

In his paper, Roman Yampolskiy details how AI failures will only increase in frequency and severity as we grow more dependent on the technology.

Timeline of Failures

Yampolskiy lists some notable failures committed by AI systems over the years:

1959 AI designed to be a General Problem Solver failed to solve real-world problems.

1982 Software designed to make discoveries, discovered how to cheat instead.

1983 Nuclear attack early warning system falsely reported that an attack was taking place.

2010 Complex AI stock trading software caused a trillion-dollar flash crash.

2011 E-Assistant told to “call me an ambulance” began to refer to the user as Ambulance.

2013 Object recognition neural networks saw phantom objects in certain noise images.

2015 Automated email reply generator created inappropriate responses.

2015 A robot for grabbing auto parts grabbed and killed a man.

2015 Image tagging software classified black people as gorillas.

2015 Adult content filtering software failed to remove inappropriate content.

2016 AI designed to predict recidivism produced racially biased predictions.


2016 Game NPCs designed unauthorized superweapons.

2016 Patrol robot collided with a child.

2016 World champion-level Go-playing AI lost a game.

2016 Self-driving car was involved in a fatal accident.

2016 AI designed to converse with users on Twitter became verbally abusive.

Causes of Failures

Intuitively, these AI failures can be attributed to errors during the performance phase. However, failures can arise from errors as early as the learning phase. In such cases, the AI system learns a function that merely correlates with what its designers intended it to learn.

For instance, a computer vision system designed to identify tanks in images may instead have learned to classify based on the backgrounds of the images, as in the sketch below.
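
To make this concrete, here is a minimal sketch in Python of how a model can latch onto a spurious correlation. The dataset and feature names are invented for illustration (they are not from Yampolskiy's paper or the original tank study): feature 0 stands in for the actual object, feature 1 for the background, which happens to track the label perfectly during training.

```python
# Illustrative sketch only: a hypothetical two-feature "tank detector".
# Feature 0 = noisy signal for the tank itself; feature 1 = background cue
# that, in the training data, happens to match the label exactly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, background_tracks_label):
    labels = rng.integers(0, 2, size=n)              # 1 = tank, 0 = no tank
    tank_signal = labels + 2.0 * rng.normal(size=n)  # the real cue, but noisy
    if background_tracks_label:
        background = labels.astype(float)            # spurious, noise-free cue
    else:
        background = rng.integers(0, 2, size=n).astype(float)  # correlation broken
    return np.column_stack([tank_signal, background]), labels

X_train, y_train = make_data(2000, background_tracks_label=True)
X_test, y_test = make_data(2000, background_tracks_label=False)

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # near-perfect
print("test accuracy:", model.score(X_test, y_test))     # drops sharply once the
                                                          # background stops matching
print("weights:", model.coef_)  # the background feature dominates
```

Inspecting the learned weights shows the model leaning on the background feature rather than the object itself: the training metrics look excellent, yet the system fails as soon as the spurious correlation breaks, which mirrors the failure mode described above.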

Doomed to Fail?

Yampolskiy posits that the frequency and severity of AI failures are bound to increase over time as AI takes up more roles in human lives. The problem is that the future points toward AI taking up multiple roles at once, in the form of Artificial General Intelligence (AGI).

With AGIs, humans will receive assistance without having to switch from one device to another. However, instead of making mistakes within one particular role, an AGI will commit failures that affect a far wider span of human activities. With such a large range of capabilities, it may also transcend human ability. In the worst case, a system that surpasses humans in every way could wipe out humanity with a single mistake.

Is There a Solution?

The legal system is lagging behind developments in AI. Similarly, the moral implications of AI use remain largely unexplored. Overall, AI safety is a domain that is only starting to gain recognition as a valid concern. In the long run, however, it is an issue of high importance. If we want to ensure that developments in AI will be beneficial, we must be able to constrain these systems so that they do not commit severe errors.


We can never fully assure the safety of a completely autonomous system. For starters, it is not possible to program the highly subjective and complex values of humanity into a machine, so we cannot simply teach it to act “good.” Even if we could pull off such a feat, there is no guarantee that a superintelligent system like an AGI would comply with humanity's values as it continues to learn; it might even dismiss them as an unnecessary hindrance to cleanse itself of.

To start with, the question is whether there really is a need for AGI. Domain-specific AIs are useful as they are, and because they are restricted to particular roles, they pose less danger of severely damaging human lives.

However, restricting the development of such technology is almost impossible. More and more people are gaining access to AI design tools, and the cost of developing them keeps falling.

At this point, we can only hope that if an AGI does arrive in this world, we are prepared for it. That preparation starts with paying more attention to AI safety and security, making sure that AI failures never grow beyond what we can handle.

