Adversarial Machine Learning And Its Role In Fooling AI

  • February 15, 2021
  • admin

Can you fool Artificial Intelligence?

No?


Think again.

Three years ago, Apple launched the iPhone X with cutting-edge facial recognition technology. This AI-based feature (Face ID) replaced the older fingerprint recognition technology (Touch ID).

The new technology was claimed to be more secure and robust. However, shortly after the launch of Face ID, researchers from Vietnam breached it by designing a 3D face mask.

Such crafted attacks against ML-based AI systems come under the umbrella of a fascinating research field: adversarial machine learning. The attack against Face ID sparked new debates in cybersecurity: how secure are AI systems against adversarial attacks, and are such attacks practical in real-world scenarios?

In this article, we will explore the answers to these questions. We will also look at adversarial machine learning beyond cybersecurity, in particular its application to AI software testing. But first, let's understand what adversarial machine learning is.

 

What is Adversarial Machine Learning?

You are familiar with optical illusions: they trick our minds into seeing things that are not there. In adversarial machine learning, attackers design the equivalent of optical illusions for machine learning algorithms.

For example, say an AI system classifies apples and oranges. Now look at the picture below. Both are images of the same apple; however, the apple on the right has been crafted a little differently.

An attacker has added a tiny perturbation to this image, causing the ML algorithm to misclassify it as an orange. The human eye cannot see any difference between the left and right apples, yet the AI system is fooled into seeing an orange.

Apple vs Adversarial Apple

The above illustration is inspired by the panda example given in this paper by the guru of adversarial machine learning: Ian Goodfellow.
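To make the idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM) described in that paper, written in PyTorch. The classifier, class index, and epsilon value are placeholders for illustration, not details from the article.

import torch
import torch.nn.functional as F

def fgsm_example(model, image, true_label, epsilon=0.01):
    """Nudge each pixel in the direction that increases the classification
    loss, producing an adversarial copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # For small epsilon the change is imperceptible to humans,
    # yet it can be enough to flip the model's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage with a hypothetical fruit classifier: the perturbed "apple" image
# may now be labelled "orange".
# adv = fgsm_example(fruit_classifier, apple_batch, torch.tensor([APPLE_CLASS]))
# print(fruit_classifier(adv).argmax(dim=1))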

The apple example seems pretty straightforward, right? There are even simpler ways to sabotage AI systems.

There are also highly sophisticated attacks designed to evade machine learning and artificial intelligence systems.

In research, such attacks have been shown to sabotage a variety of safety-critical systems, including self-driving cars and medical imaging systems. Let's have a look at them:

 

Fooling AI: Evasion Attacks

Photo by Joshua Hoehne on Unsplash

In evasion attacks, attackers design adversarial examples that fool the AI model. For example, attackers can target self-driving cars by using stickers or paint on road signs.

The sign appears the same to the naked eye; however, a self-driving car may interpret it differently. This may cause deadly accidents. The example of the Face ID breach given at the start of this article is also a type of evasion attack.

There is a variety of evasion attacks, ranging from simple image manipulation to sophisticated white-box attacks.
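As a rough illustration of the "simple image manipulation" end of the spectrum, the sketch below pastes a small white patch onto an image, a stand-in for a sticker on a road sign, and checks whether a classifier's prediction flips. The model and tensor layout (a batch of RGB images) are assumptions, not part of the article.

import torch

def sticker_evasion_check(model, sign_image, patch_size=8):
    """Return the model's predicted classes before and after pasting a
    bright square patch (a crude 'sticker') onto the image."""
    patched = sign_image.clone()
    # Overwrite a small corner of every image in the batch with white pixels.
    patched[..., :patch_size, :patch_size] = 1.0
    with torch.no_grad():
        before = model(sign_image).argmax(dim=1)
        after = model(patched).argmax(dim=1)
    return before, after

# If `after` differs from `before`, the classifier was evaded by an edit
# that a human driver would barely register.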

 

Poisoning AI: Data Poisoning Attacks

Photo by Markus Spiske on Unsplash

In poisoning attacks, the threat actor adds incorrect examples to the training data, producing a faulty, mutant model. The poisoned examples seem harmless on their own, and such attacks are usually launched against self-learning or reinforcement learning systems.

For example, back in 2016, Microsoft's interactive, self-learning chatbot Tay was poisoned (trained) by users into spewing racist content.
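The sketch below illustrates the general mechanics with a simple label-flipping attack on a supervised training set. It is only a toy and does not reflect how Tay was actually attacked; the class indices and poisoning fraction are assumptions.

import torch

def poison_labels(labels, target_class, poison_class, fraction=0.05):
    """Relabel a small fraction of `target_class` examples as `poison_class`.
    A model trained on the result learns a skewed decision boundary."""
    labels = labels.clone()
    idx = (labels == target_class).nonzero(as_tuple=True)[0]
    n_poison = int(len(idx) * fraction)
    flipped = idx[torch.randperm(len(idx))[:n_poison]]
    labels[flipped] = poison_class
    return labels

# Each flipped label looks harmless in isolation, but together they steer
# what the trained model "believes" about the target class.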

 

Rethinking AI Security and Software Testing

Image Source: InvoZone

Adversarial machine learning is also being used by software engineers to test the robustness of AI systems. Researchers are combining software testing techniques with adversarial machine learning to generate and evaluate test data.

Humans wrestle with the question of whether seeing is believing or believing is seeing. In the case of ML-based AI, learning is seeing and learning is believing.

How most complex AI systems learn is quite opaque. The key to secure and robust AI systems is thorough testing.
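One way to picture such testing is a routine check that compares a model's accuracy on clean inputs with its accuracy on adversarially perturbed inputs, reusing the FGSM helper sketched earlier. The data loader and epsilon value are assumptions for illustration.

import torch

def robustness_report(model, data_loader, epsilon=0.01):
    """Compare accuracy on clean inputs vs. FGSM-perturbed inputs."""
    clean_correct = adv_correct = total = 0
    for images, labels in data_loader:
        with torch.no_grad():
            clean_preds = model(images).argmax(dim=1)
        adv = fgsm_example(model, images, labels, epsilon)  # helper sketched above
        with torch.no_grad():
            adv_preds = model(adv).argmax(dim=1)
        clean_correct += (clean_preds == labels).sum().item()
        adv_correct += (adv_preds == labels).sum().item()
        total += labels.numel()
    return clean_correct / total, adv_correct / total

# A large gap between the two accuracies is a red flag: the model behaves
# well only as long as nobody is actively trying to fool it.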


This article is republished from hackernoon.com

