What are Adversarial AI Attacks and How Do We Combat Them?

  • August 23, 2021
  • Aelia Vita

Deep learning is the main force behind the recent advances in artificial intelligence (AI). Deep learning models can perform on par with, and in some cases exceed, human-level performance across a wide variety of tasks. However, deep neural networks are vulnerable to subtle adversarial perturbations applied to their inputs, a vulnerability known as adversarial AI. These perturbations, often imperceptible to the human eye, can easily mislead a trained deep neural network into making wrong decisions.

The field of adversarial machine learning focuses on addressing this problem by developing high-performing deep learning models that are also robust against this type of adversarial attack. At Modzy, we’re conducting cutting-edge research to improve upon past approaches that defend against adversarial attacks, ensuring our models maintain peak performance and robustness when faced with adversarial AI.

What you need to know

Although impressive breakthroughs have been made by leveraging deep neural networks in many fields such as image classification and object detection, Szegedy et al. [1] discovered that these models can easily be fooled by adversarial attacks. For example, an image manipulated by an adversary with only a few modified pixels can easily fool an image classifier into confidently predicting the wrong class for that image (see the figure below, from [2]).

This exposes an unpleasant fact: deep learning models do not process information the way humans do. The phenomenon undermines the practicality of many current deep learning models, which are trained solely for accuracy and performance rather than for robustness against these types of attacks.

[Figure: an adversarially perturbed image that a trained classifier confidently misclassifies, from [2]]

The research community is quite active in pursuing possible solutions to this problem. On the attack side, many methods that exploit the vulnerabilities of trained deep neural networks have been proposed [2, 3]. On the defense side, these attack schemes are used to develop new training and design methodologies for deep neural networks, with the goal of producing deep learning models that are more robust against adversarial AI attacks.

As an example, adversarial training has been proposed as a way to enhance robustness: the deep neural network is trained on an enlarged dataset that includes both original and adversarially perturbed inputs [2]. However, because the adversarial phenomenon described above is still not well understood, none of the solutions proposed so far in the research community address the problem in a way that generalizes across different domains.
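To make the idea concrete, here is a minimal sketch of adversarial training for an image classifier, assuming a PyTorch model, a standard DataLoader, and cross-entropy loss. The perturbation step is the fast gradient sign method described in [2]; the function names and hyperparameters (such as `epsilon`) are illustrative only, not part of any specific production system.

```python
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return x shifted by epsilon in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the input gradient to increase the loss,
    # then keep pixel values in the valid [0, 1] range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_epoch(model, train_loader, optimizer, epsilon=0.03):
    model.train()
    for x, y in train_loader:
        # Generate adversarially perturbed copies of the current batch.
        x_adv = fgsm_perturb(model, x, y, epsilon)
        optimizer.zero_grad()
        # Train on both the original and the perturbed inputs, as in [2].
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

The key design choice is that every batch contributes both a clean and a perturbed copy to the loss, so the model is rewarded for behaving consistently on both.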

Adversarial AI attacks can be divided into two categories:

  1. white-box attacks
  2. black-box attacks

Mathematically speaking, all deep neural networks are trained to optimize their behavior in relation to a specific task, such as language translation or image classification. During training, this desired behavior is usually formulated as an optimization problem that minimizes a loss value, computed by a formula that measures deviations from the desired behavior.

Adversarial examples are inputs that do the opposite: they maximize this loss value and consequently maximize deviations from the desired behavior. Finding such examples generally requires knowledge of the inner workings of the deep neural network.
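In standard notation (using a model f_θ, loss function L, and perturbation budget ε, symbols not introduced by the article itself), the two opposing objectives can be sketched as:

```latex
% Training: choose parameters that minimize the expected loss over the data
\min_{\theta} \; \mathbb{E}_{(x,y)} \left[ \mathcal{L}\bigl(f_{\theta}(x),\, y\bigr) \right]

% Adversarial example: find a small perturbation that maximizes the same loss
\max_{\|\delta\| \le \epsilon} \; \mathcal{L}\bigl(f_{\theta}(x + \delta),\, y\bigr)
```

The attacker's problem is the mirror image of the trainer's problem, which is why access to the model's gradients is so valuable to an adversary.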

The strong assumption under the white-box attack framework is that the adversary has full knowledge of the inner workings of the deep neural network and can use this knowledge to design adversarial inputs. Under the black-box attack framework, the adversary has only limited knowledge of the network's architecture and must estimate the model's behavior, devising adversarial examples based on that estimate.
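The adversarial training sketch above already illustrates the white-box case, where the attacker differentiates the loss directly. For the black-box case, here is a minimal sketch of a query-only attack that estimates the loss gradient by finite differences; `query_fn`, assumed to return the model's predicted class probabilities, is a hypothetical stand-in for whatever query access the adversary actually has, and the step sizes are illustrative only.

```python
import numpy as np

def estimate_gradient(query_fn, x, true_label, h=1e-3):
    """Estimate d(loss)/dx using only forward queries to the model."""
    def loss(inp):
        probs = query_fn(inp)                      # model's predicted class probabilities
        return -np.log(probs[true_label] + 1e-12)  # cross-entropy against the true label
    grad = np.zeros_like(x)
    base = loss(x)
    for i in range(x.size):
        x_step = x.copy()
        x_step.flat[i] += h
        grad.flat[i] = (loss(x_step) - base) / h   # one extra query per coordinate
    return grad

def black_box_perturb(query_fn, x, true_label, epsilon=0.03):
    # Move in the estimated direction of increasing loss, mirroring the
    # white-box step but without access to the model's true gradients.
    grad = estimate_gradient(query_fn, x, true_label)
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)
```

This is far more expensive than the white-box case, since it needs roughly one model query per input dimension, but it requires no knowledge of the network's internals.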

Another type of attack, the poisoning attack, focuses purely on manipulating the training dataset so that any deep learning model trained on it yields sub-par performance during inference.
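As one simple illustration of poisoning, the sketch below flips a fraction of the training labels before the model ever sees them. Real poisoning attacks can be far subtler; the function name and poison fraction here are purely illustrative.

```python
import numpy as np

def flip_labels(labels, num_classes, poison_fraction=0.1, seed=0):
    """Randomly reassign a fraction of training labels to incorrect classes."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    n_poison = int(poison_fraction * len(labels))
    poisoned_idx = rng.choice(len(labels), size=n_poison, replace=False)
    for i in poisoned_idx:
        # Pick any class other than the true one.
        labels[i] = (labels[i] + rng.integers(1, num_classes)) % num_classes
    return labels

# Any model trained on (inputs, flip_labels(labels, ...)) learns from corrupted
# supervision and tends to perform worse at inference time.
```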

A New Understanding of Adversarial AI Attacks

At Modzy, we developed a new understanding of adversarial AI attacks on deep neural networks by applying the Lyapunov theory of robustness and stability of nonlinear systems [4, 5]. Our robust solutions are based on this theory, which dates back more than a century and has been used extensively in control theory to design automated systems such as aircraft and automotive systems; the expectation is that these systems remain stable, robust, and able to maintain the desired performance in unknown environments.

Our robust deep learning models, tested against strong white-box attacks, are trained with a similar expectation: they should make correct predictions in unknown environments, even in the presence of adversarial attacks. We also train our deep learning models in a novel way by enhancing the well-known backpropagation algorithm commonly used across industry to train deep learning models. Our robust models are trained to rely on a holistic set of features learned from the input when making predictions.

For example, our image classifiers look at the context of the entire image before classifying an object. This means that modifying a few pixels will not affect the final classification decisions made by our models. Consequently, our robust models closely imitate how humans learn and make decisions.

Adversarial attacks on deep neural networks pose a great risk to the successful deployment of deep learning models in mission-critical environments. One challenging aspect of adversarial AI is the fact that these small adversarial perturbations, while capable of completely fooling the deep learning model, are imperceptible to the human eye. The increasing reliance on deep learning models in the field of artificial intelligence further points to the adverse impact that adversarial AI can have on our society.

We take this risk seriously and are actively developing new ways to enhance the defensive capabilities of our models. Our robust deep learning models guarantee high performance and resilience against adversarial AI and are trained to be deployed into unknown environments.


This article is republished from hackernoon.
