When we think of artificial intelligence (AI) going rogue, prime examples from the movies include HAL 9000 from 2001: A Space Odyssey and Skynet from The Terminator, both mainframe computers that reacted to real-world problems in unexpected ways.

From industrial manufacturing to autonomous vehicles, machine learning models are becoming increasingly embedded in our lives. Researchers are thus exploring pre-emptive ways to avoid harm from unexpected AI decisions when machine learning models are deployed to act in the real world, a setting addressed by the area of machine learning known as reinforcement learning (RL).

“While deep RL has indeed been very successful in achieving state-of-the-art performance in curated academic environments, it has yet to be thoroughly tested in the presence of real-world complexities,” said Abhishek Gupta, a scientist at the Singapore Institute of Manufacturing Technology (SIMTech) and one of the study’s senior authors.

The work, which was principally conducted by Nanyang Technological University (NTU) graduate student Xinghua Qu and jointly overseen by Gupta and NTU professor Yew-Soon Ong, focused on the performance of vision-based AI, which is likely to be critical for the safe use of AI in applications such as autonomous vehicles.

Using six Atari video games, including the classic Pong game, the group simulated visual perturbations by altering a small number of pixels in selected frames in the game environment. They then examined how well the algorithm performed in the perturbed environment by measuring the ‘accumulated reward,’ a barometer of how optimal an algorithm’s decisions are.
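To make the setup concrete, the sketch below shows what a single-pixel perturbation of an image observation looks like, and how the accumulated reward is just the sum of per-step rewards. This is a minimal illustration only: the frame size, pixel coordinates, and fill value are assumptions for the example, and the study’s actual attack searches for the most damaging pixels rather than picking one arbitrarily.

```python
import numpy as np

def one_pixel_perturbation(frame, row, col, value=255):
    """Return a copy of an image observation with one pixel overwritten.

    `frame` is assumed to be a grayscale Atari-style frame (H x W, uint8).
    The coordinates and fill value here are illustrative; an adversarial
    attack would search for the pixel change that most degrades the policy.
    """
    perturbed = frame.copy()
    perturbed[row, col] = value
    return perturbed

def accumulated_reward(rewards):
    """The 'accumulated reward' is simply the sum of per-step rewards."""
    return sum(rewards)

# Example: an 84x84 all-black frame with a single pixel set to white.
frame = np.zeros((84, 84), dtype=np.uint8)
attacked = one_pixel_perturbation(frame, 40, 40)
print(int(np.count_nonzero(attacked != frame)))  # exactly one pixel differs
```

An RL agent receiving `attacked` instead of `frame` sees an input that is nearly identical by any pixel-wise distance, which is what makes the resulting drop in accumulated reward so concerning.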

Strikingly, they found that changing a single pixel in the input images was often enough to cause the accumulated reward to plummet for all four algorithms tested, including widely used algorithms such as Deep Q-Networks. These results indicate that although RL models thrive in familiar, standardized environments, they may be poorly equipped to handle highly variable settings, such as roads and densely populated areas, potentially to the detriment of safety.

“Most of the work has focused on achieving highly accurate AI or deep learning models,” Gupta said. “However, this vulnerability needs to be considered before these AI are put into operational use, to ensure the integrity and reliability of AI deployment.”

The research team is now investigating more efficient techniques for generating adversarial perturbations in real-time RL applications. “This constitutes a critical step of knowing your enemies before defeating them,” Gupta said.

The A*STAR-affiliated researcher contributing to this research is from the Singapore Institute of Manufacturing Technology (SIMTech).


By Abhishek Gupta

Scientist

Singapore Institute of Manufacturing Technology
