
Why Human-Like Robots Freak Real People Out

  • September 23, 2020
  • admin

New research digs into why robots repulse people more as their human likeness increases, known as “the uncanny valley.”

Androids, or robots with humanlike features, are often more appealing to people than those that resemble machines—but only up to a certain point. Many people experience an uneasy feeling in response to robots that are nearly lifelike, and yet somehow not quite “right.”

New insights into the cognitive mechanisms underlying this phenomenon clarify the uncanny valley effect.

Since the uncanny valley was first described, a common hypothesis developed to explain it. Known as the mind-perception theory, it proposes that when people see a robot with human-like features, they automatically attribute a mind to it. A growing sense that a machine appears to have a mind leads to the creepy feeling, according to this theory.

“We found that the opposite is true,” says first author Wang Shensheng, who did the work as a graduate student at Emory University and recently received his PhD in psychology.

“It’s not the first step of attributing a mind to an android but the next step of ‘dehumanizing’ it by subtracting the idea of it having a mind that leads to the uncanny valley. Instead of just a one-shot process, it’s a dynamic one.”

The findings have implications for both the design of robots and for understanding how we perceive one another as humans.

“Robots are increasingly entering the social domain for everything from education to health care,” Wang says. “How we perceive them and relate to them is important both from the standpoint of engineers and psychologists.”

“At the core of this research is the question of what we perceive when we look at a face,” adds Philippe Rochat, a professor of psychology and senior author of the study. “It’s probably one of the most important questions in psychology. The ability to perceive the minds of others is the foundation of human relationships.”


The research may help in unraveling the mechanisms involved in mind-blindness—the inability to distinguish between humans and machines—such as in cases of extreme autism or some psychotic disorders, Rochat says.

Anthropomorphizing, or projecting human qualities onto objects, is common. “We often see faces in a cloud for instance,” Wang says. “We also sometimes anthropomorphize machines that we’re trying to understand, like our cars or a computer.”

Naming one’s car or imagining that a cloud is an animated being, however, is not normally associated with an uncanny feeling, Wang notes. That led him to hypothesize that something other than just anthropomorphizing may occur when viewing an android.

To tease apart the potential roles of mind perception and dehumanization in the uncanny valley phenomenon, the researchers conducted experiments focused on the temporal dynamics of the process. They showed participants three types of images—human faces, mechanical-looking robot faces, and android faces that closely resembled humans—and asked them to rate each for perceived animacy, or “aliveness.” The researchers systematically manipulated the exposure times of the images, within milliseconds, as the participants rated their animacy.

The results showed that perceived animacy decreased significantly as a function of exposure time for android faces, but not for mechanical-looking robot or human faces. For android faces, perceived animacy dropped between 100 and 500 milliseconds of viewing time. That timing is consistent with previous research showing that people begin to distinguish between human and artificial faces around 400 milliseconds after stimulus onset.

A second set of experiments manipulated both the exposure time and the amount of detail in the images, ranging from a minimal sketch of the features to a fully blurred image. The results showed that removing details from the images of the android faces decreased the perceived animacy along with the perceived uncanniness.
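The analysis behind these experiments comes down to comparing mean animacy ratings across face type and exposure time. The sketch below illustrates that structure only; the ratings and the 0–10 scale are hypothetical placeholders, not the study's data, and merely mimic the shape of the reported result (animacy falling with longer exposure for android faces but staying roughly flat for human and robot faces).

```python
from collections import defaultdict

# Hypothetical trials: (face_type, exposure_ms, animacy rating on a 0-10 scale).
# All numbers are invented for illustration.
trials = [
    ("human",   100, 9.2), ("human",   500, 9.0),
    ("robot",   100, 2.1), ("robot",   500, 2.0),
    ("android", 100, 7.8), ("android", 500, 4.6),
]

def mean_animacy(trials):
    """Average the ratings within each (face_type, exposure_ms) condition."""
    totals = defaultdict(lambda: [0.0, 0])
    for face, ms, rating in trials:
        totals[(face, ms)][0] += rating
        totals[(face, ms)][1] += 1
    return {cond: s / n for cond, (s, n) in totals.items()}

means = mean_animacy(trials)
# In this toy data, animacy drops sharply for androids between 100 and 500 ms,
# while the human and robot conditions stay roughly flat.
android_drop = means[("android", 100)] - means[("android", 500)]
```

In the actual study the comparison of interest is whether this drop is significant for the android condition alone, which is what the exposure-time manipulation tests.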


“The whole process is complicated but it happens within the blink of an eye,” Wang says. “Our results suggest that at first sight we anthropomorphize an android, but within milliseconds we detect deviations and dehumanize it. And that drop in perceived animacy likely contributes to the uncanny feeling.”

 

The research appears in Perception. Source: Emory University
