Liwaiwai
  • Artificial Intelligence
  • Machine Learning
  • Technology

What’s At Stake If AI Is Increasingly Being Used To Identify Our Emotions?

  • July 9, 2021
  • admin

Imagine you are in a job interview. As you answer the recruiter’s questions, an artificial intelligence (AI) system scans your face, scoring you for nervousness, empathy and dependability. It may sound like science fiction, but these systems are increasingly used, often without people’s knowledge or consent.

Emotion recognition technology (ERT) is in fact a burgeoning multi-billion-dollar industry that aims to use AI to detect emotions from facial expressions. Yet the science behind these systems is controversial: biases are built into them.
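To make concrete what such a system does, here is a minimal, purely illustrative sketch of the final classification step: mapping a model's raw scores over a "basic emotions" label set to a single label and confidence. The label set, the function names and the example scores are assumptions for illustration, not any vendor's actual system; the face-analysis model that would produce these scores (and where the scientific and bias concerns arise) is deliberately left out.

```python
import math

# Hypothetical "basic emotions" label set of the kind many ERT systems assume.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def softmax(scores):
    """Convert raw scores into a probability distribution (numerically stable)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify_emotion(logits):
    """Map raw model outputs (one score per emotion) to a label and confidence.

    `logits` stands in for the output of a trained face-analysis model;
    that model, and the facial-feature extraction feeding it, is where the
    contested science and the documented biases enter the pipeline.
    """
    probs = softmax(logits)
    best = max(range(len(EMOTIONS)), key=lambda i: probs[i])
    return EMOTIONS[best], probs[best]

# Example: scores a model might emit for one video frame of an interviewee.
label, confidence = classify_emotion([0.2, -1.0, 0.1, 2.3, -0.5, 0.4, 1.1])
print(label, round(confidence, 2))
```

The sketch shows why a reported "confidence" can mislead: the number measures only how the scores compare to each other, not whether the label set or the underlying model reflects what the person actually feels.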

Many companies use ERT to test customer reactions to their products, from cereal to video games. But it can also be used in much higher-stakes situations: in hiring, in airport security to flag faces as revealing deception or fear, in border control, in policing to identify “dangerous people”, or in education to monitor students’ engagement with their homework.

 

Shaky scientific ground

Fortunately, facial recognition technology is receiving public attention. The award-winning film Coded Bias, recently released on Netflix, documents the discovery that many facial recognition technologies fail to accurately detect darker-skinned faces. And the research team managing ImageNet, one of the largest and most important datasets used to train facial recognition algorithms, was recently forced to blur 1.5 million images in response to privacy concerns.

Revelations about algorithmic bias and discriminatory datasets in facial recognition technology have led large technology companies, including Microsoft, Amazon and IBM, to halt sales. And the technology faces legal challenges regarding its use in policing in the UK. In the EU, a coalition of more than 40 civil society organisations has called for an outright ban on facial recognition technology.


Like other forms of facial recognition, ERT raises questions about bias, privacy and mass surveillance. But ERT raises another concern: the science of emotion behind it is controversial. Most ERT is based on the theory of “basic emotions” which holds that emotions are biologically hard-wired and expressed in the same way by people everywhere.

This is increasingly being challenged, however. Research in anthropology shows that emotions are expressed differently across cultures and societies. In 2019, the Association for Psychological Science conducted a review of the evidence, concluding that there is no scientific support for the common assumption that a person’s emotional state can be readily inferred from their facial movements. In short, ERT is built on shaky scientific ground.

Also, like other forms of facial recognition technology, ERT is encoded with racial bias. A study has shown that systems consistently read black people’s faces as angrier than white people’s faces, regardless of the person’s expression. Although the body of research on racial bias in ERT is small, racial bias in other forms of facial recognition is well documented.

There are two ways that this technology can hurt people, says AI researcher Deborah Raji in an interview with MIT Technology Review:

“One way is by not working: by virtue of having higher error rates for people of color, it puts them at greater risk. The second situation is when it does work — where you have the perfect facial recognition system, but it’s easily weaponized against communities to harass them.”

So even if facial recognition technology can be de-biased and accurate for all people, it still may not be fair or just. We see these disparate effects when facial recognition technology is used in policing and judicial systems that are already discriminatory and harmful to people of colour. Technologies can be dangerous when they don’t work as they should. And they can also be dangerous when they work perfectly in an imperfect world.


The challenges raised by facial recognition technologies – including ERT – do not have easy or clear answers. Solving the problems presented by ERT requires moving from AI ethics centred on abstract principles to AI ethics centred on practice and effects on people’s lives.

[Image: a man looking into a phone. Caption: AI can be racist. Credit: HQuality/Shutterstock]

When it comes to ERT, we need to collectively examine the controversial science of emotion built into these systems and analyse their potential for racial bias. And we need to ask ourselves: even if ERT could be engineered to accurately read everyone’s inner feelings, do we want such intimate surveillance in our lives? These are questions that require everyone’s deliberation, input and action.

 

Citizen science project

ERT has the potential to affect the lives of millions of people, yet there has been little public deliberation about how – and if – it should be used. This is why we have developed a citizen science project.

On our interactive website (which works best on a laptop, not a phone) you can try out a private and secure ERT for yourself, to see how it scans your face and interprets your emotions. You can also play games comparing human versus AI skills in emotion recognition and learn about the controversial science of emotion behind ERT.

Most importantly, you can contribute your perspectives and ideas to generate new knowledge about the potential impacts of ERT. As the computer scientist and digital activist Joy Buolamwini says: “If you have a face, you have a place in the conversation.”

Alexa Hagerty, Research Associate of Anthropology, University of Cambridge and Alexandra Albert, Research Fellow in Citizen Social Science, UCL

This article is republished from The Conversation under a Creative Commons license.
