Liwaiwai

How To Prevent Discriminatory Outcomes In Machine Learning

  • June 1, 2019
  • admin

As machine learning (ML) systems continue to improve, their integration into the systems that make up society becomes more seamless. ML is now involved in critical decisions such as court rulings and job hiring.

Without a doubt, using ML in these processes can improve efficiency. With a good design, ML systems may also eliminate the biases humans bring to their decisions.

At the other extreme, this integration could end up really ugly. Trained on data with underlying biases around race, gender, or other factors, ML can amplify those biases and further perpetuate discrimination.

How, then, do we make sure that these systems will not end up violating our rights? The World Economic Forum (WEF) addresses this in its white paper.

The Challenges

On the nature of ML

ML is ubiquitous in our society, especially in developed regions such as the United States and Europe. However, these systems are highly complex and often proprietary. These factors render them black box systems, with people not really aware of their inner workings. This deprives the systems of transparency and auditability, and understandably, it can sow distrust in the technology.

On the data used to train ML systems

Data isn’t always widely available. Typically, corporations keep the data they collect private. As a result, entities with both data and the expertise to harness it have the advantage when it comes to developing ML systems.

As mentioned earlier, these data sets may carry underlying biases and errors that can further entrench discrimination.
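One simple, hedged illustration of what auditing a data set can look like: before training, a team can check how well each group is represented. The sketch below uses a made-up hiring data set and a hypothetical `representation_audit` helper; it is only a minimal example, not a complete fairness audit.

```python
from collections import Counter

def representation_audit(records, group_key):
    """Return the share of each group in a training set.

    Severe imbalance is one signal that a model trained on the data
    may perform worse for, or against, under-represented groups.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical hiring data set: the groups and counts are invented
# purely for illustration.
training_set = [{"gender": "male"}] * 80 + [{"gender": "female"}] * 20
shares = representation_audit(training_set, "gender")
```

A skewed result here does not prove the resulting model will discriminate, but it tells the team where to look more closely before deployment.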

On the design of ML algorithms

Apart from the data used for training, the algorithms used to develop the ML system can also pose some risks for discrimination. These risks can be attributed to:

  • Wrong choice of algorithm
  • Building an algorithm with inadvertently discriminatory features
  • Absence of human oversight and involvement in the use of the ML system
  • Lack of understanding of how the algorithm works, leading to discrimination being overlooked
  • Unchecked and intentional discrimination
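The second risk above, inadvertently discriminatory features, is worth a concrete sketch. Even when the sensitive attribute is dropped from the features, a correlated proxy (a neighborhood, in this invented example) can let a model reproduce exactly the same split. All names and records below are hypothetical.

```python
# Hypothetical applicants: "group" is the sensitive attribute, and
# "neighborhood" correlates with it perfectly in this toy data.
applicants = [
    {"group": "A", "neighborhood": "north", "hired": 1},
    {"group": "A", "neighborhood": "north", "hired": 1},
    {"group": "B", "neighborhood": "south", "hired": 0},
    {"group": "B", "neighborhood": "south", "hired": 0},
]

def hiring_rate_by(key, records):
    """Hiring rate for each value of the given feature."""
    buckets = {}
    for r in records:
        buckets.setdefault(r[key], []).append(r["hired"])
    return {value: sum(v) / len(v) for value, v in buckets.items()}

# Dropping "group" from the features changes nothing here:
# splitting on "neighborhood" yields the same discriminatory pattern.
by_neighborhood = hiring_rate_by("neighborhood", applicants)
by_group = hiring_rate_by("group", applicants)
```

This is why simply removing the protected attribute is not enough; correlated features have to be examined too.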

The Responsibilities of Businesses

Given these challenges, what can businesses do to combat discrimination? WEF highlights four focal points:

 

Figure 1. Four central principles to combat bias in machine learning and uphold human rights and dignity. Adapted from “How to Prevent Discriminatory Outcomes in Machine Learning,” World Economic Forum, March 2018, retrieved from https://www.weforum.org/

 

  • Active Inclusion: Business entities should actively ensure inclusivity in the development of ML applications.
  • Fairness: Fairness should be prioritized in the development of machine learning systems.
  • Right to Understanding: Businesses should disclose and communicate how ML is used to make decisions that affect individual rights.
  • Access to Remedy: Mechanisms to remedy discriminatory outputs of ML systems, such as checks and reporting processes, must be put in place.
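As one example of a checking mechanism under Access to Remedy, a deployed system's decisions can be monitored with a disparate-impact ratio. The sketch below is an assumption about what such a check might look like, using the informal "four-fifths rule" from US employment practice as the threshold; the sample decisions are invented.

```python
def disparate_impact_ratio(decisions):
    """decisions: iterable of (group, selected) pairs.

    Returns the lowest group selection rate divided by the highest.
    Values below 0.8 are commonly treated as a red flag under the
    informal "four-fifths rule" used in US employment practice.
    """
    totals, selected = {}, {}
    for group, sel in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(sel)
    rates = {g: selected[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values())

# Hypothetical decision log: group A selected 6/10, group B 3/10.
sample = [("A", 1)] * 6 + [("A", 0)] * 4 + [("B", 1)] * 3 + [("B", 0)] * 7
ratio = disparate_impact_ratio(sample)  # 0.3 / 0.6 = 0.5, below 0.8
```

A ratio this low would not by itself prove discrimination, but it is exactly the kind of signal a reporting process should surface for human review.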

Policies and regulations usually lag behind development. Businesses, on the other hand, are directly involved in building ML systems, so they are closest to the technology. It therefore falls on business entities to integrate these principles so that they do not contribute to a culture of discrimination.

Given their access to massive amounts of data and the tools to develop ML systems, they also carry a huge obligation to uphold human rights in the development and deployment of these systems.


Related Topics
  • Algorithms
  • Discrimination
  • Policies
  • Training Data
