Our use of artificial intelligence is growing along with advances in the field, to the point that AI is now used in higher-stakes areas such as hiring, criminal justice, and healthcare. The hope is that AI will produce less biased results than humans do.

In their paper, Jake Silberg and James Manyika discuss AI bias: where it comes from and how we can minimize it.

A double-edged sword

The data used to train AI is also the source of its bias. Here are some ways an underlying data set can produce bias, along with examples Silberg and Manyika cite for each.

  • Embedded inequities: The training data may reflect societal or historical inequities. For instance, an AI trained on news articles may pick up gender biases embedded in how society uses words; an AI used for hiring may end up favoring words such as “executed” and “captured”, which appear more often in men’s applications.
  • Collection/selection bias: Oversampling or undersampling certain groups can introduce bias. In criminal justice models, some neighborhoods may be oversampled because they are overpoliced, so more crime is recorded there than elsewhere. The model then directs even more policing to those neighborhoods, reinforcing the loop.
  • User-generated bias: Much like embedded inequities, an AI trained on user-generated data may pick up the biases of the users who produced it.
  • Correlation bias: A machine learning algorithm may base decisions on statistical correlations that yield illegal or unacceptable outcomes. For instance, a mortgage lending model learned that the likelihood of default increases with age and reduced lending accordingly, which amounts to illegal age discrimination.
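The selection-bias case above is easy to demonstrate numerically. The following is a minimal, entirely hypothetical sketch (the neighborhoods, patrol counts, and crime rate are invented for illustration): two areas with the same true incident rate, one patrolled three times as often, so raw recorded counts make the oversampled area look far more dangerous.

```python
import random

random.seed(0)

# Hypothetical illustration: two neighborhoods with the SAME true incident rate.
TRUE_INCIDENT_RATE = 0.05  # 5% of patrols record an incident

# Neighborhood B is patrolled 3x as often (oversampled).
patrols = {"A": 1_000, "B": 3_000}

recorded = {}
for hood, n_patrols in patrols.items():
    # Each patrol records an incident with the same underlying probability.
    recorded[hood] = sum(random.random() < TRUE_INCIDENT_RATE
                         for _ in range(n_patrols))

# Raw counts suggest B is far more dangerous -- an artifact of sampling effort.
print(recorded)

# Normalizing by patrol effort recovers comparable rates for both areas.
rates = {hood: recorded[hood] / patrols[hood] for hood in patrols}
print(rates)
```

A model trained on the raw counts would "learn" that B needs more policing, which increases sampling there and inflates the counts further; normalizing by sampling effort is one simple way to break that feedback loop.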

How to minimize bias

Silberg and Manyika give six suggestions for minimizing bias in AI:

  1. Be aware of the contexts in which AI can help correct for bias as well as where there is a high risk that AI could exacerbate bias.
  2. Establish processes and practices.
  3. Engage in fact-based conversations about potential biases in human decisions.
  4. Fully explore how humans and machines can work best together.
  5. Invest more in bias research, make more data available for research (while respecting privacy), and adopt a multidisciplinary approach.
  6. Invest more in diversifying the AI field itself.