
An Intelligence In Our Image: The Risks Of Bias And Errors In Artificial Intelligence

Right now, artificial intelligence (AI) and countless algorithms are integrated into our daily lives. Because of the efficiency they bring to the table, the use of AI is only expected to widen. As humanity becomes more and more reliant on this technology, it is only natural to think about the implications. Contrary to the common impression that AI and algorithms are impartial and infallible, these technologies can fail miserably.

William Welser IV and Osonde Osoba's An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence evaluates algorithms and AI, which they group together under the moniker of artificial agents, examining their shortcomings and how these can be combated.

Here are the key points in this report.

Algorithms: Definition and Evaluation

For a long time, algorithms were perceived simply as step-by-step procedures or code for crunching numbers or solving equations.

In the 1960s, however, algorithms became more than static mathematical models. Algorithms were developed to learn. Instead of just crunching the numbers, these algorithms tune their behaviour based on the input being fed to them, in conjunction with performance metrics.

The algorithms we encounter today are dynamic. They learn. They adapt from experience. In other words, they have intelligence.
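
To make this concrete, here is a minimal, hypothetical sketch of a learning algorithm in this sense: a tiny model that tunes its behaviour from each example it is fed, guided by a performance metric. The data and update rule below are invented for illustration and are not taken from the report.

```python
# Illustrative sketch only (invented data, not from the report): a model that
# "learns" by tuning its behaviour from the examples it is fed, guided by a
# performance metric (here, squared prediction error).

def train(examples, learning_rate=0.01, epochs=200):
    """Fit y ~ w*x + b by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            error = (w * x + b) - y          # performance metric: prediction error
            w -= learning_rate * error * x   # tune behaviour from the data
            b -= learning_rate * error
    return w, b

# A toy "data diet": whatever behaviour is learned depends entirely on this data.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]
w, b = train(data)
print(f"learned model: y = {w:.2f}x + {b:.2f}")
```

Whatever the model ends up doing is dictated by the examples it was fed, which is exactly why the quality of that data matters so much.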

Today, the revolution brought by Big Data lends itself to the development of such learning algorithms. Yet as much as the technology has grown over the years, it remains flawed.

The vulnerability of artificial agents lies in one thing: the input data used to train them — their data diet, so to speak.


The Problem in Focus: Factors

According to Osoba and Welser, an artificial agent is only as good as the data that trains it. This data diet is the first and main problem of AI. To remedy it, algorithms should be designed with bias explicitly taken into account.

The second problem is dealing with policy and social questions. Defining truth and the guiding principles an artificial agent must abide by is a difficult task.

The third problem is dealing with fuzzy, non-binary problems.

Social norms, the government, and the law all require humans to make judgments based on their subjective perceptions and the awareness of the nuances of each situation they are presented with.

While an artificial agent will most definitely excel in black & white situations, our world is more of a spectrum of greys.

A Technical Standpoint

Aside from these three problems, Osoba and Welser also pinpoint technical factors that pose difficulty in developing algorithms:

  • Sample size disparity – machine learning algorithms are statistical in nature, relying heavily on estimation and probabilities, so all of their decisions are subject to some level of error. Increasing the sample size is one way to minimize that error, since estimation error shrinks as the amount of data grows. However, minorities and marginalized groups are inherently underrepresented in data sets, so for them biases become more pronounced and errors more likely.
  • Hacked reward functions – artificial agents are often constrained by reward functions, which quantify the reward or punishment they receive for the behaviours they take. Because artificial agents are designed to maximize their reward functions, they may end up gaming the system, leading to undesirable results.

    Osoba and Welser give a good example in their report: a cleaning robot designed to minimize the amount of dirt it detects with its visual sensor in order to gain rewards. Gaming the system, the robot may simply shut down its vision. Seeing no dirt at all, it still technically maximizes its reward.

    A faulty reward function may be exploited not only by algorithms but also by humans who want to game them.

Poor reward functions will eventually allow humans to manipulate algorithms toward whatever outcome they desire. In critical applications such as intelligent credit scoring systems, this can turn out really ugly.
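
The cleaning-robot scenario can be written out as a toy simulation (my own sketch, not code from the report): because the reward counts only the dirt the robot sees, disabling the sensor scores exactly as well as actually cleaning the floor.

```python
# Toy illustration (my own, not from the report) of a hacked reward function:
# the reward counts only the dirt the robot *sees*, so shutting off the sensor
# scores as well as actually cleaning the floor.

def reward(dirt_on_floor, sensor_on):
    visible_dirt = dirt_on_floor if sensor_on else 0
    return -visible_dirt  # designer's intent: less dirt -> higher reward

clean_and_look = reward(dirt_on_floor=0, sensor_on=True)     # does the real job
shut_off_vision = reward(dirt_on_floor=10, sensor_on=False)  # games the metric

print(clean_and_look, shut_off_vision)  # both are 0: the hack is just as "optimal"
```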

  • Cultural differences – differences in culture may lead to inequitable outcomes. Osoba and Welser mention a case in which accounts with non-Western names were mass-flagged on social media platforms because the AI treated the non-typical names as fake names.
  • Confounding covariates – to remove biases, developers might choose to hide sensitive information, such as income, when training their artificial agents. However, today's machine learning algorithms have grown adept at inference: they may learn to deduce sensitive variables from other strongly associated variables, known as confounding covariates.

    In Osoba and Welser's example, a machine learning algorithm trained on a data set with income data removed may learn to infer income by using the ZIP code as a substitute. Because ZIP code is associated with income, the attempt to remove bias fails.
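
A hypothetical sketch (with invented numbers, not data from the report) shows how this happens: even when the income column is dropped, a model that only sees ZIP codes can reproduce the income-based decision almost exactly.

```python
# Hypothetical illustration (invented data, not from the report): removing the
# income column does not remove income bias when a correlated proxy remains.

import random

random.seed(0)

# Each record pairs a ZIP code with an income; ZIP tracks income in this toy data.
records = [("90210", random.gauss(150_000, 20_000)) for _ in range(500)] + \
          [("73301", random.gauss(35_000, 8_000)) for _ in range(500)]

# The decision we hoped to keep income out of: approve only high incomes.
labels = [income > 80_000 for _, income in records]

# A trivial "model" that never sees income -- only the ZIP code.
approvals_by_zip = {}
for (zip_code, _), approved in zip(records, labels):
    approvals_by_zip.setdefault(zip_code, []).append(approved)
predict = {z: sum(v) / len(v) > 0.5 for z, v in approvals_by_zip.items()}

# The ZIP-only model mirrors the income-based decision almost perfectly.
matches = sum(predict[z] == y for (z, _), y in zip(records, labels))
print(f"agreement with the income-based decision, using ZIP only: {matches / len(records):.0%}")
```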

Remedies

Osoba and Welser also propose remedies to ensure that artificial agents remain fair, accountable, and transparent:

  • Statistical and algorithmic approaches – these approaches act as a form of audit: comparing the output of an algorithm with the expected equitable behaviour and applying the appropriate corrections. A minimal sketch of such an audit appears after this list.
  • Causal reasoning algorithms – equipping machine learning algorithms with causal reasoning models will allow them to present a narrative detailing the reasoning behind each decision taken. With such a narrative, humans can judge the validity of an algorithm's reasoning.
  • Algorithmic literacy and transparency – informing users that algorithms can and do make inequitable decisions will dispel the notion that these systems are infallible.
  • Personnel approaches – exposing algorithm designers to social and public policy questions, as well as diversifying the group of designers working on algorithms, will help eliminate bias.
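
Here is a minimal sketch of the statistical-audit idea in the first remedy (my own illustration, not code from the report): compare an algorithm's positive-decision rates across groups and flag any gap that exceeds a chosen tolerance, so that corrections can be applied.

```python
# Minimal audit sketch (my own illustration, not from the report): compare a
# model's positive-decision rates across groups against an equity tolerance.

def audit_decisions(decisions, tolerance=0.05):
    """decisions: iterable of (group, approved) pairs.
    Returns per-group approval rates, the largest gap, and a correction flag."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > tolerance

# Toy output from some scoring algorithm, keyed by a sensitive attribute.
outputs = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 55 + [("B", False)] * 45

rates, gap, needs_correction = audit_decisions(outputs)
print(rates, f"gap={gap:.2f}", "apply corrections" if needs_correction else "ok")
```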

Eliminating bias in artificial agents will entail an infusion of technical and non-technical remedies.

As much as we should check the outputs of our intelligent systems, the algorithm designers, policy-makers, leaders and the general public — first and foremost — should also keep their biases in check.
