Liwaiwai
  • Artificial Intelligence
  • Machine Learning
  • Technology

Use Of AI To Fight COVID-19 Risks Harming ‘Disadvantaged Groups’, Experts Warn

  • April 9, 2021
  • admin

Rapid deployment of artificial intelligence and machine learning to tackle coronavirus must still go through ethical checks and balances, or we risk harming already disadvantaged communities in the rush to defeat the disease.

This is according to researchers at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence (CFI) in two articles published in the British Medical Journal, cautioning against blinkered use of AI for data-gathering and medical decision-making as we fight to regain normalcy in 2021.

“Relaxing ethical requirements in a crisis could have unintended harmful consequences that last well beyond the life of the pandemic,” said Dr Stephen Cave, Director of CFI and lead author of one of the articles.

“The sudden introduction of complex and opaque AI, automating judgments once made by humans and sucking in personal information, could undermine the health of disadvantaged groups as well as long-term public trust in technology.”

In a further paper, co-authored by CFI’s Dr Alexa Hagerty, researchers highlight potential consequences arising from the AI now making clinical choices at scale – predicting deterioration rates of patients who might need ventilation, for example – if it does so based on biased data.

Datasets used to “train” and refine machine-learning algorithms are inevitably skewed against groups that access health services less frequently, such as minority ethnic communities and those of “lower socioeconomic status”.
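A toy sketch of how this skew plays out (synthetic data, invented group sizes and thresholds, not drawn from the BMJ papers): a model fitted to data dominated by one group learns that group's decision boundary and systematically misclassifies the under-represented group.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical set-up: one "risk score" feature, but the score level at
# which patients actually deteriorate differs between two groups.
def make_group(n, true_threshold):
    x = rng.uniform(0, 1, n)
    y = (x > true_threshold).astype(int)  # 1 = patient deteriorates
    return x, y

# Group A supplies 95% of the training data; group B only 5%.
xa, ya = make_group(950, 0.5)
xb, yb = make_group(50, 0.3)
x_train = np.concatenate([xa, xb])
y_train = np.concatenate([ya, yb])

# "Train" the simplest possible model: the single cut-off that
# maximises accuracy on the pooled, skewed training data.
candidates = np.linspace(0, 1, 101)
accuracy = [np.mean((x_train > t) == y_train) for t in candidates]
best_t = candidates[int(np.argmax(accuracy))]

# Evaluate separately per group on fresh samples.
xa_te, ya_te = make_group(10_000, 0.5)
xb_te, yb_te = make_group(10_000, 0.3)
acc_a = np.mean((xa_te > best_t) == ya_te)
acc_b = np.mean((xb_te > best_t) == yb_te)
print(f"learned cut-off {best_t:.2f}: "
      f"accuracy {acc_a:.1%} for group A, {acc_b:.1%} for group B")
```

The pooled model adopts group A's threshold almost exactly, so group B patients whose scores fall between the two true thresholds are missed – the aggregate accuracy looks excellent while the minority group bears nearly all of the errors.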

“COVID-19 has already had a disproportionate impact on vulnerable communities. We know these systems can discriminate, and any algorithmic bias in treating the disease could land a further brutal punch,” Hagerty said.

In December, protests ensued when Stanford Medical Centre’s vaccine-allocation algorithm prioritized staff working from home over frontline clinicians on the COVID-19 wards. “Algorithms are now used at a local, national and global scale to define vaccine allocation. In many cases, AI plays a central role in determining who is best placed to survive the pandemic,” said Hagerty.

“In a health crisis of this magnitude, the stakes for fairness and equity are extremely high.”

Along with colleagues, Hagerty highlights the well-established “discrimination creep” found in AI that uses “natural language processing” technology to pick up symptom profiles from medical records – reflecting and exacerbating biases against minorities already in the case notes.

They point out that some hospitals already use these technologies to extract diagnostic information from a range of records, and some are now using this AI to identify symptoms of COVID-19 infection.

Similarly, the use of track-and-trace apps creates the potential for biased datasets. The researchers write that, in the UK, over 20% of those aged over 15 lack essential digital skills, and up to 10% of some population “sub-groups” don’t own smartphones.
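A back-of-envelope sketch (the uptake rates below are invented, purely for illustration) of how uneven smartphone ownership skews an app-derived dataset:

```python
# Hypothetical figures, not from the BMJ papers: two groups of equal
# size, but one has lower smartphone/app uptake.
population = {"group_a": 1_000, "group_b": 1_000}
app_uptake = {"group_a": 0.90, "group_b": 0.70}  # assumed rates

# Contacts recorded by the app scale with uptake, so the dataset any
# downstream model learns from under-represents group B.
observed = {g: int(n * app_uptake[g]) for g, n in population.items()}

true_share_b = population["group_b"] / sum(population.values())
observed_share_b = observed["group_b"] / sum(observed.values())
print(f"group B is {true_share_b:.0%} of the population "
      f"but {observed_share_b:.1%} of the app data")
```

A model trained on the app data would see group B's exposure patterns at a deflated rate, even though, by construction, both groups are equally at risk.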

“Whether originating from medical records or everyday technologies, biased datasets applied in a one-size-fits-all manner to tackle COVID-19 could prove harmful for those already disadvantaged,” said Hagerty.

In the BMJ articles, the researchers point to examples such as the fact that a lack of data on skin colour makes it almost impossible for AI models to produce accurate large-scale estimates of blood-oxygen levels. Or how an algorithmic tool used by the US prison system to predict reoffending risk – and shown to be racially biased – has been repurposed to manage its COVID-19 infection risk.

The Leverhulme Centre for the Future of Intelligence recently launched the UK’s first Master’s course for ethics in AI. For Cave and colleagues, machine learning in the Covid era should be viewed through the prism of biomedical ethics – in particular the “four pillars”.

The first is beneficence. “Use of AI is intended to save lives, but that should not be used as a blanket justification to set otherwise unwelcome precedents, such as widespread use of facial recognition software,” said Cave.

In India, biometric identity programs can be linked to vaccination distribution, raising concerns for data privacy and security. Other vaccine allocation algorithms, including some used by the COVAX alliance, are driven by privately owned AI, says Hagerty. “Proprietary algorithms make it hard to look into the ‘black box’, and see how they determine vaccine priorities.”

The second is ‘non-maleficence’, or avoiding needless harm. A system programmed solely to preserve life will not consider rates of ‘long COVID’, for example. Thirdly, human autonomy must be part of the calculation. Professionals need to trust technologies, and designers should consider how systems affect human behaviour – from personal precautions to treatment decisions.

Finally, data-driven AI must be underpinned by ideals of social justice. “We need to involve diverse communities, and consult a range of experts, from engineers to frontline medical teams. We must be open about the values and trade-offs inherent in these systems,” said Cave.

“AI has the potential to help us solve global problems, and the pandemic is unquestionably a major one. But relying on powerful AI in this time of crisis brings ethical challenges that must be considered to secure public trust.”

This article is republished from University of Cambridge Research.
