
AI Is Killing Choice And Chance – Which Means Changing What It Means To Be Human

  • March 19, 2021

The history of humans’ use of technology has always been a history of coevolution. Philosophers from Rousseau to Heidegger to Carl Schmitt have argued that technology is never a neutral tool for achieving human ends. Technological innovations – from the most rudimentary to the most sophisticated – reshape people as they use these innovations to control their environment. Artificial intelligence is a new and powerful tool, and it, too, is altering humanity.

Writing and, later, the printing press made it possible to carefully record history and easily disseminate knowledge, but they eliminated centuries-old traditions of oral storytelling. Ubiquitous digital and phone cameras have changed how people experience and perceive events. Widely available GPS systems have meant that drivers rarely get lost, but reliance on them has also atrophied drivers’ native capacity to orient themselves.

AI is no different. While the term AI conjures up anxieties about killer robots, unemployment or a massive surveillance state, there are other, deeper implications. As AI increasingly shapes the human experience, how does this change what it means to be human? Central to the problem is a person’s capacity to make choices, particularly judgments that have moral implications.


Taking over our lives?

AI is being used for wide and rapidly expanding purposes. It is being used to predict which television shows or movies individuals will want to watch based on past preferences and to make decisions about who can borrow money based on past performance and other proxies for the likelihood of repayment. It’s being used to detect fraudulent commercial transactions and identify malignant tumors. It’s being used for hiring and firing decisions in large chain stores and public school districts. And it’s being used in law enforcement – from assessing the chances of recidivism, to police force allocation, to the facial identification of criminal suspects.

Many of these applications present relatively obvious risks. If the algorithms used for loan approval, facial recognition and hiring are trained on biased data, thereby building biased models, they tend to perpetuate existing prejudices and inequalities. But researchers believe that cleaned-up data and more rigorous modeling would reduce and potentially eliminate algorithmic bias. It’s even possible that AI could make predictions that are fairer and less biased than those made by humans.

Where algorithmic bias is a technical issue that can be solved, at least in theory, the question of how AI alters the abilities that define human beings is more fundamental. We have been studying this question for the last few years as part of the Artificial Intelligence and Experience project at UMass Boston’s Applied Ethics Center.


Losing the ability to choose

Aristotle argued that the capacity for making practical judgments depends on regularly making them – on habit and practice. We see the emergence of machines as substitute judges in a variety of workaday contexts as a potential threat to people learning how to effectively exercise judgment themselves.

In the workplace, managers routinely make decisions about whom to hire or fire, which loan to approve and where to send police officers, to name a few. These are areas where algorithmic prescription is replacing human judgment, and so people who might have had the chance to develop practical judgment in these areas no longer will.

Recommendation engines, which are increasingly prevalent intermediaries in people’s consumption of culture, may serve to constrain choice and minimize serendipity. By presenting consumers with algorithmically curated choices of what to watch, read, stream and visit next, companies are replacing human taste with machine taste. In one sense, this is helpful. After all, the machines can survey a wider range of choices than any individual is likely to have the time or energy to do on her own.

[Image: a television remote control with buttons labeled Netflix, Hulu, Disney+ and Sling.] Services that make recommendations based on preferences, like which movies to watch, reduce chance discoveries. AP Photo/Jenny Kane

At the same time, though, this curation is optimizing for what people are likely to prefer based on what they’ve preferred in the past. We think there is some risk that people’s options will be constrained by their pasts in a new and unanticipated way – a generalization of the “echo chamber” people are already seeing in social media.
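The narrowing dynamic described above can be sketched in a few lines of code. This is a toy illustration only, not any real service’s algorithm: the titles, genres and the `score` function are invented for the example. It shows how a recommender that scores candidates purely by overlap with a user’s past viewing can never surface anything outside the existing taste profile.

```python
# Hypothetical past viewing: the user's entire taste profile, as the
# recommender sees it.
watch_history = {"thriller", "crime"}

# Invented catalog of candidate titles and their genre tags.
catalog = {
    "Heist Night":  {"thriller", "crime"},
    "Cold Case":    {"crime", "drama"},
    "Meadow Songs": {"musical", "romance"},  # entirely novel genres
}

def score(genres):
    # Count how many of a title's genres the user has already watched.
    # Anything wholly unfamiliar scores zero, so it can never rank first.
    return len(genres & watch_history)

ranked = sorted(catalog, key=lambda t: score(catalog[t]), reverse=True)
print(ranked)  # ['Heist Night', 'Cold Case', 'Meadow Songs']
```

Because the score rewards only resemblance to the past, the unfamiliar title always sits at the bottom of the list, which is exactly the serendipity-minimizing behavior the paragraph above describes.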

The advent of potent predictive technologies seems likely to affect basic political institutions, too. The idea of human rights, for example, is grounded in the insight that human beings are majestic, unpredictable, self-governing agents whose freedoms must be guaranteed by the state. If humanity – or at least its decision-making – becomes more predictable, will political institutions continue to protect human rights in the same way?


Utterly predictable

As machine learning algorithms, a common form of “narrow” or “weak” AI, improve and as they train on more extensive data sets, larger parts of everyday life are likely to become utterly predictable. The predictions are going to get better and better, and they will ultimately make common experiences more efficient and more pleasant.

Algorithms could soon – if they don’t already – have a better idea of which show you’d like to watch next and which job candidate you should hire than you do. One day, humans may even find a way for machines to make these decisions without some of the biases that humans typically display.

But to the extent that unpredictability is part of how people understand themselves and part of what people like about themselves, humanity is in the process of losing something significant. As they become more and more predictable, the creatures inhabiting the increasingly AI-mediated world will become less and less like us.

Nir Eisikovits, Associate Professor of Philosophy and Director, Applied Ethics Center, University of Massachusetts Boston and Dan Feldman, Senior Research Fellow, Applied Ethics Center, University of Massachusetts Boston

This article is republished from The Conversation under a Creative Commons license.


Related Topics
  • Algorithmic Bias
  • Artificial Intelligence
  • Choices
  • Ethics
  • Facial Recognition
  • Human Rights
  • Judgment
  • Machine Learning
  • Morals
  • Philosophy
  • Predictive analytics
