Liwaiwai

From Nudge To Hypernudge: Big Data And Human Autonomy

  • October 7, 2020
  • admin

We produce data all the time. This is not something new. Whenever a human being performs an action in the presence of another, some new data is created. We learn more about people as we spend more time with them. We can observe them and form models in our minds about why they do what they do, and the possible reasons they might have for doing so. With this data we might even gather new information about that person. Information, simply put, is processed data, fit for use. With this information we might even start to predict their behaviour. On an interpersonal level this is hardly problematic. I might learn over time that my roommate really enjoys tea in the afternoon. Based on this data, I can predict that at three o’clock he will want tea, and I can make it for him. This satisfies his preferences and lets me off the hook for not doing the dishes.
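The tea example can be captured in a few lines. The sketch below is purely illustrative (the observation log and the `predict` helper are invented for this post): the raw observations are the data, the tallied counts are the information, and the lookup is the prediction.

```python
from collections import Counter

# Data: raw observed (hour, activity) pairs, logged over several days.
observations = [
    (15, "tea"), (15, "tea"), (9, "coffee"),
    (15, "tea"), (20, "reading"), (15, "tea"),
]

# Information: processed data, fit for use -- how often each
# activity was seen at each hour.
counts = Counter(observations)

def predict(hour):
    """Predict the most frequent activity at a given hour, if any was seen."""
    candidates = {act: n for (h, act), n in counts.items() if h == hour}
    return max(candidates, key=candidates.get) if candidates else None

print(predict(15))  # -> 'tea'
```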

The fact that we produce data, and can use it for our own purposes, is therefore not a novel or necessarily controversial claim. Digital technologies (such as Facebook, Google, etc.), however, complicate the simplistic model outlined above. These technologies are capable of tracking and storing our behaviour (to varying degrees of precision, but they are getting much better) and using this data to influence our decisions.

“Big Data” refers to this constellation of properties: it is the process of taking massive amounts of data and using computational power to extract meaningful patterns. Significantly, what differentiates Big Data from traditional data analysis is that the patterns extracted would have remained opaque without the resources provided by electronically powered systems.
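As a toy illustration of what “extracting meaningful patterns” means, the sketch below counts which pairs of actions co-occur across a handful of made-up user records. Real systems run this kind of analysis over billions of records, which is precisely why the resulting patterns stay opaque to unaided human cognition; every name and threshold here is invented.

```python
from itertools import combinations
from collections import Counter

# Toy "reservoir": each record is one user's set of online actions.
records = [
    {"late_night", "ad_click", "video"},
    {"late_night", "ad_click", "news"},
    {"morning", "news"},
    {"late_night", "ad_click", "video"},
]

# Count how often each pair of actions co-occurs across records.
pair_counts = Counter()
for r in records:
    for pair in combinations(sorted(r), 2):
        pair_counts[pair] += 1

# Keep pairs seen in at least 3 records: a crude "meaningful pattern".
patterns = {p: n for p, n in pair_counts.items() if n >= 3}
print(patterns)  # ad_click and late_night co-occur in 3 of 4 records
```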

Big Data could therefore present a serious challenge to human decision-making. If the patterns extracted from the data we are producing are used in malicious ways, this could result in a decreased capacity for us to exercise our individual autonomy. But how might such data be used to influence our behaviour at all? To get a handle on this, we first need to understand the common cognitive biases and heuristics that we as humans display in a variety of informational contexts.

In their now-famous book, Nudge, Richard Thaler and Cass Sunstein argue for a kind of “libertarian paternalism”, which presupposes that there are reliable ways in which we reason incorrectly about the world. The authors claim that certain choice architectures are better or worse than others, and, specifically, that because of our cognitive limitations as human agents, we should adopt and implement designs that “nudge” our behaviour in desirable ways. From the placement of food at a buffet (first cheap carbohydrates, followed by more expensive proteins), to “opt-out” models for retirement annuity contributions, nudges can be for better or worse. However, according to Thaler and Sunstein, for a “nudge” to be a nudge (and not manipulation) it:

“alters people’s behaviour in a predictable way without forbidding any options or significantly changing their economic incentives” (2008: 6).

In essence, the aim of such an approach is to actively engineer the choices available to users so that the cognitive biases and fallacies that are part and parcel of what it means to be human are mitigated. Key to this approach is a distinction between two cognitive “systems” that are employed whenever we make decisions. Firstly, there is the Reflective System, which involves deliberation and conscious effort on the part of the agent. Examples of this could be deciding on a romantic partner or where to go on holiday. Secondly, there is the Automatic System, which is called on almost intuitively and instinctively, and is associated with behaviour such as smiling at a baby or avoiding an incoming projectile. Key to understanding this “systems” approach to cognition is the fact that we are impressively irrational and rely on a host of heuristics and biases when making decisions, especially when we make use of our Automatic System. Much of our decision-making occurs unconsciously, and so the choices made available to us (and the way that they are framed) can significantly influence what we end up endorsing.


For an example of such a cognitive bias, Daniel Kahneman (author of the fantastic Thinking, Fast and Slow) outlines the “affect” heuristic. The affect heuristic is a cognitive shortcut which allows agents to solve problems efficiently by relying on their current mood. It allows people to judge the risks or benefits of a specific action by the feelings associated with that outcome, as opposed to engaging in temporally expensive reasoning. There are cases where this can be useful (better avoid this spider) or misleading (climate change does not produce an affective response in many, and so is thought by some not to be a serious issue). Other examples of such biases include the mood heuristic, the availability heuristic, anchoring, regression to the mean, the status quo heuristic, and herd mentality, to name a few. Spelling out each of these, while useful, is beyond my purposes here. Simply put, we are not perfectly rational.

As noted above, a key component of Nudge is that it does not forbid any options nor change economic incentives, and thus does not impact the autonomy of individuals (our ability to make informed decisions free of coercion). When nudging is combined with Big Data, however, an evil demon rears its head: Hypernudging (or manipulation). Coined by Karen Yeung, the effectiveness of hypernudging stems from its pact with Big Data: algorithmically driven systems harness the informationally rich reservoir of human online behaviour to “guide” our behaviour.

Such nudges are “highly potent, providing the data subject with a highly personalised choice environment”, and in this way come to regulate, by design, the choice architectures that are available to the agent. Big Data analytics therefore use this information in a highly dynamic and personalised way, as the data reservoir is constantly being updated each time a user performs a new interaction in an online environment. Worryingly, it is almost impossible to live an “offline” life, as data from phones, watches, fridges and even children’s toys is constantly being collected and analysed. This provides the Big Data Barons (such as Facebook, Google and Amazon) with a truly massive amount of data.


As noted above, the personalised nature of hypernudging is a key feature that distinguishes it from traditional forms of nudging. For example, speedbumps can be viewed as a kind of nudge, in that they modify behaviour in a way that promotes the value of safety. Drivers are forced to slow down or risk damaging their suspension. The speed bump, however, is the same for everyone. It does not change shape based on who happens to be approaching it at what time of day. Hypernudges, by making use of our online habits, have the capacity to provide us each with our own personal speedbump. As Yeung notes, Big Data makes use of algorithmic processes capable of extracting patterns from data-sets that would not be possible with only human cognition,

“thereby conferring ‘salience’ on the highlighted data patterns, operating through the technique of ‘priming’, dynamically configuring the user’s informational choice context in ways intentionally designed to influence her decisions”.

Such analysis could reliably predict, based on previously collected and sorted data, what time(s) of the day we are most likely to click on an advert, when we are more likely to feel depressed, and when we are more susceptible to being “primed” in one direction or another.
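A crude version of such a prediction needs nothing more than a per-hour tally. The sketch below is a deliberately simplified, hypothetical illustration (the interaction log is made up, and real systems use far richer models and features than hour of day):

```python
from collections import defaultdict

# Hypothetical interaction log: (hour_of_day, clicked_ad) pairs for one user.
log = [(8, False), (8, False), (22, True), (22, True), (22, False),
       (13, False), (22, True), (8, True), (13, False)]

# Empirical click rate per hour -- the kind of per-user signal that
# lets an advertiser time a "prime" for maximum effect.
shown = defaultdict(int)
clicked = defaultdict(int)
for hour, did_click in log:
    shown[hour] += 1
    clicked[hour] += did_click

rates = {h: clicked[h] / shown[h] for h in shown}
best_hour = max(rates, key=rates.get)
print(best_hour, round(rates[best_hour], 2))  # -> 22 0.75
```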

It is here that the issue of our autonomy becomes paramount. Hypernudging operates on the Automatic System outlined earlier. By making use of the broadly consistent ways in which human beings both form beliefs about the world and the shared kinds of cognitive biases we display, hypernudges exercise a kind of “soft power”, capable of intentionally “guiding” our behaviour along paths desired by the Big Data Barons (or any advertiser, should they be willing to pay). Because this coercion operates below the level of conscious awareness, hypernudges explicitly seek to bypass our rational decision-making processes (the Reflective System introduced earlier). In this way they exploit our shared irrationality and undermine our autonomy by using this information to change our economic incentives (which is another reason why they are not mere nudges).


Given all of the above, what are we to do? Well, a first step in the right direction would be demanding greater ethical intelligence from the Big Data Barons, especially when they engage in the explicit emotional manipulation of their users, as Facebook did in 2012. This might involve better control over who has access to information that reveals something private about us, and heightened sensitivity to when our information is being co-opted for malicious purposes. Such engagement with ethical thinking (and the regulatory guidelines that follow), however, should be done by experts, and not by companies who have a vested interest in the business model as it currently operates. It might be strange to even mention this point, but as ridiculous as it sounds, Google recently offered to help others with the ethics of AI. I hope we never have to live in a world where Google gets to set the ethical standard for the development of anything.

In sum, hypernudging represents a serious threat to human autonomy. By making use of widely shared cognitive biases and Big Data analytics, this form of nudging takes manipulation to the next level, as it aims to circumvent our rational decision-making apparatus and instead operates at the level of our instinctive cognitive machinery. This state of affairs calls for not just greater ethical reflection, but the ethical implementation of principles that align our use of technology with socially beneficial goals.

 

This feature originally appeared in 3QuarksDaily.
