We produce data all the time. This is not something new. Whenever a human being performs an action in the presence of another, there is a sense in which new data is created. We learn more about people as we spend more time with them. We can observe them and form models in our minds about why they do what they do and what reasons they might have for doing so. From this data we might even derive new information about that person (information, simply, is processed data, fit for use), and with that information we might start to predict their behaviour. On an interpersonal level this is hardly problematic. I might learn over time that my roommate really enjoys tea in the afternoon. Based on this data, I can predict that at three o’clock he will want tea, and I can make it for him. This satisfies his preferences and lets me off the hook for not doing the dishes.
The fact that we produce data, and can use it for our own purposes, is therefore not a novel or necessarily controversial claim. Digital technologies (Facebook, Google and the like), however, complicate the simplistic model outlined above. These technologies are capable of tracking and storing our behaviour (to varying degrees of precision, though they are getting much better) and of using this data to influence our decisions.
“Big Data” refers to this constellation of properties: it is the process of taking massive amounts of data and using computational power to extract meaningful patterns from it. Significantly, what differentiates Big Data from traditional data analysis is that the patterns extracted would have remained opaque without the resources provided by electronically powered systems.
Big Data could therefore present a serious challenge to human decision-making. If the patterns extracted from the data we produce are used in malicious ways, the result could be a diminished capacity to exercise our individual autonomy. But how might such data be used to influence our behaviour at all? To get a handle on this, we first need to understand the common cognitive biases and heuristics that we as humans display in a variety of informational contexts.
In their now-famous book, Nudge, Richard Thaler and Cass Sunstein argue for a kind of “libertarian paternalism”, which presupposes that there are reliable ways in which we reason incorrectly about the world. The authors claim that certain choice architectures are better or worse than others and, specifically, that because of our cognitive limitations as human agents, we should adopt and implement designs that “nudge” our behaviour in desirable ways. From the placement of food at a buffet (first cheap carbohydrates, followed by more expensive proteins) to “opt-out” models for retirement annuity contributions, nudges can be for better or worse. However, according to Thaler and Sunstein, for a “nudge” to be a nudge (and not manipulation) it:
“alters people’s behaviour in a predictable way without forbidding any options or significantly changing their economic incentives” (2008: 6).
In essence, the aim of such an approach is to actively engineer the choices available to users so that the cognitive biases and fallacies that are part and parcel of what it means to be human are mitigated. Key to this approach is a distinction between two cognitive “systems” that are employed whenever we make decisions. Firstly, there is the Reflective System, which involves deliberation and conscious effort on the part of the agent. Examples of this could be deciding on a romantic partner or where to go on holiday. Secondly, there is the Automatic System, which is called on almost intuitively and instinctively, and is associated with behaviour such as smiling at a baby or avoiding an incoming projectile. Key to understanding this “systems” approach to cognition is the fact that we are impressively irrational and rely on a host of heuristics and biases when making decisions, especially when we make use of our Automatic System. Much of our decision-making occurs unconsciously, and so the choices made available to us (and the way that they are framed) can significantly influence what we end up endorsing.
For an example of such a cognitive bias, Daniel Kahneman (author of the fantastic Thinking, Fast and Slow) outlines the “affect” heuristic. The affect heuristic is a cognitive shortcut that allows agents to solve problems efficiently by relying on their current mood. It lets people judge the risks or benefits of a specific action by the feelings associated with that outcome, rather than by engaging in time-consuming reasoning. There are cases where this can be useful (better avoid this spider) and cases where it is misleading (climate change does not produce an affective response in many, and so is thought by some not to be a serious issue). Other examples include the mood heuristic, the availability heuristic, anchoring, insensitivity to regression to the mean, status quo bias and herd mentality, to name a few. Spelling out each of these, while useful, is beyond my purposes here. Simply put: we are not perfectly rational.
As noted above, a key component of Nudge is that it neither forbids options nor changes economic incentives, and thus does not impact the autonomy of individuals (our ability to make informed decisions free of coercion). When nudging is combined with Big Data, however, an evil demon rears its head: hypernudging (or manipulation). The term was coined by Karen Yeung, and the effectiveness of hypernudging stems from its pact with Big Data: algorithmically driven systems harness the informationally rich reservoir of human online behaviour to “guide” our behaviour.
Such nudges are “highly potent, providing the data subject with a highly personalised choice environment”, and in this way come to regulate, by design, the choice architectures that are available to the agent. Big Data analytics therefore use this information in a highly dynamic and personalised way, as the data reservoir is constantly being updated each time a user performs a new interaction in an online environment. Worryingly, it is almost impossible to live an “offline” life, as data from phones, watches, fridges and even children’s toys is constantly being collected and analysed. This provides the Big Data Barons (such as Facebook, Google and Amazon) with a truly massive amount of data.
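To make the idea of a “constantly updated data reservoir” a little more concrete, here is a deliberately simplified sketch in Python. The names, categories and data are invented for illustration; this is not any platform’s actual pipeline, only a toy picture of how each new interaction can immediately reshape a per-user profile, which in turn shapes what that user is shown next.

```python
# Hypothetical sketch (not any platform's actual system) of a per-user
# profile that is updated every time a new interaction arrives.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # Running counts of interactions, keyed by (category, hour of day).
    interactions: dict = field(default_factory=lambda: defaultdict(int))

    def update(self, category: str, hour: int) -> None:
        """Fold a single new event into the profile."""
        self.interactions[(category, hour)] += 1

    def top_categories(self, n: int = 3) -> list:
        """Return the categories this user engages with most often."""
        totals = defaultdict(int)
        for (category, _hour), count in self.interactions.items():
            totals[category] += count
        return sorted(totals, key=totals.get, reverse=True)[:n]

# Each click, like or purchase immediately reshapes what the system
# "knows", and therefore what it chooses to show next.
profile = UserProfile()
profile.update("fitness", hour=7)
profile.update("fitness", hour=7)
profile.update("news", hour=22)
print(profile.top_categories())  # ['fitness', 'news']
```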
As noted above, the personalised nature of hypernudging is a key feature that distinguishes it from traditional forms of nudging. For example, speed bumps can be viewed as a kind of nudge, in that they modify behaviour in a way that promotes the value of safety. Drivers are forced to slow down or risk damaging their suspension. The speed bump, however, is the same for everyone. It does not change shape based on who happens to be approaching it at what time of day. Hypernudges, by making use of our online habits, have the capacity to provide each of us with our own personal speed bump. As Yeung notes, Big Data makes use of algorithmic processes capable of extracting patterns from data-sets that would not be discoverable by human cognition alone,
“thereby conferring ‘salience’ on the highlighted data patterns, operating through the technique of ‘priming’, dynamically configuring the user’s informational choice context in ways intentionally designed to influence her decisions”.
Such analysis could reliably predict, based on previously collected and sorted data, what time(s) of the day we are most likely to click on an advert, when we are more likely to feel depressed, and when we are more susceptible to being “primed” in one direction or another.
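A hedged toy illustration of the sort of inference just described: assuming nothing more than a log of past ad impressions for a single user (invented data, invented function name), a few lines of code can already estimate the hours at which that user is most likely to click. Real systems rely on far richer features and models, but the underlying logic is of this kind.

```python
# Toy, hypothetical estimate of when one user is most likely to click.
from collections import defaultdict

def likeliest_click_hours(impressions, top_n=2):
    """impressions: list of (hour_of_day, clicked) pairs for one user."""
    shown = defaultdict(int)
    clicked = defaultdict(int)
    for hour, did_click in impressions:
        shown[hour] += 1
        clicked[hour] += int(did_click)
    # Empirical click-through rate per hour of day.
    ctr = {hour: clicked[hour] / shown[hour] for hour in shown}
    return sorted(ctr, key=ctr.get, reverse=True)[:top_n]

# Invented data: this user clicks mostly late at night.
log = [(9, False), (9, False), (13, True), (13, False),
       (22, True), (22, True), (22, False)]
print(likeliest_click_hours(log))  # [22, 13]
```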
It is here that the issue of our autonomy becomes paramount. Hypernudging operates on the Automatic System outlined earlier. By exploiting both the broadly consistent ways in which human beings form beliefs about the world and the cognitive biases we share, hypernudges exercise a kind of “soft power”, capable of intentionally “guiding” our behaviour along paths desired by the Big Data Barons (or any advertiser willing to pay). Because this coercion operates below the level of conscious awareness, hypernudges explicitly seek to bypass our rational decision-making processes (the Reflective System introduced earlier). In this way they exploit our shared irrationality and undermine our autonomy by using this information to change our economic incentives (which is another reason why they are not mere nudges).
Given all of the above, what are we to do? Well, a first step in the right direction would be to demand greater ethical intelligence from the Big Data Barons, especially when they engage in the explicit emotional manipulation of their users, as Facebook did in 2012. This might involve better control over who has access to information that reveals something private about us, and heightened sensitivity to when our information is being co-opted for malicious purposes. Such engagement with ethical thinking (and the regulatory guidelines that follow from it), however, should be led by experts, not by companies with a vested interest in the business model as it currently operates. It might be strange to even mention this point, but, as ridiculous as it sounds, Google recently offered to help others with the ethics of AI. I hope we never have to live in a world where Google gets to set the ethical standard for the development of anything.
In sum, hypernudging represents a serious threat to human autonomy. By making use of widely shared cognitive biases and Big Data analytics, this form of nudging takes manipulation to the next level, as it aims to circumvent our rational decision-making apparatus and instead operates at the level of our instinctive cognitive machinery. This state of affairs calls for not just greater ethical reflection, but the ethical implementation of principles that align our use of technology with socially beneficial goals.
This feature originally appeared in 3QuarksDaily.