Chapter 6: Exploiting vulnerabilities in decision-making

If you think of the stream of information that enters your mind, you first have to perceive it, and then you have to comprehend it. I’ve explained how weaknesses in both of these areas can be exploited. After perception and comprehension comes critical thinking, or what cognitive psychologists tend to call ‘judgement and decision-making’, which can also be exploited for commercial gain.1 To quote whistleblower Christopher Wylie from his book Mindf*ck:2

‘The goal in hacking is to find a weak point in a system and then exploit that vulnerability. In psychological warfare, the weak points are flaws in how people think. If you’re trying to hack a person’s mind, you need to identify cognitive biases and then exploit them.’
— Christopher Wylie (2020)

A cognitive bias is a mental shortcut that tends to cause a systematic error in judgement and decision-making. Humans fall foul of these biases rather predictably, which led economist Dan Ariely to describe human behaviour as ‘predictably irrational’.3 Despite their shortcomings, cognitive biases are also believed to be beneficial: they offer shortcuts, ways to avoid effortful work and save time and energy for other, more important matters. Cognitive scientist Aaron Sloman describes this as ‘productive laziness’ and explains, ‘a chess champion who wins by working through all the possible sequences of moves several steps ahead and choosing the optimal one is not as intelligent as the player who avoids explicitly examining so many cases’.4 Sloman wrote this in 1988 – no doubt he would happily refer to the web instead of chess if he were to write it today. No sensible human would read every result on Google, or every product listing on Amazon, before choosing which item to click. Shortcuts are necessary to cope, so today we rely on cognitive biases more than ever, because we simply cannot process all the information we receive in detail.

Thousands of research papers have been published on the subject, and well over one hundred types of cognitive bias have been proposed.5 Research on cognitive biases started to become well known in the early 2000s, entering the realms of pop psychology, business and design textbooks. The tech industry latched onto this with a great deal of enthusiasm. Some authors were very direct about the purpose of their work. In the introduction of his book Influence, Robert Cialdini refers to his area of work as ‘the psychology of compliance’ (that is, submission to the demands of others) and he describes his key principles as ‘six universal weapons of influence’.6 In the book Hooked, the author Nir Eyal promotes a ‘habit-forming’ behavioural model that is nearly identical to Natasha Dow Schüll’s model of ‘ludic loops’ – except Dow Schüll describes her model as ‘addiction by design’ and presents harrowing accounts of lives destroyed by gambling.7 Eyal is careful to avoid the word ‘addiction’, but the connection is obvious.

Today, numerous websites and blogs provide guides on how to exploit cognitive biases for profit; for example, the company Convertize provides a library of cognitive biases that it cheerfully recommends as ‘A/B Testing Ideas Based On Neuromarketing’, without any mention of negative consequences for the end user, such as being tricked or trapped into unwanted transactions or contracts.8

There’s also lots of content about cognitive biases and persuasion that proposes using them in a non-exploitative manner – but it’s a very short hop from ‘use this bias to persuade in a transparent and helpful way’ to ‘use this bias to see what happens in your next A/B test’. After all, as soon as a design has been tested and there is statistical evidence that it is more profitable than the other designs, it’s very likely to be adopted by the business with little further discussion, regardless of whether users truly understand the consequences of their actions.
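To make ‘statistical evidence’ concrete, here’s a minimal sketch in TypeScript of the kind of calculation an A/B testing tool might run – a two-proportion z-test on conversion rates. The variant names and numbers are hypothetical, and real tools add corrections (for peeking, multiple comparisons and so on); the point is simply that the test measures profitability, not user understanding.

```typescript
// Minimal sketch of the statistics behind a typical A/B test decision.
// Variant names and numbers are hypothetical.

interface Variant {
  name: string;
  visitors: number;
  conversions: number;
}

// Two-proportion z-test: is variant B's conversion rate reliably higher than A's?
function zTest(a: Variant, b: Variant): number {
  const pA = a.conversions / a.visitors;
  const pB = b.conversions / b.visitors;
  // Pooled conversion rate under the null hypothesis (no real difference).
  const pooled = (a.conversions + b.conversions) / (a.visitors + b.visitors);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / a.visitors + 1 / b.visitors)
  );
  return (pB - pA) / standardError; // z above ~1.96 ≈ 'significant at 95%'
}

const control: Variant = { name: "A", visitors: 10000, conversions: 230 };
const treatment: Variant = { name: "B", visitors: 10000, conversions: 320 };

const z = zTest(control, treatment);
console.log(`z = ${z.toFixed(2)} → ${z > 1.96 ? "ship variant B" : "keep testing"}`);
```

With these hypothetical numbers the uplift comes out as statistically significant, so the ‘winning’ design ships – whether or not users understood what they were agreeing to.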

Default effect

The default effect is a psychological phenomenon where people tend to stick with the status quo and choose the option presented to them as the default. It’s a bias that’s been studied in many different contexts, from consumer decisions to public policy. Businesses know that people are more likely to stick with the default option, so they often define the default to be favourable to the business in some way, typically through a preselected checkbox or radio button.
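As a brief illustration of how little work the technique requires, here’s a minimal sketch in TypeScript using the browser DOM. The form field, config flag and wording are all hypothetical – the point is that the entire ‘design decision’ amounts to a single boolean.

```typescript
// Minimal sketch of a business-favourable default: a marketing-consent
// checkbox whose initial state is decided by the business, not the user.
// The flag, field name and wording are hypothetical.

const PRESELECT_MARKETING_CONSENT = true; // flipping this flag is the whole technique

function buildConsentCheckbox(): HTMLLabelElement {
  const label = document.createElement("label");
  const checkbox = document.createElement("input");
  checkbox.type = "checkbox";
  checkbox.name = "marketingConsent";
  // The default effect: most users will leave this exactly as they find it.
  checkbox.checked = PRESELECT_MARKETING_CONSENT;
  label.append(
    checkbox,
    " Send me marketing emails and share my details with partners"
  );
  return label;
}

document.body.append(buildConsentCheckbox());
```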

One of the most famous studies on the default effect was carried out by researchers Eric J Johnson and Daniel Goldstein in the 2003 paper ‘Do Defaults Save Lives?’.9 They looked at organ donation consent rates in different countries, comparing countries where citizens are opted out by default (shown on the left of the chart below) with countries where they are opted in by default (shown on the right).

Chart showing the effective organ donation consent rates in several European countries. Where people are opted out by default, the consent rates vary from 4.25% (Denmark) to 27.5% (Netherlands); where people are opted in by default, consent rates are much higher, ranging from 85.9% (Sweden) to 99.98% (Austria).
Effective consent rates by country, from Johnson & Goldstein (2003).

As you can see, the difference in consent rates was enormous. A number of things are believed to drive the power of the default effect:

  • Awareness: for a user to change the default, they first have to become aware that it is possible to do so. (This harks back to the earlier section on exploitation of perceptual vulnerabilities.)
  • Effort: for the user to change from the default, they have to do something; in this case it involves finding and completing the correct government form. It is possible that citizens might intend to change their choice from the default, but not have time or energy to do so.
  • Authority bias and social proof: the default effect can be combined with other cognitive biases. For example, the default may be presented as the correct thing to do by a figure of authority (a doctor, for example). Alternatively, it may be portrayed as the thing that everyone else is doing (social proof). These are both known to be powerful cognitive biases in their own right.

In the book Misbehaving, Richard Thaler describes some follow-up research, looking at actual organ donation rates as opposed to presumed consent rates.10 He found that while presuming consent may appear to work on paper, when people die in hospital, staff will typically ask the family whether the organs should be donated. At that point the presumed consent frequently gets discarded, as there is no record of the individual’s actual choice. Thaler concluded that ‘mandated choice’ was a better policy, forcing citizens to make an explicit choice when they renew their driving licence.

The default effect has also been studied in the context of privacy and cookie consent dialogs. A large-scale study conducted by SERNAC, the Chilean consumer protection agency, provides compelling evidence.11 Over 70,000 participants were shown different cookie consent interfaces. In one interface, cookie tracking was opted in by default; in another, it was opted out by default. The opted-out version increased the rate at which users rejected cookies by 86 percentage points.

As you can see from the evidence, the default effect is easy to employ and is very powerful. It is often used by businesses in an exploitative way: to presume user consent for decisions where users might prefer to opt out, if they only knew the true nature of the decision they were being presented with, and were given an explicit choice.

Anchoring and framing

The anchoring effect is a cognitive bias in which individuals rely too heavily on the first piece of information they receive (the anchor) when making decisions. For example, Tversky and Kahneman (1974) conducted a study in which participants were asked to estimate the percentage of African countries in the United Nations.12 They were first given a random percentage (an anchor), then asked whether their estimate was higher or lower than that number, and finally asked to provide their own figure. The results showed that participants’ estimates were significantly influenced by the anchor they were given: those given a higher anchor estimated a higher number, and those given a lower anchor estimated a lower number. This insight is frequently used by marketers in an exploitative manner when pricing consumer products – for example, an initial price may be set artificially high so that a discount can be presented, giving a sense of value for money.

Framing is a similar cognitive bias where individuals rely too heavily on the way information is presented rather than on the underlying facts. In 1981, Tversky and Kahneman carried out an experiment in which participants were given a scenario relating to a hypothetical disease and asked to choose between two treatment programmes.13 Depending on their experimental group, the outcomes of the treatment programmes were framed either positively: ‘X people will be saved’; or negatively: ‘Y people will die’. They found that the framing had a pronounced effect on participants’ choices, even though the underlying facts were identical in both cases.

In the book Predictably Irrational, Dan Ariely reported a study that demonstrates the manipulative power of this type of cognitive bias.14 He created two different fictional designs of The Economist magazine’s subscription page, and presented them to 200 students (100 per design), asking them to pick their preferred subscription type. Unknown to the participants, one of the designs contained a trick (design A, below), intended to get participants to perceive the combined print and web subscription as better value. It involved providing an extra ‘decoy’ subscription: the print magazine on its own for the same price as the print and web subscription. As you can see in the figure below, the presence of the decoy print subscription in design A caused the print and web subscription to be selected much more frequently (84% selected) than when it was omitted in design B (32% selected).

In design A, the options presented to the user were: an Economist.com subscription at a cost of $59 (selected by 16/100 participants); a print subscription at $125 (selected by no one); and a print and web subscription at $125 (selected by 84/100). In design B, the print-only subscription was removed, and 68/100 chose the web subscription at $59 and 32/100 the print and web subscription at $125.
Dan Ariely’s Economist magazine study, where the presence of a decoy option influenced participants’ decision-making.

Social proof

Social proof is a cognitive bias in which individuals tend to conform to the behaviour of others. It’s also known as the ‘bandwagon effect’, ‘groupthink’ or the ‘herd effect’. To put it another way, if we see that numerous other people perceive something as valuable, we are likely to believe that they are correct. This is a shortcut that allows us to avoid the hard work of carrying out a critical evaluation of our own.

In 2014, a group of researchers working with HMRC tested the impact of social proof in a large-scale experiment.15 They designed five different tax bill reminder letters, each with a different message, shown in the table below. They sent these letters to a random selection of 100,000 UK taxpayers, and tracked the response rate (which they measured as a successful payment of the tax bill within 23 days).

Response rates to the different messages in the tax bill reminder letters:
  • Message 1: ‘Nine out of ten people pay their tax on time’ – response rate 1.3%.
  • Message 2: ‘Nine out of ten people in the UK pay their tax on time’ – response rate 2.1%.
  • Message 3: ‘Nine out of ten people in the UK pay their tax on time. You are currently in the very small minority of people who have not paid us yet.’ – response rate 5.1%.
  • Message 4: ‘Paying tax means we all gain from vital public services like the NHS, roads, and schools’ – response rate 1.6%.
  • Message 5: ‘Not paying tax means we all lose out on vital public services like the NHS, roads, and schools’ – response rate 1.6%.
Findings from HMRC tax letter study (Hallsworth et al., 2017).

As you can see, messages 1, 2 and 3 used different styles of social proof, while messages 4 and 5 did not. Message 3 employed the most aggressive social proof phrasing and it was by far the best performing. This was a big win for HMRC, and timely tax payments benefit the country as a whole. Of course, there’s nothing exploitative about this example – accurate and true social proof information is constructive and helpful. However, it can become exploitative when the information is tampered with in some way, and the user is purposefully not informed about what’s going on.

Online, social proof is typically presented as reviews, case studies, testimonials and data (ratings or ‘likes’). For example, consider a testimonial. If it is completely fabricated by the company, then that’s just false advertising – fraud, plain and simple. Similarly, if it’s provided by a real user but they were paid to write something positive, then that’s fraudulent too.

But what if it’s real, and the user was paid to give an honest and unbiased review? Incentivisation creates a grey area in which exploitative practices can be hidden. For example, what kind of payment was the reviewer given? Was the payment proportional to the service provided? Did the company imply that future employment as a reviewer might be conditional on a positive review this time? Did the reviewer give a positive review because of the incentive, even though they were not asked to? We all know from personal experience that if we receive a gift or a big discount we will be less critical of its shortcomings than if we had paid for it ourselves at full price. So, incentivised reviews should always be labelled with a disclosure – the user needs to be told that the review was paid for. However, the problem with disclosures is that they can be ambiguous. Take this Amazon UK review for an airfryer:16

UK review from Steven E (labelled ‘VINE VOICE’) on 17 July 2021. Five stars awarded, with the comment ‘Brilliant’ and label ‘Verified Purchase’.
Screenshot of a review on Amazon UK, featuring the label ‘VINE VOICE’

Next to the reviewer’s name is the label ‘VINE VOICE’. The user cannot click the label or hover over the label to reveal more information – and it’s not explained on the page. If the user searches for ‘vine voice’ in the product search box at the top of the page, nothing relevant appears in the search results. Buried deep in the Amazon UK website is a ‘help library’. From there, the user can search for ‘vine voice’ and find an explanation: that reviews with this label are paid reviews, because the reviewers were given the products for free. This is quite evidently not an adequate disclosure.

There are other ways that social proof can be manipulated. In the early days of the mobile app stores, a company called Appsfire pioneered a clever approach in a product for app developers called AppBooster.17 It involved showing users a ‘fake’ review page in which a rating and review were requested. If users gave a thumbs up with their review, they were asked to submit it to the App Store. If users gave a thumbs down, their review was transferred into an email support thread hidden away from the public – although none of this was explained to the user. You can see the steps below.

On the ‘thumbs up’ route, the user is invited to review the app.
The AppBooster ‘thumbs up’ user experience
On the ‘thumbs down’ route, no invitation to review is given.
The AppBooster ‘thumbs down’ user experience

As you can see, AppBooster was dishonest about the true purpose of the ‘thumbs up’ and ‘thumbs down’ buttons. A more honest approach would be to let users decide for themselves whether they want to leave a public App Store review or email the developer privately.
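To make the mechanics concrete, here’s a minimal sketch of review-gating logic of the kind described above, written in TypeScript with hypothetical function names and placeholder URLs – it is not Appsfire’s actual code.

```typescript
// Minimal sketch of 'review gating': positive sentiment is routed to the
// public app store, negative sentiment is quietly diverted to private support.
// The URLs, email address and names are hypothetical placeholders.

type Sentiment = "thumbsUp" | "thumbsDown";

function routeFeedback(sentiment: Sentiment, comment: string): void {
  if (sentiment === "thumbsUp") {
    // Happy users are sent on to leave a public review.
    window.location.href = "https://example.com/appstore-review-page"; // placeholder
  } else {
    // Unhappy users are diverted into a private email thread – their
    // feedback never reaches the public rating.
    const body = encodeURIComponent(comment);
    window.location.href = `mailto:support@example.com?subject=App%20feedback&body=${body}`;
  }
}

// Example usage:
routeFeedback("thumbsDown", "The app keeps crashing on startup.");
```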

Today, this sort of manipulative technique is forbidden in the Apple and Google app stores, so it’s not seen so often. Other approaches to manipulating social proof include delaying the publication of negative reviews (holding them in a queue longer than positive reviews), or simply showing them less prominently.
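As a sketch of how the delay approach could work – with hypothetical names and durations, the point being only that a publication rule can be keyed off the rating:

```typescript
// Minimal sketch of delaying negative reviews: the publication queue applies
// a much longer 'moderation' window to low ratings. Names and durations
// are hypothetical.

interface Review {
  stars: 1 | 2 | 3 | 4 | 5;
  text: string;
  submittedAt: Date;
}

const HOUR = 60 * 60 * 1000;

// Positive reviews go live almost immediately; negative ones sit in the
// queue for days, keeping the visible average artificially high.
function publishAfter(review: Review): Date {
  const delayMs = review.stars >= 4 ? 1 * HOUR : 72 * HOUR;
  return new Date(review.submittedAt.getTime() + delayMs);
}

console.log(publishAfter({ stars: 5, text: "Great!", submittedAt: new Date() }));
console.log(publishAfter({ stars: 1, text: "Awful.", submittedAt: new Date() }));
```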

Scarcity effect

Scarcity is a cognitive bias that describes the tendency for people to place greater value on resources they believe to be in limited supply. It typically influences decision-making by increasing impulsiveness and risk-taking, as people feel a sense of urgency to acquire the resource before it runs out.

One of the first and most famous studies on scarcity involves cookies – the delicious baked kind, not browser cookies. In 1975, researchers Worchel, Lee and Adewole recruited 146 undergraduate students and carried out a series of experiments.18 Participants were shown a jar of either ten cookies or two cookies, and were asked to rate how much they wanted to eat them. The results showed that participants in the two-cookie condition rated the cookies as more desirable and more attractive than those in the ten-cookie condition.

Then, to make matters more exciting, the researchers engaged in some theatrics during the experiment. An actor entered the room with another jar of either two or ten cookies. The actor explained that they needed to swap their jar with the one the participant was already looking at. This served to draw attention to the difference in the number of cookies, before and after. In the conditions where the number of cookies was reduced, participants rated those cookies as even more attractive. This just goes to show that scarcity is effective, and the effectiveness is intensified when a person’s attention is drawn to the scarcity.

In the real world, scarcity is a fact of life, and it can be very helpful to provide scarcity information to users. For example, if a user has specific dates they need to take as annual leave, it is important for them to know if their desired travel tickets are close to selling out; if they are, they’d better book them immediately or they’ll miss their chance.

While honest and true messages are entirely acceptable, the scarcity effect is so powerful that it leads businesses to create fake scarcity, or to manipulate the concept of scarcity using ambiguous language, categories and user interfaces. We’ll look into this further in part 3 of the book, on types of deceptive pattern.
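As a brief illustration of the difference between honest and fabricated scarcity – with hypothetical names, thresholds and flags – the honest version reads from real inventory, while the fake version is just a marketing switch:

```typescript
// Minimal sketch contrasting honest and fabricated scarcity messages.
// All names, thresholds and the config flag are hypothetical.

interface Product {
  name: string;
  unitsInStock: number;
}

// Honest: the message is derived from real inventory data.
function honestScarcityBadge(product: Product): string | null {
  return product.unitsInStock <= 3
    ? `Only ${product.unitsInStock} left in stock`
    : null;
}

// Deceptive: the message is hard-coded by marketing and ignores inventory.
const ALWAYS_SHOW_LOW_STOCK = true; // a pure marketing switch

function fakeScarcityBadge(_product: Product): string | null {
  return ALWAYS_SHOW_LOW_STOCK ? "Only 2 left in stock - order soon!" : null;
}

// The fake badge appears even when the warehouse is full.
console.log(fakeScarcityBadge({ name: "Air fryer", unitsInStock: 500 }));
```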

Sunk cost fallacy

The sunk cost fallacy is a phenomenon where individuals continue investing in an endeavour simply because they have already ‘sunk’ a significant amount of effort or resources into it. Research conducted by Arkes and Blumer in 1985 showed that individuals are more likely to persist in a task when they have invested in it, even if the investment is irretrievable and continuing the task is not rational.19 In one experiment, they gave 61 participants the following scenario. Before reading beyond the excerpt below, consider how you’d respond.

Assume that you have spent $100 on a ticket for a weekend ski trip to Michigan. Several weeks later you buy a $50 ticket for a weekend ski trip to Wisconsin. You think you will enjoy the Wisconsin ski trip more than the Michigan ski trip. As you are putting your just-purchased Wisconsin ski trip ticket in your wallet, you notice that the Michigan ski trip and the Wisconsin ski trip are for the same weekend! It’s too late to sell either ticket, and you cannot return either one. You must use one ticket and not the other. Which ski trip will you go on?

Given that all the money is now spent and cannot be retrieved, it would be irrational for you to consider the cost of the trips in making a choice. You’ve already worked out that you’ll enjoy the Wisconsin trip more, so the logical choice would be Wisconsin. But did the participants in the study all pick that option? No. In fact, only 46% of the respondents did. The sunk cost of the Michigan trip influenced the majority of respondents (54%).

The sunk cost fallacy is often employed in deceptive patterns: users are drawn in with an attractive offer and led through a long-winded series of steps that use up their time, attention and energy, only for the truth to be revealed at the end – the offer is less attractive than initially stated: the price is higher, for instance, or the terms less favourable. This will be explained further in part 3.

Reciprocity bias

Reciprocity is a cognitive bias in which people tend to feel obligated to return favours after they have been given something. It is sometimes described as a form of social currency: a favour received creates a sense of debt that people feel compelled to repay. In 2013, the UK government ran a large A/B test with over 1 million website visitors, in which they tested eight different designs.20 When people had finished renewing their vehicle tax on the gov.uk website, they were taken to a variant of this page:

Screenshot of variant 1 used by GOV.UK. The page title is ‘Thank you’, with a request below to ‘Please join the NHS Organ Donor Register’ followed by a ‘Join’ button and link to ‘find out more’.
A variant of the UK government vehicle tax completion page, as used in an A/B test.

The variant you can see above is the control (1); the most effective variant (7) is shown below. The two pages are identical, apart from the extra message in the second version: ‘If you needed an organ transplant would you have one? If so please help others.’

Screenshot of variant 7 used by GOV.UK. The page is identical to version 1, but with the addition of text: ‘If you needed an organ transplant would you have one? If so please help others.’
Another variant of the UK government vehicle tax completion page, featuring a persuasive element regarding the NHS Organ Donor Register.

You might expect the effect to be small, because the text looks so unremarkable – but you’d be wrong. With the first design (1), 2.3% of people went on to register as organ donors. With the second design (7), 3.2% did. That’s almost one percentage point higher – or, in relative terms, roughly 40% more registrations than the control condition.

In its report, the BIT (the UK government’s Behavioural Insights Team) refers to this design as tapping into the ‘reciprocity’ bias.21 In this case, it is applied in an honest and transparent manner, but it would be deceptive if it were based on lies or misleading statements, and it’s easy to imagine it being used for nefarious ends.
