Chapter 16: Misdirection

Misdirection, like most types of deception, has been practised throughout human history. Whether by a pickpocket, a stage magician, or through the design of a user interface, the principles remain the same:1

‘Simply stated, misdirection is the psychological technique used to lead or manipulate the spectators’ and volunteers’ eyes and minds to see what the magician wants them to. Their attention is focused in one direction while the trick is worked elsewhere. Misdirection is not pointing and saying “Look at that!” and then doing something sneaky in the opposite direction. That is a crude form of misdirection that does not work well, nor does it leave a good impression with the audience. The techniques used by good magicians are subtle and sophisticated. So much so that the people in the audience never know they have been manipulated.’
— Eddie Joseph (1992) How to Pick Pockets for Fun and Profit

The confirmshaming deceptive pattern

The term ‘confirmshaming’ was popularised by an anonymous blogger who started the confirmshaming Tumblr blog in 2016.2

Confirmshaming is the use of emotional manipulation to misdirect users and push them into opting in to something (or out of something else).3 For example, the option to decline may be worded in such a way as to shame you into compliance – you feel so bad about saying no that you end up choosing yes. The most common use of confirmshaming is in mailing list dialogs that pop up when you arrive on a site or in response to some other trigger.

Confirmshaming by Sears

Here, the retailer Sears uses emotional manipulation and wordplay by labelling the marketing email opt-out button ‘No thanks, I hate free money’. This is an archetypal example of confirmshaming. After all, Sears is not offering free money. It is inviting the user to subscribe to a mailing list in exchange for a $10 discount on a purchase.4

Screenshot of Sears’ website offering a $10 discount if a user signs up for marketing. The option to accept the offer is a button labelled ‘Sign me up’, and to decline is a button labelled ‘No thanks, I hate free money’.
Confirmshaming user interface from Sears in 2017.

Confirmshaming by MyMedic

This example was discovered by Per Axbom. He referred to it as ‘the worst example of #confirmshaming I’ve been subjected to.’5 MyMedic sells first aid packs and medical supplies. In asking permission for its website to send you notifications, the opt-out link label is presented as ‘no, I don’t want to stay alive’. This is particularly troubling given that some of its target customers are people likely to be exposed to the trauma of accidents and death in their work.6

Screenshot of MyMedic website notifications interface. Under the text ‘MyMedic would like to send you notifications. You’ll be notified about the latest tips, sales, and discounts, so you and everyone you know can stay alive’ is a blue button labelled ‘Allow’ (to the right). To the left is the option to decline: grey text reading ‘no, I don’t want to stay alive’.
Confirmshaming user interface from MyMedic in August 2021.

The visual interference deceptive pattern

This deceptive pattern involves hiding content that a user might reasonably expect to be shown on the page. There are several ways to do this.

Visual interference by Trello: pushing users into the expensive ‘Business Class’ subscription

In January 2021, an anonymous Twitter user (@ohhellohellohii) pointed out a deceptive pattern being used by Trello in its sign-up user journey.7 If you’re not familiar with Trello, it’s a collaboration tool that lets teams of people view ‘cards’ of information on a digital board, often used by creative teams. At a glance, a Trello board shows you what’s being worked on and who’s doing the work.8

Screenshot from Twitter showing a tweet from user @ohhellohellohii, who tweets ‘@darkpatterns this one nearly got me. @trello really wants you to use their free trial… the start without is juuust below your view’. Below the tweet is a screenshot of the Trello website showing (on the left) the sign-up form with Business Class subscription, and (on the right) the same page but revealing the link to sign up without subscribing to Business Class below what appears to be the bottom of the page.
Twitter user @ohhellohellohii complains to Trello about their use of visual interference.

Trello is well known for having a free-to-use plan, which gives people the chance to trial and adopt the platform with a fairly generous allowance of projects and storage space. It’s one of the reasons why Trello became popular: people started using it for free, grew to love it, and then upgraded to paid plans. In 2017, Trello was bought by tech giant Atlassian for $425 million. However, in January 2021, the Trello product team rolled out a tweaked sign-up experience that appeared to be an attempt to push more fee-paying customers onto its most expensive plan, ‘Business Class’.

Having clicked an innocent-looking ‘Sign up’ button, users were shown a comparison table with three plans: ‘Free Team’, ‘Standard Team’ and ‘Business Class Team’. Instead of giving users the means to pick just one, the page presented a single huge green button that said ‘Start 30-day free trial’. To all intents and purposes, it appeared there was no other option. However, if users had the presence of mind to scroll down past what seemed to be the bottom of the page, they would find a small grey box labelled ‘Start without Business Class’ (pictured below).9

Screenshot from Trello website showing the sign-up link ‘Start without Business Class’ below what appears to be the bottom of the page.
Close-up of the Trello screenshot provided by Twitter user @ohhellohellohii.

There were a number of related tricks at work here. Let’s take them one by one.

First, there was the issue of hiding a button on the canvas below the bottom of the viewport (aka ‘below the fold’). If users’ browser windows were too small, they wouldn’t see the ‘Start without Business Class’ button at all. This is about user expectations. They’d have no reason to expect such an important button to be hidden way down there, because users trust businesses to build products in a predictable way.

Trello also used other visual tricks. The boundary of the white box appeared to signify the end of the main content area, and it’s a common convention to only place ancillary footer text below this sort of visual divider (like the copyright message and the legalese). In this example, the ‘Start without Business Class’ button was outside the visual bounds of the main content area, employing visual interference and violating user expectations again.

Finally, we have the difference in visual prominence of the buttons themselves. The ‘Start 30-day free trial’ button was colourful and high contrast, whereas the ‘Start without Business Class’ button was muted and low contrast. In fact, it didn’t look like a button at all, and it certainly didn’t invite users to click it.

At this point, it’s also worth saying that when a business exploits the limits of visual perception, it unfairly targets people with visual impairments, who may not have the visual acuity necessary to perceive small or low-contrast text. That said, people with more serious visual impairments may use assistive technologies like Apple VoiceOver, a screen reader application that reads pages aloud using a voice synthesiser, in which case this kind of visual deception doesn’t land.10
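
To make the contrast point more concrete, accessibility auditors typically measure text against the WCAG 2.1 contrast-ratio formula, which requires at least 4.5:1 for normal-sized text at level AA. The sketch below shows that calculation in TypeScript; the colour values are hypothetical stand-ins, not taken from Trello’s actual stylesheet.

```typescript
// Sketch of a WCAG 2.1 contrast-ratio check (the colour values below are
// hypothetical stand-ins, not taken from Trello's actual stylesheet).

type RGB = { r: number; g: number; b: number };

function hexToRgb(hex: string): RGB {
  const n = parseInt(hex.replace('#', ''), 16);
  return { r: (n >> 16) & 0xff, g: (n >> 8) & 0xff, b: n & 0xff };
}

// Relative luminance as defined by WCAG 2.1.
function luminance({ r, g, b }: RGB): number {
  const [R, G, B] = [r, g, b].map((v) => {
    const c = v / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// Contrast ratio ranges from 1:1 (identical colours) to 21:1 (black on white).
function contrastRatio(foreground: string, background: string): number {
  const l1 = luminance(hexToRgb(foreground));
  const l2 = luminance(hexToRgb(background));
  const [lighter, darker] = l1 > l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// A muted grey label on a pale grey panel: hypothetical values.
const ratio = contrastRatio('#a0a4a8', '#f4f5f7');
console.log(ratio.toFixed(2)); // roughly 2.3, well below the 4.5:1 AA minimum
console.log(ratio >= 4.5 ? 'passes AA' : 'fails AA for normal-sized text');
```

A check like this doesn’t prove intent, but it does make the prominence gap between the two Trello buttons measurable rather than a matter of opinion.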

It’s not clear how long Trello kept the offending page live. It may have been an A/B test that was shown to a limited number of people before being discarded. As I write, the sign-up process now has an altogether more honest design:11

Screenshot from Trello website now more clearly telling users they can opt out of the trial of Trello Premium by using a visible button clearly labelled ‘Skip’.
A more honest upsell page by Trello, where the option to opt out of the premium trial (‘Skip’) is shown relatively prominently.

Visual interference by YouTube: a near-invisible close button

‘Freemium’ is a portmanteau, a word that smashes two terms together to create a new, if somewhat clumsy, term. If a service is freemium – a combination of ‘free’ and ‘premium’ – then it’s offering a two-tier pricing strategy to consumers. One approach to freemium involves letting users have a free account indefinitely with no contract, then persuading those users to upgrade to a paid premium account that has extra features. It’s become a commonplace online commercial strategy, because – well, who doesn’t want something for free? By having a huge free user base, the business gets an audience on whom it can test various persuasive tactics.

In January 2021, Twitter user @bigolslabomeat pointed out that YouTube was deploying a deceptive pattern designed to get users to sign up for a premium free trial.12 As you’ll see, users weren’t given an obvious means to continue with the free tier product. Rather, they needed to notice a tiny, low-contrast X at the top right of the page, and deduce that tapping the X would then effectively reject the offer of the free trial. A classic example of visual interference.13

Screenshot from Twitter showing a tweet from user @bigolslabomeat, who tweets ‘Getting desperate now? This came up when I opened the @YouTube app. I don’t want premium, I don’t want a trial. I’ve said that at least a hundred times so far, now this without even a close button. Talk about @darkpatterns’. Beneath is a screenshot from YouTube telling users ‘Don’t miss the perks of Premium’ with only a blue button labelled ‘1 month free’. The photo used by YouTube, of a young woman looking at YouTube on a device, is positioned such that her dark hair obscures the X used to close the pop-up (top right) almost completely.
Screenshot of YouTube employing visual interference. The close button X is visually obscured by the model’s hair.

Visual interference by Tesla: non-refundable accidental app purchases

At the end of 2019, Tesla introduced a new feature to its mobile app. Put simply, the updated app let Tesla car owners buy upgrades for their vehicles, such as an upgrade to the Autopilot software that would unlock ‘Full Self-Driving’ capabilities.14 At over $4,000, these were significant add-ons.

After this feature was introduced, a number of Tesla car owners reported that they’d purchased an upgrade by mistake and that Tesla was refusing to provide any refunds. Journalist Ted Stein provided an analysis, describing the nature of the techniques used.15 On the payment screen, the wording ‘Upgrades cannot be refunded’ appeared in small, very low-contrast dark grey text on a black background, effectively hiding it from users.16

Screenshot from Tesla’s mobile app. Beneath the form asking users to purchase new upgrades is some small dark grey text informing users that ‘Upgrades cannot be refunded’. Against the black background, the text is barely visible.
Tesla’s mobile app refusing refunds on accidental purchases (Stein, 2020).

Tesla customer (and well-known author) Nassim Nicholas Taleb ran into this problem in January 2020 and he asked for a refund. He received a denial from Tesla’s customer support, which he published on Twitter.17 They told him ‘there are not refunds available for software purchases. This would be similar to the situation of paying for an addition to a house, deciding you don’t like it, and then requesting a refund from the contractor.’ Taleb replied:

‘The purchase was non-intentional. I unintentionally hit the buy button while the app was in my pocket and do not know of any app that makes you do a purchase of $4,333 without confirmation/password or something of the sort. […] Even Amazon makes it hard to buy a $6.99 Kindle book [and] has reversed purchases made in error […] I did not TRY your software, and I DID NOT USE your stupid software […] You have a flawed app.’

If everything that Taleb asserts in his email is true, then as well as visual interference (shown in the figure above), Tesla also appears to have used the ‘obstruction’ deceptive pattern, making it hard to cancel.

At some point after this complaint was made, Tesla reversed its position and added a 48-hour cancellation window for this type of in-app purchase. This is shown in the figure below, though you may need a magnifying glass to see it, as it is written in dark grey text on a black background.18

The trick wording deceptive pattern

The trick wording deceptive pattern is employed to confuse or mislead users into taking actions that they would not have taken had they fully understood the situation at the time. This manipulation is achieved through ambiguous phrasing, double negatives, or strategically placed information within sentences or user interfaces.

Trick wording by the Trump presidential campaign

In this example, the Trump campaign used a variety of different deceptive patterns, trick wording being just one of them. In March 2021, I was contacted by Shane Goldmacher, a reporter from the New York Times. He’d found some examples of serious deceptive patterns in the Trump campaign donation portal and wanted to talk them through before going public with his conclusions. Goldmacher’s article provides a detailed account, and I’ll summarise it here.19 Users were typically driven to the donation portal via email campaigns. Over time, the Trump campaign dialled up the severity of its deceptive patterns. They started by using a preselected checkbox that turned a one-off donation into a recurring donation, as shown below:20

Screenshot from Trump 2020 presidential campaign website. Over a photograph featuring President Trump and Vice President Pence is a request for donations to the campaign. As well as suggested donation amounts ranging from $25 to $2,800 is a checkbox labelled ‘Make this a monthly recurring donation’. The checkbox is ticked by default.
Trump campaign deceptive pattern version 1, featuring a preselected checkbox for ‘Make this a monthly recurring donation’ (in this case the user has entered $150 as their chosen donation amount).

This approach makes use of the default effect cognitive bias, which can be considered a variation of the sneaking deceptive pattern. There are lots of reasons why this is a powerful tool. First, there’s simply the matter of awareness – users have to notice the box, then they have to read what’s in the box, and then they have to work out what it all means. If the user doesn’t invest time and effort in those interactions, they’ll scroll past the preselected checkbox completely unaware of the implications.

There are other, more subtle psychological effects. For example, the preselected checkbox may cause some people to feel social pressure. In other words, they may feel that other people like them would leave the box ticked, so they should too (the social proof cognitive bias).

In any case, the Trump campaign discovered that this preselected checkbox worked well, so a few weeks later they decided to add another preselected checkbox next to it. Goldmacher’s investigation revealed that they called this the ‘money bomb’, which shows they knew the implications of what they were doing. Version 2 is shown below. The first preselected checkbox worked the same way as in version 1, creating a recurring payment every month. The second preselected checkbox served to take an additional sum of money – the same amount again, for ‘Trump’s birthday’.

Screenshot from Trump 2020 presidential campaign website. Over a photograph featuring President Trump and Vice President Pence is a request for donations to the campaign. As well as suggested donation amounts ranging from $25 to $2,800 is a pre-ticked checkbox labelled ‘Make this a monthly recurring donation’. There is a second pre-ticked checkbox to take the same amount again on the occasion of Trump’s birthday.
Trump campaign deceptive pattern version 2, with two preselected checkboxes.

This design was so effective that they decided to dial up the severity even more. In the example above, you can see the top checkbox sets up a monthly donation. They decided to make this weekly, as you can see below. And the second checkbox was updated to indicate a donation of $100 extra (even if the user’s chosen recurring donation value was much lower than that).

Screenshot from Trump 2020 presidential campaign website. The pre-ticked checkboxes now request ‘a weekly recurring donation’ and an additional donation of $100.
Trump campaign deceptive pattern version 3, using two preselected checkboxes.
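
To see how quickly the version 3 defaults compound, here is a hypothetical worked example (the amounts and timings are invented for illustration, not taken from Goldmacher’s reporting): a donor who types in $75 intending a single gift, but who doesn’t untick either box, is charged every week plus the extra $100.

```typescript
// Hypothetical illustration of the version-3 defaults (amounts invented).
// First pre-ticked box: repeat the chosen amount every week.
// Second pre-ticked box: add a one-off $100 'money bomb'.

function totalCharged(chosenAmount: number, weeksUntilElection: number): number {
  const weeklyRecurring = chosenAmount * weeksUntilElection; // first checkbox left ticked
  const moneyBomb = 100;                                     // second checkbox left ticked
  return weeklyRecurring + moneyBomb;
}

// A donor intending a single $75 gift, eight weeks out from the election:
console.log(totalCharged(75, 8)); // 700, nearly ten times what they meant to give
```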

Believe it or not, they took it to yet another level in version 4 of this design. They introduced trick wording and visual interference to make the purpose of these preselected checkboxes less obvious. As you can see below, the bold text doesn’t refer to the charges at all; that information comes at the end of each label, in a thinner, less obvious font that readers might easily skip over.

Screenshot from Trump 2020 presidential campaign website. The pre-ticked checkboxes contain long labels in bold text with some in capital letters for further emphasis. The text concerning the ‘weekly recurring donation’ and the additional $100 comes at the end of these labels in a smaller text size.
Trump campaign deceptive pattern version 4, featuring two preselected checkboxes with severe visual interference.

One of the most remarkable aspects of Goldmacher’s investigation was his reconstruction of the timeline. He pinpointed when each of these deceptive changes was introduced and plotted them against the refund rates demanded by Trump campaign donors, compared with donors to the Biden campaign. Overall, the Trump campaign had to issue $122 million in refunds, while the Biden campaign issued only $21 million.

Chart showing a timeline for refunds to the Biden and Trump presidential campaigns of 2020, for the whole of that year. Refunds to Biden donors remain at a steady level of between 2 and 3% throughout the year. Refunds to Trump donors start at around 1% in January and begin to climb slowly from March 2020, when ‘the pre-filled checkbox first appeared on Trump’s online donation form’. By June, the second pre-filled checkbox had been added, and the refunds begin to rise rapidly to around 6% in August. ‘By September, the Trump operation began to have online donations recur weekly by default’ and refunds continued to rise, reaching 10% by December 2020.
Chart by Eleanor Lutz and Rachel Shorey from the New York Times article ‘How Trump Steered Supporters Into Unwitting Donations’.

It’s safe to say that only a small proportion of the people caught out by these deceptive patterns would have gone through the whole process of realising what had happened, deciding to take action, investing the time, and successfully getting a refund. The rest would have simply been charged, suffered a financial loss, and carried on with their lives. Goldmacher interviewed one donor, a 78-year-old Californian, Victor Amelino, who’d made an online donation of $990 that recurred seven more times before he realised. In total, he contributed almost $8,000 to the Trump campaign. ‘Bandits!’ he said. ‘I’m retired. I can’t afford to pay all that damn money.’

Trick wording by Ryanair: a hidden insurance opt-out

For a few years, from roughly 2010 to 2013, Ryanair used a combination of deceptive patterns that included trick wording. The screenshot below sums it up perfectly.21 Here, the airline makes it look like travel insurance is mandatory when you’re buying a flight. However, there’s a hidden way of opting out. In the dropdown box labelled ‘Please select a country of residence’, the user can find the option ‘Don’t insure me’ listed between the two countries Denmark and Finland. Many users won’t expect this highly unusual approach to opting out, and may end up buying insurance unaware they had a choice not to. Arguably this could be described as using the visual interference deceptive pattern as well as trick wording, since the page layout and form field style contribute to the nature of the misdirection.

Screenshot from Ryanair’s website showing a form adding travel insurance to the cost of a flight. In the dropdown for users to select their country of residence, the option ‘Don’t Insure Me’ has been listed between Denmark and Finland in the alphabetical list of countries.
Trick wording used by Ryanair, making it hard to opt out of an insurance upsell.

In 2015, this deceptive pattern led to Ryanair being fined €850,000 by the Italian competition authority (AGCM).22 Despite this fine, Ryanair have continued to use various deceptive patterns, leading the Norwegian Consumer Council to send them a letter in 2022, asking them to stop.23

Pressured selling

Pressured selling involves putting pressure on users to complete a purchase by employing a combination of deceptive tricks and cognitive biases, including scarcity and anchoring.

Pressured selling by Booking.com

Pressured selling and other deceptive patterns were used so extensively by hotel booking platforms in the 2010s that the entire industry ended up being scrutinised by regulators in a number of different jurisdictions. For example, in 2017 the UK’s Competition and Markets Authority (CMA) carried out an investigation into Booking.com, Hotels.com, Expedia, ebookers.com, Agoda and Trivago, among others.24 The outcome was a new set of strict guidelines to ensure that they stayed on the right side of the existing legislation.25 This is interesting because it shows how there can be a gap between what the law says and how businesses choose to interpret it. The new guidelines were effectively intended to plug that gap and make it very clear what is and isn’t allowed.

Let’s take a look at Booking.com in 2017 to see the kinds of pressured selling techniques it was using at the time. The screenshot below is from an article by software developer Roman Cheplyaka titled ‘How Booking.com manipulates you’:26

Screenshot from Booking.com showing the options for a double room in a hotel. The text includes various ways users were manipulated into booking, such as a notice in white text on a red background that ‘Someone just booked this’; a notice in red text on a pale background that ‘This is the cheapest price you’ve seen’, even though this is the only price the user has seen; prices for the booking that have been struck through, indicating that the price has been reduced.
Screenshot from Booking.com in 2017, featuring various forms of pressure selling.

This is what you would have seen on a Booking.com hotel page in 2017. Starting from the top left, the red box with a small alarm clock icon says, ‘Someone just booked this’. According to Cheplyaka, this is animated: ‘it pops up one or two seconds later, making it seem like a realtime notification – an impression reinforced by the alarm clock icon. To be clear, it is not realtime, and there is no reason to delay its display other than to trick you’. Cheplyaka subsequently found that if he hovered his cursor over the sentence long enough, it revealed another sentence, in this case ‘Last booked: 4 hours ago’. So the word ‘just’ and the timed appearance were really stretching the truth.
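
To illustrate the mechanism Cheplyaka describes, here is a minimal sketch of how a delayed banner like this could be wired up. This is hypothetical code, not Booking.com’s, and the element names are invented; the point is that the information is already on the page when it loads, and only its display is postponed to mimic a live event.

```typescript
// Hypothetical sketch of a staged 'urgency' banner (not Booking.com's actual code).
// The booking information arrives with the page; delaying the render by a second
// or two is what makes it read like a realtime notification.

function showBookedBanner(lastBooked: string): void {
  const STAGED_DELAY_MS = 1500; // serves no purpose except to mimic a live event

  setTimeout(() => {
    const banner = document.createElement('div');
    banner.className = 'urgency-banner';
    banner.textContent = 'Someone just booked this';
    banner.title = `Last booked: ${lastBooked}`; // the truthful detail, hidden behind a hover
    document.querySelector('.room-listing')?.appendChild(banner);
  }, STAGED_DELAY_MS);
}

// The 'Last booked: 4 hours ago' figure is known up front; only its display is delayed.
showBookedBanner('4 hours ago');
```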

Also, if we look at the rest of the sentence more closely, what does it really mean? ‘Someone just booked this’. Does it mean someone just booked this room within your date range, thereby reducing the availability of similar rooms for you? Or did they book it on completely different dates, thereby making the warning completely irrelevant? We don’t know for sure, but after their investigation, the CMA announced that Booking.com and others agreed to:

‘not giving a false impression of the availability or popularity of a hotel or rushing customers into making a booking decision based on incomplete information. For example, when highlighting that other customers are looking at the same hotel as you, making it clear they may be searching for different dates’ (emphasis added)27

We can’t be sure, but it sounds like the hotel booking sites the CMA investigated were caught red-handed doing this – after all, why else would this new rule have been created? There are a few other statements in this screenshot that could be using the same trick: ‘In high demand – only 3 rooms left on our site!’; ‘33 other people looking now, according to our Booking.com travel scientists’; and ‘Last chance! Only 1 room left on our site!’

Let’s move on. If you gaze across to the right, you’ll see a box with the text ‘Jackpot! This is the cheapest price you’ve seen in London for your dates!’ Cheplyaka points out that this statement is tautological: it’s the first price the user has been shown, so it’s a sample of one. By being the only price you’ve seen in London for your dates, it is the cheapest. It’s also the most expensive.

Now let’s look at the struck-through prices. For all the rooms, it looks like they’ve been discounted twice. For example, the top room started at $189, then dropped to $175, and now it’s at $158. But Cheplyaka found that if he hovered his cursor over the price long enough, this appeared:

Screenshot from Booking.com showing text that appears when the user hovers their mouse over struck-through prices indicating a discount. The text states that the apparent discounts are not based on the booking dates specified by the user, but on a month-long window either side of the booking dates.
A screenshot of Booking.com from 2017 in which key information is hidden and is only revealed when the user happens to hover their cursor over a small text link.

It’s very wordy and you’d be forgiven for not reading the whole thing, not least because if you move your mouse by a fraction, it’ll disappear. It explains that the apparent discount is, once again, not based on your specific dates! It’s based on a month-long window around your dates. So if you didn’t see this pop-up, you’d be none the wiser and would think you’re eligible for a special discount on your dates when, in fact, you’re not.
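
One plausible reading of that disclosure, sketched below with invented numbers, is that the struck-through ‘was’ price is simply drawn from the wider month-long window, so ordinary price fluctuation across that window can be dressed up as a discount on your specific dates. This is a hypothetical reconstruction, not Booking.com’s actual logic.

```typescript
// Hypothetical sketch of the comparison described in the tooltip (invented numbers,
// not Booking.com's actual code): the 'was' price comes from a month-long window
// around the user's dates rather than from the dates themselves.

interface DailyRate {
  date: string;
  price: number;
}

function struckThroughPrice(windowRates: DailyRate[], priceForYourDates: number): number | null {
  // Take the highest nightly rate anywhere in the wider window...
  const peak = Math.max(...windowRates.map((r) => r.price));
  // ...and present it as the 'original' price if it beats the price for the actual stay.
  return peak > priceForYourDates ? peak : null;
}

const windowRates: DailyRate[] = [
  { date: '2017-06-01', price: 189 },
  { date: '2017-06-14', price: 158 }, // the user's actual dates
  { date: '2017-06-30', price: 162 },
];

console.log(struckThroughPrice(windowRates, 158)); // 189, shown struck through next to $158
```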

As well as pressured selling, the deceptive patterns we’ve looked at here can also be described as trick wording (described in the previous section). Deceptive patterns often overlap like this. It’s reminiscent of evil gods and genies from folk stories that use wordplay and pedantry to inflict horrible consequences on those who beg wishes from them. When the Greek goddess Eos asked Zeus to give her lover Tithonus immortality, he granted her wish, noting that she forgot to ask for eternal youth. Tithonus lived forever, but became so old and wrinkled that Eos eventually abandoned him. The lesson here is that if you want immortality, don’t ask Zeus – or any popular hotel booking websites for that matter.
