Chapter 20: Forced action

Forced action is a category of deceptive pattern in which a business offers users something they want – but forces them to do something in return. This is a problem when the forced action runs contrary to a reasonable user’s expectations, or when it contradicts laws or regulations.

One of the most well-known and amusingly named types of forced action is ‘privacy zuckering’, named, of course, after Mark Zuckerberg.1 The user is tantalised by a service or product and in the process of trying to get it, they are tricked into sharing personal data with the business, and also tricked into giving the business permission to use that data for profit-making endeavours – like selling it, sharing it or using it for targeted advertising.

The issue here isn’t that data sharing, data sales or targeted advertising are necessarily bad – because they are legitimate business models when done correctly. The issue is the lack of the user’s consent for this to happen. It doesn’t count as consent if the user has been tricked or coerced. Consent must be ‘freely given, specific, informed and unambiguous’ – the exact language used, in fact, in the EU’s GDPR.

Here’s an example of forced action, observed by security researcher Brian Krebs.2 When a user installs Skype on their iPad, they are taken through a series of log-in steps. One of these steps requires the user to upload their personal address book from their iPad to Skype (a division of Microsoft). There is no option to decline (shown below), and the page does not explain that the next step – the iOS permissions dialog – will actually give them the choice to decline, or that declining will not affect their ability to use Skype.3

Screenshot of the forced action deceptive pattern in the Skype iPad app (2022).

If we look at a subsequent step (below), we can see that the designers certainly know how to design a clear opt-out when they want to.4 The options ‘Yes, contribute’ and ‘No, do not contribute’ are equally weighted, obvious and easy to understand. This further highlights the forced action and coercive wording on the ‘Find Contacts Easily’ step (above).

Screenshot of a dialog box in the Skype iPad app asking the user for permission to share diagnostic and usage data. As well as two links, ‘Privacy & Cookies’ and ‘Learn more’, there are two blue buttons with white text, clearly labelled with ‘Yes, contribute’ and ‘No, do not contribute’.
Screenshot of a different Skype iPad app dialog, in which opting out is clear and easy (2022).

So why is contact sharing something that users may want to opt out of? This is essentially a question about the right to privacy. The book Privacy’s Blueprint by Woodrow Hartzog (2018) covers this,5 including an analysis of the overlap between deceptive patterns and privacy, which Hartzog refers to as ‘the problem of extracted consent’, also known as ‘consent washing’ (Wylie, 2019).6

One of the issues is that it’s not just about the privacy of the user – it’s about the privacy of the people in their address book too. The contacts themselves may not want to give their permission. Their existence in the address book may be confidential (perhaps the user is a journalist or a lawyer), their labelling in the address book may be confidential (‘Alex, my secret lover’), and the graph data (that is, the connections between the address book holder and other contacts) may be confidential too. Finally, there is the matter of what Microsoft intends to do with the data once it is uploaded. The page says ‘Find Contacts Easily’, which sounds pleasant enough, but there’s also a ton of information in the ‘Privacy & Cookies’ and ‘Learn more’ pages. It is hard for a user to get to the bottom of what exactly is going to happen to their address book if they continue through this process. Such a concern is not unfounded; in 2019, Microsoft was criticised for exposing users’ contacts to the general public in a now-defunct feature called ‘People You May Know’.7
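To make the point about the iOS permissions dialog concrete, here is a minimal sketch – illustrative code, not Skype’s actual implementation – of how an iPad app requests address book access through Apple’s Contacts framework. The permission dialog is displayed by iOS itself, so the user always retains the option to decline at this step, whatever the app’s own preceding screens imply.

import Contacts

// Minimal, illustrative sketch of requesting address book access on iOS.
// iOS presents its own permission dialog the first time this is called; the app
// cannot suppress it, and the user can tap ‘Don’t Allow’ regardless of what a
// ‘Find Contacts Easily’ style screen says beforehand.
// (The app must also declare NSContactsUsageDescription in its Info.plist.)
let store = CNContactStore()
store.requestAccess(for: .contacts) { granted, error in
    if granted {
        // The app may now read the user's contacts.
        print("Contacts access granted")
    } else {
        // Declining should not affect the rest of the app.
        print("Contacts access declined: \(error?.localizedDescription ?? "user refused")")
    }
}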

Forced enrolment

Forced enrolment is a type of forced action that requires a user to register on a website or app before they’re allowed to do the thing they set out to do. Sometimes enrolment is inherently needed as part of the service. For example, Facebook wouldn’t be able to show you information about your friends and interests unless it knew who you were, and that requires a user account. However, some services don’t inherently require you to register an account. For example, e-commerce sites could let you check out as a guest, but they often don’t provide this option. This is because forcing you to register means they can capture your contact information and payment details – all of which are extremely valuable in turning you into a repeat customer.

Similarly, forced enrolment gives the business a choke point: a route all users are forced through. This gives them the opportunity to deploy other deceptive patterns to great effect. For example, businesses can use forced enrolment to extract consent for marketing and ad retargeting, to sell or share personal data with third parties, and to feed the resulting user data into ‘lookalike audience’ marketing tools, which make it possible to target non-customers who resemble their existing customers (for example, Google,8 and Facebook9).

Forced enrolment by LinkedIn

This case study of forced enrolment by LinkedIn involves a combination of different deceptive patterns. LinkedIn provides a personalised service. Like other social media and platforms that store personal data, it inherently requires users to register and sign in – it simply wouldn’t work otherwise. So LinkedIn isn’t at fault for employing forced enrolment in itself. However, in its earlier days, LinkedIn used the enrolment process to force various other actions on users.

In 2015, a class action lawsuit brought to light the deceptive patterns being used by LinkedIn. In a nutshell, LinkedIn was tricking users into uploading their contacts’ email addresses and into agreeing to send numerous emails to those individuals, inviting them to join LinkedIn. Some versions of those emails were presented as though they’d been written by the user.

Under California law, this was deemed illegal, and LinkedIn was instructed to pay a settlement of $13 million.10 Dan Schlosser provided a detailed walkthrough of the deceptive patterns used in this case in his 2015 article ‘LinkedIn Dark Patterns’.11 A point worth noting in particular is the second step in the sign-up process:

Screenshot of LinkedIn’s add connections step from 2015. In a blue box on a white background is a field requiring the user’s email address. Directly below this is a ‘Continue’ button, and beneath that is smaller text stating ‘We’ll import your address book to suggest connections and help you manage your contacts’. Outside the blue box is a text link ‘Skip this step’.
A deceptive pattern used in LinkedIn’s ‘Add connections’ step during the forced enrolment process (Schlosser, 2015).

Here, users were asked to enter their email address. As this is a normal request for most online services, it’s unlikely to have been closely scrutinised by users – typing our email address into a field is something we all do, day in, day out. However, in the words of Dan Schlosser, ‘It’s really a lie. This page is not for “adding your email address”, it’s for linking address books.’ If the user went on to complete this step without pressing the small, easy-to-miss ‘Skip this step’ link, LinkedIn gained access to their email contacts via an OAuth dialog.

Having extracted all the email addresses, LinkedIn then sent out numerous emails to those contacts, inviting them to join the platform. Overall, this forced enrolment can be considered a form of ‘friend spam’, where the product asks for your social media or email credentials for an allegedly benign purpose (e.g. finding friends who are already using that service), but then goes on to publish content or send out bulk messages using your account, typically impersonating you as the sender.12
