Chapter 25: Our attempts so far have not been successful

Codes of ethics

Today, the ACM,1 AIGA,2 APA,3 UXPA4 and other industry bodies all have codes of ethics that – either directly or indirectly – forbid deceptive patterns. These codes provide standards to aim for, but we are nowhere near meeting them: despite our best efforts, the tech industry generally ignores them.

In Europe, article 5 of the Unfair Commercial Practices Directive (UCPD) states that a commercial practice is unfair if it distorts the behaviour of consumers and is ‘contrary to the requirements of professional diligence’.5 The UCPD Guidance goes on to state that the notion of professional diligence ‘may include principles derived from national and international standards and codes of conduct’.6 This means that codes of ethics may become a powerful instrument in preventing deceptive patterns in the EU. However, this hasn’t yet been tested in practice, so it’s hard to say how it will play out. If one day a judicial decision interprets codes of ethics as ‘the requirements of professional diligence’ under UCPD article 5, it will be a game changer: codes of ethics will suddenly become enormously important in the fight against deceptive patterns.

Today, businesses often use codes of ethics as a form of ‘ethics washing’. It’s commonplace to see the message ‘We care about your privacy’ on a user interface, immediately before the business tricks you into letting it track you and sell your personal data.

Education

Education plays a vital role in raising awareness and equipping people with the knowledge to recognize and push back against deceptive patterns. In higher education for design and HCI, for example, user-centred design, persuasion and design ethics have been standard parts of the curriculum for years. Yet given the proliferation of deceptive patterns today, it’s safe to say that education alone hasn’t stopped them: the economic incentives for businesses to keep using deceptive patterns are just too strong. In other words, education is necessary but not sufficient. Of course we need education, but we need something more if we’re going to solve the problem of deceptive patterns.

Bright patterns

A number of voices have suggested responding to the problem of deceptive patterns with bright patterns7 or fair patterns8 – design patterns that are fair towards users. The idea is to fight against deceptive patterns by creating the opposite type of pattern, and then sharing these recommendations as widely as possible.

The sad fact is that we already have a wealth of materials that teach designers and business owners how to run user-centred or human-centred design processes – processes that result in helpful, usable and useful design patterns that assist users in achieving their goals. Hundreds of university courses, bootcamps and textbooks teach these concepts. There’s even an ISO standard on the topic.9 On their own, bright patterns are just more educational materials that appeal to the reader’s moral code to do the right thing – which, so far, hasn’t worked.

It’s tempting to respond by suggesting that bright patterns should be mandatory. The problem is the almost infinite variety of design possibilities for any given problem. Consider all the possible configurations of words, images, layouts, buttons and interactive components a design team might want to use – and then consider all the competing goals they have to satisfy. They have business objectives, various internal stakeholders asking for things, and of course they have to make the thing useful, usable and appealing for end users, otherwise it won’t be successfully adopted and used. Then, once the product goes live, it gets iterated: data is collected from research and analytics, giving clues about how to improve the product. Designs evolve. They’re improved, added to, tweaked and trimmed. Design in the digital age is never done, and innovation is an ongoing process.

If you forced the tech industry to use a mandatory bright pattern, you’d stop all of that from happening – you’d kill innovation and improvement overnight. So a legally mandated bright pattern should only be deployed in a very narrow situation: a key juncture where there’s a very high risk of harm to users. In fact, this is not a new idea. Think back to your most recent major financial product (e.g. an investment, loan or mortgage): you were probably given a standardised document that was designed according to regulatory requirements. These documents might not look like much, but they’re intended to be bright patterns. The business uses them because it’s legally required to, and they help prevent the business from bamboozling you into signing a contract that’s against your best interests.

So bright patterns aren’t quite as transformative a concept as they initially seem. They’re a useful educational tool and they’re already mandatory in certain narrow situations, but to stop deceptive patterns we need to go deeper, and look more closely at the business processes and practices that cause them to occur.

Naming and shaming

Naming and shaming is useful because it can lead to legal consequences. For example, if hundreds of users complain about a provider, this can draw attention from consumer protection groups, regulators and law firms, which can lead to enforcement actions or class action lawsuits.

One of the shortcomings of naming and shaming is that many users don’t do it, so the number of complaints can be far smaller than the number of people who have suffered negative consequences. Deceptive patterns are usually designed to be subtle, which means many users don’t even know they’ve been harmed (perhaps losing a few dollars on an add-on they didn’t intend to buy), so they’re not aware they have anything to complain about. In other words, a carefully designed deceptive pattern may never get named and shamed, precisely because it was so well hidden. Also, not everyone wants to speak out publicly – some people are shy or introverted. They might blame themselves for ‘being stupid’ enough to be taken in, and may feel a sense of shame or embarrassment. If they complain privately to the business, the world never finds out about it (unless the business is forced to reveal it in a legal case). Others might intend to name and shame but never find the time: if the consequences are minor (e.g. just a few dollars lost), they might feel irked, but not enough to justify the effort of complaining.

All of this means that naming and shaming seems to be effective only for the most noticeable deceptive patterns. It’s reasonable to assume that many deceptive patterns never get named and shamed. In summary, naming and shaming is useful, but it’s not powerful enough – we need something more.

Industry self-regulation

Supporters of industry self-regulation tend to claim that it is faster and more flexible than government regulation, takes advantage of contemporary industry expertise, and reduces the administrative burden on governments. Anyone who has been on the receiving end of tedious government bureaucracy knows what bad regulation feels like, so this perspective has some appeal. However, the idea of self-regulation is popular among industry lobbyists because it leaves the door open for superficial gestures and performative compliance, while whatever profitable practices went before carry on regardless.

A good example of this is IAB Europe’s Transparency and Consent Framework (TCF), introduced in 2017.10 IAB Europe is an industry body made up of hundreds of registered companies, advertising vendors and consent management platforms (CMPs) that stand to profit from the ability to track users and show targeted advertisements. At the time, the advertising industry was facing a huge challenge in working out how to deal with the new ePrivacy Directive and GDPR. Put simply, these laws were going to hurt the industry’s profitability because they required users to explicitly opt in to tracking.11

In response, IAB Europe conceived the TCF: a voluntary industry standard covering various aspects of ad-tech, including user consent. Numerous CMPs built the TCF requirements into their own user interfaces, offering them as ‘consent as a service’ solutions, and these were used at great scale by thousands of website and app owners seeking to ensure legal compliance.
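
To make this concrete, here’s a minimal sketch of what that ‘consent as a service’ plumbing looks like from an ad-tech vendor’s point of view, assuming a TCF v2-compliant CMP is present on the page; the purpose ID and vendor ID below are illustrative. The CMP renders the banner and records the user’s choices, and any script on the page can then read the resulting consent signal through the standard __tcfapi JavaScript function.

```typescript
// A minimal sketch of reading the consent signal from a TCF
// v2-compliant CMP. The purpose ID (1 = 'Store and/or access
// information on a device') and the vendor ID (123) are illustrative.

declare global {
  interface Window {
    __tcfapi?: (
      command: string,
      version: number,
      callback: (tcData: any, success: boolean) => void
    ) => void;
  }
}

function checkConsent(onResult: (canTrack: boolean) => void): void {
  if (!window.__tcfapi) {
    onResult(false); // no CMP on the page: assume no consent
    return;
  }
  // The listener fires when stored consent data loads, and again
  // whenever the user interacts with the consent banner.
  window.__tcfapi('addEventListener', 2, (tcData, success) => {
    if (!success) return;
    if (tcData.eventStatus !== 'tcloaded' &&
        tcData.eventStatus !== 'useractioncomplete') return;
    const purposeOk = tcData.purpose?.consents?.[1] === true;
    const vendorOk = tcData.vendor?.consents?.[123] === true;
    onResult(purposeOk && vendorOk);
  });
}

// Example: only load tracking code if the user actually consented.
checkConsent((canTrack) => {
  if (canTrack) {
    // load ads / tracking here
  }
});

export {};
```

Note that the standard mostly governs this signalling plumbing; the design of the banner itself was left largely to each CMP – which is where the deceptive patterns come in.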

So how did the CMPs manage to get users to opt in to tracking under the TCF? Simple: they used deceptive patterns extensively. In a research paper on CMPs, Cristiana Santos et al. (2021) explain the motivation:12

‘The primary service offered by CMPs is to ensure legal compliance […] However, the advertising industry is also incentivised to strive for maximum consent rates. […] For example, Quantcast describes their tool as able to “Protect and maximize ad revenue while supporting compliance with data protection laws” […]. OneTrust advertises that its CMP can “optimize consent rates while ensuring compliance”, and “leverage A/B testing to maximize engagement, opt-ins and ad revenue”.’
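
To see what ‘A/B testing to maximize engagement, opt-ins and ad revenue’ means in practice, here is a deliberately simplified, hypothetical sketch – the variant names and the recordEvent() analytics call are invented for illustration. Users are randomly split between two banner designs, and whichever design produces more opt-ins wins, regardless of which one better respects users’ actual preferences.

```typescript
// A hypothetical sketch of a CMP A/B testing two consent banner
// designs to maximise opt-in rates. Variant names and recordEvent()
// are invented for illustration.

type Variant = 'A_symmetric_buttons' | 'B_buried_reject';

// Deterministic 50/50 split on a simple hash of the user ID, so each
// user always sees the same banner design.
function assignVariant(userId: string): Variant {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % 2 === 0 ? 'A_symmetric_buttons' : 'B_buried_reject';
}

// Stand-in for a real analytics call: a production CMP would report
// impressions and outcomes so the two designs can be compared.
function recordEvent(event: string, variant: Variant): void {
  console.log(`analytics: ${event} (${variant})`);
}

// Called when the user responds to the banner. Over thousands of
// users, the variant with the higher opt-in rate wins - which is how
// asymmetric, hard-to-refuse designs get selected for.
function onBannerResponse(userId: string, optedIn: boolean): void {
  const variant = assignVariant(userId);
  recordEvent(optedIn ? 'consent_optin' : 'consent_reject', variant);
}

onBannerResponse('user-42', true);
```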

You can see the deceptive patterns at work in the series of steps shown below, captured by Soe et al. (2020) in their research paper ‘Circumvention by design’.13 The example depicts a typical cookie wall user interface designed to comply with TCF v1.0 (the first version of the standard). The option to ‘agree’ to tracking is a prominent one-click action on every step, whereas a user who wants to disagree has to click ‘Learn More’ on the first step, then ‘Manage partners’ on the second, and then, on the third, scroll through a long list of partners, clicking to expand each one and then clicking to opt out of each, one by one.

Screenshot of the huffpost.com cookie consent user interface, under the heading ‘Your data, your experience’. The user is given information on allowing the company to use cookies to access their device and data to provide personalised advertisements. At the bottom of the screen are a prominent green button with white text saying ‘I agree’ and a white button (the same as the background colour) saying ‘Learn More’.
Step 1 of 3. The huffpost.com cookie consent user interface, provided by Yahoo’s CMP. It employs various deceptive patterns while being compliant with IAB Europe’s TCF 1.0 standard (Soe et al., 2020).
Screenshot of the huffpost.com cookie consent user interface, under the heading ‘How Verizon Media and its partners collect and use data’. The user is shown text stating that ‘To continue to use Yahoo and other Verizon Media sites and apps, we need you to let us set cookies and similar technologies to collect your data.’ At the bottom of the screen are shown a prominent green button labelled ‘I agree’ and a white button labelled ‘Manage partners’.
Step 2 of 3. Having clicked ‘Learn More’, the user is still unable to directly reject consent, while they can easily accept in one click via the big green button labelled ‘I agree’ (Soe et al., 2020).
Screenshot of the huffpost.com cookie consent interface, under the heading ‘See how partners use your data’. Three tabs labelled ‘Foundational partners’, ‘IAB partners’ and ‘Google’ allow users to show lists of partner organizations that will use the data collected. One partner (out of hundreds present in the interface but not shown in the screenshot) is named, with a toggle to indicate the user’s preference. Below the list (which the scrollbar indicates is very long) is the text ‘To continue to the site, select “I agree” to allow Verizon Media and its partners to set cookies and similar technologies to use your data. Your partner settings will be saved’, and a button labelled ‘I agree’.
Step 3 of 3. To opt out, the user must scroll through hundreds of partners, clicking each one to expand it, then clicking a toggle to opt out, one by one. All the while, they are encouraged to change their mind and skip the ordeal by consenting via the large green ‘I agree’ button at the bottom of the window (Soe et al., 2020).
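
To put that asymmetry into numbers, here’s a back-of-the-envelope sketch. The figure of 400 partners is illustrative – the interface only tells us there were ‘hundreds’ – but the shape of the arithmetic is what matters.

```typescript
// Back-of-the-envelope arithmetic for the TCF v1.0 flow shown above.
// The partner count is illustrative; the real list ran to hundreds.

const PARTNERS = 400;

// Agreeing: one click on the prominent 'I agree' button, on any step.
const clicksToAgree = 1;

// Refusing: 'Learn More', then 'Manage partners', then one click to
// expand each partner plus one click on its opt-out toggle.
const clicksToRefuse = 2 + PARTNERS * 2;

console.log(`Clicks to agree:  ${clicksToAgree}`);  // 1
console.log(`Clicks to refuse: ${clicksToRefuse}`); // 802
```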

If you think this example is outrageous, you’re not alone. The Belgian Data Protection Authority eventually found that it breached the GDPR, fining IAB Europe €250,000 and requiring it to delete any illegally gathered data.14 The consumer rights organisation NOYB also filed over 700 complaints about the TCF and similar designs.

In response, IAB Europe updated the TCF to version 2 in an effort to improve compliance. Even today, privacy researchers are still finding deceptive patterns in CMPs’ user interfaces under version 2.15 To quote Pat Walshe, data protection officer at Brave: ‘Having the IAB in charge of ad standards aka the TCF is like having Dracula in charge of the national blood bank’.16

In January 2023, the European Data Protection Board responded to the complaints with a draft decision, largely in support of them, which is good news in the fight against deceptive patterns in privacy. Ala Krinickytė, data protection lawyer at NOYB, said: ‘We are very happy that the authorities agreed on the minimum threshold for protections against abusive banners. Cookie banners became the poster child of the GDPR being undermined. We need authorities to take urgent action, to ensure citizens’ trust in European privacy laws.’17

In summary, this case study provides a perfect example of why voluntary standards and self-regulation often don’t lead to effective outcomes – the incentives simply aren’t there. Self-regulation is a way to let an industry carry on doing the same profitable thing as before while pretending to adhere to some new rules.
