Chapter 29: Enforcement challenges
When you look at the pervasiveness of deceptive patterns today, it may seem like our laws and regulations aren’t working, but that’s not strictly true. They’re kind of working. The current state of affairs is a bit like a spluttering kitchen tap with low water pressure – a lot of waiting around and frustration. We’re at a transitional point where we have old laws that weren’t written with deceptive patterns in mind, and new laws coming in that haven’t fully bedded in yet.
The problem with enforcing consumer law is that it’s slow and expensive, even at the best of times. Deceptive patterns are often designed to be subtle and intricate, making them difficult to identify and prove in a legal context. Consumer protection law is complex, and the wording of the law can be ambiguous, leading to battles over interpretation. Then, of course, you’ve got jurisdictional differences: the EU is made up of member states that may have different enforcement strategies, while the US is made up of states that may create their own laws. This creates even more complexity. Let’s look at the reasons for slowness more closely.
Resource constraints
To effectively monitor and enforce existing laws, agencies require a combination of technically proficient staff, efficient systems and sufficient personnel. To do this, they need money. Budget allocation comes from politicians who have to make difficult choices about where public money goes. Sometimes agencies just don’t get the money they need to do their jobs properly.
Question of motivation
Another barrier to effective enforcement of regulations is the lack of motivation in some agencies. For instance, some critics accuse the Irish Data Protection Commissioner of being slow to enforce GDPR. In a 2019 Politico article titled ‘How one country blocks the world on data privacy’, Nicholas Vinocur scathingly commented, ‘Ireland has a long history of catering to the very companies it is supposed to oversee, having wooed top Silicon Valley firms to the Emerald Isle with promises of low taxes, open access to top officials, and help securing funds to build glittering new headquarters’.1
In its defence, the Irish Data Protection Commissioner has stepped up the pace recently, fining Meta €390 million in January 20232 and then €1.2 billion just a few months later in May.3 Still, it seems that under-budgeting an agency can potentially be politically motivated. After all, it’s understandable that a government might want to attract large international businesses by making the environment more appealing.
Nature of principle-based laws
Consumer protection laws are generally principle-based. This characteristic is a double-edged sword. While it lends these laws the flexibility to adapt to new practices, making them somewhat future-proof, it also inherently slows their responsiveness. Each case has to be meticulously worked through the legal system, a time-consuming process that allows companies to exploit existing loopholes.
Forbidden practices and penalties
In the EU, a list of forbidden practices (aka a blacklist) was added to the Unfair Commercial Practices Directive to ban certain deceptive patterns outright. This is helpful, but the list has only been updated once since the directive was created in 2005, and it doesn’t provide any real detail about punitive measures.4 This raises the obvious question of whether such an approach would work better if there was a faster way to update the list of forbidden practices, and stronger punitive measures if those practices are used (for example, higher fines).
What it’s like being an expert witness in lawsuits
Here’s a summary of my experience as an expert witness on deceptive patterns in various legal cases in the United States.
First, I get contacted by a law firm. The initial conversations can be quite cryptic – they often don’t like putting anything in writing in case a poorly worded off-the-cuff statement gets used against them in the future. Once they’ve established trust, they tell me what the case is about.
Many of these sorts of law firms spend their time hunting for weaknesses in legal armour, targeting high-value tech companies and searching endlessly for a case that can lead to a payday for them – and relatively little money for the individual plaintiffs. This is rather different to the Hollywood image of class action suits, where the story starts with a group of wronged individuals and a plucky lawyer steps up to help them. Still, even though the class action model has downsides, the lure of a big payday for the lawyers attracts many energetic and capable firms to fight on behalf of users, and the threat of these sorts of lawsuits can deter businesses from breaking the law.
In the initial call, I’m typically asked to give my opinion on a few screenshots of a user journey: ‘Are there any deceptive patterns at play here?’, for instance. When my answer is no, the conversation is over and they either go off looking for a different expert or a different case. When my answer is yes or maybe, I get engaged by the law firm. Every case I’ve worked on so far has started with a preliminary analysis where I capture extensive screenshots of the user journey and look closely at every step using an expert evaluation method.
I typically use a mystery shopper method in which I’ll define a persona with certain characteristics and a goal in mind. For example, if it’s a sports ticket sales e-commerce journey then the characteristics and goals will relate to sports events. Then I document the steps such a user is likely to go through, taking a screenshot at each step. To use academic HCI terminology, this is a type of lightweight persona-based5 ‘cognitive walkthrough’ method,6 though instead of aiming to evaluate usability, it aims to identify the presence of deceptive patterns, the mechanics of how they work, and the ways in which a reasonable user may experience negative consequences as a result.
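To illustrate how such a walkthrough can be recorded, here’s a minimal sketch in Python. The persona, goal and field names are invented for illustration – it’s not a prescribed format, just one way to keep each step tied to its screenshot and any suspected pattern:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Persona:
    """A lightweight persona that frames the walkthrough."""
    name: str
    characteristics: str  # e.g. 'casual football fan, buys tickets a few times a year'
    goal: str             # e.g. 'buy two tickets at the lowest advertised price'

@dataclass
class WalkthroughStep:
    """One step in the user journey, tied to a captured screenshot."""
    sequence: int
    description: str                         # what the persona does at this step
    screenshot_file: str                     # path to the full-page screenshot
    suspected_pattern: Optional[str] = None  # e.g. 'hidden costs', 'preselection'
    notes: str = ""

# Example (hypothetical case):
persona = Persona(
    name="Ticket buyer",
    characteristics="Casual sports fan, moderately experienced online shopper",
    goal="Buy two tickets to an upcoming match at the advertised price",
)
steps = [
    WalkthroughStep(1, "Land on the event page from a search engine", "step-01.png"),
    WalkthroughStep(2, "Select two seats; fees not yet shown", "step-02.png",
                    suspected_pattern="hidden costs"),
]
```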
In my work I usually capture high-resolution full-page screenshots (Firefox has this capability built in7) and screen-recording video clips (when animations and transitions are important), and put them all into a visual database tool (there are many alternatives available, including NocoDB,8 Baserow,9 and Airtable10). This allows me to store the screenshots along with metadata like date, sequence, user journey, device, and so on. This sort of fastidious approach to documentation is vital because cases can go on for months, or even years (I am working on one case that started in 2019 and is still ongoing in 2023). The work often comes in bursts: a few days on, then a few weeks or months off, until a phone call suddenly puts me back on the clock again. Sometimes I have to go back through the materials with a different frame of analysis, and I wouldn’t be able to do that without a readily searchable and filterable database.
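The tools above are hosted products, but the same record-keeping can be done with anything that stores structured, filterable data. As a rough illustration only (the column names are my own invention for this sketch, mirroring the metadata mentioned above), a local SQLite database would look something like this:

```python
import sqlite3

# Illustrative evidence log; the columns mirror the metadata described above
# (date, sequence, user journey, device, and so on).
conn = sqlite3.connect("evidence.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS screenshots (
        id           INTEGER PRIMARY KEY AUTOINCREMENT,
        captured_on  TEXT,     -- ISO date of capture
        sequence     INTEGER,  -- position within the user journey
        user_journey TEXT,     -- e.g. 'checkout', 'account deletion'
        device       TEXT,     -- e.g. 'iPhone 13 / Safari', 'desktop / Firefox'
        viewport     TEXT,     -- e.g. '390x844'
        file_path    TEXT,     -- path to the screenshot or video clip
        notes        TEXT
    )
""")

conn.execute(
    "INSERT INTO screenshots "
    "(captured_on, sequence, user_journey, device, viewport, file_path, notes) "
    "VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("2023-05-01", 3, "checkout", "desktop / Firefox", "1920x1080",
     "shots/checkout-03.png", "Pre-ticked insurance add-on visible"),
)
conn.commit()

# Months later, revisit the same journey with a different frame of analysis:
rows = conn.execute(
    "SELECT sequence, file_path, notes FROM screenshots "
    "WHERE user_journey = ? ORDER BY sequence",
    ("checkout",),
).fetchall()
```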
This part of the work can be extremely laborious. Since websites and apps are changed frequently, deceptive patterns may be altered, removed or replaced over time. This makes the work almost like archaeological excavation – it requires me to locate and identify older versions that are not currently live. Sometimes I start with little more than screenshots from consumer complaints in blog posts or social media. Sometimes I can use the Internet Archive Wayback Machine for evidence, though it only covers the public-facing World Wide Web – it can’t index authenticated experiences (if a user has to proceed through registration, login or payment, the Internet Archive is blocked). Sometimes the user journey is different on iOS, Android and desktop native apps. Sometimes websites present information in different ways depending on the viewport size, thanks to the CSS and JavaScript layout rules of the page. Tools like Browserstack Live can be useful for this, allowing you to remotely use and take screenshots from a very wide range of real devices and browsers.11 This creates a multiplying effect, where I may end up having to look at the same user journey over and over again, at different points in time, on different devices and viewport sizes.
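On the Wayback Machine point above: the Internet Archive exposes a public ‘availability’ API that makes checking for old snapshots scriptable. Here’s a minimal sketch (the target URL and date are hypothetical); it can only find pages the archive was able to crawl, so the same authentication limits apply:

```python
from typing import Optional
import requests

def closest_snapshot(url: str, timestamp: str) -> Optional[str]:
    """Return the archived snapshot URL closest to `timestamp` (YYYYMMDD), if one exists."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},
        timeout=30,
    )
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

# Was the (hypothetical) checkout page captured around January 2021?
print(closest_snapshot("example.com/tickets/checkout", "20210115"))
```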
Then there is branching and business logic within the user journey itself. For example, I’ve worked on a few cases where the user is presented with questions or choices, and depending on their responses they are guided towards different products or services. In these situations, the expert may need to reverse engineer how the system works by going through the journey over and over again and documenting the behaviour. Cases that involve complex algorithms (such as personalisation and recommendation systems) typically involve other experts who have experience writing that sort of software.
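One low-tech way to make that reverse engineering tractable is to log each run as structured data – the answers given and the outcome reached – and then look for patterns across runs. The questions and outcomes below are invented for illustration:

```python
# Each entry records one pass through a question-and-answer flow:
# the answers given and the product or service the site steered towards.
observed_runs = [
    {"answers": {"budget": "low",  "usage": "occasional"}, "outcome": "Premium plan recommended"},
    {"answers": {"budget": "low",  "usage": "frequent"},   "outcome": "Premium plan recommended"},
    {"answers": {"budget": "high", "usage": "occasional"}, "outcome": "Premium plan recommended"},
]

# A simple check: do different answers actually lead to different outcomes?
outcomes = {run["outcome"] for run in observed_runs}
if len(outcomes) == 1:
    print("Every answer combination tested led to the same outcome:", outcomes.pop())
```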
During a case, the expert can suggest what documents to request from the defendant (for example, during document discovery). In my experience it’s useful to ask for feature documentation, analytics reports, A/B test documentation and qualitative user research reports. In some cases, there’s an opportunity to suggest who to depose (that is, obtain sworn testimony from) in the defendant’s organisation. This can involve looking through the organisational chart and proposing who might be the best source of information when questioned under oath. Lawyers aren’t always familiar with how decision-making happens in tech companies. For example, it may look like a data analyst is a good person to depose since they work closely with interesting data, but in reality they’re usually far removed from the strategic decision-making. A product manager in the relevant area is usually a better bet, since they are close enough to the details while still being heavily involved in strategic decision-making.
In my experience working as an expert witness, defending companies will cooperate to the extent they are legally required, but no further. For example, in one case, I asked for analytics data relating to traffic from one page to another: I received a spreadsheet containing a single number. In another case, I asked for information about A/B tests performed on a section of a website within a given time period, and received several megabytes of JSON metadata – useless, because it wasn’t meaningfully human-readable and didn’t include any images or descriptions of the user interface designs that were A/B tested. Of course, these sorts of issues can be sorted out through further dialogue, but it’s time-consuming and costly.
After writing an initial analysis and sharing it with legal counsel, it’s normal for the expert to submit a signed declaration. Following that, they may be asked to testify in court, where they get cross-examined by the defendant’s counsel.
To summarise, expert witness work in this area is very labour-intensive. In fact, the same applies to enforcing most legislation and regulations around the world that relate to deceptive patterns – it takes a lot of people, effort and time to carry out this kind of lawsuit.
Technology as a tool
It is hoped that some of the labour of research and analysis will become partially automated or streamlined using technology – a new area known as ‘EnfTech’ (a portmanteau of enforcement and tech). It’s not widely available yet, but here are some of the things that EnfTech should be able to help with.
- Crawling the web to find evidence in source code: In the same way that search engines use bots to spider the web and create an index, similar bots can be written to scan the web and find website source code that exhibits characteristics of deceptive patterns. This can then be added to a shortlist of potential cases that can be vetted by human investigators (there’s a small sketch of this idea after the list below).
- Scanning social media and review sites to find complaints: Lots of digital products cannot be seen by web crawlers, and require accounts to access. Luckily, users often complain publicly on social media or review sites. Various products already exist for brands to track social media sentiment, like Brandwatch12 or Mentionlytics,13 so it’s easy to imagine this sort of technology being applied for tracking complaints about deceptive patterns.
- Evidence archival: Once a business is considered worthy of investigation, an agency can use an automated tool to create accounts, manage account states, and take screenshots or capture source code for future reference. These sorts of tools are frequently used internally in businesses for QA and documentation (Selenium14 for instance, or Air/shots15).
- Issuing automated warnings: Instead of having a human write emails or letters manually, a bot can be used to draft them and to locate the relevant contact details.
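To illustrate the first idea in the list, here’s a minimal sketch of a scanner that fetches a page and flags markup often associated with deceptive patterns. The heuristics and keywords are my own illustrative assumptions, not a definitive catalogue, and anything flagged would still need human vetting:

```python
import requests
from bs4 import BeautifulSoup  # external dependency: beautifulsoup4

# Wording often associated with urgency or scarcity tactics (illustrative only).
SUSPICIOUS_KEYWORDS = ["hurry", "only a few left", "expires in", "selling fast"]

def flag_page(url: str) -> list[str]:
    """Fetch a page and return human-readable flags for an investigator to vet."""
    flags = []
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    # Pre-ticked checkboxes: a common form of preselection / sneaking.
    for box in soup.select("input[type=checkbox][checked]"):
        flags.append(f"Pre-ticked checkbox: {box.get('name', '(unnamed)')}")

    # Urgency or scarcity wording that may warrant a closer look.
    text = soup.get_text(" ").lower()
    for keyword in SUSPICIOUS_KEYWORDS:
        if keyword in text:
            flags.append(f"Urgency/scarcity wording: '{keyword}'")

    return flags

# Example usage against a hypothetical target:
for flag in flag_page("https://example.com/checkout"):
    print(flag)
```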
A good example of new EnfTech in practice is NOYB’s WeComply.16 WeComply works by automatically sending GDPR complaints, providing a step-by-step guide on how to remedy the issue and, if the recipient company chooses not to take action, filing a complaint with the relevant authority. In 2021, noyb.eu sent over 500 draft complaints to companies that use unlawful cookie banners (under GDPR).17
GDPR violations are a particular kind of highly structured problem that is well suited to automation. Deceptive patterns in general are much broader and harder to pin down – one business’s approach to deception will be different to another’s. This means EnfTech won’t always be able to provide such a high level of automation, but any amount of streamlining is still valuable.
Despite the promise of EnfTech, it doesn’t actually change the legislative dynamic that puts an enormous burden of work on the enforcer. In a report arguing for legal reform in the EU, the European Consumer Organisation (BEUC) suggests that we need new rules ‘alleviating the burden of proof for plaintiffs and enforcement authorities’.18 I couldn’t agree more.