Chapter 12: Drawing a line between persuasion and manipulation
I’m often asked by designers if there’s an easy way to distinguish between exploitative practices and honest persuasion. Is there a line we can mark with police tape like in a movie? ‘Do not cross! All designs beyond this point are deceptive or manipulative!’
Unfortunately, it’s all quite complicated. If you’re looking for an easy answer, it’s probably wise to avoid creating anything that resembles the patterns described in this book, and to find out what the law is in your jurisdiction. Another approach is to look at outcomes of harm. If a design creates negative consequences for users, that’s a problem, and you can work backwards from there, investigating the causes. This perspective is practical for enforcers and investigators, but it’s not so useful for designers who want to do the right thing before any harm occurs.
Direct deception is relatively easy to characterise. If a design contains an outright lie – a claim that’s simply untrue – that’s deception, plain and simple. So there’s one line you quite obviously shouldn’t cross, and unless you live on the moon you probably have some very long-standing consumer laws in your jurisdiction that forbid it. But there’s another type of deception, known as ‘indirect deception’, which occurs when a design misleads users into holding false beliefs without explicitly lying to them (perhaps by omitting pertinent information or by using ambiguous language). Most deceptive patterns are like this. Indirect deception is not as easy to draw a line around – some examples are worse than others, so there’s a whole range of severity to consider. Then you’ve also got the broader concept of manipulation; it’s possible for a design to influence or coerce a user without deceiving them. For example, if you use harsh emotional manipulation to steer the user into making a certain choice, you’re coercing them, and you may be causing harm – yet it’s not deception.1
If we zoom out into the world of philosophy and ethics, we’ll start to see how complicated it gets. In the 2015 paper ‘Fifty Shades of Manipulation’, Cass Sunstein gives an analysis of this.2 In his words:
‘an action does not count as manipulative merely because it is an effort to alter people’s behavior. If you are a passenger in a car, and you warn the driver that he is about to get into a crash, you are not engaged in manipulation. The same is true if you remind someone that a bill is due. A calorie label and an energy efficiency label are not ordinarily counted as forms of manipulation. So long as a private or public institution is informing people, or “just providing the facts,” it is hard to complain of manipulation. There is also a large difference between persuading people and manipulating them. With (non-manipulative) persuasion, people are given facts and reasons, presented in a sufficiently fair and neutral way; manipulation is something different. It is often thought that when people are being manipulated, they are treated as “puppets on a string.” Almost no one wants to be someone else’s puppet (at least without consent), [...] the idea of “manipulation” can be applied to many kinds of behavior; but it is not entirely clear that it is a unitary concept, or that we can identify necessary and sufficient conditions. Manipulation takes multiple forms. It has at least fifty shades, and some people wonder if they are tightly identified with one another.’
Sunstein goes on to argue that the problem is multidimensional, in the sense that there are a few different things we need to consider at once. He explains that explicit user consent can make manipulation more acceptable (e.g. ‘Help me give up smoking. You have my permission to try anything!’). He also explains that transparency helps – if you clearly inform the user that you are trying to persuade them in a certain way towards a certain outcome, this reduces the risk of your influence being covert.
In summary, it turns out that there is no single line we can mark as ‘do not cross’. If the world of persuasion, manipulation and deception were a planet, it would be more accurate to say that there are some territories we shouldn’t enter at all, and others that bring varying levels of risk of something harmful or illegal.
I prefer to keep things simple, though. If you work with digital products, here’s my advice: don’t make false claims, and know your local laws. Steer well clear of anything that looks like the deceptive or manipulative patterns detailed in this book. Be mindful that good intentions don’t absolve you of the responsibility to prevent harmful outcomes, so carry out research to anticipate and map those potential outcomes. If you find negative outcomes, make changes to prevent them. If you do that, you’re probably doing a good job and you can let the philosophers, ethicists and legislators worry about the bigger picture.