Bookmark: "Dark Patterns at Scale"

In addition to quantifying the depressingly large amount of manipulative and deceptive dark patterns on the web, this paper describes an interesting methodology: a bot that crawls shopping sites and is able to detect dark patterns automatically.

The key theoretical contribution is an empirically grounded categorisation of 15 types of dark patterns in seven categories: sneaking, urgency, misdirection, social proof, scarcity, obstruction, and forced action.

What is so upsetting about this paper's findings is not the fact that dark patterns are rather common, but their scale and sophistication. Yes, the authors also uncovered a range of very blunt implementations, like a simple piece of JavaScript that claims a product was "last bought" some randomly generated number of minutes ago. The most significant insight, however, is that an entire range of companies now offers "dark patterns as a service" (DPaaS?) - companies that have turned psychological research on manipulating users into a business.
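The blunt variant described above might look roughly like this (a hypothetical sketch; the function name and message format are invented for illustration):

```javascript
// Hypothetical sketch of the blunt "last bought" trick: the timestamp
// shown to users is not real purchase data, just a random number.
function fakeLastBought() {
  // Pretend the product was purchased between 1 and 59 minutes ago.
  const minutesAgo = Math.floor(Math.random() * 59) + 1;
  return `Last bought ${minutesAgo} minutes ago`;
}
```

Because the number is generated client-side on every page load, such a snippet is trivially detectable by inspecting the page's scripts - which is exactly what makes automated detection plausible.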

In a tweet, co-author Arvind Narayanan expresses an interesting thought:

Traditionally the law has been the main line of defense against deception. But maybe technical measures can also be effective. Many browsers now come built-in with protections against creepy online tracking; why not dark patterns as well?

While bot-based detection of potentially illegal dark patterns will certainly be of interest to legal tech, maybe it could also be turned against those who use them - by blocking such elements or entire sites, degrading their search engine rankings, or even flagging them as "malicious sites" in browsers. The fight against deceptive dark patterns has only just begun - not only through lawmaking, but through automated means of fighting back.
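A first, crude layer of such a defense could be a rule-based flagger in the spirit of the paper's bot. The sketch below is purely illustrative: the patterns are my own assumptions about typical urgency and scarcity phrasings, not the paper's actual classifier.

```javascript
// Hypothetical rule-based flagger: match common dark-pattern phrasings
// in an element's visible text. Real detection (as in the paper) would
// combine crawling, clustering, and manual labeling - this is just the idea.
const DARK_PATTERN_RULES = [
  /only \d+ left/i,                 // scarcity ("Only 3 left in stock!")
  /last bought \d+ minutes? ago/i,  // fake social proof / activity
  /hurry[,!]? (offer|sale) ends/i,  // urgency
];

function flagsDarkPattern(text) {
  return DARK_PATTERN_RULES.some((re) => re.test(text));
}
```

A browser extension could run something like this over page text and hide or label matching elements, much as tracking protection hides known trackers today.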