In addition to quantifying the depressing amount of manipulative and deceptive dark patterns on the web, this paper describes an interesting methodology: a crawler-based bot that automatically detects dark patterns.
The key theoretical contribution is an empirically grounded categorisation of 15 types of dark patterns in seven categories: sneaking, urgency, misdirection, social proof, scarcity, obstruction, and forced action.
In a Tweet, co-author Arvind Narayanan expresses an interesting thought:
Traditionally the law has been the main line of defense against deception. But maybe technical measures can also be effective. Many browsers now come built-in with protections against creepy online tracking; why not dark patterns as well?
While bot-based detection of potentially illegal dark patterns will definitely be of interest for legal tech, maybe it could also be turned against those who use them - by blocking such elements or entire sites, degrading their search engine rankings, or even flagging them as "malicious sites" in browsers. The fight against deceptive dark patterns has only just begun - not only through lawmaking, but also through automated means of fighting back.
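To make the browser-protection idea concrete, here is a minimal sketch of what a text-based detector might look like. The phrase patterns and category names below are illustrative assumptions loosely based on the paper's taxonomy, not the authors' actual classifier, which works on segmented, rendered page content rather than simple regular expressions:

```python
import re

# Toy detector: flags text matching common urgency/scarcity/social-proof
# phrasings. The patterns below are hypothetical examples, not the
# paper's trained model.
PATTERNS = {
    "scarcity": re.compile(r"\bonly \d+ left\b|\blow stock\b", re.I),
    "urgency": re.compile(r"\b(hurry|offer ends|limited time)\b", re.I),
    "social proof": re.compile(r"\b\d+ (people|others) (are viewing|bought)\b", re.I),
}

def flag_dark_patterns(text):
    """Return the categories whose pattern matches the given page text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]
```

A browser extension could run something like this over a page's text nodes and hide or annotate matching elements, e.g. `flag_dark_patterns("Hurry! Only 2 left in stock.")` returns both the scarcity and urgency categories.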