Robin Christopherson just wrote an important article on the AbilityNet blog: “AI is making CAPTCHA increasingly cruel for disabled users”. For what must be ten years now, I have repeatedly argued that telling bots apart from humans should not be the user’s job, so his text resonates with me a lot.
Just recently, my online bank introduced a Google reCAPTCHA, turning me into an unpaid laborer training Google’s AI every time I log in – stealing my time as I spend up to a minute clicking on pictures of cars, traffic lights, or storefronts. This is such terrible UX that I have seriously considered switching to another bank.
Working mostly with small-to-medium-scale sites and services (and still getting my share of bots and spam to deal with), I can only imagine the pain of big online service providers. Yet CAPTCHAs are just as disruptive to the user experience as the mandatory EU cookie banners or the obnoxious “Subscribe to our newsletter” popups. All of those are really, really bad design to start with! As in: terribly bad design!
They are even worse (i.e. worse than terribly bad) design from an inclusion and accessibility perspective. As Robin Christopherson demoed in his talk at beyond tellerrand Berlin 2017, CAPTCHAs are even more painful for users who rely on screen readers. And if it is indeed possible today to use Google’s own AI to beat their CAPTCHAs, they are set to become ever more difficult to solve. This arms race between bots and services leads to ethically, and even legally, questionable design. As Christopherson writes:
> So long as websites want to keep the bots from registering spam accounts or posting bogus comments, there will need to be some way for developers to detect and deflect their attempts. The use of CAPTCHA challenges, however, is not and has never been a fair (or even legal) one. It discriminates and disenfranchises millions of users every day.
Just to add – and this is the mindset of an “inclusive designer” – these are not only difficult to solve for users with vision impairments or dyslexia, i.e. people commonly considered to rely on “accessibility features”. The exclusion introduced by CAPTCHAs can be as straightforward as users simply lacking an understanding of what to do – e.g. grasping that “mark all images of traffic lights” includes tiles showing only a fraction of the back of a traffic light is much less obvious than many in the tech industry may think.
Good design does not put a burden on humans just for the sake of solving a technological problem they have nothing to do with. For ethical designers, it has to be an imperative to sit down and look for solutions that do not exclude and discriminate. Even more so, users should never be tricked or forced into doing unpaid work (no matter who benefits) while visiting a website. And last but not least, any bot-detection solution should be part of user testing – if the test users (and please, select a broad range of abilities) find it a pain, go back to the drawing board.
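To make this concrete: one well-known class of alternatives puts the burden back on the machine instead of the human. A minimal sketch in Python, combining a hidden “honeypot” field with a submission-time check – the field name, the `min_seconds` threshold, and the form shape here are illustrative assumptions, not a recipe:

```python
import time


def looks_like_bot(form: dict, rendered_at: float, min_seconds: float = 3.0) -> bool:
    """Heuristic bot check that is invisible to human visitors.

    Two signals, neither of which asks the user to do anything:
    - a "honeypot" field (here called "website", hidden via CSS) that
      humans leave empty but naive form-filling bots tend to populate;
    - the time between rendering the form and submitting it – scripted
      submissions often arrive within a fraction of a second.
    Both names and thresholds are assumptions for illustration.
    """
    if form.get("website", "").strip():
        return True  # honeypot was filled in: almost certainly a bot
    if time.time() - rendered_at < min_seconds:
        return True  # submitted implausibly fast for a human
    return False


# Example: a human-looking submission, ten seconds after page render
rendered = time.time() - 10
print(looks_like_bot({"comment": "Great post!", "website": ""}, rendered))  # False
```

Heuristics like these are of course weaker against targeted bots than a CAPTCHA arms race – but they cost real users nothing, which is exactly the trade-off worth testing with a broad range of users.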
Seriously – along with cookie banners, CAPTCHAs are the most annoying feature of the web today (and a means of exclusion on top). There must be ways for technologists and designers to come up with inclusively designed Turing tests for everyday use?! Robin Christopherson’s text is a great starting point to think about the human context.
- Some Google competitors even turned CAPTCHAs into a revenue-generating service: "hCaptcha is a drop-in replacement for reCAPTCHA that earns website owners money and helps companies get their data labeled"; talk about people farming! ↩
- NB. While cookie banners are a (no matter how pointless) legal requirement, there is never a reason to push a newsletter ad in your user's face! ↩
- In addition to the AbilityNet blog post, the reasoning behind the ever more difficult CAPTCHAs is described well in a recent article in The Verge’s Real-World AI Issue. ↩