Bookmark: "Better personalized recommendations through transparency and content design"

Sebastian Greger


Ryan Bigge identifies a recent trend of cute-ifying algorithmic opacity by simply summarizing it to the user as “magic sauce” or similar:

But magic sauce isn’t just black boxing. It’s taking something serious and saying, “Don’t worry your pretty little head about it.” To be fair, Google isn’t the only culprit. As Jesse Barron noted in his wonderful 2016 article The Babysitters Club, “We’re in the middle of a decade of post-dignity design, whose dogma is cuteness.”

The copywriter tasked with summarizing a complex algorithm into a one-liner half the size of an SMS has all my sympathy. But this is problematic long before the UI text is designed. Hiding algorithmic complexity behind user-facing “magic” is not only patronizing (and sometimes outright childish), but dangerous: in a world influenced by algorithms everywhere, more transparency is needed, not less.

The article assembles some examples of “concise transparency” and discusses ways to be more responsible. It ends with paraphrased versions of Josh Lovejoy's “three truths”:

  1. Machine learning won’t figure out what problems to solve.
  2. If the goals of an AI system are opaque, user trust will be affected.
  3. Every facet of machine learning is fuelled by human judgement, so it must be multi-disciplinary.

I'm Sebastian, Sociologist and Interaction Designer. This journal is mostly about bringing together social science and design for inclusive, privacy-focused, and sustainable "human-first" digital strategies. I also tend to a "digital garden" with carefully curated resources.

My monthly email newsletter has all of the above, and there are, of course, also an RSS feed and Twitter.