The week in quotes
(2019W02)

As I constantly encounter interesting texts, podcasts and events, I thought I’d experiment with something new on this blog in 2019: I aim to jot down some inspiring quotes I encounter and post them along with references to their source. A list of mini-bookmarks of sorts, or a Saturday reading list; capturing interesting thoughts to return to later – the provocative, the surprising, the wise.

Let’s see where this may lead… (and whether I feel like doing this regularly)


Chris Palmer in “The State Of Software Security In 2019”:

Dealing with the toxicity and abuse of social media is a long-term, multi-pronged effort, but 1 thing that we can immediately do as engineers, PMs, designers, and managers is to push back on ‘engagement’ as the primary or only metric for ‘success’. It’s game-able and heavily gamed, and does not remotely capture the real experiences of real people on social media. People’s experiences are often profoundly awful, and we as software developers are responsible for dealing with the consequences of what we’ve built.


Tim Kadlec, on the ethical dimensions of web performance (both socially and ecologically):

When you stop to consider all the implications of poor performance, it’s hard not to come to the conclusion that poor performance is an ethical issue.


The EFF’s guide on e-mail privacy:

At what privacy cost do “insightful analytics” come? Nothing about counting the number of visitors coming to your site via email is inherently bad. But do you really need to store exactly who clicked which link from which email?


Erika Hall on the difference between research questions and interview questions:

I’ve seen a lot of executives who freak out that a round of interviews only included 12 people, and yet are willing to base major decisions on analytics from a few hundred active users.

A lot of bad research results from a mismatch between question and method, usually because people spend more time worrying about the activity (surveys, testing, interviews) than about forming a good question. Even more bad research is designed specifically to provide support for an existing solution.


Justin E. H. Smith in an essay, titled “It’s all over”:

But human subjects are vanishingly small beneath the tsunami of likes, views, clicks, and other metrics that is currently transforming selves into financialised vectors of data. This financialisation is complete, one might suppose, when the algorithms make the leap from machines originally meant only to assist human subjects, into the way these human subjects constitute themselves and think about themselves, their tastes and values, and their relations with others.

[…] the tech companies’ transformation of individuals into data sets has effectively moneyballed the entirety of human social reality.


via Swissmiss:

“The quieter you become, the more you can hear.”
— Ram Dass


Michael Bolton in a Twitter thread, criticizing an (undisclosed) “white paper” on software development:

In most user stories, nobody is ever interrupted, distracted, naive, confused, under pressure, impatient, disabled, outside of wireless access. Nobody makes human mistakes. Nobody closes the damned laptop lid. The characters in user stories might as well be drones, robots.


A weekly collection of inspiring, surprising or otherwise noteworthy texts, talks and podcasts. Usually around my core topics of usability, ethics, and digital society. Previous issues in the archive.

Copy of this note syndicated on Twitter (3 likes, 1 retweet)
Responding with a post on your own blog? Submit the URL as webmention (?)