Bookmark: "When Trading Track Records Means Less Privacy"

Sebastian Greger


This analysis by the EFF of the recent Strava incident (where a global heatmap of GPS workout tracks revealed plenty of potentially secret military sites worldwide) is a clear explanation of why “anonymization” is not a silver bullet when it comes to dealing with personal data:

Though the revealed information itself was anonymized—meaning map viewers could not easily determine identities of Strava customers with the map alone—when read collectively, the information resulted in a serious breach of privacy.

The blog post broadens the discussion beyond military users, showing how Strava’s “social features” have revealed personal information before: as technology develops and algorithmic de-anonymisation becomes increasingly easy, the idea that removing personal identifiers from a piece of data makes it “anonymous” is often a false belief.

Often, our understanding of “anonymous” is wrong—invasive database cross-referencing can reveal all sorts of private information, undermining any effort at meaningful online anonymity.
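To make that cross-referencing concrete, here is a minimal, entirely hypothetical sketch in Python: two datasets that each look harmless on their own—one “anonymized”, one public—are joined on shared quasi-identifiers (an invented postcode and age here) to re-identify individuals. All data, names, and field names below are made up for illustration; this is not Strava’s data model.

```python
# Hypothetical linkage attack: re-identifying "anonymized" records
# by cross-referencing them with a second, public dataset.

# An "anonymized" workout log: direct identifiers removed,
# but quasi-identifiers (postcode, age) remain.
anonymized_workouts = [
    {"zip": "20301", "age": 34, "start_point": "gate_3"},
    {"zip": "90210", "age": 52, "start_point": "city_park"},
]

# A separate public dataset (e.g. a club membership list) with names.
public_records = [
    {"name": "A. Example", "zip": "20301", "age": 34},
    {"name": "B. Sample", "zip": "90210", "age": 52},
]

def link(workouts, records):
    """Join the two datasets on shared quasi-identifiers,
    attaching a name to each supposedly anonymous workout."""
    matches = []
    for w in workouts:
        for r in records:
            if w["zip"] == r["zip"] and w["age"] == r["age"]:
                matches.append({"name": r["name"], "start_point": w["start_point"]})
    return matches

for match in link(anonymized_workouts, public_records):
    print(match)
```

Even this toy join attaches a name to every “anonymous” record—which is precisely why data stripped of identifiers can still be personal data.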

This is why, for example when “preparing for the GDPR”, it is so important to understand that anonymising data does not necessarily mean it is no longer personal data, and that more often than not it needs to be treated with similar care as data that carries individual identifiers (this applies even more so to “pseudonymised” data—an important consideration too easily forgotten when “just tracking our users based on a randomly assigned ID”).

I'm Sebastian, Sociologist and Interaction Designer. This journal is mostly about bringing together social science and design for inclusive, privacy-focused, and sustainable "human-first" digital strategies. I also tend to a "digital garden" with carefully curated resources.

My monthly email newsletter has all of the above, and there are of course also an RSS feed and Twitter.