“Keep the human in the loop, and in control” – what better call for designers and technologists to critically assess the impact of pushing the boundaries. The opening keynote at the World Usability Day Berlin 2017 event (this year on the topic of AI), presented by Dr. Dr. Norbert A. Streitz, founder and scientific director of the Smart Future Initiative (SFI), was a great example of critical enthusiasm for technology – an approach I share, yet sometimes struggle to justify, as people tend to read it as opposition to technological progress in general.
The core message of the talk was that, while acknowledging the potential and value of AI, its shortcomings and risks must not be forgotten. Instead, they can be drivers for designing better technology. Right up front, Streitz pointed out that critical reflection of technology doesn’t mean that everything is bad; his aim is not to call for a paradigm shift from the current state, but merely for a critical perspective: “new ideas based on reviewed or even revised paradigms”.
And a great reflection it was; even more so as delivered by such a seasoned expert in the field. The one-hour talk first took a dive into the history of visions of the future that did not materialize (sci-fi visions of cities and transportation), along with past realizations that we are still struggling with today (in 1900, there were 34,000 electric cars registered in the USA, with only a small share of cars running on fossil fuel).
Visions vs. reality
As today’s hypes predict a future of smart ecosystems, disappearing computers and the like, we tend to forget that progress has often been much slower than predicted. He made his point with the example of how much longer it took for a computer to beat a chess champion than anticipated, and with Buzz Aldrin’s quote “You promised us Mars colonies, instead we got Facebook”.
The current “smart everything” paradigm, in which humans are increasingly “removed from being the ‘operator’ (and thus in control)” by devices and algorithms, needs to be redefined, Streitz said, listing feasibility problems, missing transparency, dependency implications and – ultimately – the loss of control. His perspective is that we need to redefine in order to determine: “What kind of ‘smart world’ do we want to live in?”.
And he highlighted once more that this kind of critical debate doesn’t mean a “back to nature and only to nature” mindset – yet it is clearly the opposite of a naive approach of “behaving like ‘Johnny Look-in-the-air'”.
Three problem sets
Streitz presented three problem sets with the current “Smart Everything” paradigm:
A: inappropriate, insufficient, error-prone behaviour
B: rigidity (i.e. inability to handle non-standard input, lack of context awareness)
C: missing transparency / traceability (“algorithmic responsibility”)
“How bad is the situation?”, Streitz asked, and compared it to that of Goethe’s sorcerer’s apprentice (“Zauberlehrling”), who first enchants a broom to do chores on his behalf and then fails to control it, as it turns out he has not trained it well enough to do a good job. But he also sees hope: it might well be that we have reached the peak of exaggerated expectations, with even Elon Musk and other AI enthusiasts calling for regulation today.
“Smartness/AI should not be ‘first-class citizens'”, he said, presenting his counter-proposal of “keeping the human in the loop and in control”: replacing system-oriented, importunate smartness with people-/citizen-oriented, empowering smartness. When thinking of “smart spaces” as cooperative spaces, they should make people smarter and allow them to make use of the data – for example, reframing the common “smart city” concept “from smart-only cities to cooperative, self-aware cities”. This approach, according to Streitz, requires interdisciplinarity both in approaches and in teams – the goal should be to “reconcile people and technology”.
The keynote then specifically addressed the broad privacy implications of AI. Privacy is deteriorating and turning into a privilege, but current tech-related privacy debates prominently focus on social media. These issues will become even more important in smart urban environments, where it is not possible to establish fake identities or avoid exposure (personally, I believe that is already close to impossible in social media as well, partly thanks to AI).
Streitz’ proposal for setting boundaries to the urban and domestic spies that come in the form of ubiquitous sensors is privacy-by-design combined with regulation: making privacy a first-order design objective, and establishing privacy-by-design as a competitive advantage for solutions and products from Europe. He does not share the view that EU businesses will suffer from tight regulation; rather, he believes privacy and security will turn into a unique selling proposition, providing added value – a focus that can open new opportunities for EU-based companies.
No easy solutions for a complex world
In the Q&A, I took the opportunity to ask what visions the speaker has for individuals’ consent in a smart environment – the challenge being that upcoming legislation will require operators to acquire consent before processing personal information. Streitz’ vision hovers around the idea of a “centralised clearing house” for consent, where individuals can pre-define what information they are ready to provide, based on a system of privacy levels. At the same time, he also stressed: “Why should there be an easy solution when we have such a complex world?”
Another question was about the political and economic drivers for this redefinition. The presenter highlighted the right of citizens to get information from companies on how their algorithms work – a strong incentive to rethink for anyone wanting to register products in Europe.
The third question from the audience wondered whether it is already too late – whether we as a society have forgotten how to make decisions about data and given way to algorithms – followed by the question of what should be done in education and UIs alike to ensure people’s awareness of the necessity to stay in the loop. Streitz’ answer was a call for wider and more open information spaces, outside of today’s echo chambers and recommendation loops. Exposing people to ideas that are different is one important way to ensure a critical literacy for AI.
All in all, this was a delightful opening for the World Usability Day Berlin 2017. It set the tone for the day by giving voice to somebody who sees all the potential in emerging technologies, yet reminds everybody that critical engagement with them is necessary if we want to stay in control, keep the human in control – or “in the loop”, as Streitz put it.