Research for design

As a sociologist, I am deeply fascinated by the variety of human lifeworlds. Discovering the most suitable methods to research technology use is therefore one of my core interests, and I am always on the lookout for creative ways to find out how individuals interpret the worth of using devices or services.


From my first-hand experience, the lightweight research process described by Rian Van Der Merwe is incredibly efficient. In the article it is summarized as choosing:

  • The RITE testing method (Rapid Iterative Testing and Evaluation)
  • The right remote usability testing tools
  • The right fidelity
  • The right people to talk to
  • The right way to analyze and communicate results
  • The right way to put it all together

Being able to run a large-scale, representative, extensive UX study is an inspiring undertaking. But so is a rapid iterative process like RITE that naturally includes user research in the product development process: verifying one’s work as you go takes the guesswork out of a project and provides valuable information to identify issues and opportunities.

As the author states, each form of research process has a context where it fits best – and this is a highly suitable one for small teams, with small products and small budgets.


Using the “Cognitive bias codex” for design concept evaluation

Cognitive bias – the tendency of the human brain to interpret information based on unrecognised irrational factors – is a phenomenon that has fascinated me for well over a decade. There is no more efficient way to improve the quality of a design concept than by doing a heuristic evaluation on potential cognitive biases […]




Erika Hall presents why purely user-centred design is out of date, explains how data-driven design is actually bias-driven design, and debunks the myth of the genius designer working chiefly from intuition. She then offers her own alternative approach:

We need evidence-based design. Because what we are doing first and foremost is designing. It doesn’t matter how much research we do, or what method we use. There is no one right answer. It matters that we have sufficient evidence to support our choices and decisions, however we get that evidence.

From my point of view, the idea of doing “just enough research” proclaimed by Hall is a question of choosing the right methods for the task (often a mix of approaches, and sometimes even just using light versions of them).

And, ultimately, it matters not only what research we do to build up evidence for design, but also how we anchor that knowledge within the design team; ideally, the designer-researcher is an integral member of it:

It doesn’t matter how much research you do if the people who have acquired the most knowledge write a report and move on.



Based on 15 years of research, the INUSE research group at Aalto University created this “Co-design journey planner”, which helps identify the most suitable research methods based on a range of project variables:

The problem is not finding just any approach, but sorting out which approach might suit you. […] Using the wrong method is a waste of money and resources.

The application promises to generate a suitable set of recommendations based on nine main codesign approaches.



The blog at Experientia points to this convenient tool providing quick access to design research methods, divided into six project phases:

This online repository is a necessarily unfinished and evolving resource for Participatory Design Techniques. These techniques help evolve a project lifecycle through participation of multiple stakeholders including potential users or audiences, partners

Not all methods are documented in detail, so this is not the most complete source for teaching purposes, but it is definitely a handy resource for inspiration, or for (re)discovering a forgotten or new method for the case at hand.

The application was produced by CFC Medialab in conjunction with Professor Suzanne Stein of OCAD University.



An exhaustive list of (commercial) tools for remote user/UX research, compiled in five categories:

  • self-moderated tools (users execute tasks on their own, recorded for later analysis)
  • mobile tools (a short but growing category of options)
  • automated tools (providing enhanced analytics while test subjects use a site)
  • moderated tools (remotely facilitated testing sessions)
  • surveys (for self-reported feedback)

