
A call for more messiness in usability testing

By: Laura Parraga-Gonzalez
Date: November 2017
File under: Articles

Sometimes it’s the people you don’t observe and the questions you decide not to ask that give you the best opportunities to learn. When we allow for overflow instead of containment, we admit surprising results that can inspire and guide us to create better designs.

Coming from anthropology, I see fieldwork as inherently messy. When we understand contexts, people and the things that surround them not primarily as objects to be explained but as contingent windows into complexity (Candea 2007), immersing ourselves in the lives of others means allowing contradiction, tension and uncertainty to inform our understanding of the world.

Understandably, in the fast-moving day-to-day world of business, with its constantly changing requirements and strict time constraints, the common reflex is to order, streamline and make neat, in order to give well-informed answers and build the confidence to move forward. While this desire to order is justified and has its legitimate place in usability testing, I also maintain that there’s value in doing the opposite of what our reflexes tell us to do. For example:

The challenge is to find out just how ‘wrongly’ the ‘wrong’ kinds of people use a piece of technology.

People use technology in unexpected ways all the time

Interacting with many different users in a range of contexts – doing online banking, shopping at a supermarket, buying insurance, selecting an energy provider – we see how people do things in their own, unique ways all the time. They hack the technologies they have at hand and, in many messy ways, fight with, reject and (re)appropriate them.

In a recent project, we explored how people organised and managed their finances with the help of digital technologies. One participant explained to us that she had set up a separate Gmail account for each of her financial ‘buckets’, such as her bank accounts and investment products. In each of those accounts, neatly separated from the others, she would receive all the relevant correspondence. That way, when tax time came, she was able to manage, print and send off documents in a structured and organised way.

In another case, two user researchers at Mozilla describe how they were tasked with research to improve and optimise users’ browser workflows, but found that participants’ workflows were “not as straightforward, deterministic, and reducible as we anticipated”, and thus hard to improve at all. They observed that, instead of using features that had been designed to help them save articles to read later, users were simply doing things like keeping multiple tabs open in their browsers.

At the end of their article, the two authors argue that their informants had found and designed their own solutions and thus already optimised the processes that the researchers came in to improve. For users, they argue, it simply felt effective to use several browser tabs.

Is technology a help or hindrance?

I would go a step further and argue that complex, multilayered and seemingly ‘ineffective’ use of technology is not only unproblematic from a user perspective but the actual point. It is through this messy kind of use that people can grasp, understand and make sense of things.

Dawn Nafus, an anthropologist at Intel, provides an enlightening account of what it means to make sense of things through use in her ethnographic work on the Quantified Self (QS) movement. Her informants were using their phones, smartwatches and various other devices to self-track all aspects of their lives, such as the hours of sleep they got in a day.

Importantly, for QSers this was not simply about looking at a bunch of abstract figures and assessing (or letting the technology assess) what they meant; nor were they ceding complete authority to the supposed objectivity of the data. Instead, they were using the data as a technology of noticing and self-reflection. In their quest for heightened self-awareness, features like shortcuts and automation – a seemingly effective use of technology – were not only undesired but often detrimental to the users’ goals. As one informant explained: “This glucose monitor will automatically upload my glucose levels, but I had to go back to doing it manually. When it’s all automatic, you aren’t really aware of what it is saying.”

In this case, entering the information manually is an integral part of the process of understanding and making sense of the data. And this is not only the case for QSers. Often, people comparing and researching products online make decisions about what to buy by juggling different devices, opening various tabs and making lists on paper. When decision-making is contingent on looking in various places and engaging with many levels of detail and devices, a machine spitting out an answer may not be helpful at all.

When we fall back into the reflex of ordering, streamlining and making neat, what we end up designing for might not align at all with the reality of use. Providing automation and offering shortcuts when it’s the journey, not the arrival, that counts is an awkwardly misplaced attempt at adding value. Reducing complexity sometimes means reducing the very value a technology provides.

How can we allow for a bit more messiness in our work?

Recruitment is an often-underestimated step in usability testing. Whether it’s treated as a negligible part of the research or fully defined before the research questions and details have been discussed, our decisions about who we hear and observe already shape what we are able to learn.

Broadening the recruitment specification is easier said than done, as there are often entire business cases behind defining who is a relevant user for a particular product and who is not. But why not allow for those two extra participants who were not envisioned at all? In the end, thinking about who we want to hear and observe, and asking ourselves why it’s them and not others, might already point us to some interesting insights.

Another way to ensure that we leave enough space for exploring the more surprising and contradictory aspects of use is to approach usability testing in the most exploratory way possible. By including introductory questions aimed at building a holistic picture of who is sitting in front of us, and by allowing off-script questions, unexpected answers and unaddressed needs to influence our research results, we can let surprising and inspiring insights define the solutions.

Especially in usability testing, where we are constantly looking out for usability ‘issues’, our tendency is to iron out the flaws of people’s inefficient use before we’ve had time to look behind them. Introducing more messiness also challenges the very idea of what an ‘issue’ is. It requires accepting the flaws and using them as a starting point to challenge our own view of what a product should and shouldn’t do.

When we pay attention to unexpected details, instead of doing away with them, we can uncover what is really at stake for people when they engage in meticulous calculation, research and tracking processes. In seeing how people use devices and solutions differently from how they were envisioned, prescribed and intended, we can recognise the actual value that these technologies have for people, and design to enhance it.

 

References

1. Nafus, D., & Sherman, J. (2014). Big Data, Big Questions | This One Does Not Go Up To 11: The Quantified Self Movement as an Alternative Big Data Practice. International Journal of Communication, 8, 11. Retrieved from http://ijoc.org/index.php/ijoc/article/view/2170, 02/10/17.

2. Candea, M. (2007). Arbitrary locations: in defense of the bounded field-site. Journal of the Royal Anthropological Institute, 13: 167–184. Retrieved from http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9655.2007.00419.x/abstract, 02/10/17.

3. Selman, B., & Petrie, G. (2017). In Praise of Theory in Design Research: How Levi-Strauss Redefined Workflow. Epic Perspectives. Retrieved from https://www.epicpeople.org/theory-in-design-research, 02/10/17.
