Since 2001, U1 Group has been using evaluative research to help our clients and partners deliver the best possible experience design. In this article I will introduce evaluative research and its application to the experience design process. Before we go into the details, here’s an overview of what to expect:
- Evaluative research informs experience design decisions
- Research can complement the agile approach to experience design and development
- Benchmarking experience design is key to assessing performance over time
- Expert opinion and user research can be an effective combination
- It does not have to be pixel perfect to be tested
What is evaluative research?
Evaluative research is, by definition, research that evaluates a product, service or even an idea. It’s an area of experience research that aims to validate and test products and services at various stages of design and development, which in my case are typically web-based, digital and highly interactive sites or applications. More commonly, though, researchers such as myself refer to this type of activity as user research or even usability testing. In this article I’m going to discuss how these research tools can be used to inform experience design decisions.
How do you do it?
The evaluative research methods we use most often at U1 Group involve user-based activities such as task-based testing (both moderated and online) and exploratory walkthroughs, where we are interested in evaluating the overall experience rather than specific tasks. These are fundamental tools in any user researcher’s toolkit, and while they may appear quite simple in their application, they’re incredibly versatile and can be used to evaluate a product across all stages of the experience design process. Interestingly, while evaluative research is not generative by definition, insights and findings that emerge throughout a project often generate new questions and hypotheses, which can be used to direct future generative research.
The agile approach and evaluative research
The bulk of the clients we work with at U1 Group have adopted an agile approach to the design and delivery of their products. The role of evaluative research in this process is regularly questioned, as it is perceived to move too slowly for work progressing rapidly in sprints: something of a “tortoise and the hare” scenario. This does not have to be the case. It is possible to take a “lean” approach to evaluative research so that it fits comfortably within any agile framework, permitting iteration with insight rather than reliance on instinct. It’s an approach that U1 Group, and others, have successfully implemented with client organisations.
Benchmarking experience design
Whilst evaluative research plays a key role in assessing improvement via iteration, it can also establish performance against key parameters, or attributes, that can be revisited over time, and even compared against competitors or an industry standard. Assessing performance is impossible without some form of objective benchmark as a point of comparison. We have worked with many clients to establish a benchmark for the experience design of a product or service that is meaningful to their organisation (e.g. customer loyalty, satisfaction, ease of use or task completion, to name a few). By establishing one or more benchmarks and taking a measurement at a point in time, our clients have been able to answer questions about the performance of their experience design at regular intervals, sampling their customers qualitatively (i.e. face-to-face) or quantitatively. Not only is it possible to identify whether performance is improving or declining, but also the reasons driving the change.
User driven vs expert opinion
Not all evaluative research methods involve user research. Heuristic, expert and comparative reviews are also fundamental evaluative research tools that provide great insight and design direction. They’re particularly useful where time or budget precludes user research, but they’re also often used before user research activities, to inform direction.
These research activities are sometimes considered simpler than user-based research, but the opposite is true. User research provides rich insights straight from the user’s mouth; a heuristic, expert or comparative review, by comparison, relies heavily on the researcher’s ability to draw on their user research experience. Knowledge of web design, visual design and interaction design, as well as information architecture and even web accessibility, is also important. These research activities therefore require significant skill and expertise, and provide great value as a result.
We have worked with many project teams who have found a blend of user-driven insight and expert opinion effective in terms of both time and budget. An expert review of initial concepts, drawing on a researcher’s experience of observing hundreds, if not thousands, of user interactions, can identify common usability issues, or “low-hanging fruit”, that can be easily addressed. Subsequent user research then allows participants to provide higher-value feedback: with simple interface issues already ironed out, they can engage more deeply with the experience under investigation.
When to test, what to test?
It really is possible to evaluate a product at any point in the design lifecycle, at whatever level of fidelity it’s presented in. Whether they’re paper-based wireframes, high-fidelity interactive prototypes or WCAG-compliant HTML single-journey flows, an experienced researcher knows they’re all still test materials. While the research approach may differ, each format provides an opportunity to gather unique insights and refine subsequent stages of design.
Time and time again I see design teams scrambling until the last minute, trying to cram as many design elements as possible into test materials. There’s a misconception that incomplete designs will adversely affect research findings. Of course, it’s important to ensure the product you’re testing incorporates all the design elements required to meet the evaluation objectives, but it doesn’t need to be complete and pixel perfect. A strong researcher can conduct a successful research session regardless of the fidelity of the test materials.
So, when to test and what to test? It really does depend on the questions you’re seeking to answer. If navigation and content need to be validated, evaluating wireframes is more effective than evaluating a semi-functional prototype, whereas questions around look and feel, branding and engagement usually require high-fidelity visual designs.
Whether you’re conducting research in house or engaging an external agency, keep an open mind about new research methods and be flexible enough to design a research plan outside the square. The good old six-person user-based evaluation will always have its place in our toolkit, but given the opportunity, an experienced researcher can usually come up with a creative research approach to fit your parameters. Who knows what you’ll discover along the way!