
Cognitive bias and UX research

Alyce Former UX Consultant 25th Sep, 2014

Cognitive bias is essentially an unconscious thinking error or judgment that a human may make when processing information. We usually don’t even know we are prone to the effects of cognitive bias; they are really just part and parcel of being a sentient human being. The ‘bandwagon effect’ is one of the better-known cognitive biases you might encounter in everyday life, but there are many more subtle biases that you might be surprised to learn you are affected by, every day.

Cognitive bias can impact the outcomes of UX research in many ways, largely due to the wide range of observer effects identified in cognitive science and social psychology. Biases may skew the reliability of evidence or the rationality of an approach or analysis, and can affect decision-making at any point in a project. Cognitive bias can affect both the researcher and the research participant, so it is important that research questions and approach are crafted appropriately.

In this article, we aim to uncover some of the more common cognitive biases we should all be aware of when undertaking any UX research. We also look at how the effects of these biases can be mitigated to enhance the reliability and significance of research results.

Confirmation bias

Confirmation bias is one of the more common cognitive biases we see when undertaking research and performing analysis. It is the tendency to seek out information that confirms a preconceived belief or hypothesis.

This cognitive bias can affect the way research is undertaken, by asking questions that are more likely to elicit a positive or affirmative answer. Questions are often ‘closed’, too (we call these “chocolate fudge brownie sundae” questions - who would ever say no?).

Confirmation bias also has the potential to affect analysis – by choosing to place greater emphasis on evidence that supports a particular hypothesis or idea, especially when the evidence is ambiguous.

To overcome confirmation bias:

  • remain open and consider all evidence equally
  • avoid asking questions that will give you answers you want to hear
  • ask open-ended questions and let your research subjects give their honest opinions
  • enlist multiple researchers to work on your project (a fresh pair of eyes can help pick up themes or results that may otherwise be missed).

Experimenter's bias

Somewhat similar to confirmation bias, experimenter’s bias is a subjective bias towards the result that is expected by the researcher (experimenter). This bias may be introduced into a research project in many ways (for example, through subtle facial expressions, posture or movements). As a result, the subject may subconsciously amend their responses to meet the experimenter’s expectations. Studies show that experimenter’s bias affects research with animals as well as humans.

The impact of experimenter’s bias strongly reiterates how important moderation and interview techniques are for UX research. They are skilled arts. Too often, we see sessions undertaken by researchers poorly versed in experimenter’s bias who accidentally communicate expectations through body language and facial expressions. The perfect “poker face” is required to avoid altering the participant’s behaviour and the research outcome.

Anchoring (or focalism) bias

Anchoring bias is the human tendency to “rely too heavily on the first piece of information offered when making decisions”. In short, we tend to prefer the first thing we are shown when asked to make a decision, such as choosing between a set of options. Anchoring can also prevent us from contrasting or comparing a larger number of objects, as there can be a tendency to fixate on one value or object that we then compare to everything else.

Anchoring particularly comes into effect in UX research when more than one version of a product or concept is shown to research participants, such as in A/B testing. Participants may often be more inclined to prefer the first version they are shown, which can skew the research results significantly if efforts are not made to prevent this bias.

To further complicate the issue, we are often inclined to select a ‘middle’ option, or have a central bias, if there are many options presented to us in any vertical or horizontal order.

Typically, anchoring can be prevented by alternating the order in which participants are shown versions. For example, in an A/B study with 6 participants:

  • participants 1, 3 and 5 are shown version A first
  • while participants 2, 4 and 6 are shown version B first.
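As a minimal sketch, the alternating assignment above can be expressed in a few lines of code (the helper name `assign_orders` is ours, not from the article):

```python
def assign_orders(n_participants):
    """Alternate which version each participant sees first, to
    counterbalance anchoring in an A/B study.
    Odd-numbered participants see A first; even-numbered see B first."""
    orders = {}
    for i in range(1, n_participants + 1):
        orders[i] = ("A", "B") if i % 2 == 1 else ("B", "A")
    return orders

# For 6 participants: 1, 3, 5 -> A first; 2, 4, 6 -> B first
print(assign_orders(6))
```

With larger studies, a fully randomised (rather than strictly alternating) order achieves the same counterbalancing effect while also guarding against order patterns the researcher did not anticipate.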

The central bias can be mitigated by only presenting objects one at a time, rather than along a vertical or horizontal axis.

Clustering illusion and insensitivity to sample size

The clustering illusion is the tendency for a researcher to wrongly interpret inevitable clusters or streaks of data in small sample sizes to be statistically significant. This is caused by the bias towards under-predicting the amount of variability likely to appear in a small sample size.

On the other hand, somewhat related but on the opposite end of the bias spectrum, is insensitivity to sample size. This occurs when we judge the probability of uncovering a sample statistic without taking the sample size into consideration. Variation is more likely in a small sample, but this is often overlooked.

Whilst both these biases typically apply to random sample groups, we can also see these biases across the targeted research we conduct.

Sample sizes are a cause of much contention across the UX research space. We often see companies making key decisions based on research undertaken with a very small sample. Similarly, we see other companies become very precious about sample size, and dismiss results from smaller research projects.

We believe the key to sample size, in a practical sense, comes down to statistical significance. Certainty and confidence in the statistical significance of data, particularly for key decisions such as for those ‘go or no-go’ questions, is crucial.
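To make this concrete, a quick sanity check on small-sample results can be done with an exact binomial test (this is our illustration, not a method from the article; the function name is ours). Suppose 5 of 6 participants prefer version A - is that statistically significant, or plausible under chance?

```python
from math import comb

def binomial_p_value(successes, n, p=0.5):
    """One-sided exact binomial test: P(X >= successes) under the
    null hypothesis that each participant prefers A with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(successes, n + 1))

# 5 of 6 participants prefer version A
p_value = binomial_p_value(5, 6)
print(round(p_value, 3))  # 0.109 -- not significant at the 0.05 level
```

Even a seemingly lopsided 5-to-1 split is quite plausible under pure chance, which is why small-sample results should inform direction rather than settle ‘go or no-go’ questions on their own.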

Selection bias

On the topic of statistical significance, one other bias that is important to consider is selection bias. This is when an error occurs in choosing the type or number of individuals to participate in a study. It is important that selection bias is taken into consideration when analysis is undertaken.

Selection bias exists in a few forms, the most relevant to UX research being:

  • Sampling bias: when a non-random sample is selected, meaning some participant types are more likely to be included than others
  • Time: when a study is ended at a point in time when the results support a desired or expected outcome (related to confirmation bias).

Whilst these biases may not necessarily mean the results and analysis are wrong, it is important that they are recognised during analysis. Particular demographics may hold opinions that do not necessarily represent those of the general population.

And last, but not least…

Bias blind spot

By reading this article, you might think you’ve become well acquainted with your own biases and do not need to correct for them. But this is not true. The bias blind spot describes the consistent finding that people believe themselves to be much less biased than they actually are. According to Pronin and Kugler (2007), “since biases operate unconsciously, these introspections are not informative, but people wrongly treat them as reliable indication that they themselves, unlike other people, are immune to bias.”

Consequently, don’t assume you are any less prone to bias than anyone else. When we evaluate ourselves for bias, we tend to use an introspective method. When we evaluate others for bias, we tend to use more stringent criteria and overt methods that may be more critical. The moral of this story is to always ask a colleague to assess your method and analysis for bias!

About The Author

Alyce Former UX Consultant

As a card-carrying member of Generation Y, Alyce has been immersed in technology ever since she was born. An avid digital user and advocate for good experience, Alyce has a great eye for technology’s changing landscape, and brings a fresh, innovative perspective to research projects.
