In 2009, Bob Bailey and Cari Wolfson published a paper entitled “FirstClick Usability Testing: A new methodology for predicting users’ success on tasks”. They’d analyzed 12 scenario-based user tests and concluded that the first click people make is a strong leading indicator of their ultimate success on a given task.
Their results were so compelling that we got all excited and created Chalkmark, a tool especially for first click usability testing.
It occurred to me recently that we’ve never revisited the original premise for ourselves in any meaningful way. And then one day I realized that, as if by magic, we’re sitting on quite possibly the world’s biggest database of tree test results. I wondered: can we use these results to back up Bob and Cari’s findings (and thus the relevance of Chalkmark)?
Hell yes we can.
So we’ve analyzed tree testing data from millions of responses in Treejack, and we're thrilled (and relieved) to report that it convincingly confirms the findings of the 2009 paper.
What the original study found
Bob and Cari analyzed data from twelve usability studies on websites and products ‘with varying amounts and types of content, a range of subject matter complexity, and distinct user interfaces’.
They found that people were about twice as likely to complete a task successfully if they got their first click right than if they got it wrong:
If the first click was correct, the chances of getting the entire scenario correct were 87%
If the first click was incorrect, the chances of eventually getting the scenario correct were only 46%
What our analysis of tree testing data has found
We analyzed millions of tree testing responses in our database.
We found that people who got the first click correct were almost three times as likely to complete a task successfully:
If the first click was correct, the chances of getting the entire scenario correct were 70%
If the first click was incorrect, the chances of eventually getting the scenario correct were 24%
To give you another perspective on the same data, here's the inverse:
If the first click was correct, the chances of getting the entire scenario incorrect were 30%
If the first click was incorrect, the chances of getting the entire scenario incorrect were 76%
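As a quick sanity check, the "almost three times" claim and the inverse figures follow directly from the quoted conditional rates. A minimal sketch (using the rounded percentages above, not raw data):

```python
# Conditional success rates quoted above (rounded percentages, not raw data)
p_success_given_correct_first = 0.70
p_success_given_wrong_first = 0.24

# People with a correct first click were almost three times as likely to succeed
ratio = p_success_given_correct_first / p_success_given_wrong_first
print(round(ratio, 1))  # roughly 2.9

# The 'inverse' figures are just the complements of the success rates
print(round(1 - p_success_given_correct_first, 2))  # 0.30
print(round(1 - p_success_given_wrong_first, 2))    # 0.76
```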
How Treejack measures first clicks and task success
Bob and Cari proved the usefulness of the methodology by linking two key metrics in scenario-based usability studies: first clicks and task success. Chalkmark doesn't measure task success — it's up to the researcher to define what constitutes 'success' when setting up the study, and then to interpret the results accordingly. Treejack does measure task success, along with first clicks.
In a tree test, participants are asked to complete a task by clicking through a text-only version of a website hierarchy, and then clicking 'I'd find it here' when they've chosen an answer. Each task in a tree test has a pre-determined correct answer — as was the case in Bob and Cari's usability studies — and every click is recorded, so we can see participant paths in detail.
Thus, every single time a person completes an individual Treejack task, we record both their first click and whether they succeed. When we came to test the 'correct first click leads to task success' hypothesis, we could therefore mine data from millions of tasks.
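We won't reproduce our internal pipeline here, but the aggregation itself is straightforward. A minimal sketch, assuming each response record carries two flags — whether the first click was correct and whether the task ultimately succeeded (field layout and sample data are hypothetical):

```python
from collections import Counter

# Each tuple: (first_click_correct, task_succeeded) — hypothetical flags
# recorded for every completed task response.
responses = [
    (True, True), (True, True), (True, False),
    (False, False), (False, True), (False, False),
]

counts = Counter(responses)

def success_rate(first_click_correct):
    """Share of responses with this first-click outcome that succeeded."""
    succeeded = counts[(first_click_correct, True)]
    total = succeeded + counts[(first_click_correct, False)]
    return succeeded / total

print(f"Success given correct first click: {success_rate(True):.0%}")
print(f"Success given wrong first click:   {success_rate(False):.0%}")
```

Running the same tally over the full response set gives the conditional success rates reported above.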
To illustrate this, have a look at the results for one task.
In the overall Task result, you see a score for success and directness, and a breakdown of whether each Success, Fail, or Skip was direct (the participant went straight to an answer) or indirect (they went back up the tree before selecting an answer):
In the pietree for the same task, you can look in more detail at how many people went the wrong way from a label (each label representing one page of your website):
In the First Click tab, you get a percentage breakdown of which label people clicked first to complete the task:
And in the Paths tab, you can view individual participant paths in detail (including first clicks), and can filter the table by direct and indirect success, fails, and skips (this table is only displaying direct success and direct fail paths):
How to get busy with first click testing
This analysis reinforces something we already knew: that first clicks matter. It is worth your time to get that first impression right.
You have plenty of options for measuring the link between first clicks and task success in your scenario-based usability tests. From simply noting where your participants go during observations, to gathering quantitative first click data via online tools, you'll win either way. And if you want to add the latter to your research, Chalkmark can give you first click data on wireframes and landing pages, and Treejack on your information architecture.
To finish, here are a few invaluable insights from other researchers on getting the most from first click testing:
- Jeff Sauro details a useful approach to running a first click test, and shares the findings from a test he ran on 13 people.
- An article on Neoinsight describes three common usability problems that first click testing can solve.
- Gianna LaPin describes a first click test she ran on Netflix, VUDU, and Hulu Plus.