Planning a tree test: Part II

Guest Writer: Dave O'Brien


In the first part of this series on planning a tree test, we looked at how many rounds of testing you should do, as well as which parts of the tree to test and who should participate in your test.

In part two, we’ll discuss when tree testing should be done, how to divvy up responsibilities among your team, and some tips for testing online and in person.

When will we test?

The experts say that we should do usability testing early and often. Tree testing is no different – indeed, it was created to let us test very early (before we even have a website coded) and very often (because it’s both cheap and easy to run a tree test).

In general, we can start testing as soon as we have a structure to test – either a text dump of our existing site’s information architecture (IA), or the new IA ideas we’ve been playing with. We’ll also need time to create “find it” tasks that exercise our structure(s), and time for the overhead of setting up the tree tests.
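Since all we need to begin is a structure and some “find it” tasks, it can help to think of both as simple data. The sketch below is purely illustrative (the tree, task, and function names are hypothetical, not part of any testing tool): it represents an IA as nested labels and checks whether a participant’s final destination counts as a success for a task.

```python
# Illustrative sketch (hypothetical names): representing an IA tree and
# scoring a "find it" task, assuming each task lists its correct destinations.

# The tree as nested dicts; leaves are empty dicts.
tree = {
    "Products": {
        "Laptops": {},
        "Phones": {},
    },
    "Support": {
        "Contact us": {},
        "Returns": {},
    },
}

# Each task pairs a scenario with the path(s) counted as correct answers.
tasks = [
    {
        "prompt": "Your new phone stopped working. Where would you send it back?",
        "correct_paths": [("Support", "Returns")],
    },
]

def path_exists(tree, path):
    """Check that a path of labels actually exists in the tree."""
    node = tree
    for label in path:
        if label not in node:
            return False
        node = node[label]
    return True

def score_task(task, chosen_path):
    """Return True if the participant's final path is a correct destination."""
    return tuple(chosen_path) in {tuple(p) for p in task["correct_paths"]}

# A participant who ended at Support > Returns succeeds on the task:
print(score_task(tasks[0], ["Support", "Returns"]))  # True
```

Writing tasks down in this explicit form (scenario plus correct destinations) is also a useful discipline on paper: it forces us to agree, before testing, on exactly which answers we’ll count as correct.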

Here’s a typical high-level timeline for three rounds of tree testing (testing the existing tree, testing our new trees, then testing our even-better-with-revisions “final” tree):

Time required | Activity | Details
--- | --- | ---
(varies) | Earlier IA work | 1. User research (surveys, contextual inquiry, etc.); 2. Content inventory/audit
1 week | Round 1 | 1. Open card sort; 2. Baseline tree test (existing site)
3 days | Create new trees | Try alternative groupings and terms
1 week | Round 2 | 1. Test new trees against each other; 2. Compare to the existing tree's results; 3. Pick the best tree and revise
1 week | Round 3 | 1. Test revised tree; 2. Revise and finalize based on results

If we’re only planning one or two rounds of testing, we can simply trim this timeline to suit.

Who will do what?

We may be tree testing by ourselves, with a colleague, or with an entire project team at our disposal. Regardless, the following roles naturally emerge, so it’s a good idea to be clear about who’s covering what:

Role | Responsibilities
--- | ---
Sponsor | Pays for the testing and represents our interests among upper management
Project leader | Makes the final decisions on scheduling, goals, etc.
Information architect | Designs the trees and tasks, analyzes the results, and is ultimately responsible for the final site structure
Recruiter | Finds the participants, using customer lists, web ads, or customer panels
Worker bee | Handles the details (writing out cards, entering data into an online tool, etc.)

How will we handle problems?

Anyone who has done user research is familiar with Murphy's Law – the participant doesn’t show up on time, the test equipment fails (it was working this morning!) – and the list goes on.

Testing online

For online testing, the main things to cover are:

  • Piloting our test to shake out mistakes in the tree and the task wording
  • Providing a contact for participants who encounter problems
  • Alerting our support channels that we’re doing a study, in case participants call to check that it’s legitimate

Running a pilot also helps us spot technical problems with the study. By this, we mean issues that are caused by the supporting hardware (computers, networks, etc.) and software (operating systems, browsers, etc.), not by the test content.

Here are a few technical gotchas to be wary of:

  • Spam blockers (for email invitations)
    If we’re emailing invitations, be careful that they are not triggering the spam blockers built into the recipients’ email systems. While we can never be sure what will trigger a spam block, the fact that we’re asking our invitees to click a link in the email (and are offering some kind of reward to participate) makes this something to check with popular email systems such as Gmail, Yahoo Mail, Outlook.com, and so on. We may want to set up accounts with the most popular email providers just so we can send to our own addresses as a spam test.
  • Computers versus phones/tablets
    Most people may be doing the study on a conventional computer, but some will try doing it on a tablet or even on a smartphone.
    Make sure that the testing tool works on these devices. If it doesn’t support a particular platform, we should mention that in the invitation (or web-ad explanation page).
  • Old (or odd) web browsers
    Most testing tools will work in any up-to-date browser that supports web standards (Chrome, Firefox, Internet Explorer, Edge, etc.).
    Old versions of browsers (Internet Explorer in particular) are still commonly found in large organizations, so if we’re targeting these users, we need to check with the tool vendor to see if they support them (or, failing that, if the tool warns the user to try a newer browser).
    Mobile browsers seem to vary more than their desktop counterparts when it comes to handling online tests, so we try the test on a handful of the most popular browsers on iOS and Android.
  • Firewalls
    Most home firewalls (usually built into routers or anti-virus software) are unlikely to interfere with a tree test, but some corporate firewalls might. Certain large organizations have very strict firewalls that have (in our experience) played merry hell with our studies.
    If we're running a study that targets users in specific large organizations, and we get reports from those users that they can’t do the study, we check with the tool vendor to see if they can help solve the problem. If they can’t (and these problems can be hard to pinpoint), we may need to ask users in that organization to try the study from a different location (e.g. from home).

Testing in person

For these sessions, the main recruiting challenge is getting the right participant in the right room at the right time. Mostly that’s a matter of good recruiting, though life does happen – traffic is heavy, kids get sick, etc.

Here are a few tips that should help:

  • When we book someone for a session, we make it clear to them that it's a one-on-one study. People are less likely to no-show for these than they are for group sessions.
  • We tell them how much time we will need (20-30 minutes for the tree test and subsequent discussion, more if we’re doing additional activities with them).
  • We tell them that they’ll be reading text (from cards or a computer screen), so they should bring reading glasses if they need them.
  • We make sure we're clear about the time and place, of course, and we send a follow-up email with this info, directions and where to park, and our contact number.
  • We get their mobile number so we can contact them if something changes on short notice.
  • We contact them the day before the study to remind them of their upcoming participation. If they can't make it, we may be able to reschedule.
  • When they arrive, we make sure they know where to go. Instructions in the email are good, but signage (or meeting them in the lobby) is better.
  • If they haven't arrived by 5 minutes after the session's start time, we call them on their mobile phone to see if they're still coming and if they need help finding the venue.
  • If we are visiting them, we find out where to park, and we don’t go alone (for both our security and theirs). Oh, and we take slippers (in case we need to remove our shoes). 

With all this in mind, you should now be well-prepared to hit the ground and start tree testing.

Want to learn more about tree testing?

If you'd like to understand more about tree testing, including analyzing your results, revising and retesting, and lots more, check out my community wiki — Tree testing for websites.

Guest Writer: Dave O'Brien
  • Dave has been researching, designing, and testing user interfaces for 20 years in Toronto, San Diego, and Wellington, most recently with Optimal Experience. He is also the creator of Treejack, Optimal Workshop's tree-testing app, author of an upcoming book on tree testing, and dabbler in Android apps.
