Picking the right tool for the job

Life as a UX professional is busy. One day you're trawling through pages of content, the next you're sketching wireframes, then you're running user tests, and on top of it all you're presenting results to clients.

We built our tools to help our consultants work more effectively, and we're often asked when the best time is to use which tool. So here are our answers. In general, it pays to keep in mind that our tools can be used for generative purposes or evaluative purposes.

We have categorised our recommendations by the kind of work you may be doing:

  • Information architecture (IA) overhaul
  • Intranet
  • Navigation
  • Redesigns
  • A/B split testing

Information architecture overhaul

Primary recommendation/s: Start with OptimalSort then run a Treejack study. In some cases, the order may be reversed.

When questions loom about how to best organise and structure content, we normally use OptimalSort as a tool to understand how people think about information. Sometimes the process of running the card sort can be as useful as the results themselves.

More recently we have also been using Treejack as a way to challenge whether the existing IA (if there is one) really needs to change. A quick Treejack survey will provide empirical evidence on whether people are truly struggling to find information. If participants perform well, the issue may not be a bad IA but perhaps bad user interface or interaction design. Go easy on the IA. It's not always the culprit.

OptimalSort gives you a ground-level understanding of how people think about the information that you're responsible for categorising. If you already have candidate structures (or one has been decided for you), your best bet is to skip OptimalSort and do several rounds of tree testing with Treejack instead.


Intranet

Primary recommendation/s: Treejack and Chalkmark

Intranets, more than most websites, stand to benefit tremendously from better information architecture. However, it is common to be faced with numerous constraints - a difficult content management system (or no content management system, or too many!), unchangeable mandates from certain business units, content owners writing for print... and the list goes on.

Obviously, what we would recommend depends on what you're working on within the intranet. Refer to the relevant recommendation here - eg, Navigation or IA overhaul. Typically, however, the best bang for buck comes from involving as many people as possible. With this objective, Treejack and Chalkmark are great because it is easy for people to take part, the analysis is simple and we give you a good head start with reporting the results.

Pick from any of these suggestions:

  • Run a Treejack survey on the current intranet information architecture as a baseline for performance and link that to productivity. Eg, if you score 60% overall, what does the 40% failure cost the business in lost time?
  • Make the change that business unit X really wants, but before you deploy it, run a Chalkmark survey to see if anyone would click in the right place. Use the results to constructively agree on a new solution.
  • Run an "open" Chalkmark survey to see what type of content people are attracted to. Mock up your most popular landing pages with proposed content and set open-ended tasks for your participants. For example, "Click on the first thing you're likely to read on this page" or "Imagine you've just started at ACME Inc. What grabs your attention?"
  • Run a card sort with OptimalSort but invite key stakeholders to take part as a group. Book a meeting room and get key people from different business units to try to agree on how to organise the cards. You could either do it with real cards, or load it into OptimalSort and get them to do it on screen (this way they have to fight for the mouse!).
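The first suggestion above turns a Treejack baseline score into a business cost. Here is a minimal sketch of that back-of-envelope calculation; every number (staff count, lookups per day, minutes lost, hourly cost) is an assumption you would replace with figures from your own organisation, not real data.

```python
# Hypothetical estimate of what intranet findability failures cost,
# based on a Treejack baseline success score. All inputs below are
# illustrative assumptions, not measured values.

def annual_cost_of_failure(success_rate, staff, lookups_per_day,
                           minutes_lost_per_failure, hourly_cost,
                           working_days=230):
    """Estimate the yearly cost of failed intranet lookups."""
    failure_rate = 1.0 - success_rate
    failed_lookups = staff * lookups_per_day * working_days * failure_rate
    hours_lost = failed_lookups * minutes_lost_per_failure / 60.0
    return hours_lost * hourly_cost

# Example: 60% Treejack success, 500 staff, 4 lookups a day,
# 3 minutes wasted per failure, $40/hour fully loaded staff cost.
cost = annual_cost_of_failure(0.60, 500, 4, 3, 40)
print(f"Estimated annual cost of the 40% failure rate: ${cost:,.0f}")
```

Even rough inputs like these tend to produce a number large enough to get the attention of whoever holds the budget.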


Navigation

Primary recommendation/s: Chalkmark

If you have settled on the labels for your navigation, you should test the effectiveness of your proposed design. In our experience, visual design and interaction design can have a significant impact on a user's ability to find information. It pays to know whether it's the user interface or the information architecture (or in this case the label) that is responsible for poor findability.

You can set up several tests:

  • Test navigation in the full context of the page. Where possible, it's best to have the full context of the page available to a participant. Create a standard set of "find it" tasks and see if people would use the navigation, and if they do, whether they're using the right labels. Run a small survey, make one or two changes then run another. That way you know what is affecting success and you're not dealing with too many variables.
  • Test navigation by itself. Occasionally it's useful to test just the navigation itself without the distractions of other elements on the page. Take a screenshot of the navigation and set the same "find it" type tasks. Results will clearly show whether people are clicking where you expect them to. You can always make it easier or harder by having the navigation expanded in the right areas in the screenshot you're using.
  • Test interaction. To test a workflow simply set multiple tasks in Chalkmark. For example, if I want to know if users are getting to the right pages to get customer support, I would set the same task three times but with different screenshots. Screenshot #1 would be the top level navigation. Screenshot #2 would be the page where they should have clicked. Screenshot #3 would be where they should have clicked in Screenshot #2.
  • Test navigation variants. Create two separate surveys, each with the same tasks. In survey #1, use design version 1 and in survey #2, use design version 2. Use the timing information and results from Chalkmark to conclude which version works better.


Redesigns

Primary recommendation/s: All three tools, in the following order: OptimalSort, Treejack, Chalkmark.

Redesigns of information-intensive websites typically involve an information overhaul of some sort. In this scenario, we typically use all three tools in small but frequent iterations. The order can vary depending on which stage of the project you are in. Here is how they are normally used:

  • Definition phase: Use OptimalSort and in-person card sorting combined to get good user research in the context of the card sorting exercise. Involving stakeholders in the card sort is incredibly valuable. Consider using screen-sharing tools combined with OptimalSort for remote participants. We recommend you sit and watch at least three people do the card sort and get their thoughts before you send the card sort far and wide. You could also use Chalkmark on the existing website and moderate the sessions. If you are short on participants, combine the card sort with the Chalkmark survey. If you are moderating, you can always ask participants to tell you what their goals may be for a given page and see where they would click. Many of these tests will help shape and define the user requirements. Create a baseline score for the current user experience by running Treejack and Chalkmark surveys. This will be invaluable once the website has been deployed, because the same surveys can be run and compared to this baseline score. To do this, simply create a set of standard user tasks that you expect will remain constant for some time to come. Load these tasks into Treejack and Chalkmark and run the test with at least 30 people. Compose a success score for the overall survey(s) and for individual tasks.
  • Design phase: Test, refine, test, refine, test, refine. We tend to find that getting out of the abstract as soon as possible and refining through rapid iterations of testing works remarkably well. Don't over-engineer your IA. You don't need to get it right first time. The time spent trying to get it right is better spent testing and refining it based on real feedback. Pull together your IA, put it into Treejack and run a few people through it. Create a new variation of your IA and do it again. Tweak a label, try a structure that is broader or deeper. This process will give you much greater confidence and iron out bad choices much earlier on. If your designers have already started wireframing the screens, test them in Chalkmark. Doing so forces them to think about the key tasks that users will want to accomplish on any given page - that in itself is invaluable. Upload a screenshot of your paper or wireframe mockup. Don't wait till your design gets to a high-fidelity stage! Test, refine, test, refine! If you have not done any split or A/B testing, do it. It's simple and can save weeks of meetings that try to decide which option is better. See the A/B section below.
  • Develop phase: Test the prototypes while they're being built. Here, you would benefit from good old-fashioned user testing. Again, make the most of remote screen-sharing tools like WebEx, GoToMeeting or Adobe Connect and get regular feedback. You may also choose to focus on areas of the website that you have concerns over. Get some data to facilitate a constructive conversation. We have seen numerous Chalkmark surveys created to accomplish this.
  • Deploy phase: Run the same Treejack and Chalkmark surveys described in the Define phase. Use the same set of standard tasks but with the new IA and web pages. Hopefully, your results will show a considerable improvement! If it hasn't already been defined, set measurable and objective criteria to define the quality of user experience of the new website. There are numerous ways to do this with plenty of online tools that can help. We use a simple online (and free) tool called 4Q. This has its drawbacks as it cannot be customised in any way and the experience is less than ideal - but it provides invaluable raw data on the experience over time.
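The Definition and Deploy phases above hinge on composing a success score you can re-run after launch. Below is a minimal sketch of that comparison, per task and overall; the task names and success counts are invented examples, not real survey results.

```python
# Hypothetical baseline vs post-launch comparison of Treejack/Chalkmark
# task results. Data shape: {task_name: (successes, participants)}.
# All figures are made-up examples.

def task_scores(results):
    """Per-task success rates."""
    return {task: s / n for task, (s, n) in results.items()}

def overall_score(results):
    """Overall success rate pooled across all task attempts."""
    successes = sum(s for s, _ in results.values())
    attempts = sum(n for _, n in results.values())
    return successes / attempts

baseline = {"Find leave policy": (18, 30), "Find IT helpdesk": (24, 30)}
post_launch = {"Find leave policy": (26, 30), "Find IT helpdesk": (27, 30)}

for task in baseline:
    delta = task_scores(post_launch)[task] - task_scores(baseline)[task]
    print(f"{task}: {delta:+.0%} change")
print(f"Overall: {overall_score(baseline):.0%} -> {overall_score(post_launch):.0%}")
```

Keeping the task wording identical between the two runs is what makes this comparison meaningful.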

A/B split testing

Primary recommendation/s: Chalkmark or Treejack depending on what you want to test. Chalkmark for user interface variations and Treejack for information architecture variations.

A/B testing, or split testing, is most commonly done with websites that are live on the internet. It's a simple but powerful concept: visitor 1 comes to your website and sees Homepage-Design-A.html; visitor 2 comes to your website and sees Homepage-Design-B.html. Of course it can be any page, not just your home page.

While there are numerous tools that can help you do A/B testing once you have a live website, there are few that help you do A/B testing of yet-to-be-built prototypes.

This is where Chalkmark and Treejack excel. Before you even think about the user interface or interaction model, you can be testing your information architecture designs. Customers tell us about endless meetings where person X, Y or Z advocates for a particular design or label. The same applies to user interfaces, or even what colour something should be! It can be incredibly hard to reach consensus quickly in these situations. Enter A/B testing.

To set up an A/B test in Chalkmark or Treejack, simply create two surveys. Survey 1 uses design A (be that a home page interface design or an information architecture design); survey 2 uses design B. Send participant 1 to survey 1 and participant 2 to survey 2. Tasks need to be worded identically for both surveys. As long as you have equal participation across both surveys, you can begin comparing results.
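Once both surveys have run, you need a way to decide whether the difference in success rates is real or just noise. A standard two-proportion z-test does the job; it's ordinary statistics, not a feature of either tool, and the counts below are made-up examples.

```python
import math

# Rough sketch of comparing two survey variants (A vs B) using a
# two-proportion z-test. Success counts here are invented examples.

def compare_variants(success_a, n_a, success_b, n_b):
    """Return (rate_a, rate_b, z) for a two-proportion z-test."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return p_a, p_b, z

# Example: design A succeeded on 22 of 30 tasks, design B on 15 of 30.
rate_a, rate_b, z = compare_variants(22, 30, 15, 30)
print(f"A: {rate_a:.0%}, B: {rate_b:.0%}, z = {z:.2f}")
# |z| > 1.96 would suggest a real difference at roughly the 95% level.
```

With samples as small as 30 per variant, only fairly large differences will clear that bar, which is another argument for keeping tasks identical and recruiting evenly across both surveys.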

Published on Dec 16, 2009
  • Sam Ng
  • Sam Ng is one of Optimal Workshop's founders.
