“Dear UX Agony Aunt,
I am working with a client designing an internal web application that helps users select appropriate color formulas to paint cars with. The majority of users work within auto-body shops and paint cars for a living. Testing seems like it must be done at their work location because the software is integrated with very expensive tech devices that detect color.
These users are globally distributed, often not very tech-savvy, and the tech at their work locations doesn't support remote testing. To top that off, many of them make over $125,000 a year, so it can be expensive to offer them incentives large enough for them to justify contributing their time!
As much as I would love to jet-set around the world, I don't have the time or budget. I need to be doing more frequent and lightweight usability testing with them but am seriously struggling with how to do that. I’ve identified a good 50 users in my local area already (150 mile radius), but I’m really starting to worry about tainting my results by only pulling from a small local pool. Can you help me, UX Agony Aunt?”
Rounding up participants to test your website or application is a perennial problem for most UX research, though the picture that you’ve painted certainly sounds particularly tricky!
A couple of months ago we were fortunate to have the master of usability testing, Steve Krug, present his insightful talk “You’re Not Doing Usability Testing? Are You… Nuts?” at Optimal Workshop HQ. He put forward that as designers and researchers we should be regularly carrying out usability testing, but that testing with large groups or targeting specific users is not actually as important as you might think. I definitely recommend having a listen to the presentation, as well as reading his how-to book on conducting fast and effective usability tests — "Rocket Surgery Made Easy" — if you haven't done so already.
I've paraphrased below a few of the key points that he makes. Perhaps these suggestions could help to address some of the challenges that you're facing in your work?
It’s okay to start testing with users who aren't representative users
Specialist or domain knowledge is important, especially when it comes to testing technical tools such as the auto-body color detection app that you've described, but many of the most important usability problems (e.g. issues with navigation, page layout, and visual hierarchy) will be picked up by anybody using the tool.
If you start off testing with non-representative users, it's fine to feed them knowledge that they need to complete a particular task, for example certain terms or processes that they might not be familiar with. Use your judgement as to whether the places that these users trip up or get stuck are due to a lack of specialist knowledge, or whether there's room for improvement in the way the app works. Don't forget that some of your target users may be less experienced than others, so getting an “outsider's” perspective can be especially powerful for checking whether language is clear and processes are as intuitive as possible.
I've even heard of researchers asking non-representative users to play certain personas during usability tests, in order to put the participant in a particular mindset and to give more weight to the test results in the eyes of stakeholders. This may be something that you wish to consider, particularly if your client recognizes the difficulties of recruiting target users for your research on this project.
I know you mentioned that the software is integrated with expensive devices that detect color. If you're not able to get hold of one of these devices for testing purposes, would it be possible to emulate the experience in some way? For example, set up a scenario so the participants follow a hypothetical chain of events that a painter might go through before using the app. You could then have these participants test a prototype of the app interface with any of the necessary data pre-filled.
It’s actually best to test with only a few users at a time
Another recommendation from Steve Krug's presentation and his book is to limit the number of participants in each testing round to just three users. If you try to test with more than three people in each round, you’ll likely only increase the chances of coming across the same issues over and over again. In other words, you’ll start to get diminishing returns for the additional time and effort that you put in. Even with only a handful of users, it's likely that they'll encounter the most critical issues in whatever you're testing during these initial rounds.
In addition to these points, Krug outlines several other reasons in “Rocket Surgery Made Easy” why, after many years of testing, he has settled on three as the ideal number of participants per testing round. He touches on the continuing debate around this number (for example, Nielsen Norman Group’s manifesto that testing with five users will find 85% of the problems), but perhaps most pertinently, he flips the argument around to focus on what you can practically achieve with the insights that you gain from the testing process. Rather than aiming to “uncover most of the problems”, in his view, it’s smarter to “uncover as many problems as you can fix”.
In fact, even with only three users, you'll probably be able to find more problems than you can fix within each round. That's why he also recommends distilling your observations down to the top three problems from each participant and then, from that list, choosing three that you'll focus on fixing. The faster you complete each round of testing and the associated improvements, the more rounds of testing you'll be able to fit in with an increasingly improved product.
It’s smart to carry out the cheapest tests first
This one I've actually borrowed from Erika Hall's helpful book “Just Enough Research”, in which she explains:
"Don't use expensive testing — costly in money or time — to find out things you can find out with cheap tests. Find out everything you can with paper prototypes or quick sketches before you move to a prototype. Find out everything you can in the comfort of your own office before you move into the field. Test with a general audience before you test with specific audiences who take more time and effort to find."
In other words, if you start off testing with users who might be non-representative, but that you can easily access, you'll not only likely gain invaluable findings from them, but you'll be able to conserve your time, budget and resources to test a much better product with your harder-to-reach and potentially expensive target users.
Once you’re ready to test with some real end-users, it does sound as if you will need to offer a relatively generous incentive, or come up with a creative approach (for example, would a sweepstake for a considerably larger prize work?), in order to convince these painters to offer up their time for testing purposes. If you get any pushback around the cost at this point, it certainly pays to remember that anything you spend on testing early will be far less expensive than the cost of making changes much further down the track — and less expensive still than the fixes you'd be forced to make in response to customer complaints!
I’d say that the 50 users in your local area you've already identified would make a pretty good pool to start with. And while I'm no expert on this industry, I'd suggest that getting a range of experience levels and potentially work environments (smaller versus bigger shops, good versus poor internet connectivity) would be more important than an especially wide geographic spread. In order to reach a bit further afield, a potential approach that you could look into is running lightweight testing during industry events or conventions. Our other User Researcher, Ania, mentioned that she had done something similar in a previous role where she ran usability tests with farmers (who would otherwise be very geographically dispersed) during Fieldays, one of New Zealand’s largest agribusiness events.
I hope that these suggestions give you some ideas for getting around the challenges in your work! Perhaps our community has some other ideas for tackling frequent and lightweight usability testing with hard-to-reach users? If you’ve got a good idea, please leave a comment and let us know!