From time to time I hear people say that they prefer online card sorting to offline card sorting, or vice versa. I think they complement each other (and that you should do both)!
Let's start with a quick rundown of the major differences in outcomes between moderated and unmoderated card sorting.
Remote & Unmoderated Card Sorting (Online):
- Unlimited scale. You can have as many participants as required to get the answer you need.
- Much closer to "fire & forget". Set up a study, fire it out to potential participants, enjoy the afternoon in the sun.
- Relatively cheap. Compared to the cost of having a facilitator, note taker, clients on site, reception, coffees, compensation... remote testing is clearly cheaper to conduct.
- It can be difficult to know why things happen. Qualitative insight is much harder to come by because participants are not facilitated, moderated, or steered, and are often not recorded. You don't get to hear them thinking out loud or discussing their decisions.
- Great for gathering quantitative results. If you have a hunch, whether your own or one formed during qualitative testing, then remote unmoderated testing is a great way to back it up with some numbers.
In-Person & Moderated Card Sorting (Offline):
- Limited scale. You can only bring in as many participants as you can afford in terms of time and budget.
- Relatively heavy investment per participant. Each participant has associated costs and creates work for you. (I'm not saying it isn't worthwhile, it generally is; I'm just pointing out the differences.)
- Great for gathering qualitative results. This is where you get insight into how people feel about what they're doing or saying in the study.
- It is usually too expensive to get quantitative results from moderated testing. Yes, you will undoubtedly uncover most of the problems and convince yourself that something must be done, but many situations call for more than that.
So what should you do?
I recommend conducting 1 to 5 offline, in-person, moderated card sorts to get a good first-hand understanding of how other people would organise your content and their rationale for doing so. Then conduct an online study using OptimalSort to put some numbers behind the hunches. By the way, I don't mean to belittle professional observations by calling them hunches; I'm just making the point that however duly convinced you might be, it is usually not unreasonable for a stakeholder to want more data when a change will impact thousands or potentially millions of people (or dollars, for that matter).
If you are fortunate enough to get crystal-clear direction from your qualitative research and can propose an immediate way forward, then you could skip the online card sort and move directly to validating your proposed new information architecture with tree testing. Either way, you should validate your chosen labels and content hierarchy using Treejack after a card sort.
We believe there is so much value in both qualitative and quantitative research techniques that we want you to do both. To help with this, we have recently made an important change to OptimalSort: you can now print your OptimalSort cards (from a generated PDF) for moderated, in-person, paper-based card sorting, and easily get the results back into OptimalSort for analysis alongside your quantitative research data. Hooray!
Step 1: Print the cards
Step 2: Sort the cards
Step 3: Scan the groups back into OptimalSort
I'd love to know what you think of this new feature and whether it will be useful in your own card sorts. It certainly beats trying to moderate card sorts around a screen, or retrospectively entering participants' results by doing multiple sorts yourself (you know who you are!).