"Hi UX Agony Aunt,
I was wondering if there are some best practices you stick to when creating or sending out different UX research studies (e.g. card sorts, Treejack studies, etc.)?"
Indeed I do! Over the years I’ve learned a lot about creating remote research studies and engaging participants. That experience has taught me what works, what doesn’t, and what leaves me eagerly refreshing my results screen in anticipation of participant responses and getting absolute zip. Here are my top tips for remote research study creation and launch success!
Creating remote research studies
Use screener questions and post-study questions wisely
Screener questions are really useful for filtering out participants who don’t fit the criteria you’re looking for, but you can’t exactly stop people from being less than truthful in their responses. Now, I’m not saying all participants lie on the screener so they can get to the activity (and potentially claim an incentive), but I am saying it’s something you can’t control. To help manage this, I like to use post-study questions to provide additional context and structure to the research.
Depending on the study, I might ask questions whose answers confirm or exclude participants from a specific group. For example, if I’m doing research on people who live in a particular town or area, I’ll include a location-based question after the study. Any participant who says they live somewhere else gets excluded via that handy toggle option in the results section. Post-study questions are also great for capturing additional ideas and feedback after participants complete the activity, something remote research otherwise limits your capacity to do: you’re not there with them, so you can’t just ask. Post-study questions can really help bridge this gap. Use no more than five post-study questions at a time, and consider not making them compulsory.
Do a practice run
No matter how careful I am, I always miss something! A typo, a card with a label in the wrong case, forgetting to upload a new version of an information architecture after a change was made: silly mistakes that we all make. By launching a practice version of your study and sharing it with your team or client, you can stop those errors dead in their tracks. It’s also a great way to get feedback from the team on your work before the real deal goes live. If you find an error, all you have to do is duplicate the study, fix the error and then launch. Just keep an eye on the naming conventions used for your studies to prevent the practice version and the final version from getting mixed up!
Now is also a great time to run a few moderated face-to-face tests with users before launching the final version of your remote study. You’ll gain some great qualitative insights that will complement the quantitative data your remote study will gather and it will also help you spot any labelling or comprehension issues early on! And if your remote research activity happens to be an OptimalSort study, the results from your moderated card sorts can be scanned into the tool using our super handy printed cards and a common barcode scanner! This means you get the best of both worlds — insights gathered face to face with users and powerful data visualization tools for interpreting and communicating your findings!
Sending out remote research studies
Manage expectations about how long the study will be open for
Something that has come back to bite me more than once is failing to clearly explain when the study will close. Understandably, participants can be left feeling pretty annoyed when they commit to completing a study only to find it’s no longer available. There does come a point when you need to shut the study down to accurately report on quantitative data, and you’re not going to be able to prevent every instance of this, but providing that information upfront will go a long way.
Provide contact details and be open to questions
You may think you’re setting yourself up to be bombarded with emails, but I’ve found that isn’t necessarily the case. I typically hear from around one to three participants per study. Sometimes they just want to tell me they completed it and potentially provide additional information, and sometimes they have a question about the project itself. I’ve also found that sometimes they have something even more interesting to share, such as the contact details of someone I may benefit from connecting with, or something else entirely! You never know what surprises they have up their sleeves, and it’s important to be open to them. Providing an email address or social media contact details could open up a world of possibilities.
Don’t forget to include the link!
It might seem really obvious, but I can’t tell you how many emails I’ve received (and have been guilty of sending out) that are missing the damn link to the study. It happens! You’re so focused on getting the delivery right that it becomes really easy to miss that final yet crucial piece of information. To avoid this irritating mishap, I always complete a checklist before hitting send:
- Have I checked my spelling and grammar?
- Have I replaced all the template placeholder content with the correct information?
- Have I mentioned when the study will close?
- Have I included contact details?
- Have I launched my study and received confirmation that it is live?
- Have I included the link to the study in my communications to participants?
- Does the link work? (yep, I’ve broken it before)
General tips for both creating and sending out remote research studies
Know your audience
First and foremost, before you create or disseminate a remote research study, you need to understand who it’s going to and how they best receive this type of content. Tweeting it out when none of your followers are in your user group may not be the best approach. Do a quick brainstorm about the best way to reach them. For example, if your users are internal staff, there might be an internal communications channel, such as an all-staff newsletter, intranet or social media site, through which you can share the link and approach content.
Keep it brief
And by that I’m talking about both the engagement mechanism and the study itself. I learned this one the hard way. Time is everything, and no matter your intentions, no one wants to spend more of it than they have to — even more so in situations where you’re unable to provide incentives (yep, I’ve been there). As a rule, I stick to no more than 10 questions in a remote research study, and for card sorts I’ll never include more than 60 cards. Anything more than that will see a spike in abandonment rates and only serve to annoy and frustrate your participants. You need to balance your need for insights against their time constraints.
As for the accompanying approach content, short and snappy equals happy! If it’s a Tweet, congratulations, half the battle is already won. In the case of an email, website, other social media post, newsletter, carrier pigeon, etc., keep your approach spiel to no more than a paragraph. Use an audience-appropriate tone and stick to the basics: a high-level sentence on what you’re doing, roughly how long the study will take participants to complete, details of any incentives on offer, and of course don’t forget to thank them.
Set clear instructions
The default instructions in Optimal Workshop’s suite of tools are really well designed, and I’ve learned to borrow from them for my approach content when sending the link out. There’s no need to reinvent the wheel; they usually just need a slight tweak to suit the specific study. This also gives participants a consistent experience and minimizes confusion, allowing them to focus on sharing those valuable insights!
Create a template
When you’re on to something that works, turn it into a template! Every time I create a study or send one out, I save it for future use. It still needs minor tweaks each time, but those tweaks help me iterate on the template.
What are your top tips for creating and sending out remote user research studies? Comment below!