“The art and science of asking questions is the source of all knowledge.”
- Thomas Berger
In 1974, Elizabeth Loftus and John Palmer conducted a simple study to illustrate the impact of wording on responses to a question. The two researchers showed participants film clips of car accidents, then asked them to estimate the speed at which the two vehicles were traveling when the collision occurred.
One of the questions Loftus and Palmer asked was: “About how fast were the cars going when they smashed into each other?” This phrasing elicited higher speed estimates than versions using the verbs ‘collided’, ‘bumped’, ‘contacted’ or ‘hit’. Unsurprisingly, the ‘smashed’ group was also more likely to recall seeing broken glass at the scene, even though none had been present.
Small wording changes can impact your data in a big way. The way you ask a question not only frames how a person responds to it, but it can introduce unintended bias in your findings. If you intend to use the data you collect in meaningful ways — to identify issues, deepen your understanding or make evidence-based decisions — you want to ensure your data is of the highest possible quality. The best way to be confident that you’re collecting quality data and not wasting your time and resources is knowing how to avoid the common mistakes that plague question design.
Regardless of whether you’re adding some pre- or post-survey questions to your Treejack, OptimalSort or Chalkmark survey, or simply aiming to ask your questions on paper, in person, or on your website, there are some basic principles you should follow. But first…
Questionnaire = survey?
‘Questionnaire’ and ‘survey’ are frequently used interchangeably, and it’s tempting to treat them as synonyms. Unless you’re a word purist, chances are people will understand what you’re saying regardless of the term you use. It’s still good to be aware of their differences, however, in order to understand how they relate.
A survey is a type of research design. It’s the process of gathering information for the purpose of measuring something, encompassing design, sampling, data collection and analysis. Surveys involve aggregating your data to reveal patterns and draw conclusions.
A questionnaire is a method of data collection. Traditionally, questionnaires are used to collect information at an individual level, with use cases such as job or loan applications, patient history forms and so on. Think of a questionnaire as an instrument you can use when conducting a wider survey, alongside other methods such as face-to-face interviews.
There are differences involved in collecting survey information by post, email, online, telephone or face to face, and each method comes with its own set of advantages and disadvantages to consider. For now, however, let’s keep things simple, and focus on the very basic principles that will hold true regardless of the method you choose.
Here are some practical tips to help you become a confident question writer.
1. Think clearly about your needs
Clearly define your objectives. Start by asking yourself “What do I really need to learn?”
When planning research, it’s tempting to jump right into writing your questions. However, taking a step back can save you a lot of time and frustration later down the road. Start by thinking about what you want to get out of your questions. Understand your information needs, draft your research questions and review them with your team or stakeholders before proceeding. Once you know what you want to get out of your study, you can narrow your focus and start to think about your objectives in greater detail. Being precise about the data you want to get out of your questions means it will be easier to plan how to organize and filter your findings.
2. Choose your words wisely
Badly worded questions lead to poor quality data. To help you write better questions it’s good to be aware of seemingly obvious, yet common mistakes that can plague question writing. Here are some tips to follow.
Use clear, plain language. Avoid technical descriptions, acronyms and jargon. If necessary, add a definition or some help text around your question to avoid confusion.
Be specific. Avoid ambiguity in what you are asking. The more specific your question is, the more likely people are to understand it in the same way. “Where do you usually shop?” will likely be interpreted differently by each respondent.
Ensure your questions are neutral and unbiased. Bias can be introduced into your questions in many ways:
- Avoid asking double-barrelled questions, e.g., “How satisfied are you with the use and visual feel of our website?”. Instead, stick to asking one question at a time.
- Leading or loaded questions use assumptions and emotional language to elicit particular responses. They (intentionally or unintentionally) bias respondents towards certain answers. For example, the leading “How happy are you with our service?” is better asked neutrally as “How do you feel about our service?”
Set realistic timeframes. Utilizing appropriate timeframes in your questions leads to better estimates and more reliable data from your respondents. When providing timeframes, be sure to keep them reasonable — some behaviors can be asked on a yearly basis (e.g., switching internet providers), while others are easier to think of over the space of a week (e.g., supermarket visits).
It is also important to be realistic about how much people are able to remember over time. If asking about satisfaction with a service in the past year, people are most likely to remember either their most recent, or their worst experiences. Sticking to reasonable recall periods will lead to better quality data.
Don’t assume. The way we experience the world influences our thinking, and it is important to be aware of your own biases to avoid questions that make assumptions, e.g., “How many UX Researchers do you have at your company?” assumes the respondent works at a company that employs UX researchers in the first place.
Don’t play the negatives game. Avoid the use of negatives and double negatives when writing your questions. On a cognitive level, negative questions take more time to comprehend and process. Double negatives include two negative aspects within a question, e.g., “Do you agree or disagree that it is not a good idea to not show up to work on time?” Negatives and double negatives can lead to confusion and contradictory responses.
3. Think about your audience
Who is likely to answer your questions? What are the characteristics of the people you are trying to target? Consider the group you are writing for, and what kind of language and terminology they may be familiar with. Remember that not everyone is a native speaker of your language and no matter how sophisticated your vocabulary might be, plain language is going to lead to a better result.
Context is important and knowing your audience can impact their willingness to contribute to your research. Questions written for a sample of academics will differ in tone from those intended for high school students. Don’t be afraid to give your questions a casual feel if you’re trying to connect to a group that may otherwise be unwilling to provide their answers.
4. Don’t burden your participants
No matter how great your questions are, if they are too long, complex or repetitive, it’s likely your respondents will quickly lose interest. Bored respondents lead not only to poor quality data, but also to higher nonresponse rates. Some subject areas, such as insurance, mortgages, or medical histories, carry a higher respondent burden by nature.
Generally, if it’s not an immediate priority, avoid unnecessary detail. A shorter set of high quality data is more valuable than a whole stack of potentially erroneous data collected via a lengthy questionnaire.
One way to remedy respondent burden is to offer incentives like vouchers, discount codes or competition entries. Giving people a good reason to answer your questions will not only make it easier for you to find willing respondents, but may increase engagement and lead to higher quality data.
5. Consider your response options (and avoid data insanity)
It is important to be pragmatic when choosing your response options to avoid being swarmed with data that’s difficult to handle and analyze.
Open questions invite respondents to elaborate and can help in identifying themes that closed questions may overlook. On the one hand, they can provide a wealth of useful information; on the other, it is important to consider their practicality. If you want to collect 1,000 responses but don’t have the time or resources to review a multitude of varying open-ended data, consider whether it’s worth collecting in the first place. Open-ended questions are most useful when you’re not quite sure what you’re looking for, so unless you’re running an exploratory study on a small group of people, try to limit their use.
Closed questions force respondents to select an existing option from a list. They are quick to fill in, and easy to code and analyze. Closed questions can include tick boxes, scales or single choice radio buttons. When asking closed questions it’s important to ensure the response options you provide are balanced, exclusive (they don’t overlap) and exhaustive (they contain all possible answers), even if this means adding an ‘other — please specify’ or a ‘not applicable’ option. For potentially sensitive questions, it’s important to give your respondents a ‘prefer not to say’ option, as forcing responses may lead to higher dropout rates and poor quality data.
6. Think carefully about order
Question order is important as it can impact the truthfulness of the responses you collect.
The general rule to follow is to start simple with easy, factual questions that are relevant to the objective of your survey. Additionally, it’s good to start with closed questions before introducing open-ended questions that may require more consideration. Once you get the basics out of the way, you can then introduce questions that are more specific, difficult, or abstract. Situate unrelated or demographic questions at the end. Once a rapport has been established your respondents will be more likely to answer these questions without dropping out.
7. If in doubt, test
Pretesting your questions before you go out to collect your data is a great way to identify any immediate issues. In a lot of cases, a simple peer review by a friend or colleague will help identify the things that are likely to cause problems for respondents.
For evaluating your questions more thoroughly, you may want to observe people as they make their way through your survey. This is a good time to see whether respondents are understanding and interpreting your questions in the same way, and will help identify issues with wording and response options. Getting your participants to think aloud is a useful technique for understanding how people are working through your questions.
8. Remember the basics!
Always explain the purpose of your research to your participants and how the information you collect will be used. Provide a statement that guarantees confidentiality and outline who will have access to the information provided.
Above all, remember to thank your participants for their time. We’re all human, and people want to know that their contribution is valuable and appreciated.
Further reading
- The psychology of survey respondents - Our CEO Andrew discusses the different kinds of motivation people have to respond to surveys.
- How to ask about user satisfaction in a survey - An article by Caroline Jarrett on UXMatters talks about how to gauge the satisfaction of your users.
- Keep online surveys short - Jakob Nielsen from Nielsen Norman Group explains how to get high response rates and great results by using shorter surveys.
- Avoiding bias in user testing - Our very own Agony Aunt explains how to avoid bias in your user testing.
- Reconstruction of automobile destruction: An example of the interaction between language and memory - The original study from Loftus and Palmer showing the different questions and responses.