Utilizing card sorts for brand research
Back in 2013, a US-based analytics firm named RJMetrics suffered what could be called a “logo fiasco”. In short, British customers were mistaking its new logo for a popular style of underwear (known in the UK as “Y-fronts”). After the incident, the company’s CEO concluded that “running a business with an international audience means internationally testing all new imagery and terminology”.
I couldn’t agree more: it’s why we at Atlassian sought to include research when redesigning our brand. And instead of just relying on the traditional market research techniques, like focus groups, we tried something a bit different. We adapted a tried-and-true information architecture research method — card sorting — to our approach.
The complexity of studying logos
Logos are graphic marks that help us recognize organizations. They represent products and brands, transferring the intended conceptual meaning of a shape or word to an offering.
In “The primitive power of logos”, legendary graphic designer Michael Bierut points out that logos captivate us because we invest them with meaning. Of course, logos, like the Nike logo, don’t get infused with meaning overnight. Logos gain their recognition and usefulness over time.
While the meaning we try to imbue logos with won’t always be interpreted in the way we intend, as designers and strategists we can hint at, or suggest, ideas that relate to a logo’s purpose.
Take, for instance, the recent Hillary Clinton logo designed by Bierut himself. Bierut explained that he wanted not only to create a flexible lettermark, but also to suggest something pointing forward. Conceptually, it was a powerful metaphor.
While certain metaphors might logically suggest forward movement (and, by proxy, forward thinking or future-mindedness), it is often hard to understand whether or not the rendered shape makes sense to various audiences.
This is the problem: while traditional methods, such as focus groups, can help us learn how people “feel” about a brand, the approach is methodologically limited when it comes to understanding whether a logo embodies its intended meaning. Focus groups are often ineffective, relying mostly on groupthink and the subjective opinions of paid (sometimes professional) research participants.
Focus groups often do not help us understand meaning. More importantly, they infrequently inform designers.
Enter: card sorting with Optimal Workshop
With our logo design work, we wanted to challenge our assumptions about our symbols and gather first impressions to help us iterate in the right direction. And we wanted to do this without throwing “insight gathering” over the fence to a market research firm.
We sought to capture emotional responses to our work — information from outside the building — that would help us better understand our logos’ comprehensibility and coherence. This was particularly important, as we wanted to avoid subjective and baseless “I like it” / “I don’t like it” types of feedback that logo work typically elicits.
The research challenge, though, was to support the fast-moving design team to conduct affordable and scalable rebranding explorations that would include customers and potential customers in what is typically a marketer’s exercise.
We ended up creating a simple, imaginary scenario that utilized a closed card sort with Optimal Workshop. In the instructions, we asked individuals to take a quick look at a logo and imagine they had seen it while flipping through the app store.
Individuals were then asked to place approximately 25 words into two columns: “Words that apply to this logo” and “Words that do not apply to this logo”. Example words included ‘flexible’, ‘rigid’, ‘momentum’, and ‘unstructured’. After that, we asked them to place the top three words that applied to the logo in a third column.
In this section of the test we got a sense of how our logos were perceived, identifying which words were most and least frequently associated with each logo. This wasn’t necessarily about accuracy of word affiliation, but rather a sense check on how our logos might be interpreted in the market.
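The tallying step described above can be sketched in a few lines of Python. Note that the data here is purely illustrative (invented participant responses, not Optimal Workshop’s actual export format or Atlassian’s real results):

```python
from collections import Counter

# Hypothetical closed-card-sort results: for one logo, each inner list
# holds the words a participant placed in the "Words that apply to
# this logo" column. (Illustrative data only.)
responses = [
    ["flexible", "momentum", "modern"],
    ["flexible", "rigid"],
    ["momentum", "flexible", "unstructured"],
]

def word_frequencies(responses):
    """Count how often each word was associated with the logo."""
    counts = Counter()
    for words in responses:
        counts.update(words)
    return counts

freqs = word_frequencies(responses)
ranked = freqs.most_common()  # most frequently associated words first
print(ranked)
```

Running the same tally over the “do not apply” column, and comparing across logo candidates, gives the most/least-associated word lists the team reviewed.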
After the card sort, we asked people to complete a simple follow-up survey. Questions included the following:
- Please tell us more about why you chose the top three words.
- What type of company do you think will use the logo?
- What company does this logo remind you of, if any?
We were able to leverage results from this simple survey to think more critically about our logo system and inform additional research. For instance, we found our new Jira Software logo might elicit stronger associations with the technology industry compared to the old logo. In this case, it seemed that, without brand context, prospective customers were under the impression our old logo signified health and wellness, perhaps due to its juggling, humanoid form. This was an interesting data point for us to dig into further.
Moreover, we were able to collect all the quotes from the open-ended survey questions, analyzing the responses to understand trends and patterns in brand perception. These testimonials gave the design team more confidence that some iterations were “suggesting an idea” better than others, giving them a clear indication as to which design iterations they wanted to focus on and optimize.
In the words of one of the designers I worked with: “(These card sorts) helped me get out of my design brain.”
Three tips for running your own test
We chose card sorting to conduct this study because the method is, in Donna Spencer’s words, a “quick, inexpensive, and reliable method, which serves as an input into the design process.” I trust it highlights one creative way that we can use information architecture research techniques (and tools) to accelerate learning.
In case you ever want to run this type of card sorting for your own conceptual work, here are three tips I’d keep in mind.
- Consider the context. We asked our participants to imagine coming across our logos in an app store, given that’s a realistic situation where future customers would encounter them. It’s also a logo-dense environment, which requires differentiation and uniqueness — something we were looking for in order to stand out in the market. If you’re testing your own brand material, consider whether it’s appropriate to use a context like an app store, or somewhere else. Other physical environments, like billboards, also come to mind.
- Words matter. While we originally used adjectives from Microsoft’s Product Reaction Cards, we ended up reconfiguring the test to include words from our own brand strategy sessions. We chose to include only 25 words following advice from Nielsen Norman Group, which states that if you plan to use a survey format — particularly an online survey format — you should reduce the number of words and ensure you include a variety of positive, negative, and neutral words.
- Beware pseudo-quantiness. It is true that card sorting has a higher degree of uncertainty than more behavior-focused methods of user experience research. Research activities that are qualitative in approach but generate numbers closer to those of quant studies (what Jan Chipchase calls “pseudo-quant”) come with serious limitations. Following his rationale, the main risk with a card sort like this is that it might lull a team into a false sense of security without considering the quality of the data. In our case, we aimed to screen the right participants (via UserTesting.com) and triangulated our data by conducting interviews. It might be useful to consider this type of test as just one of many methods you conduct to learn about your brand.
As Venetia Tay recently tweeted: research should inform designers, not design. The above card sorting method isn’t bulletproof, and it certainly doesn’t mean we landed on the perfect logos. But it did help us get out of the building in an interesting way, informing our design team while including customers and potential customers in our creative process.
Thanks to the Atlassian brand design team (Sara VanSlyke, Leah Pincsak, and Megha Narayan) for their collaboration on the creation of this method. Thanks to the Atlassian research team (namely Leisa Reichelt and Becky White) for feedback on earlier drafts of this post. Hat tip to Optimal Workshop’s Rebecca Klee for her edits and suggestions.