A Matter of Life and Death: Finding content on Manchester United’s website
“Some people believe football is a matter of life and death. I am very disappointed with that attitude. I can assure you it is much, much more important than that.”
I work for a company stacked to the rafters with Manchester United fans, so starting a blog with a quote from a Liverpool legend already puts me on dangerous ground. Following this with a piece criticising Manchester United’s site is quite possibly a step too far. I have done the first, however. And I am about to do the second. Let me explain why.
On the first point, I live in Birmingham and am, as such, very much a neutral player in the Manchester/Liverpool rivalry. Add to this my lack of interest in football, and the fact that I like to goad my colleagues, and you’ll see point one is really no concern. On the second point, I have Treejack to thank: my colleague Hannah Pass and I ran a tree test on the site, and it did not shine.
On visiting manutd.com for the first time
My first surprise on visiting the site of possibly the best-known football club in the world was that it is not nearly as slick or well designed as I had imagined. The home page is cluttered, and the typography is poor and over-reliant on capitalisation. The navigation is cramped and small, and on top of that the visual hierarchy is weak: it’s difficult to tell which information is the most important.
My impressions didn’t improve much on digging further into the site. On the page below, the navigation on the left uses different titles from the navigational containers on the right, and the duplicated labels on the containers — ‘First Team’, ‘Under 21s’, ‘Legends’ — do not link to the same pages. None of these issues were directly pertinent to my study, since I only planned to test the findability of the content. Nevertheless, the poor design gave me an early indication of what I might find.
How I structured the tree test
The two main parts of structuring a tree test are building the tree from the website sitemap, and writing relevant and clear tasks for participants to complete. Getting these two things right is vital to collecting valid and useful quantitative data.
Building the tree
As the design issues mentioned above might have led you to guess, working out and extracting the Manchester United tree was not a particularly straightforward task. We had to apply a certain amount of judgement in order to interpret the often container-led design hierarchically. As an example, the ‘Football News’ section looks like this:
…which is above a ‘Previous Stories’ section that looks like this:
…and a ‘Football News Archive’ section that looks like this:
How we chose to represent the tree
Here’s an example of how we represented the information architecture of the website.
For the ‘Football News’ section, we removed the category ‘Previous Stories’, because testing this would only make sense on the day of launch. And we removed the Month category from the ‘Football News Archive’ section.
(The stories have of course changed: news is something of a moving target.)
The whole process involved a considerable amount of cutting, pasting, and reformatting — but we got there in the end (thank you Hannah Pass).
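To make the ‘cutting, pasting, and reformatting’ concrete, here’s a minimal sketch of how a container-led layout can be flattened into the kind of indented-text tree that tree-testing tools typically import. The labels echo the sections above, but the nesting and the tab-indented output format are assumptions for illustration — this is not the site’s real sitemap.

```python
# Hypothetical sketch: flatten part of a sitemap into indented text.
# The labels echo the sections discussed above; the structure itself
# is illustrative, not Manchester United's actual sitemap.
tree = {
    "News and Features": {
        "Football News": {},          # 'Previous Stories' removed
        "Football News Archive": {},  # 'Month' categories removed
        "Club News": {},
    },
}

def to_indented(node, depth=0):
    """Render a nested dict as tab-indented lines, one label per line."""
    lines = []
    for label, children in node.items():
        lines.append("\t" * depth + label)
        lines.extend(to_indented(children, depth + 1))
    return lines

print("\n".join(to_indented(tree)))
```

Even a toy script like this makes the judgement calls visible: every container on the page has to be declared either a branch, a leaf, or (as with ‘Previous Stories’) something to leave out of the test entirely.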
Writing the tasks
We needed to come up with tasks that represented what real Manchester United fans would regularly use the website for. Since both Hannah and I are deeply uninterested in both the game and the team, we needed some help. Thankfully, our office is packed with fans, so I chose a few and ran my ideas for tasks past them. I quickly heard that:
‘There’s no such thing as a gift shop. It’s not a stately home, it’s called the Club Shop.’
‘The UEFA Super Cup is a bit obscure.’
I was also able to confirm my suspicion (you can’t live in England and not pick some of this up) that fans would probably not look on their club site to work out their position in the Premiership. They would already know, and if they didn’t, they’d look on the BBC Sport site.
We wrote tasks like ‘You have heard rumours about an exciting new signing. Where would you look for confirmation of the rumours?’ and ‘You’re planning to watch a Manchester United game and need to drive to the stadium. Where would you look to find out where the parking is located at Old Trafford?’
The overall results, and three key findings
I’ll resist the temptation to draw a football analogy here (mostly because I don’t know any) but suffice to say: manutd.com did not score well at all. Most of the time, users could not find what they were looking for and had to backtrack up the tree in their search.
So, what went wrong? We could spend hours digging into the data we gathered (and still might), but the beauty of Treejack is that the most obvious answers shown in the visualisations are often the biggest problems. So here are the three key issues we consider a great starting point for anyone planning to redesign the website:
- Misleading labels
- Confusingly similar labels
- Lack of a ‘scent’
Misleading labels mislead users
The first issue we’ll confront is the existence of misleading language. It may not be the most difficult issue to solve, but its significance is often underestimated in web design.
Here’s an example. We can see instantly in the results of Task 4 that the information architecture is causing confusion. Only 7% of participants found the right answer without having to go back up the tree, and only 18% found the correct answer at all. Meanwhile, 60% of participants went directly to an answer, which suggests far more people thought they had gone the right way than actually had.
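For anyone new to tree testing, percentages like these come from simple arithmetic over the participant records. Here’s a sketch using invented data (not our study’s), where each participant is reduced to a final answer plus a flag for whether they backtracked up the tree:

```python
# Invented example records: (final answer chosen, backtracked?).
# Not our study's data -- just enough to show the arithmetic.
results = [
    ("Shopping", False),
    ("Megastore", True),   # correct, but only after backtracking
    ("Megastore", False),  # correct first time: a 'direct' success
    ("Shopping", True),
]
correct = {"Megastore"}

overall = sum(answer in correct for answer, _ in results) / len(results)
direct = sum(answer in correct and not back
             for answer, back in results) / len(results)

print(f"{overall:.0%} overall success, {direct:.0%} direct success")
# -> 50% overall success, 25% direct success
```

The gap between the two numbers is the interesting part: a large ‘overall minus direct’ gap, like the 18% versus 7% above, tells you people eventually stumbled onto the answer rather than being led to it.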
Now, any user looking for a link to the club shop is likely to focus on the words ‘shop’ or ‘shopping’. We can see the label ‘Shopping’ on the homepage, so unsurprisingly, this page was a huge draw. You can see this easily in the pietree. The yellow circle indicates the percentage of participants that nominated ‘Shopping’ as the correct answer, while the green line shows the actual correct path:
Hovering over the yellow circle brings up the data for this incorrect answer:
61 of the 89 participants who attempted this task chose this answer. Wow. No wonder my first thought on seeing this result was ‘Wait on…am I wrong?’ I assumed this must be a correct answer that I had missed when setting up the tasks. If so, I could go back to the study setup, add this node as a correct answer, and radically alter the performance of the task.
But I didn’t miss it. ‘Shopping’ is a link to the online store. You cannot find the opening times on that page. If you’re looking for the club shop, you need ‘Megastore’.
It’s a fairly big design problem, but relatively easy to fix. Making it clear that users could shop either online or in the physical store would help a lot. And adding a link to the ‘Megastore’ in the online shopping site could also help.
Confusingly similar labels make users scatter
Fans of Treejack will have come across the rather dramatic-sounding concept of ‘evil attractors’ — areas of the tree that incorrectly attract participants, regardless of the task.
While I didn’t find any solid candidates for the title in this tree, I did see a related behaviour. For some tasks, participants seemed to ‘scatter’ across a range of tree branches, all of which looked like possible answers. The branch names were simply too similar to allow users to distinguish between them.
For example, the following are all children of the ‘News and Features’ node:
- Football news
- United today
- Podcast latest
- Club news
- Exclusive interviews
Each of these labels could conceivably contain a recent news story. What, for example, is the difference between ‘Football news’ and ‘Club news’? So when we asked participants to look for confirmation of rumours of a new club signing, it’s unsurprising that participants visited all of these labels, which we can see clearly on the pietree:
This might be a harder issue to solve than the problem of one or two misleading labels. But focusing on cleaning up this type of information architecture could conceivably make a huge difference to the usability of a site.
Problems with information ‘scent’
Finally, and most consistently, the naming conventions across the site seem to lack ‘scent’. Information foraging theory, developed in the early 1990s by Peter Pirolli and Stuart Card from PARC, posits that when searching for information, we behave in ways analogous to animals hunting for food. In this analogy, we look for clues that tell us we are likely to find what we are looking for. These clues are information ‘scent’.
Problems with scent occurred on several of our tasks. And conversely, the few successful tasks of the study had a strong scent. Let’s have a look at a couple of examples.
Example of a strong scent
94% of participants got this answer correct, and 74% of participants got it correct without backtracking up the tree. The correct answers (shown underneath the task) are very easy to ‘sniff out’ — users focus on the words ‘ticket’ and ‘season’, and following that scent leads them to the correct pages.
Example of a weak scent
The scent here is pretty clear: participants will probably be seeking ‘parking’, maybe ‘how to get here’, or perhaps ‘the stadium’. But have a look at the correct answers (above the pie chart).
We can see that although the word ‘stadium’ does appear, parking has little to do with a tour or a museum. Car parks, quite clearly, have very little to do with tickets, and even less to do with hospitality. As you’d expect, the pietree shows confusion, backtracking, and some desperation (three people chose ‘Seating Plan’ as the correct answer):
There isn’t space in a short blog post to describe in detail how this navigational structure might be fixed. We can cover that another time (or feel free to use the comments to recommend changes based on the data). But I hope the examples above give you some hints for addressing similar issues in your own structures. Ambiguous naming conventions (or, in the case of ‘Shopping’, simply barmy ones) will confuse users every time. And you disregard scent — the words and phrases users are focussed on and scanning for — at your peril.
I do suspect the actual website performs better than this study implies — visual cues on the site will resolve some of the confusion. And Treejack scores do tend to be lower than scores from testing a live website. But designers cannot (and should not) rely on visual design — in this case a cluttered visual hierarchy — to fix or compensate for a fundamentally flawed information structure.
Up the reds… or is that Liverpool…?