Putting employees at the centre of intranet information architecture

In 2014, Nielsen Norman Group reported that 86% of the new intranet information architectures (IAs) they analyzed were task-based. Unfortunately, many of these attempts at task-based intranets have not made employees any more productive on the intranet. I want to share my own experience of developing intranet information architectures using the Top Tasks methodology, which offers a unique path to building task-based architectures that work.
The Top Tasks methodology was developed by Gerry McGovern and members of his worldwide Customer Carewords partnership.
Identifying top tasks
The first step is to identify real employee tasks, then rank them systematically from the bottom up using simple polling.
1. Make a big list of typical employee tasks
It is very important not to simply ask people what tasks they would do on their intranet – this produces speculative wish lists. It’s much better to find out what they actually do, either by observing people doing their jobs or by looking at evidence of how they spend their time: help centre call records, email traffic reports, intranet analytics and enterprise search logs, for example.
2. Refine the list
Pretty soon you’ll have a big list of candidate tasks. In a small team, work to reduce the list to a manageable size, consolidating overlapping tasks and ensuring there are no gaps.
This stage is hard work! The process is not objective; teams must make judgments about the scope whilst avoiding unbalancing the list with their own parochial concerns.
3. Poll employees to see what is important
Once the list is at a reasonable size (for intranets this is rarely more than 60 items), run a poll to find out what the top tasks are. Put the list in front of a sample of employees, or all employees if you can, and ask them to vote for the five tasks that are most important to them in doing their work. If you can segment your audience with some simple category questions like “Do you manage people?”, you will be able to compare the top tasks of different groups of employees. This can be very helpful later in the process. The results always show a small set of tasks getting a large majority of the votes: in most intranet polls we conduct, the 10 top-voted tasks get as many votes as the bottom 40 combined.
Example of Top Tasks poll result:

- Read a full and detailed explanation of the task identification process.
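To make that vote concentration concrete, here is a minimal sketch of how you might tally a Top Tasks poll and watch the cumulative share of votes build up. The responses and task names are invented for illustration, and this is not part of any official Top Tasks tooling.

```python
from collections import Counter

# Hypothetical raw poll data: each response is the five tasks a respondent
# picked as most important to their work (task names are invented).
responses = [
    ["Find people", "IT support", "Book leave", "Expenses", "Payslips"],
    ["IT support", "Find people", "Payslips", "Travel booking", "Expenses"],
    ["Find people", "Book leave", "IT support", "News", "Payslips"],
    # ... hundreds more responses
]

# Count every vote, then rank tasks from most to least voted.
votes = Counter(task for response in responses for task in response)
total = sum(votes.values())

cumulative = 0
for rank, (task, count) in enumerate(votes.most_common(), start=1):
    cumulative += count
    print(f"{rank:>2}. {task:<15} {count:>3} votes "
          f"({count / total:.0%}, cumulative {cumulative / total:.0%})")
```

Printed this way, the long tail is obvious: the cumulative column climbs quickly through the first handful of tasks and then crawls.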
Using the top tasks to build an employee-centric architecture
You could simply dump all the top-voted tasks into a menu called ‘Tasks’ on your homepage — I have seen this done on some intranets, but it does not offer a long-term solution to confusing menus and links. How do I know if my task is under the ‘Tasks’ link? Is what I need a ‘task’ or is it ‘information’? Pretty soon, any vague classification like ‘Tasks’ or even ‘Top Tasks’ becomes meaningless. Instead, we can use our task list as the basis for a card sorting exercise to discover patterns in the way employees want to organise and find tasks.
It’s crucial that only the top tasks, as voted by employees, make it into the sorting exercise. This is because our objective is to establish an architecture dominated by the tasks that employees consider essential. We’re working on the top level of your intranet navigation menu here because we know there is a direct relationship between first-click success and overall task success; the second level may emerge during these exercises, but our main aim is to get the top level established.

Of course, excluding low-scoring tasks from the exercise has consequences; tasks that are not priorities for the majority of polled employees can never become major items in the menu structure.
Expect battles with stakeholders as you try to stick to the Top Tasks rule. In one intranet I worked on, the exclusion of ‘Policies and Procedures’ was strongly resisted. I was told: “People won’t be able to find the policies they need and I will get lots of emails”. When resisting arguments like this, lean heavily on the next stage of the process (testing the hypothetical architecture), where you can check whether items that were controversially excluded from the card sort can still be found.
There are a couple of golden rules and some guidelines that help make the process transparent to everyone involved.
Golden rules
- Only the tasks in the top half of the poll results are automatically included in the sorting exercise; in other words, the tasks that together received 50% of the votes. Normally this is a list of 20 to 25 tasks (the sketch after the guidelines below shows one way to apply this cut-off).
- No more than 30 tasks, maximum, can make it into the sorting exercise.
Golden guidelines
- Strategically important tasks that did not get a significant vote can be put into the sorting exercise, but only for powerful reasons, and no more than 10% of the list (around 2 or 3 tasks) should be added this way.
- Check that the final list of items is not unbalanced by too many tiny tasks – examine the percentage of votes each task got in your poll. Tasks that got very few votes need a very strong case to sit at the same level as the top tasks.
- Tasks can be broken up if necessary, e.g., ‘Procedures, policies, guidelines, standards, processes’ can be split into ‘Procedures’, ‘Policies’, and ‘Guidelines’.
- The wording for the sorting exercise may need adjusting for clarity, e.g., ‘Find people by name, by role, by department’ from the poll becomes ‘Find people’ on the card.
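Here is a minimal sketch of how the golden rules and guidelines above might be applied to a set of poll results. The task names, vote counts and the one ‘strategic’ addition are invented; this is only an illustrative encoding of the rules, not official Top Tasks tooling.

```python
MAX_CARDS = 30          # golden rule: hard cap on cards in the sort
STRATEGIC_SHARE = 0.10  # golden guideline: at most ~10% strategic additions

# Poll results ranked by votes: (task, votes) - invented figures.
ranked = [("IT support", 310), ("Find people", 280), ("Book leave", 240),
          ("Payslips", 200), ("Expenses", 150), ("Travel booking", 90),
          ("News", 60), ("Car parking", 40), ("Room booking", 30)]
total_votes = sum(votes for _, votes in ranked)

cards, cumulative = [], 0
# Golden rule: take the top-voted tasks until they account for half of all votes.
for task, votes in ranked:
    if cumulative >= total_votes / 2 or len(cards) >= MAX_CARDS:
        break
    cards.append(task)
    cumulative += votes

# Golden guideline: a handful of strategically important tasks may be added,
# but only a small fraction of the list and never past the cap.
strategic = ["Reducing staff sickness levels"]
allowance = max(1, round(len(cards) * STRATEGIC_SHARE))
for task in strategic[:allowance]:
    if task not in cards and len(cards) < MAX_CARDS:
        cards.append(task)

print(cards)
```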
Working with card sorting results
OptimalSort allows you to look at results in a variety of ways, but I have always felt that the dendrogram view works best, although it needs more explanation than the other views. Our objective is to understand the relationships between tasks. Set a percentage agreement rule to help decide on viable groupings. In the illustration below you can see how to apply a grouping rule to a card sort result shown as a dendrogram. These rules are not scientific, but they help keep the process on track and make decisions transparent.

As you make these groupings, a nascent structure emerges: a primordial navigation menu. There should be healthy debate during the creation of this testable ‘level one’ architecture, but it should always be informed by the available data, in this case the poll result and the card sorting result.
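If you want to sanity-check groupings outside the tool, here is a rough sketch of the idea behind a percentage agreement rule: count how often each pair of cards was placed in the same group and merge pairs that clear a threshold. The participant data and the 60% threshold are invented, and this is only the underlying idea, not a reproduction of OptimalSort’s own analysis.

```python
from itertools import combinations

# Hypothetical card sort results: for each participant, the group label
# they gave each card (labels and cards are invented).
sorts = [
    {"Book leave": "HR", "Payslips": "HR", "IT support": "Help", "Find people": "Directory"},
    {"Book leave": "My employment", "Payslips": "My employment", "IT support": "Help", "Find people": "Help"},
    {"Book leave": "HR", "Payslips": "Pay", "IT support": "Tech", "Find people": "Directory"},
]

cards = sorted({card for sort in sorts for card in sort})
AGREEMENT_RULE = 0.6  # e.g. group cards that at least 60% of participants placed together

def agreement(a, b):
    """Share of participants who put cards a and b in the same group."""
    together = sum(1 for sort in sorts if sort[a] == sort[b])
    return together / len(sorts)

# Simple merging: start with each card alone, then join any pair that meets the rule.
groups = [{card} for card in cards]
for a, b in combinations(cards, 2):
    if agreement(a, b) >= AGREEMENT_RULE:
        ga = next(g for g in groups if a in g)
        gb = next(g for g in groups if b in g)
        if ga is not gb:
            ga |= gb
            groups.remove(gb)

for group in groups:
    print(sorted(group))
```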
These groupings can only become our proposed navigation menu through iterative testing and refinement. So we now move to another tool, Treejack, to do some tree testing.
Here’s what a typical mock menu in Treejack could look like, ready for testing.

Testing your hypothetical architecture
Now that you’ve built your tree, you need to craft several tasks (or scenarios) for your research participants to attempt in the first iteration of your menu. Because you’re testing top tasks, go back to your poll results for inspiration. The top-voted tasks have to be findable in your tree; after all, if people can’t click in the right place for an important task, your design is not going to be successful. Be sure to test all the tasks in the top 25% of your poll results. This is a rule.
The challenge you set participants can’t, of course, be a simple copy of a top task from your poll; you need to write a scenario that exemplifies the overall top task. In my example above, the challenge is about getting support for a mobile phone, which is designed to represent the top task of ‘IT Support’.
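One small, practical check before launching a round is to confirm that every task in the top 25% of your poll has a scenario written for it. A minimal sketch, with invented task names and scenario wording:

```python
# Poll results in ranked order (invented names).
ranked_tasks = ["IT support", "Find people", "Book leave", "Payslips",
                "Expenses", "Travel booking", "News", "Car parking"]

# Scenario wording -> the top task it is an exemplar of (invented).
scenarios = {
    "Your mobile phone won't connect to email. Where would you get help?": "IT support",
    "You need the phone number of a colleague in Finance.": "Find people",
    "Book two days of annual leave next month.": "Book leave",
}

# The rule: every task in the top quarter of the poll must be covered.
top_quarter = ranked_tasks[: max(1, len(ranked_tasks) // 4)]
covered = set(scenarios.values())
missing = [task for task in top_quarter if task not in covered]

if missing:
    print("Write scenarios for:", ", ".join(missing))
else:
    print("All top-25% tasks have a test scenario.")
```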
You might find that you have a list of unresolved issues from the sorting exercise (like my policies and procedures example); the temptation is to test them all here but caution is required. It’s useful to have rules that make it clear how many of these issues you can test.
It’s easiest to look at an example. Let’s take the task, ‘Reducing staff sickness levels’; it did not score highly in the employee poll but it did get significant votes from team leaders and it’s also an organisation wide KPI. Checking both these boxes, voted for by an important audience and strategically important, makes it a good candidate for the tree test (this same kind of decision-making process can be used at the card selection stage too).
Let’s look at a Treejack result to show how sticking with the top tasks changes the way we react to tree test results. The pietree view of results is very clear; in the example below, we see people confused between four choices in the tree.

How should you react?
Adjust the tree by adding classifications? Rename classes? A major risk here is that reacting to one result in isolation can affect all the other tasks. Guard against changes that make a tiny task super easy to complete but make the top tasks harder to do. You can end up going round in circles, trying to satisfy everyone, and find yourself back in exactly the confusion you are seeking to avoid.
The rule here must be: never test in isolation. Always test all the tasks in each round, so that if an adjustment upsets a top task you can see it straight away.
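A simple way to honour this rule is to compare per-task success between rounds, so a regression on a top task stands out immediately. The sketch below uses invented numbers and assumes you have exported the number of successful participants per task from each round.

```python
# task -> (successes, participants) for two consecutive rounds (invented data).
round_1 = {
    "IT support": (41, 50), "Find people": (47, 50),
    "Book leave": (38, 50), "Policies": (22, 50),
}
round_2 = {
    "IT support": (33, 50), "Find people": (46, 50),
    "Book leave": (44, 50), "Policies": (45, 50),
}

for task in round_1:
    before = round_1[task][0] / round_1[task][1]
    after = round_2[task][0] / round_2[task][1]
    flag = "  <-- success dropped after the change" if after < before else ""
    print(f"{task:<12} {before:.0%} -> {after:.0%}{flag}")
```

Run after each round, a table like this makes it obvious when a fix for one problem task has come at the expense of a top task.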
After your first tree test you will hopefully have some of your hypotheses confirmed and some concerns to resolve.
Set a specific success rate that your tree test must reach before you declare it successful; we aim for 90% ‘first click’ success. It takes at least two, sometimes three, rounds of tree testing to resolve issues and reach this kind of target. Test fatigue can be an issue; in particular, it’s crucial that senior stakeholders have a clear view of the process and can see why multiple tests are required. Commitment to a process like the one I have outlined is rare, but there is no other method that so rigorously places employee tasks at the centre of intranet design.