How to define and refine ontology
Ontology is one of those terms that gets tossed around while we’re developing an information architecture (IA), and it isn’t always instantly clear what it means. We’ve all been there. In hindsight this is quite amusing because, as Dan Klyn explains, ontology is “what we mean when we say what we say”.
The word itself means the ‘study of being’ and appears in numerous contexts ranging from the philosophical to the applied: artificial intelligence, software engineering and just about any shared information environment. Yep, that last one is an IA. In an IA context, ontology refers to the meaning behind our labels, terms, language and content categories — it’s about exactly what we mean when we call something ‘blah’ (insert label name here).

A sibling of taxonomy inside the house of IA (see above), ontology is driven by the needs, expectations and goals of your users. The meaning an organization attaches to a label or term used on a website (or anywhere else content appears!) might be completely different in the eyes of your users and customers. It’s really important to get your ontology right — a poorly designed ontology can lead to findability issues, retention problems, user error and much more. Your website is only as strong as your IA, and that strength relies heavily on a well researched and thought-out ontology. Think of a shaky ontology as the proverbial straw that can bring that IA house crashing down.
We’re going to take a look at some examples to help build your understanding of ontology and then I’m going to walk you through some ways that you can apply and communicate it to others to help get them on board.
Ontology examples
I mentioned Dan Klyn’s helpful explanation of ontology earlier and he also has an example of it that really helped me understand and it may help you too. It starts with the word ‘orange’. When we say ‘orange’, what does that mean? Is it a piece of fruit? Or a color? And if it is a color, which particular type of orange is it? Is it a deep red-based orange bordering on an interesting shade of rust or is it a clean balance of red and yellow like my favourite red lipstick? In Dan’s example, he mentions the unique color identifier that Pantone assigns to every single color they have defined (see below image).

There is absolutely no mistaking my meaning when I say ‘Pantone XG Orange C’. It isn’t some other peachy shade and it certainly isn’t a piece of fruit. To give you an additional reference point, in the example above taken from Pantone’s website, the taxonomical category that Pantone XG Orange C belongs to is called ‘Graphic Designers’. That content lives on a page at the second level of the IA called ‘Pantone Color Results’ — see how it all fits together? This is a great example to use when explaining ontology to non-designers and people who might be new to UX.
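For colleagues who think in code, the same idea can be shown with a toy data structure. This is purely illustrative (the labels and their listed meanings are made up, and it’s not how Pantone models anything): an ambiguous label maps to several meanings, while a fully qualified one maps to exactly one.

```python
# A toy illustration (not Pantone's actual data model): an ambiguous
# label carries several meanings, a fully qualified label carries one.
meanings = {
    "orange": ["a citrus fruit", "a colour", "a shade of rust"],
    "Pantone XG Orange C": ["one specific, precisely defined colour"],
}

def is_ambiguous(label):
    # A label is ambiguous when it maps to more than one meaning.
    return len(meanings.get(label, [])) > 1

print(is_ambiguous("orange"))               # True
print(is_ambiguous("Pantone XG Orange C"))  # False
```

The point isn’t the code itself; it’s that a qualified identifier collapses many possible meanings down to one, which is exactly what a well defined ontology does for your labels.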
Here at Optimal Workshop, we recently discovered an ontological mismatch of our own in our Tree Testing 101 guide. One of the sections of the guide under ‘Recruit participants’ is called ‘Aim for around 50 completed tree tests’ (see below image).

At face value, it appears to make sense, but what do we mean when we say ‘completed tree tests’? We intended it to mean 50 participant responses per Treejack study; however, we learned through our customer support channels that some of our users thought it meant 50 Treejack studies! That is a whole lotta tree testing, and it’s easy and perfectly reasonable to see why some of our users thought they’d be tied up in Treejack until the end of time!
We’ve since fixed it, and it’s a prime example of the value of not only creating but also continually refining your ontology. Imagine what might have happened if our users hadn’t reached out and asked! We may have picked it up through user research further down the track, but it’s hard to say what the impact might have been if we hadn’t found it when we did. This is also a great example of how your support channels can be a gold mine of free and easily accessed user insights — especially handy if you find yourself needing to make a case to stakeholders for user research to further refine your ontology. Have a look at your support channels (social media, technical support tickets, call centres etc.) and see if there are any terms or labels that commonly confuse or trip people up, and use that as evidence of issues needing further exploration. Your users and your stakeholders will thank you for it.
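If you want to quantify that support-channel gold mine, even a few lines of scripting can turn ticket text into evidence for stakeholders. A minimal sketch in Python; the ticket text and watchlist terms below are invented for illustration.

```python
from collections import Counter

# Invented ticket text and watchlist terms, purely for illustration.
tickets = [
    "Do I really need to run 50 tree tests?",
    "What counts as a completed tree test?",
    "How do I recruit participants?",
    "Is 50 completed tree tests 50 studies or 50 responses?",
]
watchlist = ["completed tree test", "participants", "studies"]

mentions = Counter()
for ticket in tickets:
    for term in watchlist:
        if term in ticket.lower():
            mentions[term] += 1

# The most-mentioned terms are your candidates for ontology research.
print(mentions.most_common())
```

A frequency count like this won’t replace talking to users, but “this term came up in N tickets last month” is a persuasive opener for a research proposal.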
The third example I’d like to share with you is from an independent research case study I recently conducted on a form, because ontology is everywhere! I was researching and writing about the IRS’ 1040 US Individual Income Tax Return 2016 — a form completed by millions of people in the US every year. When I first came across the form, I noticed some very confusing terms. I wondered if that was just my Australian ignorance talking and decided to run a Chalkmark study on the form followed by a post-study questionnaire that explored participants’ understanding of the meaning behind some of those terms. One of them was ‘head of household’ (see below image).

I had absolutely no idea what it meant. I had a feeling it might be income related but I wasn’t sure. So, I did my research and found out via the IRS website that it’s actually a technical tax term describing a type of filing status, with a fair amount of depth and complexity behind it because it requires a taxpayer to satisfy three criteria:
- You are unmarried or considered unmarried on the last day of the year. See Marital Status, earlier, and Considered Unmarried, later.
- You paid more than half the cost of keeping up a home for the year.
- A qualifying person lived with you in the home for more than half the year (except for temporary absences, such as school). However, if the qualifying person is your dependent parent, he or she doesn’t have to live with you. See Special rule for parent, later, under Qualifying Person.
I was still curious to see what my research participants would say and I went ahead and included this question at the end of the study: “What does the term ‘Head of household’ mean to you?”
I found out that my participants were just as clueless as I was. 49 of my 50 US-based participants were unable to correctly define ‘Head of household’, with many citing breadwinners, home owners, rent payers and men as the person who fills this role. On a government form like this one, I understand that some terms are required for legal reasons, but there is no reason why additional, useful educational support can’t also be provided. The form states ‘See instructions’. Which instructions? Where do I find them? How do I know I’ve got the right ones? The form references ‘instructions’ 13 times over the course of its 2 pages but never mentions where or how a user might find them. Ontological issues are only compounded when the guidance that is supposed to support them is also lacking in meaning.
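To show just how much hidden logic sits behind that single label, the three criteria above can be roughly encoded as a boolean check. This is a simplification for illustration only (the real rules have many more edge cases, and this is certainly not tax advice); the function name and parameters are my own.

```python
def qualifies_as_head_of_household(unmarried,
                                   paid_over_half_home_costs,
                                   qualifying_person_lived_half_year,
                                   qualifying_person_is_dependent_parent=False):
    # Rough encoding of the three criteria quoted above. A dependent
    # parent doesn't need to have lived with the taxpayer.
    residency_ok = (qualifying_person_lived_half_year
                    or qualifying_person_is_dependent_parent)
    return unmarried and paid_over_half_home_costs and residency_ok

# A dependent parent satisfies the residency criterion without cohabiting.
print(qualifies_as_head_of_household(
    True, True, False, qualifying_person_is_dependent_parent=True))  # True
```

Three criteria, a special rule and a default parameter, all compressed into a four-word label on the form. No wonder participants struggled.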
Creating and refining your ontology
Depending on where you are in your project, you might be taking the first steps to define an ontology for a completely new website that doesn’t yet exist, or you might be dealing with an ontology that hasn’t had a lot of thought put into it and needs some work. Whether you’re starting from scratch or refining an existing IA, here are some tips to help you on your way!
Creating an ontology
If you’re designing a new website, you have an opportunity to build a strong ontology from the ground up, before you create your information architecture. The steps you’ll need to take to build your ontology are exactly the same as the ones you’d take to build your taxonomy anyway, because the two are linked. If you plan your research well, you can gain insights that will support the creation of both your taxonomy and ontology in one go — something to keep in mind when looking for buy-in from stakeholders!
As discussed in a previous blog, the steps to building your taxonomy in a nutshell are: conduct a content audit, run an open card sort, create your taxonomy, build your IA, tree test your newly designed IA and revisit any of the previous steps for further clarification if/as required. From an ontological perspective, below is what you’ll need to do and look for at each step.
Content audit
- Look for inconsistencies and contradictions in terms and labels
- Look for opportunities to reduce, simplify or clarify terms and labels. Does anything appear ambiguous? Is there a better way to communicate ‘blah’?
Open card sort
- Conduct at least some of your card sorting activities in person with users so you can not only ask what they mean when they say something but also test their understanding of your meaning of the card labels.
- Ask your participants to explain their thinking behind the category names they created. This will provide insight into what those terms and labels mean to them.
- Record your in-person card sorting sessions to refer back to later.
- Include free-text response post-study questions in any online card sorting activities to gauge your participants’ understanding of specific terms or labels you may be thinking of including. For example: “What does the term ‘it depends’ mean to you?”
Taxonomy creation
- Refer back to your card sort results, session notes and recordings. Did you learn anything specific there that will help you make sound ontological decisions? Look at the language your participants naturally used and ask yourself how well it aligns with the labels you’ve used. Also consider the ways in which the meaning of those labels might be misconstrued.
- Pay attention to the category names your participants gave their groups — remember, they chose those names intentionally.
Building your IA
- When creating additional content, labels and terms that weren’t included in the card sorting activities (for whatever reason, e.g., a new section got added), refer back to research insights and flag for inclusion in upcoming tree testing activities.
Tree testing
- Include test tasks that cover labels you’re unsure people will understand or didn’t cover during card sorting
- Include post-task and post-study questions in any online tree testing activities to gain further clarification or context around meaning
- When interpreting your tree test results, pay close attention to where the first clicks landed. Participants who started down the wrong path from the very first click may have had different interpretations of what the ‘successful’ label contained.
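That first-click check is simple to script once you’ve exported your results. A minimal sketch, assuming a hypothetical list of (participant, first click, task success) rows; none of this reflects any particular tool’s export format.

```python
# Hypothetical exported rows: (participant, first click, task success).
results = [
    ("p1", "Products", True),
    ("p2", "Resources", False),
    ("p3", "Products", True),
    ("p4", "Support", False),
]

correct_first_click = "Products"  # the label we expected people to choose
wrong_first = [row for row in results if row[1] != correct_first_click]
rate = len(wrong_first) / len(results)
print(f"{rate:.0%} of participants started down the wrong path")
```

A high wrong-first-click rate on an otherwise sensible tree is a strong hint that the label itself means something different to participants than it does to you.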
From here you would update your terms, language style, labels and category names based on your tree test findings as you would with any iterative, human-centred design process. When your design moves into usability testing, continue to include testing tasks and questions that explore your users’ comprehension of your meaning.
Testing and refining an existing ontology
If you’re dealing with an existing website, don’t worry because it is never too late to run a check up on your ontology to see where you can do better! The best way to research ontology is through user research — otherwise you’re just guessing! Luckily, this is something that can easily be included in just about any user research study allowing you to get more bang for your research budget.
As mentioned earlier, start by digging into what your customer support channels are seeing in terms of terminology confusion and misunderstanding. Once you have a list of terms that aren’t quite resonating with people as well as the context in which they are used, start thinking about how you might approach this research piece.
You might run some one-on-one user interviews to talk with people in depth about their understanding and interpretation of the terms used. You could also hit the streets with a team and do some guerrilla research and record participant responses on a tablet device using Reframer. You might also have a round of tree testing coming up where you could include the post-study or post-task questions that we talked about earlier to explore participants’ understanding and interpretation of meaning for the terms and labels you’d like to learn more about.
Once you’ve gained a good understanding of how users interpret your terms and collected ideas for options that would resonate better with them, it’s time to iterate your design and test your thinking. Update your IA and test your new ontology during your next round of tree testing. When you’re developing an IA, tree testing is something you need to be doing regularly anyway. If your research surfaced multiple ontological options, you might run some A/B testing in the form of two smaller Treejack studies, each with its own unique group of participants.
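Splitting your recruited pool for that kind of A/B comparison is easy to do reproducibly. A small sketch, with invented participant IDs, that shuffles a pool of 50 and deals it into two equal groups, one per study variant:

```python
import random

# Invented participant IDs; shuffle with a fixed seed so the split
# is reproducible, then deal the pool into two study groups.
participants = [f"participant_{i}" for i in range(1, 51)]
rng = random.Random(42)
rng.shuffle(participants)

half = len(participants) // 2
group_a, group_b = participants[:half], participants[half:]
print(len(group_a), len(group_b))  # 25 25
```

Randomizing the assignment (rather than, say, splitting by sign-up order) helps ensure any difference in results comes from the two ontologies rather than from who happened to land in which group.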
Maintaining your ontology and next steps
You only have to look at our example to see that creating, developing and maintaining your ontology is an iterative process. Like most design elements, you never really stop learning and adapting. Language and meaning are quite fluid and are both constantly gaining new context. I remember getting grilled by a relative once over the correct use of the word ‘icon’ — it has FOUR meanings attached to it. Terms that you use in your IA or anywhere content appears might mean different things to different people at different points in time. Stay on top of it through regular user research, specifically into the comprehension and findability of your headings, terms and language. If your users don’t understand the meaning, they’re going to get lost and they’re going to make mistakes. They won’t be able to reach the content they need to complete their task, and they might just go elsewhere. Meaning matters.