The Long Answer
So, why is intranet findability so complicated? Mostly because so many factors contribute to the ability — or inability — to find that elusive process, person, phone number, or form on your intranet. And so many of those factors are outside of the direct control of the intranet team.
For starters, the content has to actually exist. Somebody has to create it, make it available, and keep it current.
It also helps if you have a decent search engine in place that can “see” and index all the content, which is probably housed in multiple systems. That can be a tall order.
So, often, the easiest and most valuable way to improve findability is to start with a solid information architecture (IA) and improved navigation.
Intranet teams usually do have the control and ability to recategorize information and re-label navigation, at least to some degree. That’s what we’ve set out to do with HR and benefits content, and we'll outline here exactly how we did it.
But first, let’s talk about IA and navigation and how they relate to one another.
IA is both an art and a science. It’s science because we can (to a certain degree) test it, quantify it, and standardize it. It’s art because we need to interpret it and assign meaning to it based on the context in which users will interact with it. And users are people, and people are adorably messy and irrational, so we have to (artfully) interpret the science accordingly.
Navigation consists of the interface elements that allow users to find their way through the information. It includes components like mega-menus, buttons, and breadcrumbs. And where do the labels and names for those elements come from? Usually from your IA.
In short, your IA informs your navigation.
The 80 Percent Solution
We look at a lot of intranets, and we spend a lot of time cataloging, organizing, and classifying intranet content. Over the years, we’ve seen trends and patterns emerge. In short: we see a lot of the same type of stuff.
Also, people are people, regardless of whether they work for railroads or movie theaters, so a classification system that works for one group will typically hold up pretty well for another.
So, we wondered … could we apply the 80/20 rule?
- Could we use what we know about common intranet content, run it through a user research process, and define a baseline grouping that handles the 80-ish percent of common content that we see among companies of a certain size (about 1000 or more employees)?
- Could we use that research to walk into an intranet project already knowing some of the answers to the ever-present findability problems?
It’s not that we wanted to create a cookie-cutter intranet. Each intranet can (and should) be a special snowflake, designed to meet the specific needs of the company and users it serves. But, by conducting user research to define a baseline navigation, we can spend the bulk of our clients’ time and money on figuring out what makes them special and designing an intranet to address those specific needs.
From here, we determined our research objective: use what we already know about common intranet content, combined with user research, to define a baseline grouping that covers the content most large-company intranets share.
After all, a Corvette and a Ford F150 are very different automobiles, but they both have steering wheels, gas pedals, brake pedals, windshields … you get the idea. So, our plan is to take what we know about intranet content and conduct user research to define those common parts. Then, for each client, we can focus on whether we need to build them a sleek sports car or a rough-and-ready pickup.
Because we love intranets and the people who work on them, we’re sharing our baseline findings. For free. To use on your own intranet, regardless of whether you ever hire or work with us. We’re all about sharing the intranet love!
The Starting Point
Warning: gross oversimplification ahead.
Most intranets contain information in these broad (yes, really, really broad) categories:
- Internal news, announcements, and events
- “About” information — history, mission, vision, values, leadership, etc.
- Departmental information — a breakout of the major divisions of the company and what each does (note: it’s a bad idea to get too attached to departments as a method of organizing intranets, but that’s another topic for a later report).
- How-to-do-stuff information — explains methods and procedures for getting work done
- Links to other internal systems and third-party tools
- Information about people — contact, expertise, photo, etc.
- Information about benefits, hiring, and performance
If we wanted to get really and tediously scientific, we could conduct a card sort to make sure we’re correct about these broad-stroke levels. However, remember that IA is both art and science. It takes into account experience, expertise, and data. We feel confident enough in these high-level categories to focus our efforts on the next layer down.
Our intent is to explore each area in its own report. We decided to start with the human-resources-type content: benefits, hiring, performance, etc.
We chose this content because it's:
- Common — nearly all large-company intranets contain some form of this information
- High-value — it’s the type of information that employees want and need to find
- Easy(ish) to classify — while there are a few outliers (more about that later), several logical categories emerge pretty quickly
The Process
To feel confident about our results, we needed:
- A decent-sized group of research participants (at least 15, according to smart folks at Nielsen Norman Group)
- A set of content to classify
- A tool — preferably web-based — to allow users to sort the content
The first was easy. Lucky for us, we have a lot of friends in the intranet biz, and they kindly agreed to open up our study to their employees. We set up the study and provided a link, which those intranet managers distributed among their users, and 43 participants from four companies took part. Industries included transportation, health care, and engineering.
For the topics, we reviewed content from our own current and past intranet projects, as well as some publicly available intranet screen shots. We identified HR and benefits content that appeared repeatedly, then used that information to create a card sort, which we felt was the best research method for this particular project.
Card sorts can be done in person, with physical cards, and that method has its benefits. Mainly, it allows you to hear your participants “think out loud” about why they’re choosing how to sort the items.
However, in this case, we determined that an online sort would be best because it would let our participants complete the tasks at their convenience.
We used Optimal Sort, the online card-sorting tool that’s part of Optimal Suite. Optimal Sort also does all kinds of number-crunching on the results, which we’d have to do by hand if we ran an in-person sort.
We used an open-sort method, meaning we did NOT create categories ahead of time and ask users to place content within them. We simply asked users to group like items together and then give each group a name.
We used an open sort because we didn’t want to provide participants with any preconceived notions about how to group the content … NOT because we expected our participants to come up with a brilliant set of labels for those groups. Labeling is difficult, even for professionals, so it’s not fair to expect your users to magically come up with the perfect labels.
The labels we chose were based on user input plus our own experience and interpretation. They’re all very plain-language, which is what we nearly always recommend.
Yes, we know your HR department is very attached to its branded program names (probably “Total Rewards” or “Focus on Fitness” or something similar). But those program names can change, and they can be unclear to new users when you attempt to use them as navigation. Stick with the basics.
The Details
So, how’d we get here? To interpret the results, we focused on two areas of Optimal Sort’s analysis: the similarity matrix and the dendrogram.
The similarity matrix shows, for each pair of cards, the percentage of participants who grouped those two cards in the same category. The most closely related pairings cluster along the right edge of the table.
Optimal Sort gives you a nifty, interactive view. This screen shot won’t do it justice, but should help illustrate how we interpret the data to come up with results.
In this example, since 100 percent of participants grouped health insurance and dental insurance together, we can be pretty confident that these two items belong in the same category. So … numbers! Science! Yay!
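If you’re curious how that number comes about (or want to compute it yourself from raw results instead of relying on a tool), the math is just pairwise co-occurrence counting. Here’s a minimal Python sketch; the participant data is invented for illustration, and a real export from your card-sorting tool would need to be reshaped into this structure first.

```python
from collections import Counter
from itertools import combinations

# Each participant's sort: their own group names mapped to the cards they put there.
# (Invented data for illustration; reshape your tool's export into this form.)
sorts = [
    {"Insurance": ["Health insurance", "Dental insurance", "Vision insurance"],
     "Money stuff": ["401(k)", "Payroll"]},
    {"Benefits": ["Health insurance", "Dental insurance", "Benefits enrollment"],
     "Pay": ["Payroll", "401(k)"]},
]

pair_counts = Counter()
for sort in sorts:
    for cards in sort.values():
        # Count every unordered pair of cards that landed in the same group.
        for pair in combinations(sorted(cards), 2):
            pair_counts[pair] += 1

# The similarity score for a pair is the share of participants who grouped it together.
n_participants = len(sorts)
for (a, b), count in pair_counts.most_common():
    print(f"{a} + {b}: {count / n_participants:.0%}")
```

Run over real data, the top of that list maps directly to the darkest cells in the similarity matrix.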
If only all the groupings were that definitive. The fact is, some topics are just easier to group and classify than others.
When the numbers get lower, that’s where the “art” comes in. For example, 53 percent of participants grouped ethics and compliance certification together with training. Is that high enough to present the two in the same category? Or should we create an ethics category that stands alone, but may only contain one or two items? So … interpretations! Judgment calls! Boo!
Just kidding, it’s not that bad. But it does clearly illustrate how we can’t rely on data alone to make ALL our decisions. If only it were that simple.
The dendrogram — a tree diagram that shows the percentage of participants who agreed with particular card groupings — tells a similar story.
Since 86 percent of people agreed that health insurance, dental insurance, vision insurance, and benefits enrollment go together, grouping them is pretty much a no-brainer.
However, once we add in flexible spending and health savings, that percentage drops to 61 percent, and at the 61 percent mark, a whole bunch of other stuff gets lumped in too. Our potential category is getting awfully big.
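If you don’t have Optimal Sort handy, you can get a comparable (though not identical) tree by feeding the pairwise similarities into a generic hierarchical-clustering routine. The sketch below uses SciPy’s average-linkage clustering on an invented similarity matrix that roughly echoes the numbers above; Optimal Sort uses its own merge methods, so treat this as an approximation, not a recreation of its algorithm.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

cards = ["Health insurance", "Dental insurance", "Vision insurance",
         "Benefits enrollment", "Flexible spending", "Health savings"]

# Fraction of participants who grouped each pair together (invented numbers
# that loosely mirror the 86 percent / 61 percent figures discussed above).
similarity = np.array([
    [1.00, 1.00, 0.95, 0.86, 0.61, 0.61],
    [1.00, 1.00, 0.95, 0.86, 0.61, 0.61],
    [0.95, 0.95, 1.00, 0.86, 0.61, 0.61],
    [0.86, 0.86, 0.86, 1.00, 0.61, 0.61],
    [0.61, 0.61, 0.61, 0.61, 1.00, 0.90],
    [0.61, 0.61, 0.61, 0.61, 0.90, 1.00],
])

# Turn agreement into distance, then cluster and draw the tree.
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)
tree = linkage(squareform(distance), method="average")

dendrogram(tree, labels=cards, orientation="right")
plt.xlabel("Disagreement (1 - similarity)")
plt.tight_layout()
plt.show()
```

Cutting the resulting tree at different heights gives you candidate categories at different levels of agreement, which is exactly the trade-off we were weighing here.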
There’s no magic number of items in a category, but we know that too many can be hard for users to wade through. Conversely, it can be hard to justify a category that only contains two or three items.
What to do, what to do?
At this point, we chose to include FSA/HSA with the original four, but break out the other items into smaller groupings that showed stronger agreement.
See? Art and science.
By the way, Optimal Sort includes two other analysis tools, but we chose to focus on the similarity matrix and the dendrogram because that’s the data we found most useful for this particular study. So … even choosing the analysis method is a judgment call!
Progress, Not Perfection
In the end, did we come up with definitive, end-all, be-all, unquestionable IA for human resources and benefits content on an intranet? Of course not. There’s no such thing.
However, we are pretty confident that this is a great starting point, which can be tweaked and refined (and then re-tested, if necessary) to fit a client’s specific needs. Feel free to put it to similar use for your intranet.
Next Steps (and a Shameless Plug)
In future reports, we plan to run similar studies on those other big buckets of content. We’ll share the results in the name of intranet betterment for all.
In the meantime, if you need help with your intranet, feel free to give us a shout.