Zen and the Art of UX Maintenance
Author: Jeff Swartz
Date: October 6, 2020
The Road Ahead
Fine-tuning and optimizing digital experiences is an ongoing process. As time goes on, content will inevitably sprawl, features will bloat, and priorities will shift. Our websites, intranets, and applications can’t help it; they often serve the interests of multiple stakeholders, and when change is distributed, it can be insidious.
The real question is: what do you do when you know things have veered out of control? Where should you start, and what should you budget for? How do you even determine what the real user issues are without breaking the bank?
Read on, intrepid web professional; we may have a solution.
Accelerating Usability
In a typical UX lifecycle there is testing. Lots of testing. With users.
The problem is, involving users isn’t always cheap: it takes time to develop testing plans, recruit participants, and schedule sessions, not to mention actually conducting the tests. This all directly impacts budgets, of course, which aren’t always as generous as we’d like them to be.
This is where the heuristic evaluation (HE) comes in. It’s our favorite research method to use when we need a quick, low-overhead way of highlighting issues in a digital experience. It doesn’t require a lot of setup or a long, resource-heavy process because, unlike other methods, it doesn’t actually require that users be involved.
Wait, what? No users? Yeah, I know what you’re probably thinking right now: “What about the ‘user’ in User Experience?”
Well, the focus is still on the user, but unlike other usability research methodologies, HEs do not rely on direct observation. Instead, a reviewer inspects a site independently and rates compliance against common usability best practices (or “heuristic criteria,” if you want to sound educated). Sometimes those ratings are scored for comparison, but not always.
Best-Practice Guard Rails
The rules of thumb used in heuristic evaluations are there to keep you on track as a reviewer. They can range from scholarly usability principles to more homegrown tactical checklists. Ultimately, the reviewer should be asking questions that don’t prescribe interface solutions, but instead seek to highlight problems to be solved. A reviewer’s job is simply to judge whether common user issues have been sufficiently avoided, or whether the system is failing in a given area.
Reviewers may ask questions like:
- Does this workflow minimize a user's memory load?
- Are navigational labels clear and mutually exclusive?
- Do forms help users prevent errors prior to submission?
- Are interactive elements easily recognizable and used consistently?
- Can users easily recover from errors?
Many UX professionals lean on Nielsen’s ten usability heuristics, which grew out of the work Nielsen and Molich did to formalize the methodology in 1990. Other classic guidelines share a lot of common ground with Nielsen, but usability stalwarts such as Gerhardt-Powals, Weinschenk and Barker, and Lund do have some variations worth considering.
It’s not uncommon to look beyond the usability lens, as well, using additional guidelines to delve into other experience considerations. The Forrester Research Experience Review does just this, dividing the inspection into specific questions around navigation, presentation, usability and trust.
Peter Morville’s UX Honeycomb has a similar structure and works well as a framework for slotting in your favorite usability guidelines alongside others, such as Rosenfeld's Findability Heuristics, the Stanford Credibility Guidelines, and the W3C Accessibility Cheatsheet, in their respective areas.
Once issues have been identified, they can then be categorized and prioritized for potential improvements and budgeting purposes.
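To make that output concrete, here’s a minimal sketch of how a single reviewer might record and rank findings. It’s purely illustrative Python; the heuristic names, severity scale, and effort estimates are assumptions for the example, not part of any formal HE standard.

```python
# Illustrative only: a lightweight way to log heuristic findings and
# sort them for prioritization. Field names and scales are assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    heuristic: str   # which guideline the issue violates
    location: str    # where in the experience it was observed
    severity: int    # 1 = cosmetic ... 4 = blocks task completion
    effort: int      # rough fix effort: 1 = trivial ... 3 = large
    note: str

findings = [
    Finding("Error prevention", "Checkout form", 4, 1,
            "No inline validation; errors only surface after submit."),
    Finding("Recognition over recall", "Global navigation", 2, 2,
            "Overlapping labels force users to remember where pricing lives."),
    Finding("Consistency", "Buttons", 1, 1,
            "Primary and secondary button styles are used interchangeably."),
]

# Prioritize: highest severity first, cheapest fixes breaking ties.
for f in sorted(findings, key=lambda f: (-f.severity, f.effort)):
    print(f"[sev {f.severity}] {f.heuristic} @ {f.location}: {f.note}")
```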
Need for Speed
The early Nielsen and Molich studies, when they were proving out the methodology, were quite large and comprehensive. One study used 19 usability engineers to independently review the system, in order to ensure an unbiased, thorough inspection.
Given how HEs are used today, as more budget-friendly options, that probably seems like overkill, and Nielsen would agree with you. He later concluded that there was a point of diminishing returns, settling on three to five evaluators as adequate, in most cases, to catch hard-to-find usability issues.
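If you want a rough sense of why three to five is the sweet spot, the back-of-the-envelope model below follows the problem-discovery curve Nielsen has described. The 35% per-evaluator discovery rate is an assumed ballpark for illustration, not a figure from this article.

```python
# Diminishing returns, roughly: if each independent evaluator finds about
# the same fraction of the problems, the share found by i evaluators is
# 1 - (1 - rate) ** i. The 0.35 rate is an assumed ballpark, not a given.
rate = 0.35
for evaluators in (1, 3, 5, 10):
    found = 1 - (1 - rate) ** evaluators
    print(f"{evaluators} evaluator(s): ~{found:.0%} of problems found")
```

Run it and the curve flattens quickly: each evaluator past the fifth adds relatively little.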
On the other end of the spectrum, there is also a level of inspection referred to as an “Expert Review”, which uses a single reviewer and less structure. Instead of grading systems against a formal set of guidelines or checklists, it relies solely on the reviewer’s internal knowledge and experience. It’s quite a bit looser but can still highlight issues just as well, depending on who’s doing the reviewing.
At our shop, we tend to land somewhere in between, relying on a more formal process but often using a single reviewer to save on time and costs, which is where HEs really shine. At this level, we usually plan for a week or so of activity, depending on the size of the system that needs reviewing.
Go Forth and Evaluate
Whether it’s for quick improvements or long-term planning, heuristic evaluation is a proven research method that can provide valuable insights without a prohibitive investment of time or budget. Applying a critical eye, with a focus on user needs, will lead to improvements your users will appreciate.
Give the method a spin yourself, or contact us and we'll get you started with a free mini-HE.