Have you been thinking lately that it might be time to enjoy the benefits of website usability testing at your organization? There is a lot to gain, especially the higher satisfaction users will have when they visit your website afterwards. Unfortunately, for too many web managers, aspirations for a more usable website stall out on the budget question.
My advice: do not let resources get in the way, because there are usability options to fit any budget. This presentation reviews three popular options — at all different budget levels — and gives you all the basics to get started.
Note: In 2008, one of the first slide decks I posted on my SlideShare was this one: Usability Testing: Options to Fit Any Budget. I’ve been tickled that it has since gotten over 10,000 views. This blog post is a narrative of that presentation.
USABILITY: AN ESSENTIAL PART OF GREAT WEBSITES
Usability is an essential part of all great websites. It is the intersection where your content meets your technology. From a user’s perspective, you are answering questions like:
- Are you happy or are you frustrated?
- Are you being productive or are you wasting time?
From a usability engineer’s perspective, there are a variety of tools, techniques and processes to measure the answers to these questions.
Now when we talk about satisfaction and loyalty, there are really two tiers.
At the first tier, rational loyalty answers three questions.
- Are you satisfied?
- Would you recommend it to others?
- Would you come back?
At the higher tier we’re talking emotional loyalty. And here you are striking chords of confidence, integrity, pride and even passion in your product or service.
Before we go further, we should probably acknowledge the elephant in the room. The fact of the matter is, whether your site practices good or poor usability has an impact on visitors.
If a visitor encounters a site that is badly designed, they are going to leave. If it takes too many clicks, they are going to leave. If they are in the middle of a shopping experience or other transaction, they are going to leave.
If they don’t like what you have to offer because it is just too hard, they are going to leave. Organizations are losing because of this.
And yet organizations aren’t even adopting the simplest practices.
WHY USABILITY WORKS
You are viewing this post because you know that’s a problem. What you need is the ammunition to sell usability inside your organization.
Here’s why usability works.
- It ensures that your site is easy to learn
- It helps your target audience get things done
- It is going to help your users find your information, perform tasks, understand the content and accomplish their goals. And do so without wasting time
- And it is going to bring you satisfied users who like what they see, recommend it to others and decide to return
- You get actionable results
- You don’t reinvent the wheel
- And when making changes and leveraging resources, you get to lead with facts, not opinion
When you commit to usability that means you are putting the interests of your users front and center. Who are they? What are they like? When do they visit your site? What do they want to achieve when they come to your site? Fundamentally you need to understand the answers to these questions, and usability can show you how.
WHAT’S THE RIGHT TEST FOR ME?
There are many ways to find out which tasks your website visitors consider the most important and which tasks are the most popular.
You can gather research from interviews, look at your call center data, conduct surveys or field studies and even mine your search log analysis.
But you also have to make sure that you are strategic. That means you have to assess the goals of your business and your organization.
Here you want to talk to the leadership in your organization as well as your stakeholders. You should also look at the relevant documentation and look at your competition.
So let’s suppose I’ve sold you on the idea of doing usability testing. You are probably wondering, what’s the right test for me?
Well, you have four choices. Used in combination or as single-shot projects, each has its pluses and minuses.
- Expert Review – Sometimes called an inspection evaluation, which gives you a scorecard
- Human Performance Testing – Has actual users clicking through your site trying to do top tasks
- Software Inspection – Another option; we like Lift Machine by usable.net, which supplements our expert reviews and acts as another set of eyes on the data
- A Web Blueprint Approach – A full-throttle effort where you do card sorting, audience feedback and benchmarking to really get a handle on how your website is performing
The rest of this post dives into these four methodology tracks to examine what’s involved in each selection and the pros and cons of each approach.
EXPERT REVIEW
The deliverable here is a scorecard. What I really like about the scorecard I’ve developed is that it follows a value ladder: users start out at the bottom rung, and they need to be aware before they are satisfied, satisfied before they are confident, confident before they trust, and trusting before they are loyal.
Because we map all of the indicators on the scorecard back to these metrics, you have a step-by-step list that prioritizes what to work on first.
Our scorecard helps you climb the ladder, so in step one it is all about awareness. What is the page weight like? How about accessibility and marketing your site?
Next, it’s satisfaction. Are you delivering an experience that is free from errors? If you do make a mistake, can you recover quickly? So what is the help section like? How about those broken links? What about search and indexes?
At a higher level you are looking at issues of confidence.
Is the site learnable? Can you find what you are looking for? Here we look for measures of navigation, link behavior, readability and the like.
YOU GET A WEB SCORECARD
At the end of this evaluation you get a scorecard. Your scorecard is a roadmap that identifies where you need to tweak, where you need to overhaul and where you are doing well. We’ve completed over 200 website evaluations for the U.S. House of Representatives, using a custom scorecard.
The current scorecard instrument that we’re using is a little more streamlined. We have a total score of 100 based on 36 weighted factors and we are looking at three areas, three core processes where you need to shine to succeed as a website.
- Providing valuable content
- Helping people find stuff and
- Leveraging the feedback loop
Together the 36 factors could be the basis of editorial guidelines or publishing rules.
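To make the weighting concrete, here is a minimal sketch of how a pass/fail scorecard like this can be tallied. The factor names and weights below are illustrative stand-ins, not the actual 36-factor instrument:

```python
# Hypothetical scorecard sketch: the real instrument has 36 weighted factors
# across three areas; these three factors and weights are made up for the example.
FACTORS = {
    # factor: weight (weights across the full instrument sum to 100)
    "valuable_content": 40,
    "findability": 35,
    "feedback_loop": 25,
}

def score(results):
    """Sum the weights of every factor the site passes.

    Each factor is pass/fail (no shades of gray), so the total
    is simply the sum of the weights of the passing factors.
    """
    return sum(weight for factor, weight in FACTORS.items()
               if results.get(factor, False))

# A site that passes the first two factors but fails the third scores 75/100.
print(score({"valuable_content": True, "findability": True, "feedback_loop": False}))
```

Because each factor maps to a concrete rule, the same table doubles as the editorial guidelines mentioned above: a publisher can check a page against the factor list before it goes live.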
A lot of large enterprise organizations look at a scorecard as step one in a process that’s about implementing some standards across the organization.
If you have a baseline review you get a scorecard that shows how you are performing today along with the vision of where you want to be tomorrow. To get there you need to document the rules, to set expectations with your content publishers.
We have a lot of experience here and would be delighted to talk to you about helping you create publishing rules as an extra project after you complete the scorecard.
PROS AND CONS
An expert review has some pros and some cons.
On the plus side, this is a method that performs really well as a change agent. You have an opportunity to use it to pave the way to be a standards organization. Grades are a very familiar way to score things for executives, and you get a report that helps you track things over time.
On the downside, it is very important that you have a uniform understanding of what the rules are and what the measured points are. You also have to understand that this is a pass-or-fail score: do you do it, or do you not do it? There are no shades of gray in the way we do the scorecard.
Most importantly you are not involving users in this, it is an expert taking a look at your site. And you are not really looking at task performance.
HUMAN PERFORMANCE TESTING
To set the stage here, I’d like to call out Professor Barry Schwartz, author of The Paradox of Choice: Why More Is Less. He makes the point that people often judge their own satisfaction based on peak moments, the highs and the lows, as well as the result.
The bottom line is did they complete the task and what stood out — good and bad — along the way?
When we do a top task tune-up, a standard project looks something like this:
- First you recruit your participants, your test subjects. No fewer than eight is highly recommended.
- Then you are creating scenarios by identifying the top tasks that these folks try to do.
- Then you conduct the test. We use the Usability Testing Environment (UTE), and we run it remotely using the GoToMeeting collaboration tool.
We are going to analyze the results for you and prepare a test report which will be very illuminating.
A quick word about our favorite tool, the application we use to administer the test: UTE.
During a usability test it lets you run the test remotely, which expands the geography over which you can recruit. It also speeds up data collection and automates measurements that might otherwise be in dispute, eliminating the guesswork about when people completed a task, how many clicks it took and the like.
When you automate these sorts of things you can concentrate on analyzing and observing behavior to discover patterns that will improve your website.
SELECT THE RIGHT TASKS
Selecting the right tasks to include in your test is a critical success factor.
We recommend no more than seven. We chose that number because, frankly, your testers are going to lose focus and get a little restless if they have to submit to a test longer than an hour. Seven is usually the right number to assure you don’t burden your testers.
In securing your participants the thing to think about is whether they are representative of your user community. You should know going in that there is good research out there saying you can get good results with this sort of test with as few as eight subjects.
The first time you conduct a top task tune-up you will be doing it to set a baseline of performance. Here you are comparing the success ratio of your users against the goals your organization has established for that ratio.
Ideally after the baseline is set, you go back and re-test the same scenarios.
This way your comparison point is against the baseline. When you compare a baseline test with an iterative test you have a much more realistic grasp of the strength of the improvements made on your site based on lessons learned when doing usability testing.
Some web managers are really interested in how they stack up against the competition. Here what you’ll plan is a top task tune up which compares how your target audience performs on your website versus how they do on a competitor’s site using the same questions.
ABOUT TEST PARTICIPANTS
Let’s switch gears a minute and talk a bit about test participants.
You may recall that I said no fewer than eight participants are necessary when you recruit testers. I know of some complex-site owners who decided they needed 16 participants, because they had a multi-layered audience and a real need to get into the weeds and compare how different audience members would react to the site. Here, you would aim to split the test group between men and women, and capture information about their age and education level to match the target audience the web manager has in mind.
Think about what your own target audience looks like. How would you cut the apple between gender, age and education targets?
If you do a top task tune up you’ll get a really beefy report at the end of the project which shows the success ratio of your users in performing top tasks during the test.
In addition to the success ratio we are measuring other factors such as number of clicks and time to task.
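As a rough illustration of how those measurements roll up into a report, here is a small sketch that aggregates session records into a success ratio, average clicks and average time on task. The field names and numbers are hypothetical, not output from any particular testing tool:

```python
# Hypothetical session records for one top task; in practice a tool like UTE
# captures completion, click counts and timing automatically.
sessions = [
    {"task": "find_contact", "completed": True,  "clicks": 4, "seconds": 38},
    {"task": "find_contact", "completed": True,  "clicks": 6, "seconds": 55},
    {"task": "find_contact", "completed": False, "clicks": 9, "seconds": 120},
    {"task": "find_contact", "completed": True,  "clicks": 5, "seconds": 42},
]

def summarize(records):
    """Roll raw session records up into the headline test metrics."""
    n = len(records)
    done = [r for r in records if r["completed"]]
    return {
        "success_ratio": len(done) / n,                       # completions / attempts
        "avg_clicks": sum(r["clicks"] for r in records) / n,  # effort across all attempts
        "avg_seconds": sum(r["seconds"] for r in records) / n,
    }

print(summarize(sessions))
```

Run the same summary on the baseline test and on each iterative re-test, and the comparison described above falls out of a simple side-by-side of these numbers per task.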
YOUR TEST REPORT
In presenting the data, you might show an iterative test, say one done in March compared with one done in November, where you can see some small improvement. Improvements, however high or low, will be measurable results.
A test report can also be presented as a nice one-pager that you can imagine being very easy to deliver to executives. It would show a real measurable improvement between the baseline test and a final prototype test. I’ve seen such reports presented at a government conference, and I think it’s a wonderful way to showcase actionable results and show your success to your leadership. You can include data about the success ratio, how much time it took to do the task, the number of clicks and how they got there.
And most importantly: capture notes and observations as you listen in as the tester thinks out loud as they are trying to do the task as quickly as possible so your internal documents from the test give decision-makers as much information as possible.
PROS AND CONS
Any usability test choice has pluses and minuses.
For the top task tune-up, on the plus side, the software is sweet. It is going to measure everything with precision. Of course, on the downside, it must be set up correctly to collect the data.
On the plus side, the think aloud method is going to show important cues – lessons learned from the user experience. On the downside you are missing the visual cues you’d get if you were in the room.
On the plus side, you can do this with as few as eight testers. On the downside, you must be really careful about selection bias when selecting those participants and not drop below eight people.
On the plus side, you are going to zero in on the top tasks, the things that are really important to your users and matter most. On the downside, you are going to miss the deeper review of content and navigation that another usability test procedure would give you.
Finally a top task tune up can be repeated over time, as we’ve seen with some of the examples I’ve shown you.
On the downside if you choose to do this only once it is going to limit the impact of choosing this method in the first place.
THE WEB BLUEPRINT PACKAGE
The last usability test method in this presentation is what I call the web blueprint package.
I draw that name from my web governance model that says there are three phases of web management:
- Draw the blueprint
- Build the Web site
- Manage the life-cycle.
This strategic map gives you a foundation to make marketing choices to reach the right audience with the right content.
There are different times across the life-cycle of a website when the time is right for doing a web blueprint. Perhaps you are at the start of a redesign and need help with navigation labels and structure. Or maybe you want to avoid an expensive redesign altogether and make targeted fixes and adjustments to poorly performing navigation. In other situations, you may need to ramp up your positioning in the marketplace and get a handle on how you stack up against competitor sites from a user’s point of view.
If you go down this road, you can expect several deliverables including a card sort exercise, a facilitated feedback session and some valuable benchmarking that compares your site to your top competition.
What is great about a card sort is you are putting your whole navigation through a rigorous exam from your users’ perspective. Does it fit the mental model of how they want to organize the website?
You want your site navigation to be highly intuitive, simple and easy to learn, and your users can tell you how to do that.
A card sorting exercise is low-cost and low tech. You are essentially putting your labels for your navigation — first tier, second tier, sometimes even third tier — all on cards in a nice big deck, and you are asking your users to sort it out and put it in a hierarchy that makes sense to them.
You are going to maximize the chance that users will find what they are looking for when you follow their mental model.
As you compare the results of different testers with the same deck of cards patterns will emerge that will improve navigation.
Remember, it is only one item per card, and you can run this as an open sort or a closed sort: either you set the category names participants sort into ahead of time, or you let your users decide what category names make sense to them.
Other options are to ask your participants to clarify what the words mean to them to unmask jargon. You can also ask them to make notations about whether they understood the name and mark it on their cards. All very revealing data.
A lot of people wonder about navigation. Should I organize my stuff by business group, by information type, by category, by subject, by process? Let your users answer these questions by doing a card sort. It will also help you collapse parent child categories and create a simpler, cleaner and streamlined navigation. Again, intuitive to users.
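For a sense of how those patterns emerge from the raw piles, here is a small sketch that tallies a card-pair co-occurrence matrix from open-sort results. Cards that most participants group together are strong candidates to share a navigation category; the card labels below are invented for the example:

```python
from collections import Counter
from itertools import combinations

# Hypothetical open-sort results: each participant grouped the same deck of
# cards into piles of their own devising.
sorts = [
    [{"Pricing", "Plans"}, {"Support", "FAQ", "Contact"}],
    [{"Pricing", "Plans", "FAQ"}, {"Support", "Contact"}],
    [{"Pricing", "Plans"}, {"FAQ", "Contact", "Support"}],
]

def co_occurrence(all_sorts):
    """Count how often each pair of cards landed in the same pile."""
    pairs = Counter()
    for piles in all_sorts:
        for pile in piles:
            # Sort so each unordered pair is counted under one canonical key.
            for a, b in combinations(sorted(pile), 2):
                pairs[(a, b)] += 1
    return pairs

matrix = co_occurrence(sorts)
# The pairs grouped together by the most participants lead the ranking.
print(matrix.most_common(3))
```

With a small participant pool you can eyeball this matrix directly; with larger decks, the same counts feed cluster analysis to suggest the category hierarchy.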
PROS AND CONS
When you choose to do a card sort as your usability test of choice, there are pros and there are cons.
On the advantage-side it is easy, it is inexpensive, it is very user-centric. And you are going to get top results to really improve the navigation structure of your site for terminology, labeling and categories.
On the downside, remember you are not doing task analysis, you are not doing best practice expert review, so you don’t get the advantages of those types of tests.
It is a little time-consuming, a little tricky to analyze, and it limits the number of participants, your testers.
But on balance, card sorting is a really good choice to get you the information you need to improve your site.
A facilitated feedback session is the last leg of our blueprint package. And this is something you might choose to do yourself. But oftentimes we notice large organizations find it very valuable to bring on help for this kind of exercise.
Let’s face it: you have enough to do managing the day-to-day activities of your website. Strategic planning is something you can outsource so you have an informed, independent way to make judgments with your organization’s strategic goals in hand.
We interview the key stakeholders you have identified, with questions designed to get issues out on the table. We like the structure of a workshop or stakeholder feedback session where people can get together as a group and talk about the long-term goals and what your organization wants to accomplish through your website. What you get is a valuable, strategic report that illuminates your competitive advantage and where you stand in the marketplace.
This post has reviewed the pros and cons of a range of usability techniques. We’ve talked about expert reviews, top task tune ups and a blueprint package that includes card sorting, facilitated feedback and benchmarking.
All of these are valuable tools in your arsenal as you make the moves to make your website more friendly, more satisfying and more productive for your users.
So if you are ready to move beyond exploring your options and commit to usability testing, we hope you’ll give us a call. Any of the services we’ve talked about today, from expert reviews and top task tune-ups to a blueprint package, can be offered as single-shot projects, iterative projects or a combination.
We think web managers who commit to website usability make a smart choice. You are going to improve the user experience, make your users more satisfied and more productive, and put your organization on the road to achieving its goals.