Usability Testing Demystified

There seems to be this idea going around that usability testing is bad, or that the cool kids don’t do it. That it’s old skool. That designers don’t need to do it. What if I told you that usability testing is the hottest thing in experience design research? Every time a person has a great experience with a website, a web app, a gadget, or a service, it’s because a design team made excellent decisions about both design and implementation—decisions based on data about how people use designs. And how can you get that data? Usability testing.

Jared Spool will tell you for free that when his company researched the causes of failed designs, they found that lack of information was the root of all bad design decisions. The point of user research is to make good, solid, confident decisions about design. Why usability testing as opposed to using other methods? I contend that 80% of the value of testing comes from the magic of observing and listening as people use a design. The things you see and the things you hear are often surprising, illuminating, and unpredictable. This unpredictability is tough to capture in any other way.

The other 20% of the value comes from the pre-testing discussions team members have as they decide what their Big Questions are and the post-testing discussions about what to do with what they've learned.

One test doesn’t fit all

When I say “usability test,” you may imagine something that looks like a psych experiment: The “Subject” sits in one room with a stack of task cards, perhaps with biometric sensors attached. The “Researcher” is in another room, madly logging data and giving instructions over an intercom as the voice of god.

That image of a usability test is what I’d call “formal usability testing,” and is probably going to be summative and validating. It’s a way to verify whether the design does what you want it to do and works the way you want it to work.

This is often the kind of test done toward the end of a design cycle. What I’m interested in—and I think most of you are interested in—is how to explore and evaluate in the early and middle stages of a design.

THE CLASSIC PROCESS

The process that Jeff Rubin and I present in the Handbook of Usability Testing, Second Edition could be used for a formal usability test, but it could also be used for less formal tests that can help you explore ideas and form concepts and designs. The steps are basically the same for either kind of test:

  • Develop a test plan
  • Choose a testing environment
  • Find and select participants
  • Prepare test materials
  • Conduct the sessions
  • Debrief with participants and observers
  • Analyze data and observations
  • Create findings and recommendations

Let’s walk through each of these steps.

DEVELOP A TEST PLAN

Sit down with the team and agree on a test objective (something besides “determine whether users can use it”), the questions you’ll use, and characteristics of the people who will be trying out the design. (We call them participants, not subjects.) The plan also usually includes the methods and measures you’ll use to learn the answers to your research questions. It’s entirely possible to complete this discussion in under an hour. Write everything down and pick someone from the team to moderate the test sessions.
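
If your team keeps its artifacts in a repo, even a one-hour plan can live as structured data. Here is a minimal Python sketch of the plan elements described above; the field names and example values are illustrative, not any standard schema:

    # A minimal sketch of a usability test plan as structured data.
    # Field names and values here are illustrative, not a standard schema.
    from dataclasses import dataclass, field

    @dataclass
    class TestPlan:
        objective: str                 # one specific goal, not "can users use it"
        research_questions: list[str]  # the team's Big Questions
        participant_profile: str       # behavior-based description of recruits
        methods: list[str] = field(default_factory=list)   # e.g. think-aloud
        measures: list[str] = field(default_factory=list)  # e.g. completion, quotes
        moderator: str = ""            # the team member who runs the sessions

    plan = TestPlan(
        objective="Learn whether first-time visitors can book a room unaided",
        research_questions=[
            "Where do people hesitate in the date-selection step?",
            "Do people notice the total price before confirming?",
        ],
        participant_profile="People who book their own hotel stays online",
        methods=["think-aloud", "observation"],
        measures=["task completion", "points of hesitation", "verbatim comments"],
    )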

CHOOSE A TESTING ENVIRONMENT

Will you use a lab? If not, what’s the setup? Will you record the sessions? Again, the team should decide these things together. It’s good to include these logistics in the test plan.

FIND AND SELECT PARTICIPANTS

Focusing on the behavior you’re interested in observing is easier than trying to select for market segmentation or demographics. If you’re testing a web conferencing service, you want people who hold remote meetings. If you’re testing a hotel reservation process on a web site, you want people who do their own bookings. If you want to test a kiosk for checking people into and out of education programs, you want people who are attending those programs. Make sense? Don’t make recruiting harder than it has to be.
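
To make the behavior-first idea concrete, here is a toy Python sketch of a screener that filters candidates on what they do rather than who they are. All of the candidate fields below are invented for illustration:

    # Sketch of behavior-first screening: select on what candidates do,
    # not on demographics. The candidate records are made up.
    candidates = [
        {"name": "A", "books_own_hotels": True,  "bookings_last_year": 4},
        {"name": "B", "books_own_hotels": False, "bookings_last_year": 0},
        {"name": "C", "books_own_hotels": True,  "bookings_last_year": 1},
    ]

    def matches_behavior(c):
        # The screener question maps directly to the behavior you want to observe.
        return c["books_own_hotels"] and c["bookings_last_year"] >= 1

    participants = [c for c in candidates if matches_behavior(c)]
    print([c["name"] for c in participants])  # -> ['A', 'C']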

PREPARE TEST MATERIALS

You’re going to want some kind of guide or checklist to make sure that the moderator addresses all of the research questions. This doesn’t mean asking the research questions of the participants; it means translating the research questions into task scenarios that represent realistic user goals.

In the test materials, include any specific interview questions you might want to ask, prompts for follow-up questions, as well as closing, debriefing questions that you want to ask each participant.
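
One way to keep the moderator guide honest is to store each research question right next to the task scenario that stands in for it. A hypothetical Python sketch (both the questions and the scenarios are invented examples):

    # Sketch: pair each research question with the task scenario that
    # represents it in the session. All content below is invented.
    guide = {
        "Do people notice the total price before confirming?": (
            "You're booking two nights in Chicago next month for under "
            "$300 total. Reserve a room."
        ),
        "Can people change a reservation after making it?": (
            "Your plans changed: move your Chicago stay back one week."
        ),
    }

    for question, scenario in guide.items():
        # The participant only ever hears the scenario, never the question.
        print("RESEARCH QUESTION:", question)
        print("TASK SCENARIO:   ", scenario)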

CONDUCT THE SESSIONS

The moderator is the master of ceremonies during each session. This person sees to the safety and comfort of the participants, manages the team members observing, and handles the data collected.

Though only one person from the team moderates, as many people from the team as possible should observe usability test sessions. If you’re going to do multiple individual sessions, each team member should watch at least two sessions.
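
If several people are observing, a shared, timestamped log keeps their notes comparable across sessions. A minimal sketch in Python, assuming a simple CSV format of the team's own choosing:

    # Sketch of a shared observation log for team members watching sessions.
    # One row per thing seen or heard; tagging by task helps later analysis.
    import csv, datetime

    def log_observation(path, session, task, note):
        """Append a timestamped observation to a CSV file observers share."""
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow(
                [datetime.datetime.now().isoformat(timespec="seconds"),
                 session, task, note]
            )

    log_observation("observations.csv", "P3", "book-room",
                    "Scrolled past the total price twice without noticing it")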

DEBRIEF WITH PARTICIPANTS AND OBSERVERS

At the end of each session, be sure to take a step back with the participant and ask, “How’d that go?” Also, invite the trained observers to pass follow-up questions to the moderator or to ask questions themselves. Thank the participant, compensate him or her, and say good-bye.

Now, the team observing should talk briefly about what they saw and what they heard. (This discussion is not about solving design problems, yet.)

ANALYZE DATA AND WRITE UP FINDINGS

What you know at the end of a usability test is what you observed: what your team saw and heard. When you look at those observations together, the weight of evidence helps you examine why particular things happened. From that examination, you can develop theories about the causes of frustrations and problems. After you generate these theories, team members can use their expertise to determine how to fix design problems. Then, you can implement changes and test your theories in another usability test.
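
As an illustration of that “weight of evidence”: if observations are tagged by issue, a simple tally shows which problems recurred across sessions before anyone starts theorizing. The tags and data below are invented:

    # Sketch: tally tagged observations across sessions so the weight of
    # evidence is visible before the team theorizes about causes.
    from collections import Counter

    # (tag, session) pairs pulled from the observation log; invented data.
    observations = [
        ("missed-total-price", "P1"), ("missed-total-price", "P2"),
        ("missed-total-price", "P4"), ("date-picker-confusion", "P2"),
    ]

    by_issue = Counter(tag for tag, _ in observations)
    for tag, count in by_issue.most_common():
        sessions = sorted({s for t, s in observations if t == tag})
        print(f"{tag}: seen {count} times across sessions {sessions}")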

WHAT YOU GET

If you follow this process in a linear way, you’ll end up with thorough planning, solid controls, heaps of data, rigorous analysis, and—finally—results. (As well as a lot of documentation.) It can feel like a big deal, and sometimes it should be.

But most real-world usability tests need to be lighter and faster. Some of the best user experience teams do only a few hours of testing every month or so, and they may not even think of it as “usability testing.” They’re “getting input” or “gathering feedback.”

Whatever. As long as it involves observing real people using your design, it’s usability testing.

Someone, something, someplace

Really, all you need for a usability test is someone who is a user of your design (or who acts like a user), something to test (a design in any state of completion), and someplace where the user and the design can meet and you can observe. Someplace can even be remote, depending on the state of the design. You can do all that fancy lab stuff, but you don’t have to.

Once you get into a rhythm of doing user research and usability testing, you’ll learn shortcuts and boil the process down to a few steps that work for you. When we get down to the essential steps in the usability testing process, this is what it tends to look like:

DEVELOP A TEST PLAN

In the classic process, a usability test plan can be several pages long. Teams in the swing of testing all the time can work from a minimalist structure: one or two lines on each element of the plan.

FIND PARTICIPANTS

Again, this is about behavior. Say, for example, that the behavior you’re interested in is parents going through the process of getting their kids into college. Just make sure you:

  • Know your users
  • Allow enough time
  • Learn and be flexible
  • Remember they’re human
  • Compensate lavishly

CONDUCT THE SESSIONS

If you’re the moderator, do your best to be impartial and unbiased. Just be present and see what happens. Even the designer can be the moderator, as long as you can step back and treat the test as an objective exercise.

Remember that this is not about teaching the participant how to use the interface. Give a task that realistically represents a user goal and let the rest happen. Just listen and watch. (Of course, if the task is something people are doing in real life and they’re having trouble in the session, show them the correct way to do the task with the current design after you’ve collected your data.)

As the session goes on, ask open-ended questions: Why? How? What?

DEBRIEF WITH OBSERVERS AND COME TO CONSENSUS ABOUT DESIGN DIRECTION

Talk. Brainstorm. Agree. Unless the design was perfect going into the usability test (and that’s a rare thing), even if the team has only done one or two sessions, use the observations you made to come up with theories about why things happened for participants the way they did. Make some changes and start the cycle again.

 

Where do great experience designs come from? Observing users

Getting input from users is great; knowing their requirements is important. Feedback from call centers and people doing support is also helpful in creating and improving designs. Whatever your team might call it—usability testing, design testing, getting feedback—the most effective input for informed design decisions is data about the behavior and performance of people using a design to reach their own goals.
