3 Clever Tips for Anyone Doing Usability Testing

I recently shared my experiences with usability testing during a knowledge exchange with fellow researchers at work. As I was mocking up my casual presentation, I kept thinking about all the different resources, articles, and guides for how to conduct usability tests. While these serve as a great starting point or reference, the best way to learn usability testing is to just do it. You will likely mess up, unintentionally lead a participant, or struggle to get a wayward session back on track. But unlike other, more cerebral or analytical research activities, usability testing is akin to a sport: watching a game on TV doesn't make you an athlete.

The dialogue that emerges from conversations about usability testing is more useful than any one-way presentation on the topic. Below, I've outlined three actionable takeaways from my discussion with five fellow user experience researchers.

1. No quantitative metrics without user quotes. We can all relate to the pressure of delivering numbers. Those in business, sales, and managerial positions often need numbers to make a case and support decision-making within an organization. Those in research and design look to numbers to help prioritize production tasks and decipher where users may be struggling. As someone who is conducting usability tests, you will be asked to gather quantitative data from sessions with users in order to assess how a particular offering is performing.

While the act of gathering quantitative data is extremely valuable, it can be incredibly misleading if the numbers are not supported, or further illuminated, by user quotes and qualitative feedback. This is especially true when teams have various roles conducting usability tests. The issue is this: if a person conducting the usability test is told that their only goal is to assess time on task or record a measure of difficulty, the session outcomes will be superficial at best, especially if the person performing the test is not experienced with usability testing. Let me break down a scenario in which numbers alone prove to be more misleading than informative.

Say that after a set of related tasks, you ask the user how confident they are that they completed the assigned task correctly, on a scale of 1 to 5 (with 1 being not confident and 5 being the most confident). The user may rate their confidence a 4 or a 5 even while vocally expressing statements like, "Wait, did I do this correctly?" or "I don't understand what I'm supposed to do next." They may have unknowingly failed to complete the task correctly and still assign themselves a high confidence rating! Reporting an average confidence level of '4' after such a test is a poor outcome because the number could be fraught with misunderstanding.

I've found that a good rule of thumb is to never present a numeric outcome without two user quotes that either support or illuminate the number. For example, a user may rate their confidence during a set of login tasks a 5 (with 5 representing the most confident). The quotes that support this number may be, "This login is a lot like Facebook" or "This feels familiar." Now we can understand that this user is confident in their ability to log in to the platform because they feel they have done this before. Conversely, another user may also rate their login confidence a 5 because "it only requires one entry field to login." As designers, creators, and product managers, we don't just want our users to be confident; we want to understand what makes them feel confident so we can design that into the entire platform experience. In this case, the product team would have to decide between designing for familiarity with other popular applications and designing for the fewest input fields, a decision that depends on analyzing the quantitative and qualitative usability testing feedback together.
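If your findings live in a script or notebook rather than a slide deck, this rule is easy to enforce mechanically. Here is a minimal Python sketch of the idea; the data structure and names are my own illustration, not output from any research tool. It simply holds back any metric with fewer than two supporting quotes:

```python
from dataclasses import dataclass, field

@dataclass
class MetricFinding:
    task: str            # e.g. "Login"
    metric: str          # e.g. "confidence (1-5)"
    value: float         # the number you plan to report
    quotes: list[str] = field(default_factory=list)  # verbatim user quotes

def report(findings):
    """Surface each metric, holding back any number with fewer than two quotes."""
    for f in findings:
        if len(f.quotes) < 2:
            print(f"[HOLD] {f.task} / {f.metric} = {f.value} "
                  f"(needs {2 - len(f.quotes)} more supporting quote(s))")
            continue
        print(f"{f.task} / {f.metric} = {f.value}")
        for q in f.quotes:
            print(f'  - "{q}"')

report([
    MetricFinding("Login", "confidence (1-5)", 5,
                  ["This login is a lot like Facebook", "This feels familiar"]),
    MetricFinding("Rate article", "confidence (1-5)", 4),  # no quotes yet
])
```

A spreadsheet column that must be non-empty before a number makes it into the readout serves the same purpose.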

2. Abstract user tasks 3 times before testing. One of the biggest challenges in usability testing is crafting a scenario or statement so that you test what you intend to test without leading the participant. The ability to articulate without hand-holding is what separates insightful results from superficial ones.

When preparing a usability plan, work yourself towards abstraction. During a round of usability testing, my product team and I wrote out exactly what we wanted the user to do within the interface: Highlight the star icons, ranging from one to five stars. Naturally, we cannot tell the user to do this, or the test will simply measure how well they can follow literal instructions! So we abstracted this task once by asking ourselves: what could we ask to prompt the user to do just this? We then wrote out: Apply a rating to the selected article.

This is getting a little better. We are no longer referencing the exact 'star' icon but rather the action of 'rating,' and we are also changing the context from 'the range of 1 to 5' to the content of the article. Still, it is riding the line between leading and open-ended. As interfaces and interaction patterns continue to normalize, 'apply a rating' and five adjacent star icons become nearly synonymous, so we may still only be evaluating the user's ability to follow instructions, not how they translate personal goals into actions within an interface.

So we abstracted our statement one step further: Document your satisfaction with the selected article. Now our user has a few options. They could leave a comment, save the article for later, or apply a rating using the star icons. By abstracting the task, we learn not only how easily the user understands the icons and interactions in the interface, but how they interpret and act upon the concept of 'documenting satisfaction.' We learn more about our users when we abstract tasks to the degree that we observe how they interpret and act, rather than how well they listen and execute.
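When writing a usability plan, it can help to keep each task's whole abstraction ladder in one place. Here is the example above sketched as an annotated Python list; the structure is my own convention, not a standard research artifact:

```python
# The same task written at three levels of abstraction, annotated with what
# each phrasing actually measures. The wording is illustrative, not a template.
TASK_LADDER = [
    # Level 0 -- literal: tests only whether the user can follow instructions.
    "Highlight the star icons, ranging from one to five stars.",
    # Level 1 -- action-level: hides the widget, but 'apply a rating' still
    # maps almost one-to-one onto the five star icons.
    "Apply a rating to the selected article.",
    # Level 2 -- goal-level: commenting, saving, and rating are all valid,
    # so you observe how the user translates a goal into an action.
    "Document your satisfaction with the selected article.",
]
```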

3. Have a set of tactics you can employ for different usability personalities. Let's face it: no two users are the same. They may have the same job role, work for the same company, and have the same educational background. This doesn't mean they will engage with a usability test in the same fashion. I've listed some of the more common usability testing personality types you may come across, along with tips on how to address each during testing.

Literal thinkers. This user tends to focus solely on completing the task at hand. Addressing them with binary tasks or questions will result in binary answers.

  • Tip: have a list of short but non-binary (not 'yes' or 'no') follow-up questions that correspond with certain tasks in order to get more information from the user. You can employ them as needed throughout the session.

Wayward explorers. This user may get off task easily or steer the conversation away from the tasks. Allowing for some flexibility is good because you want the user to express themselves, but it's critical to balance the wayward explorer's tangents with focus.

  • Tip: a great tactic from my research lead, Cary-Anne, is to always have a tab open with a screen or starting point that you can use to reorient your user at any point during the session. Find a time to elegantly interject and have them begin from a selected point in a scenario or task list.

Eloquent explainers. This user is very articulate and can easily relate the tasks they are given to anecdotes or illuminating comments.

  • Tip: stay out of the way! This sounds like the easiest scenario to encounter, but it's easy to mess up a good thing when you aren't aware of it. Let them talk, and avoid nervously retreating to your task list. Don't cut them off to move to another task if it means missing out on expressive qualitative anecdotes.

Determined hung-ups. Sometimes, those we are testing with get stuck. This user gets hooked on a particular part of the interface, be it a design bug or a hiccup that prevents them from moving forward.

  • Tip: never make a user feel like they have done something wrong. There is no right or wrong; there is only observation. If a user cannot get past some element, button, or hiccup, employ a tactic similar to the one for wayward explorers: identify a point either prior to or beyond the hang-up and reorient the user to begin again. Frame it as a natural progression of the testing, not a redo. You want the user to walk away feeling like they contributed, not feeling frustrated.

By preparing yourself ahead of time to encounter a range of behaviors during usability testing, you can employ tactics that help your user feel at ease and generate higher-quality feedback. While there may be correlations between job roles and usability testing behavior, a user's job title should not determine your tactic; only their behavior in the session should indicate the tactic, or combination of tactics, you employ. It's also important to note that while some personality types may be easier to test with, the value of the feedback provided is not determined by a user's testing behavior. That's why it is so critical to have a set of tactics for getting the most feedback out of a variety of behavior types: each user brings a unique and valuable lens to the table.
