Coming from a communications- and business-oriented background, the reading that caught my eye was “When to ask and when to shut up: How to get visitor feedback on digital interactives.” Just two weeks ago in Management class, we learned proper methods for hypothesis testing and customer interviewing. Interestingly, this article resonates with those methods and actually goes deeper. There are four major ways to gather customer insights: interviews, surveys, observations, and usability testing. The last one is the focus of this article.
Usability testing in particular can be very effective in assessing how a museum’s digital interactives are received by users. This matters because when users have a bad experience with a digital tool, they often don’t know why they’re struggling; usability testing brings makers and users together to work out the problems. To conduct a proper usability test, one must first recruit suitable participants. It’s possible to ask museum visitors as they explore the gallery on a given day, but that sample will not be representative of your target segment. It’s better to pre-select participants and invite them to the institution for the specific purpose of testing the interface. Next, it’s important to give people tasks, because when people use digital tools it is usually to accomplish something. For example, the article suggests that if you are concerned the map does not distinguish between the first and second floors, you can ask the participant to find an object on the second floor while they are standing on the first.
The next step is guiding the user’s experience smoothly without baking your own biases into the questions you ask. Here one must be careful and patient, wording open-ended questions so that users do as much of the talking as possible while describing their experiences. If they encounter a problem during use, it is unlikely they will know the source of the frustration. But by having them talk through what they do, see, feel, and want, and by knowing your digital tool’s features, you’ll be better able to pinpoint the source of the problem.
The specific examples provided in this article were fascinating to read through. For instance, how should you guide a user who gets stuck while using your tool? Giving “hints” is obviously wrong, as is constantly asking “What’s the problem?” Instead, the author suggests taking the screen away for a few moments and asking the user what was on it. This reveals which things were easy for the user to find and which were hard.
Finally, after usability data has been gathered, it’s important to evaluate each participant’s results against the others, looking for common trends and determining whether certain features were real problems or mere inconveniences. This process helps prioritize the digital tool’s design iterations.
In conclusion, I’d say that this article definitely provided a much more in-depth and detailed review of usability testing than my management textbook!