Week 7: How to get visitor feedback on digital interactives

Coming from a communications- and business-oriented background, the reading that caught my eye was “When to ask and when to shut up: How to get visitor feedback on digital interactives.” Just two weeks ago in Management class, we learned about proper ways of testing hypotheses and interviewing customers. Interestingly, this article resonates with the methods I learned in that course, and actually goes deeper. There are four major ways to gather customer insights: interviews, surveys, observations, and usability testing. The last one is the focus of this article.

Usability testing in particular can be very effective in assessing how a museum’s digital interactive is received by its users. This matters because when users have a bad experience with a digital tool, they often don’t know themselves why they’re struggling; usability testing brings makers and users together to work out the problems.

To conduct a proper usability test, one must first recruit suitable participants. It’s certainly possible to ask museum visitors as they explore the gallery on a given day, but that sample will not be representative of your target segment. It’s better to pre-select participants and invite them to the institution for the specific purpose of testing the interface. Next, it’s important to give people tasks, because when people use digital tools it is usually to accomplish something. For example, the article says, “if you are concerned that the map does not distinguish between the first and second floors, ask the participant to find an object on the second floor while on the first floor.”

The next step is knowing how to guide the user’s experience smoothly without baking your own biases into the questions you ask. Here one must be careful and patient, wording open-ended questions so that the users do as much of the talking as possible in describing their experiences. If they hit a problem during use, it is highly unlikely that they will know its source. But as you have them talk through what they do, see, feel, and want, your own knowledge of the tool’s features will help you pinpoint where the problem lies.

The specific examples provided in this article were fascinating to read through. For instance, how should you guide a user who gets stuck while using your tool? Giving “hints” is obviously wrong, as is repeatedly asking “What’s the problem?” Instead, the author suggests taking the screen away for a few moments and asking the user what was on it. This gives insight into what was easy versus hard for the user to find.

Finally, after the usability data has been gathered, it’s important to evaluate each participant’s results against everyone else’s, looking for common trends and determining whether certain features were real problems or just inconveniences. This process helps prioritize the digital tool’s design iterations.
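
To make that aggregation step concrete (this is my own illustration, not something from the article): if you log the issues each participant ran into, even a few lines of Python can surface the common trends, since an issue that recurs across sessions is more likely a real problem than a one-off inconvenience. The issue labels and data below are invented for the sake of the example.

```python
from collections import Counter

# Hypothetical session notes: one list of observed issues per participant.
# These labels and data are made up purely for illustration.
sessions = [
    ["map floors confusing", "search box hard to find"],
    ["map floors confusing", "font too small"],
    ["map floors confusing", "search box hard to find", "back button hidden"],
]

# Count how many participants hit each issue (set() ensures a participant
# who hits the same issue twice is only counted once).
issue_counts = Counter(issue for notes in sessions for issue in set(notes))

# Rank issues by how widespread they are: recurring issues are likely
# real problems, while one-off issues may just be inconveniences.
for issue, count in issue_counts.most_common():
    print(f"{count}/{len(sessions)} participants: {issue}")
```

Of course, a real evaluation would weigh severity as well as frequency, but even a simple tally like this makes it easier to decide what to fix first.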

In conclusion, I’d say that this article definitely provided a much more in-depth and detailed review of usability testing than my management textbook!

3 thoughts on “Week 7: How to get visitor feedback on digital interactives”

  1. I like how you connected the article to management concepts. As a sociology major, there are definitely some points that resonate with me too when collecting data in the social sciences. It invokes a lot of critical thinking, and I think a lot of this is somewhat statistical as well. I think a lot of people don’t realize how much strategy goes into these tests, and that it’s not just a test-run or asking for input.

  2. As a fellow sociology major, I agree with ayagrace. The interesting part about getting visitor feedback, and often the most divisive, is the wording that comes with interviews and surveys. I think usability testing can remove a lot of the bias that comes with those methods, especially since it relies on recorded observations and quantitative data about how the user responds to the technology, or what they remember about it.

    I feel you on the management textbooks though. They don’t always explain things in depth.

  3. Usability testing is definitely a really cool tool, and very effective at figuring out the benefits and drawbacks of a given technology, because, as you said, people often don’t know what they’re struggling with (they might also feel compelled to say in a survey that they can easily use a technology, but it’s harder to mask their ability during usability testing). I wrote about the article on the poltillize app, which mostly got positive feedback during evaluations, but all of the evaluations were based on interviews and Google Analytics. I wonder if usability testing would have changed the results.