How to conduct usability testing
If you’re new to usability testing, you may find my post ‘What is usability testing and why do we do it?’ useful before diving into the how-to. While that post looked at the what and why, this post will look at the how, focusing on the five planning and testing stages of usability testing. If you’d like more information on what to do post-testing, check out an agile way to prioritize usability problems.
Step one: determine what you would like to find out
Testing can be helpful for finding issues with content and design at all stages of the development process – and you may need to learn about different aspects of the site at each stage.
If you have a brand-new website that has never been tested, more generalised tasks for the participant will give an overall view of the main usability problems.
If you’ve just updated a section of a website, creating tasks that force the participant to use this section will be beneficial. This was key for the digital communications team when we tested the PGT pages: users were instructed to find information specifically relating to the newly designed course pages.
But you don’t need to have a new website or new content to conduct usability testing. In fact, it’s beneficial to test your current site before any major changes are made so that it can be compared with newer versions to check that they successfully remove any usability issues.
Having a clear idea of what you want to learn about your website will make creating the script easier and result in relevant and constructive results.
Step two: write a script
Writing a script ensures that everything you need tested will be attempted by the participant, and it also provides a general, repeatable structure for all tests.
But before you dive into the questions, it’s important to put the participant at ease, as they will then behave more naturally during the testing. This is achieved by clearly explaining the set-up of the test and how long it will take, and by stressing that you are testing the website, not their abilities. It is also worthwhile adding some icebreaker questions, such as ‘What’s your name?’ and ‘What do you do?’
The script should contain specific tasks that the user must complete; each task is focused on an area you would like to evaluate. For example, could the user find out the cost of a particular course, or the contact details for a member of staff? Rather than just asking a question straight off the bat (“Find the contact details for so-and-so”), create a scenario as well. The scenario alongside the questions sets the scene and provides context for the participant. Web usability expert Steve Krug describes a scenario like this:
“The Scenario is like a card you might be handed for an improvisation exercise in an acting class: it gives you your character, your motivation, what you need to do and a few details.”
So, when we tested the PGT pages, the scenario for all participants was that they were a prospective postgraduate student thinking about applying to the University of St Andrews to study economics and they were looking to find out more information about the course before they applied.
Check out Krug’s example of a test script for more ideas.
At this stage it’s also worth creating a recording consent form. This ensures that you have the participant’s permission to record the session. An example consent form can be downloaded from Steve Krug’s website.
Step three: organise your location and technology
Finding a suitable place to conduct the testing is important for more accurate results – try to find a comfortable location which is quiet and distraction-free.
This is also the time to check you have all of the necessary technology for the testing. Typically you will need:
- computer, tablet or phone (depending on what you’re testing)
- screen recording software
A separate microphone may seem unnecessary, as many computers have built-in mics. However, an external mic ensures sound clarity, which is invaluable during evaluation.
The test needs to be as distraction-free as possible, so using a buggy computer from the back of the cupboard that crashes all the time is inadvisable. The same goes for the screen recording software. There are many screen recorders available, but one that we use is Lookback, which is free to use and can also record mobile devices.
Step four: find the right people
Ideal testers are people who were not involved in the development of the site (and do not have extensive knowledge of it) and who are from your target audience. For example, when we were testing the PGT pages, we recruited prospective postgraduates.
It’s also worth noting that there are benefits to recruiting people outside your target audience, the main one being the ‘fresh pair of eyes’ factor: these recruits can often spot issues that your ‘traditional’ users may miss.
As for the number of participants, that’s up to you. Traditionally when collecting data, the bigger the sample, the better. However, this isn’t the case for usability testing. Krug recommends a maximum of three users per round, and gives several reasons why this is the ideal number. Perhaps the key benefit is that you end up with a concise set of notes that is easy to process, rather than a mountain of notes that makes it harder to see the underlying usability problems.
Step five: conduct the testing
The room is booked, the computer and recording software are set up (and tested!), you have the script and you’re ready to go. This is the general process on the day:
- participant arrives
- participant signs recording consent form
- the facilitator begins reading from the script and the participant attempts to complete each task
- the participant completes all tasks
- the recording is stopped and the participant leaves.
It is beneficial to have two facilitators, one reading the script and one observing the test and taking notes.
Here’s a video of a demo test provided by Krug:
Step six is the review stage, where you and any stakeholders watch footage of the recordings. In my post on an agile way to prioritize usability problems I go into detail about the reporting stage of usability testing, where usability issues are prioritised and actioned.
Usability tests should not be one-off events. To reap the full benefits of this process, testing should take place on a regular basis to help discover and rectify any usability issues.
If you’re a member of staff at the University of St Andrews and need support with conducting usability testing, please contact the digital communications team.