Top task performance indicators

Lewis Wake
Wednesday 20 September 2017

This is a follow-up to a previous post I wrote on “Identifying your website’s Top Tasks”, following a seminar hosted by the user experience expert, author, and speaker Gerry McGovern. This post covers the methods for measuring customer success rates through testing.


Success and time

The internet is about answers. If it takes a long time for a user to complete a task, they will likely come away with an unsatisfying experience of a website, even if they ultimately complete the task. In the fibre broadband era, when an Amazon order can be delivered to your door within two hours, digital response times are vital to customer success rates.

Google have long stated that if a web page takes more than three seconds to load, a customer is likely to drop out. Optimising the technical speed of your website is one solution, but to finesse the site further, the top tasks must be made simpler for customers to complete.
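To make that concrete, here is a minimal sketch (mine, not from the post or McGovern’s method) of how a measurement session’s raw data might be scored: each attempt records whether the participant completed the task and how long it took. All field names and figures are hypothetical.

```python
# A minimal sketch of scoring one measurement session.
# Each attempt records whether the participant completed the task
# and how long it took; every value here is illustrative.

attempts = [
    {"task": "Find semester dates", "completed": True, "seconds": 42},
    {"task": "Find semester dates", "completed": True, "seconds": 95},
    {"task": "Find semester dates", "completed": False, "seconds": 180},
]

successes = [a for a in attempts if a["completed"]]
success_rate = len(successes) / len(attempts)
mean_time = sum(a["seconds"] for a in successes) / len(successes)

print(f"Success rate: {success_rate:.0%}")
print(f"Mean completion time (successful attempts): {mean_time:.0f}s")
```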


Choosing what tasks to measure

Having identified your site’s top tasks, you will already know the main reasons users visit your website. These are the tasks to compose your test around. Tasks that attract more than 25% of the cumulative vote should have multiple questions in the test, while lower-priority tasks should have one question each.
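As a sketch of that allocation rule, assuming some hypothetical tasks and vote counts (the 25% threshold comes from the paragraph above; the two-question figure for high-priority tasks is purely illustrative):

```python
# A sketch of allocating test questions by vote share.
# Task names and vote counts are hypothetical; the 25% threshold
# comes from the post, the question counts are assumptions.

task_votes = {
    "Search for a course": 3200,
    "Find staff contact details": 1400,
    "Check semester dates": 900,
    "Find library opening hours": 500,
}

total = sum(task_votes.values())

for task, votes in task_votes.items():
    share = votes / total
    questions = 2 if share > 0.25 else 1
    print(f"{task}: {share:.0%} of the vote -> {questions} question(s)")
```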


Differences to usability testing

There are a lot of similarities between regular usability testing and tests of top task performance. There are also some subtle, but key, differences. The sections below identify the differences between the two methods.

For further reading on usability testing, please refer to the listed posts from Maria Drummond.


Choosing customers for participation

Commonly, conducting usability testing with as few as five people can bring meaningful insight to a product. However, a sample that small doesn’t provide enough confidence for a task performance indicator. Gerry McGovern has found that testing with 13 to 18 participants stabilises the measured success rates.
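To see why a five-person sample is too unstable for an indicator, consider the margin of error on a measured success rate. This back-of-envelope calculation is mine, not part of McGovern’s method; it uses a normal-approximation 95% confidence interval and a hypothetical 70% observed success rate.

```python
# A rough sketch of how sample size affects the stability of a
# measured success rate, using the normal-approximation 95%
# confidence interval. The 70% observed rate is hypothetical.
import math

observed = 0.70

for n in (5, 13, 18, 30):
    margin = 1.96 * math.sqrt(observed * (1 - observed) / n)
    low, high = max(0.0, observed - margin), min(1.0, observed + margin)
    print(f"n={n:2d}: 95% CI roughly {low:.0%} to {high:.0%}")
```

At five participants the interval spans most of the scale; in the 13 to 18 range it narrows enough to make comparisons between rounds meaningful.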

To further improve confidence in the test results, maintain the same mix of participants for repeat tests. This ensures your metrics reflect changes to the site rather than changes to the customer selection mix.


Refrain from vocalising during measurement sessions

Because usability sessions are video recorded, participants are usually encouraged to vocalise their thought processes and the reasoning behind every action. This is another key difference when measuring task performance: thinking aloud changes participants’ behaviour, and it isn’t how they would typically use the product, so participants should stay silent during measurement sessions.

Although you will lose insight into the justification for participants’ actions, you will in turn gain accurate results on the success rates of the tasks measured.


Continuous improvement

Measuring the performance and success rates of top tasks should be undertaken continuously. Gerry McGovern advises running the same tests every six to twelve months to capture improvements in task performance over time.
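A minimal sketch of comparing results round over round; the tasks, dates, and figures are invented for illustration:

```python
# A minimal sketch of comparing success rates across repeat
# measurement rounds. All names and figures are illustrative.

rounds = {
    "2017-03": {"Search for a course": 0.58, "Check semester dates": 0.71},
    "2017-09": {"Search for a course": 0.74, "Check semester dates": 0.69},
}

baseline = rounds["2017-03"]
latest = rounds["2017-09"]

for task in baseline:
    change = latest[task] - baseline[task]
    print(f"{task}: {baseline[task]:.0%} -> {latest[task]:.0%} ({change:+.0%})")
```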

Iterative improvement helps to maintain the continuing success of a website, and capturing the results of these tests will help ward off the eventuality of your manager walking into your office and uttering the five most dreaded words in the digital world: “We need a new website.”

