Clicks don’t really matter

Maria Drummond
Wednesday 18 September 2019

Over the last few months I’ve received several requests to track the number of clicks it takes a user to complete a task to determine the “improved” usability of a website.

This type of request is in addition to meetings where the number of clicks has been used as a deciding factor when discussing potential design changes.

So, to avoid any uncertainty: the number of clicks it takes for a user to complete a task on a website is (usually) completely irrelevant.

Why click counting fails

Relying on the number of clicks to make decisions about web design is a recipe for disaster – just counting the number of clicks from the homepage does not tell the whole story.

As Page Laubheimer of Nielsen Norman Group (NN/g) puts it:

“In the real world, users make mistakes, misunderstand things, and get confused along the way. Simply counting the number of steps in a process misses out on what users actually do, and the opportunities to provide them with a less frustrating experience.”

Specifically, click counting “misses out” on the following topics for consideration:

  • What is the content like?
  • Is the website slow to load?
  • Is the content organised in a way that makes sense?
  • How would users actually reach that content (i.e. would they be more likely to go straight to the page from a Google search)?

From my perspective, there are two rationales fuelling this anti-click discourse:

  1. the 3-click rule
  2. the desire to quantify usability tests.

1. The 3-click rule

The 3-click rule states that every page on a website should be reachable in three clicks or fewer.

As I type, the University has over 250,000 web pages. Yes, you read that correctly. Now let’s imagine that each page is the equivalent of a task a user needs to complete. That would mean, from the University homepage, the user would have to be able to potentially access any one of those 250,000 web pages within three clicks.

This could feasibly be done, but at what cost? Pages, navigation bars and drop down menus would become saturated with links to subpages; the website’s information architecture would become broad, rather than deep.
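
To get a rough sense of just how broad, here is a back-of-the-envelope sketch in Python. The 250,000-page figure is the one quoted above; the assumption that links fan out evenly from the homepage is mine, purely for illustration.

    # Rough estimate: how many links would every page need so that all
    # ~250,000 pages sit within three clicks of the homepage?
    # Assumes an evenly branching site structure (an illustrative simplification).

    TOTAL_PAGES = 250_000
    MAX_CLICKS = 3

    def reachable(branching_factor: int, clicks: int) -> int:
        # With b links per page, the pages reachable in up to `clicks` clicks
        # number roughly b + b^2 + ... + b^clicks.
        return sum(branching_factor ** level for level in range(1, clicks + 1))

    # Find the smallest branching factor that covers every page.
    links_per_page = 1
    while reachable(links_per_page, MAX_CLICKS) < TOTAL_PAGES:
        links_per_page += 1

    print(f"Links needed on every page: {links_per_page}")  # 63
    print(f"Pages reachable in {MAX_CLICKS} clicks: {reachable(links_per_page, MAX_CLICKS):,}")  # 254,079

In other words, something like 63 links on every single page, just to satisfy the rule.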

This method of organising content would not only look terrible, but would be distracting for the user, who would likely become overwhelmed and use the website search or Google search to find what they need instead.

Fun fact: the 3-click rule is not supported by data in any published study. In fact, research by Joshua Porter has debunked it:

“If there is a scientific basis to the Three-Click Rule, we couldn’t find it in our data. Our analysis left us without any correlation between the number of times users clicked and their success in finding the content they sought.

Our analysis showed that there wasn’t any more likelihood of a user quitting after three clicks than after 12 clicks. When we compared the successful tasks to the unsuccessful ones, we found no differences in the distributions of task lengths. Hardly anybody gave up after three clicks.”

Bonus fact: our own usability testing has shown that users prefer using the search bar when completing a specific task, such as “find semester dates”, rather than trying to navigate to the page anyway. How d’ya like them apples?

2. Quantifying usability tests

Let’s be honest, hard data sounds good in reports, for example:

“Users completed tasks using fewer clicks. Task completion now takes an average of four clicks compared to eight with the previous design.”

But what does a statement like this actually tell us? It doesn’t indicate how the users felt during those four steps, or whether their experience was hindered in other ways because of the reduction in clicks.

However, to some – often those who commission usability studies – fewer clicks equals happier users.

This is why the time taken to complete a task is also largely redundant as a metric for success. The average time it takes a user to complete a task is only partially due to the design of a website. You also need to consider the age of the user and their experience with both IT and the hardware you’re running the test on. How does the facilitator play into the test? Were the questions worded in a way that confused users, resulting in several false starts?

With that many caveats, how helpful can these metrics really be?

Quantifying the unquantifiable

I often find it’s worth clarifying that the usability tests I conduct and report on are primarily a source of qualitative, rather than quantitative, data. The focus is on observational findings that identify the ease of use of certain designs.

As stated in another insightful article on qualitative vs quantitative data by NN/g:

“Quantitative metrics are simply numbers, and as such, they can be hard to interpret in the absence of a reference point. For example, if 60% of the participants in a study were able to complete a task, is that good or bad?”

Quantitative data can be brought into a report as long as it is supported by context or a reference. For example, what counts as a user successfully completing the task?
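
As a sketch of what that context can look like, here is how a task completion rate might be calculated against an explicit definition of success. The participant data and the success criterion (finished the task without facilitator help) are hypothetical, purely for illustration.

    # Hypothetical example: a quantitative figure reported alongside the
    # definition of "success" that produced it.
    from dataclasses import dataclass

    @dataclass
    class TaskAttempt:
        participant: str
        finished: bool                 # reached the target content
        needed_facilitator_help: bool  # had to be prompted by the facilitator

    def completion_rate(attempts: list[TaskAttempt]) -> float:
        # Our (hypothetical) success criterion: finished without help.
        successes = [a for a in attempts
                     if a.finished and not a.needed_facilitator_help]
        return len(successes) / len(attempts)

    attempts = [
        TaskAttempt("P1", finished=True,  needed_facilitator_help=False),
        TaskAttempt("P2", finished=True,  needed_facilitator_help=True),
        TaskAttempt("P3", finished=False, needed_facilitator_help=False),
        TaskAttempt("P4", finished=True,  needed_facilitator_help=False),
    ]

    print(f"Completion rate: {completion_rate(attempts):.0%} "
          "(success = finished the task without facilitator help)")  # 50%

The number on its own says very little; stating the criterion next to it is what makes it usable in a report.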

Exceptions to the rule

Obviously there are times when the number of steps a user needs to take must be considered. This decision depends on the size of the website and the top tasks for users on that site.

For example, there would be issues with burying the most important task several steps into the process. Imagine you were ordering a pizza online. These are the steps you would likely follow:

  1. Go to pizza website
  2. Click ‘order for takeaway’ (or collection)
  3. Choose pizza
  4. Pay for pizza.

It would be futile to bury the link to order pizza beneath pages about company information or job vacancies. Now, this is an extreme example, because a pizza company has a very well-defined user audience (people who want to eat pizza), and one main ‘super task’: ordering pizza.

Now, compare this with the University website. Who is our audience? More like who isn’t our audience, amirite? Audiences span prospective and current undergraduate, postgraduate and research students. And that’s just the students. How can we possibly provide the information they need within three clicks of the homepage without detracting from the overall user experience?

What to measure instead

Instead of worrying about small things like clicks (which also take an age to measure), more insight can be gleaned from general patterns in user behaviour. Some starting points to consider when reporting on usability testing:

  • Did users ignore the feature you’re trying to promote? How many users did this affect, and why could this be?
  • Is there any particular terminology that users didn’t like or understand? What could be used instead?
  • Was something actually broken?
  • Did several users mention they liked a particular aspect of the design?

One finding I like to lead with is whether users were able to complete all tasks or not. This is usually a good indicator of the overall success of the website and is straightforward enough to indicate to any stakeholders whether the design works or not.

Finally, when it comes to proving your point, video evidence of a user saying “this looks awful” will be far more impactful than a line in a report about the number of clicks.
