
How to Select and Prioritise Your Testing

By Matt Lacey, 03 Dec 2013

What’s wrong with endless button colour testing?

Disclosure: We DO test buttons. Size, prominence, copy, positioning and, very occasionally, colour.

However, in many cases there are higher-value tests that we could be running. There will almost always be constraints that limit the number of tests you can run: design and development effort, a limited number of conversions, the limitations of your tools, and so on. Prioritising your tests is therefore absolutely crucial. Where can you get the most bang for your buck? In this post I will share the most important things to consider when choosing what to test.
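
The 'limited number of conversions' constraint is worth quantifying before you commit to a test. As a rough illustration (my own, not from the article), here is a minimal Python sketch using the standard two-proportion sample-size approximation to estimate how long a simple A/B test needs to run; the baseline rate, lift and traffic figures are made-up examples.

```python
import math

def visitors_per_variant(baseline_rate, relative_lift,
                         z_alpha=1.96, z_power=0.84):
    """Approximate visitors needed in EACH variant to detect the lift
    at 5% significance with 80% power (z-values hardcoded)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Example: 3% baseline conversion, hoping to detect a 10% relative lift,
# with 5,000 eligible visitors a day split across two variants.
n = visitors_per_variant(0.03, 0.10)
print(f"~{n:,} visitors per variant, ~{math.ceil(2 * n / 5000)} days")
```

If a test would take months to reach significance, that alone can push it down the priority list.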

Selecting What to Test


If you've been working on Conversion Rate Optimisation (CRO) projects for any length of time you'll know that there is a huge range of ways to capture insights on user behaviour. No matter which tools/techniques you use, here are the three areas you absolutely must have covered.

1. What is the data telling me about visitor behaviour?

This is typically, but not exclusively, found through your web analytics tool. Web analytics can provide a range of insights that help you to understand where people are abandoning forms, bouncing or hitting error messages, and which devices, countries or browsers are under-performing. (A simple sketch of flagging test candidates in exported analytics data follows the list below.)

Key areas to consider:

  • Top landing pages
  • Pages with high footfall (traffic)
  • Key leak or conversion points
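
As promised above, here is a hedged sketch (my own, not from the article) of flagging high-traffic pages with weak conversion from a CSV export of your analytics tool. The column names ('page', 'sessions', 'conversions') are assumptions and will vary by tool and export format.

```python
import pandas as pd

df = pd.read_csv("page_performance.csv")  # hypothetical analytics export
df["conversion_rate"] = df["conversions"] / df["sessions"]

site_rate = df["conversions"].sum() / df["sessions"].sum()

# High footfall, below-average conversion: prime candidates for testing.
candidates = df[(df["sessions"] > df["sessions"].quantile(0.75))
                & (df["conversion_rate"] < site_rate)]
print(candidates.sort_values("sessions", ascending=False).head(10))
```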

2. What are users telling me about their experiences? What can we observe about their behaviour?

If web analytics can help us to understand 'what' users are doing, user research can start to fill in the picture by tackling the 'why' behind user behaviour, in areas such as:

  • Usability errors
  • Relevance, persuasion, motivation
  • Influential proposition messaging

3. What does the business know about customer struggles or business challenges?

Finally, don't overlook what your business knows about customer behaviour. People within the business often hold valuable information: common user errors, feedback from social media, feedback from stores, and so on.

“If only HP knew what HP knows, we would be three times more productive.” Lew Platt, CEO, Hewlett-Packard

I love this quote. It highlights the importance of setting up communication channels for feedback and suggestions. This can be a really positive process that gains support for your testing programme.

Key Sources:

  • Customer service insights
  • Merchandising team
  • Store staff

Prioritising your tests


So now you've got tons of exciting ideas and you're ready to start testing, but where do you start? Here are a few key considerations that we have found invaluable when developing our testing schedules.

1. Triage

Firstly, it's important to identify whether an issue or improvement is worth testing, or can be tested at all. Armed with insights from regular analytics investigation, a range of user research techniques and internal feedback, you're almost certain to have more hypotheses than you can test. Some things will be too small to warrant testing and may be best implemented as a JDI (Just Do It) change, while others may require new functionality or a more comprehensive redesign that sits above testing (e.g. a full checkout redesign).

Here are the categories that we use to triage ideas/improvements:

  • Just Do It (JDI) – small changes that can be implemented without testing and monitored using analytics
  • Single Feature Test – a specific test on a single issue
  • Batch Testing – a test on a range of small changes with a similar theme
  • Radical Redesign Testing – a full-page redesign tested against the current version
  • Larger Redesign Projects – improvements that are not practical to test and require a more in-depth redesign process

Ideas might not fit neatly into one of these categories, but classifying them in this way helps you to think about the scale of each issue in a pragmatic way.
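
To make the triage step concrete, here is a minimal sketch (my own illustration, not PRWD's tooling) of tagging hypotheses with these categories so a backlog can be filtered into a testing schedule; the example hypotheses are invented.

```python
from dataclasses import dataclass
from enum import Enum

class Triage(Enum):
    JDI = "Just Do It"
    SINGLE_FEATURE = "Single Feature Test"
    BATCH = "Batch Testing"
    RADICAL_REDESIGN = "Radical Redesign Testing"
    REDESIGN_PROJECT = "Larger Redesign Project"

@dataclass
class Hypothesis:
    description: str
    triage: Triage

backlog = [
    Hypothesis("Fix broken postcode validation message", Triage.JDI),
    Hypothesis("Rewrite delivery messaging on product pages",
               Triage.SINGLE_FEATURE),
    Hypothesis("Tidy up basket microcopy and spacing", Triage.BATCH),
]

# JDI items skip the test queue: implement them directly and monitor
# the results in analytics rather than running an experiment.
to_test = [h for h in backlog if h.triage is not Triage.JDI]
```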

2. Triangulation

Once we have classified issues into different levels, we look for issues that have been identified by multiple research methods. If multiple sources confirm or strengthen the same issue, we give it a higher priority.
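
As a concrete illustration of triangulation (my own sketch; the issues and sources are invented), you might simply count how many independent research methods surfaced each issue and rank accordingly:

```python
# Issues mapped to the research methods that surfaced them.
evidence = {
    "Unclear delivery costs at basket": {"analytics", "user testing",
                                         "customer service"},
    "Weak homepage proposition": {"user testing"},
    "Form errors on mobile": {"analytics", "store staff"},
}

# More independent sources = stronger evidence = higher priority.
ranked = sorted(evidence.items(), key=lambda kv: len(kv[1]), reverse=True)
for issue, sources in ranked:
    print(f"{len(sources)} sources: {issue}")
```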

3. Potential Impact vs Effort Required

The next step is to start to evaluate the estimated potential impact of the improvement, against the effort required to run the test. This might be the amount of design or development required, creation of test assets, test configuration, level of sign-off required, etc.

Your prioritisation should therefore balance the 'likelihood to impact key metrics' against the 'effort required'.
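
Here is a minimal sketch of that balance (my own simplification, not a formal method from the article): score each candidate's estimated impact and required effort on 1-5 scales and rank by the ratio. The candidates and scores are made up for illustration.

```python
candidates = [
    {"test": "Simplify checkout address step", "impact": 5, "effort": 4},
    {"test": "Add delivery promise to product page", "impact": 4, "effort": 2},
    {"test": "Button colour tweak", "impact": 1, "effort": 1},
]

# Higher impact per unit of effort floats to the top of the schedule.
for c in candidates:
    c["score"] = c["impact"] / c["effort"]

for c in sorted(candidates, key=lambda c: c["score"], reverse=True):
    print(f"{c['score']:.2f}  {c['test']}")
```

A crude ratio like this won't settle every ordering decision, but it quickly separates cheap high-impact tests from expensive long shots.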

Summary

Selecting tests based on an understanding of the 'whats' and 'whys' of user behaviour, and prioritising them in a considered, pragmatic way, will give you the best chance of running tests that make a significant impact on key metrics and improve the overall performance of your site or business.

How do you prioritise tests in your organisation?


By Matt Lacey

Matt Lacey is our commentator on Site Testing and Optimisation as part of Conversion Rate Optimisation, and Head of Optimisation at PRWD. You can follow him on Twitter or connect on LinkedIn.
