Developing a robust A/B testing process is important for a number of reasons. In our experience:
- It leads to the best results
- It helps you to avoid common pitfalls
- It’s more credible within your organisation
- You gain learnings that last longer than any single UI
In this post I will outline our tried-and-tested process for running a single A/B test as part of our ongoing optimisation programmes. Typically we will run between 2 and 8 tests each month, although this can vary widely based on a range of factors.
1. Gather Insights
We have a wide range of tools and techniques that we use to gather insight from users, ranging from analytics and lab-based moderated user testing to surveys, session recording and much more.
2. Develop Hypotheses
This phase will include identifying usability issues and finding opportunities to make a design more persuasive. This may also be the stage at which you identify fundamental business questions or hypotheses which can be answered through testing.
Defining an aim and hypotheses for each test before you start will allow you to evaluate the performance and draw conclusions later on.
Deciding how to prioritise your hypotheses for testing is really important. Fortunately, I’ve written about it in detail in a separate post.
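If you don’t yet have a prioritisation framework, a simple scoring model is a sensible starting point. The sketch below uses ICE (Impact, Confidence, Ease) scoring — one common approach, not necessarily the exact method I describe elsewhere — and the hypotheses and ratings are made-up examples:

```python
# Minimal sketch of ICE-style hypothesis prioritisation.
# Hypotheses and 1-10 ratings below are hypothetical examples.

def ice_score(impact, confidence, ease):
    """Each input is a 1-10 rating; higher totals get tested first."""
    return impact * confidence * ease

hypotheses = [
    ("Shorten checkout form", ice_score(8, 6, 7)),
    ("Reword hero headline", ice_score(5, 4, 9)),
    ("Redesign pricing page", ice_score(9, 5, 3)),
]

# Print the backlog in priority order.
for name, score in sorted(hypotheses, key=lambda h: h[1], reverse=True):
    print(f"{score:>4}  {name}")
```

However you score them, the point is to make the trade-offs explicit rather than testing whichever idea shouts loudest.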
3. Design Concepts
This stage will vary a great deal depending on the scope of your test. If you are looking to reword some key messages then this will be a fairly quick stage.
On the other hand, if you are looking to do something more dramatic such as redesigning a page template, then this is where we start sketching, prototyping, user testing, building and signing off our variations.
4. Configure Testing
The next step is to get your experiment set up within your chosen testing tool (we love Optimizely!).
As well as configuring your variation(s) and key settings such as: segmentation, targeting, goals, analytics integration, etc., this is also the stage at which you should consider what additional tracking might be useful to gain further insight from your experiment.
For example, you may decide to create new goals within your testing or analytics tool (provided they integrate) to track key user behaviour such as:
- Tracking plays of a new video
- Tracking clicks on a new page element
- Tracking scroll depth on a new long-form landing page
Once it’s live, monitor your test initially to check it is running correctly, then try to avoid constantly peeking at the results while the test runs. We find peeking can lead to panic or set unreasonable expectations.
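One way to resist the urge to peek is to agree up front roughly how many visitors the test needs, and only call it once that sample is reached. The sketch below estimates the required sample per variation for a standard two-proportion z-test; the baseline rate, target lift, and thresholds are hypothetical, and your testing tool’s own duration calculator may use different assumptions:

```python
# Rough pre-test sample-size estimate for a two-sided
# two-proportion z-test. Rates below are hypothetical examples;
# this is a sketch, not a replacement for your tool's calculator.
import math
from statistics import NormalDist

def sample_size_per_variation(p_base, p_target, alpha=0.05, power=0.8):
    """Visitors needed in EACH variation to detect p_base -> p_target."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_power = NormalDist().inv_cdf(power)          # desired power
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return math.ceil((z_alpha + z_power) ** 2 * variance
                     / (p_target - p_base) ** 2)

# e.g. 3% baseline conversion, hoping to detect an absolute +0.5% lift
n = sample_size_per_variation(0.03, 0.035)
print(n)
```

Numbers like this also help set expectations with stakeholders about how long a test genuinely needs to run.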
5. Analyse Results
You will need to judge carefully when you can call your test. Then you can start drawing insights from the results. Of course it’s always great to get a big headline conversion lift but there’s a lot more that can be learnt by looking at secondary conversion metrics and even failed tests.
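Part of judging when to call a test is checking whether the difference you see is statistically significant at the threshold you agreed before launch. A minimal sketch of a two-proportion z-test, with hypothetical conversion counts, might look like this:

```python
# Minimal two-proportion z-test sketch for calling an A/B test.
# Conversion counts below are hypothetical examples.
import math
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Control: 300/10,000 converted; variation: 360/10,000
p = two_proportion_p_value(300, 10_000, 360, 10_000)
print(f"p = {p:.4f}")  # compare against your pre-agreed alpha (e.g. 0.05)
```

Run the same check on your secondary metrics too — that is often where the more interesting learnings hide.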
I’ve recently written a separate article about this in more detail.
6. Identify Learnings
We take great care in documenting the key learnings from testing. This acts as a record of testing results, allows us to avoid repeating tests, and often fuels hypotheses for further testing.
Communicating results and learnings is also really important. It’s a great way to build support for your optimisation programme.
I hope this process serves you well or allows you to build on your current process. Let me know if your process differs in any way.