It’s great to get positive test results, not only for the impact they have on business KPIs but also for the support your testing programme will gain from internal stakeholders. But not every A/B test will get the results you expect.
Don’t be disheartened; tests where the variation didn’t win can often be the most insightful and valuable to learn from. You’ve tested an assumption, letting the visitors to your site decide which option works best for them. By testing the change, you’ve measured the impact it would have had, rather than releasing it site-wide and monitoring the change. Furthermore, a negative test forces you to reassess and test again.
It is incredibly important to take learnings from unsuccessful tests, but there are some best practice techniques that will increase the likelihood of success in your next test.
5 Steps to Perfect Your Next A/B Test
1. Sound hypothesis
A well-thought-out hypothesis and rationale is key. We base our tests on insights from user research and analytics, giving us a comprehensive view of the ‘what’ and the ‘why’. If you base tests on internal bias and opinion, it will take a long time to build up any solid learnings, and you are unlikely to see big wins, because there is little insight into user behaviour backing the rationale for the test.
2. Don’t be too trigger happy
If the results look negative when a test first goes live, there can be a panicked temptation to pull it. Don’t! There may be an initial ‘shock’ effect as the users’ journey is disrupted, but this will even out over time. If your site has a high percentage of returning visitors, you may need to run the test slightly longer. Make sure you run your test for long enough: for example, don’t stop it until it has run through at least two typical business cycles and has reached statistical significance.
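As a rough illustration of the significance check, here is a minimal two-proportion z-test in Python. The visitor and conversion numbers are made up, and most testing tools will do this calculation for you; this is just a sketch of what “reached statistical significance” means:

```python
import math

def ab_test_significance(control_visitors, control_conversions,
                         variant_visitors, variant_conversions):
    """Two-proportion z-test: is the variant's conversion rate
    significantly different from the control's?"""
    p_control = control_conversions / control_visitors
    p_variant = variant_conversions / variant_visitors
    # Pooled conversion rate under the null hypothesis (no real difference)
    pooled = (control_conversions + variant_conversions) / \
             (control_visitors + variant_visitors)
    se = math.sqrt(pooled * (1 - pooled) *
                   (1 / control_visitors + 1 / variant_visitors))
    z = (p_variant - p_control) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: 5.0% vs 5.6% conversion over 10,000 visitors each
z, p = ab_test_significance(10000, 500, 10000, 560)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these example numbers the p-value comes out just above 0.05, so despite the variant looking 12% better, the test hasn’t yet cleared the usual 5% significance bar; stopping it early here would be exactly the trigger-happy mistake described above.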
3. Don’t make assumptions
When you have seen a positive test result, don’t assume that because the change was a success on one page of the site you can apply it site-wide; test it! Your customers will have different behaviours depending on what kind of browsing state they are in, where they have come from and other elements on the page.
4. Take time to properly interpret the results
When focussing a test on one page of the site, it is easy to overlook other metrics across the site and the effect on the end-to-end conversion funnel. The variation may be winning against the goal for the specific test, but are you just moving drop-out to the next step of the funnel? Make sure you can analyse the overall impact of a change, and focus on the macro conversion rate.
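To see how that can happen, here is a quick sketch with invented funnel numbers, showing a variation that wins at one step while the macro (end-to-end) conversion rate doesn’t move at all:

```python
# Hypothetical visitor counts at each funnel step for each test arm
funnel_steps = ["product_page", "basket", "checkout", "order_complete"]

control = {"product_page": 10000, "basket": 3000,
           "checkout": 1500, "order_complete": 900}
variant = {"product_page": 10000, "basket": 3600,
           "checkout": 1550, "order_complete": 900}

def step_rates(counts, steps):
    """Conversion rate from each step to the next, plus the macro rate
    (last step divided by first step)."""
    rates = {}
    for here, there in zip(steps, steps[1:]):
        rates[f"{here} -> {there}"] = counts[there] / counts[here]
    rates["macro"] = counts[steps[-1]] / counts[steps[0]]
    return rates

for name, counts in [("control", control), ("variant", variant)]:
    print(name, {k: round(v, 3) for k, v in step_rates(counts, funnel_steps).items()})
```

In this made-up example the variant lifts the product-page-to-basket rate from 30% to 36%, which would look like a clear win if that step were the test goal, but the extra visitors drop out at checkout and the macro rate is 9% in both arms.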
5. If you get a negative test, learn why
If a test fails in one area of the site where it had previously improved metrics in another, you should be asking why. Visitors are at different stages of their journey, which helps explain why their behaviour isn’t the same. More importantly, find out what you can about the mix of visitors in the test. A good start to interpreting the results is drilling down into the data: segment your visitors by new vs. returning, device, and referral source.
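A simple sketch of that kind of segmentation is below. The records and field names are hypothetical (in practice you’d export these from your testing or analytics tool), but the idea is the same: break conversion rate out by segment and test arm rather than looking only at the blended number.

```python
from collections import defaultdict

def conversion_by_segment(records, key):
    """Conversion rate per (segment value, test arm),
    e.g. key='device' or key='visitor_type'."""
    totals = defaultdict(lambda: [0, 0])  # (segment, arm) -> [conversions, visitors]
    for r in records:
        bucket = totals[(r[key], r["arm"])]
        bucket[0] += r["converted"]  # True counts as 1
        bucket[1] += 1
    return {seg: conversions / visitors
            for seg, (conversions, visitors) in totals.items()}

# Hypothetical per-visitor records from an A/B test
records = [
    {"arm": "control", "device": "mobile",  "visitor_type": "new",       "converted": False},
    {"arm": "control", "device": "desktop", "visitor_type": "returning", "converted": True},
    {"arm": "variant", "device": "mobile",  "visitor_type": "new",       "converted": True},
    {"arm": "variant", "device": "desktop", "visitor_type": "returning", "converted": False},
]
print(conversion_by_segment(records, "device"))
print(conversion_by_segment(records, "visitor_type"))
```

Even this toy data shows the point: a variation can win on mobile and lose on desktop at the same time, and the blended result hides both stories.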
The Win/Win Mentality
Following our tips will improve your chances of getting meaningful results and insights, making testing a win/win opportunity: you either have a positive impact on your KPIs, or you gain insight into your users that you didn’t previously have, which you can feed into further tests and improvements.
- Ensure you start with a considered hypothesis and rationale for your test, preferably based on user insight
- Let the test run for long enough and always check the statistical significance
- Fully analyse, understand, report and learn from every test, whether it had a positive impact or not!