Once an AB testing experiment is complete, we can carry out a post-test analysis to evaluate the results, draw conclusions and decide what action to take next.
A brief note before we continue to the post-test analysis: it is crucial that we only conclude experiments when we have sufficient data to draw valid conclusions. For an AB test, this means a minimum number of conversions, a minimum number of business cycles and, of course, statistical significance for our test goals.
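As a minimal sketch of what "statistical significance for our test goals" means in practice, the snippet below runs a two-tailed two-proportion z-test on conversion counts. The function name and the example numbers are illustrative, not from any particular tool; a dedicated testing tool or statistics library would normally do this for you.

```python
import math


def conversion_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an AB test (two-tailed).

    conv_*: number of conversions; n_*: number of visitors.
    Returns the z-score and p-value for the difference in
    conversion rates between control (a) and variation (b).
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value


# Illustrative figures: 2.0% vs 2.5% conversion on 10,000 visitors each
z, p = conversion_significance(200, 10_000, 250, 10_000)
```

With these made-up numbers the difference is significant at the conventional 5% level; with smaller samples the same lift often would not be, which is exactly why we wait for sufficient data before concluding.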
Results and analysis
An experiment will usually have a single primary goal, so that is the first set of data we must present. Typically we will then drill down into secondary metrics and specific interaction data. To illustrate, if our primary metric on an ecommerce site is ‘Transaction Complete’, secondary goals might be ‘Add to Basket’ and average order value (AOV). A specific interaction goal might measure the usage of a new element added in the variation.
In a lead generation landing page optimisation context this might look like:
- Primary goal: New lead generated
- Secondary goal: Newsletter sign-ups
- Interaction goals: Video plays, clicking into different tabs
Beyond the goals specified in our test plan, we will dedicate some time to looking for other trends or patterns in the data and to exploring the behaviour of different audiences and segments. In these cases we have to be careful to ensure that each segment still includes a minimum number of conversions, and to test segment results for statistical significance.
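The minimum-conversions guard for segments can be sketched as a simple filter. The segment names, counts and the threshold of 100 conversions per arm below are all illustrative assumptions; the appropriate threshold should come from your own test plan.

```python
# Illustrative threshold: minimum conversions per arm before we
# trust a segment's result (set this per your test plan)
MIN_CONVERSIONS = 100

# Hypothetical segment data: (conversions, visitors) per arm
segments = {
    "mobile":  {"control": (90, 4_000),  "variation": (120, 4_100)},
    "desktop": {"control": (210, 6_000), "variation": (260, 6_100)},
    "tablet":  {"control": (12, 600),    "variation": (18, 620)},
}


def segment_is_analysable(segment):
    """Only analyse a segment if both arms reach the minimum conversions."""
    return all(conv >= MIN_CONVERSIONS for conv, _visitors in segment.values())


analysable = [name for name, seg in segments.items() if segment_is_analysable(seg)]
```

Segments that pass the guard would then go through the same significance testing as the overall result; the rest are noted as directional at best.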
At this stage we want to evaluate how the results collected impact our initial experiment hypothesis. There are three main outcomes that we may arrive at. If the variation outperforms the control and validates our hypothesis, then we have a successful test outcome. If the variation performs significantly worse than the control, then we have a failed experiment, potentially saving us from launching something that would have had a detrimental impact on the bottom line. The other outcome is that the results show no discernible difference in the performance of our variations, leading to an inconclusive test result.
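The three outcomes above can be expressed as a small decision rule over the observed lift and its p-value. The function name and the 5% significance threshold are assumptions for illustration; your own threshold belongs in the test plan.

```python
# Assumed significance threshold; set per your test plan
ALPHA = 0.05


def experiment_outcome(lift, p_value, alpha=ALPHA):
    """Map a test result to one of the three outcomes described above.

    lift: relative difference of variation vs control (e.g. 0.08 = +8%).
    p_value: from a significance test on the primary goal.
    """
    if p_value >= alpha:
        # No statistically discernible difference between variations
        return "inconclusive"
    # Significant result: direction of the lift decides win or loss
    return "successful" if lift > 0 else "failed"
```

For example, a significant +8% lift maps to a successful test, a significant negative lift to a failed one, and anything non-significant to inconclusive regardless of the observed lift.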
Beyond our primary conclusion about the experiment, we can also draw a range of conclusions about how users behaved in the different variations and what impact our changes had on a range of different metrics.
Once we have drawn conclusions from our experiment we need to outline a set of follow-up actions. Key actions that we often recommend are:
- Implement the winning variation
- Re-run the experiment
- Run a new experiment that optimises further
- Test a similar hypothesis in another area of the site
- Test a new hypothesis that arises from insights from the analysis phase
- Share test results and learnings with relevant stakeholders
- Carry out new user testing to investigate new insights further
Key tools for AB test analysis
- Analytics Tool – Google Analytics, etc.
- Testing Tool – Optimizely, VWO, Qubit, etc.