• Conversion Rate Optimisation

18th Jan 2017

6 min

What is A/B Testing?

A/B testing (sometimes referred to as split testing) allows you to determine the impact of content, design and functionality changes to your website. By creating multiple versions of a web page and collecting data for a control group (usually A) and at least one variation (B), we can learn how your ideas affect user behaviour and, ultimately, your key goals and objectives.
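To make the mechanics concrete, here is a minimal sketch (not tied to any particular testing platform) of how visitors might be split between the control and a variation. The function name, the hashing approach and the visitor IDs are illustrative assumptions; in practice your testing tool handles this for you.

```python
import hashlib

def assign_variant(visitor_id: str, variants=("A", "B")) -> str:
    """Deterministically bucket a visitor into a variant.

    Hashing the visitor ID (rather than picking at random on every visit)
    keeps a returning visitor in the same variant for the whole test.
    """
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Example: split three visitors between control (A) and variation (B)
for vid in ["visitor-001", "visitor-002", "visitor-003"]:
    print(vid, "->", assign_variant(vid))
```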

What is the difference between A/B and multivariate testing?

Multivariate testing (MVT) is a technique in which multiple variables are modified at once, whereas A/B testing modifies a single variable. So instead of running one A/B test, a multivariate test runs several combinations of element variations against each other at the same time. The ultimate goal of multivariate testing is to see which combination of element changes delivers the biggest uplift against the control.

The number of combinations will always be [# of variations of Element A] × [# of variations of Element B] (and so on) = [total # of combinations]. For example, if you have two variations of a header image (A) and two variations of a headline (B), you will have four combinations running against each other.
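As a quick worked example of that arithmetic, the snippet below simply multiplies the variation counts together; the element names and counts are the ones from the example above.

```python
# Number of combinations in a multivariate test:
# multiply the variation counts of every element being tested.
from math import prod

variations_per_element = {
    "header image": 2,   # e.g. original vs. new photo
    "headline": 2,       # e.g. original vs. benefit-led copy
}

total_combinations = prod(variations_per_element.values())
print(total_combinations)  # 2 x 2 = 4 combinations competing against each other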

Because more variations run at the same time, MVT usually requires a high volume of site traffic: traffic is split across every combination, so each one takes longer to reach statistical significance. This means that for many businesses, multivariate tests are not the most efficient way of testing and improving their online experience.
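To give a feel for why the traffic requirement grows, here is a rough, back-of-the-envelope estimate using a standard two-proportion approximation. The baseline rate, the uplift and the idea of simply multiplying by the number of combinations are illustrative assumptions, not a substitute for a proper power calculation.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variation(baseline_rate, relative_uplift, alpha=0.05, power=0.80):
    """Rough visitors-per-variation estimate for detecting a relative uplift
    over the baseline conversion rate (two-sided test)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for 95% confidence
    z_power = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

per_variation = sample_size_per_variation(0.03, 0.10)  # 3% baseline, 10% relative uplift
print(per_variation)       # visitors needed in each variation
print(per_variation * 4)   # total traffic for a 2 x 2 multivariate test (4 combinations)
```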

Why should you perform A/B and multivariate tests?

In too many businesses, changes to the website are made on a whim or without solid evidence to support them. Marketing decisions are often made by the person with the most experience, the highest salary or the loudest voice: the HiPPO (Highest Paid Person's Opinion). This kind of subjectivity removes the key ingredient of business success: the view of the customer. Conversion Optimisation allows your business to be truly customer-centric.

Cartoon: HiPPO by Tom Fishburne (Marketoonist), illustrating why data, not the highest paid person's opinion, should drive decisions.

Even when you have great insights from data and user research, it is still difficult to predict how users will behave when you roll out changes. Testing takes the guesswork out of the process and allows you to be confident that you are making the right decisions.

Your business models, target audiences and value propositions are unique, which means your marketing methods have to be too. By gathering insights from your customer-base and testing based on these, you are being proactive rather than reactive and will be one step ahead of your competition.

Furthermore, many businesses believe the answer to improving online sales revenue is acquiring more customers. Because of this, acquisition is an extremely competitive, and therefore expensive, marketplace. Eventually, the cost of acquisition outweighs the return it generates.

The trick is to stop thinking that acquiring new customers is the solution. Instead, you should capitalise on the customers you already have by improving their onsite journey. Having a testing strategy at the heart of a Conversion Optimisation programme will increase the number of people converting and improve your primary KPIs. Basically, you can make your marketing spend work better for you.

Beyond that, we have seen across all of our optimisation clients that A/B testing is consistently one of their greatest levers for growth. Our growth methodology™ consistently delivers a 77% test success rate (much better than the in-house industry average of 33%). By working with an A/B testing agency like PRWD, you will become more data-driven and more likely to meet your growth forecasts.

Analysing the results of a website test

Once an A/B or multivariate experiment is complete, we carry out a post-test analysis to evaluate the results, draw conclusions and decide what action to take next. It is crucial that we only conclude experiments when we have sufficient data to draw valid conclusions. For a website test, this means a minimum number of conversions, a minimum number of business cycles and, of course, statistical significance for our test goals.
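As an illustration only, those checks could be expressed as a simple pre-analysis gate like the one below. The threshold values are placeholders for this sketch, not universal rules.

```python
def ready_to_conclude(conversions_per_variation, business_cycles, p_value,
                      min_conversions=250, min_cycles=2, alpha=0.05):
    """Illustrative checklist: only call a test once every variation has enough
    conversions, enough full business cycles have elapsed, and the primary goal
    has reached statistical significance."""
    return (
        all(c >= min_conversions for c in conversions_per_variation)
        and business_cycles >= min_cycles
        and p_value < alpha
    )

print(ready_to_conclude([310, 287], business_cycles=2, p_value=0.03))  # True
```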

Results and analysis

An experiment will usually have a single primary goal, so that is the first set of data we present. Typically we then drill down into secondary metrics and specific interaction data. To illustrate, if our primary metric on an ecommerce site is ‘Transaction Complete’, secondary goals might be ‘Add to Basket’ and AOV (average order value). A specific interaction goal might measure the usage of a new element added in the variation.

On a lead generation website this might look like the following (sketched in code after the list):

  • Primary goal: New lead generated
  • Secondary goal: Newsletter sign-ups
  • Interaction Goals: Video plays, Clicking in to different tabs
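To make that goal hierarchy concrete, here is one hypothetical way the goals could be written down. The goal names and the dictionary structure are assumptions for illustration, not any testing tool's actual schema.

```python
# Hypothetical goal definition for a lead-generation experiment.
experiment_goals = {
    "primary": ["new_lead_generated"],
    "secondary": ["newsletter_signup"],
    "interaction": ["video_play", "tab_click"],
}

# The primary goal decides the outcome; the rest add context to the analysis.
for level, goals in experiment_goals.items():
    print(f"{level}: {', '.join(goals)}")
```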

Beyond the goals specified in our test plan, we dedicate some time to looking for other trends or patterns in the data and exploring the behaviour of different audiences and segments. Here we have to be careful to ensure that each segment still contains a minimum number of conversions, and to test each one for statistical significance.

Drawing conclusions

At this stage we want to evaluate how the results impact our initial experiment hypothesis. There are three main outcomes. If the variation outperforms the control and validates our hypothesis, we have a successful test. If the variation performs significantly worse than the control, we have a failed experiment, which potentially saves us from launching something that would have harmed the bottom line. The third outcome is that the results show no discernible difference between the variations, leaving the test inconclusive.
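For the primary goal, one common (though by no means the only) way to separate those three outcomes is a two-sided two-proportion z-test, sketched below. The conversion figures in the example are invented for illustration.

```python
from math import sqrt
from statistics import NormalDist

def test_outcome(control_conversions, control_visitors,
                 variation_conversions, variation_visitors, alpha=0.05):
    """Classify an experiment as successful, failed or inconclusive
    using a two-sided two-proportion z-test."""
    p_c = control_conversions / control_visitors
    p_v = variation_conversions / variation_visitors
    pooled = (control_conversions + variation_conversions) / (control_visitors + variation_visitors)
    se = sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / variation_visitors))
    z = (p_v - p_c) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))

    if p_value >= alpha:
        return "inconclusive"                    # no discernible difference
    return "successful" if p_v > p_c else "failed"

print(test_outcome(480, 10000, 560, 10000))      # -> "successful"
```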

Beyond our primary conclusion about the experiment, we can also draw a range of conclusions about how users behaved in the different variations and what impact our changes had on a range of different metrics.

Next steps

Once we have drawn conclusions from our experiment, we need to outline a set of follow-up actions. Key actions that we often recommend are:

  • Implement the winning variation
  • Re-run the experiment
  • Run a new experiment that optimises further
  • Test a similar hypothesis in another area of the site
  • Test a new hypothesis that arises from insights from the analysis phase
  • Share test results and learnings with relevant stakeholders
  • Carry out new user testing to investigate new insights further

Key A/B testing tools