Conversion Rate Optimisation

8th Jan 2017

15 min

When it comes to running an online business, there are dozens of acronyms and terms to get your head around. This guide will be your go-to jargon buster for all things Conversion Optimisation.


Conversion rate is the number of visitors who complete a desired goal divided by the total number of visitors to the site. Depending on whether your website is a transactional ecommerce site or a lead generation site, you will have different ‘conversion goals’, as these usually align with your business goals. For example, a conversion goal on an ecommerce site might be a visitor purchasing something; for a lead generation site, it might be a visitor requesting a call back.
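The arithmetic behind this definition can be sketched in a few lines (the function name here is illustrative, not taken from any particular analytics tool):

```python
def conversion_rate(conversions, visitors):
    """Conversion rate = completed goals / total visitors."""
    if visitors == 0:
        return 0.0
    return conversions / visitors

# e.g. 150 purchases from 5,000 visitors
rate = conversion_rate(150, 5000)
print(f"{rate:.1%}")  # 3.0%
```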


Conversion Rate Optimisation or Conversion Optimisation refers to the process of improving the percentage of visitors who complete the desired goal on your website. This means that your business can get higher returns on the traffic that is already being sent to your website.


When conducting an A/B test, ‘A’ (the original) is known as the control. When testing, a control is included in all experiments to prove that the results of a test are because of the changes made (the variant) and not other circumstances.


When conducting an A/B test, ‘B’ is known as the variant. This is a changed version of the part of your site you are testing (the control).


When you have identified a pain point or opportunity for improvement from your data and user research, you build a hypothesis. Essentially, a hypothesis is an idea or proposed solution for improving part of your website. You then test the validity of your hypothesis with an A/B test. The stronger the hypothesis (strength being determined by how much user research, data analysis and other research has been used to develop it), the more likely your ‘variant’ will outperform the ‘control’.

Here is a hypothesis kit on how to develop a ‘strong’ hypothesis.


In order to improve your conversion rate, you need to identify and improve various parts of your site.

A/B testing (sometimes referred to as split testing) works by first identifying which area of the site you want to improve. You can split test most things, such as website copy, calls to action, processes, etc. When A/B split testing you will have an ‘A’ and a ‘B’ variation. The ‘A’ version will usually be your control (the current version of the page/element) while ‘B’ will be a version of the page/element with the changes you have made. When the test has finished, you can compare the results to see which version performed better against your defined test metrics and goals.
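As a minimal sketch, comparing the two versions at the end of a test amounts to computing a conversion rate per variation and seeing which is higher (this naive comparison ignores statistical significance, which is covered later in this guide; the function name is hypothetical):

```python
def ab_summary(results):
    """results: {variation: (conversions, visitors)}.
    Returns each variation's conversion rate and the top performer."""
    rates = {name: conv / visits for name, (conv, visits) in results.items()}
    winner = max(rates, key=rates.get)
    return rates, winner

# 'A' is the control, 'B' the variant
rates, winner = ab_summary({"A": (200, 10000), "B": (230, 10000)})
```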


A/B/n testing is the same as A/B testing, but rather than testing one variant (the ‘B’) against the control (the ‘A’) you test multiple variants. When the test is live, traffic is split equally between the control, the first variant and the remaining variants. If you have four variants, you will split the traffic to your site five ways, and the ‘n’ refers to variants ‘C’, ‘D’ and ‘E’.
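One common way to split traffic evenly and consistently is to assign each visitor a variation deterministically from a hash of their user ID, so the same visitor always sees the same version. This is a sketch of the general technique, not the implementation of any specific testing tool:

```python
import hashlib

def assign_variation(user_id, variations=("A", "B", "C", "D", "E")):
    """Deterministic, roughly even assignment based on a hash of the user id."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return variations[int(digest, 16) % len(variations)]

# the same visitor is always bucketed into the same variation
bucket = assign_variation("user-42")
```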

A/B/n testing is particularly effective for high-traffic sites, whose tests can reach statistical significance more quickly. This way they can test more variants in one test to determine a champion.


This is your standard A/B test, where you change just one element of a page in one test variation (i.e. a headline, an image etc.). You may have more than one test variation (B, C, D) being tested alongside your control (A).

One of the main advantages of isolation testing is that you will know with clarity that the one change you have made has directly impacted your primary conversion metric. Isolation testing, however, can be quite time consuming for those with lower traffic volumes (as tests take longer to reach statistical significance) and can require a lot of resource.


Rather than having one element variation, a batch test is an A/B test that features multiple element changes to a page. For example, instead of running two isolation tests (one testing a copy change against the control, another testing a new header image against the control), you run one A/B test whose variant includes both changes.

Batch testing is well suited to sites with lower traffic volumes and less resource. One issue with batch testing, however, is that it is harder to deduce which specific change had the most impact on conversion. To mitigate this, ensure your analytics tools and tracking are set up correctly.

It is worth noting that many sites that have run a batch test with a positive net impact will later conduct isolation tests on the individual elements to deduce what provided the positive result.


Multivariate testing is a technique for testing a hypothesis in which multiple variables are modified. So instead of running one batch A/B test, multivariate testing runs multiple tests, each with a different combination of element variations. The ultimate goal is to see which combination delivers the biggest uplift against the control (rather than whether your one batch A/B test with multiple changes beats the control).

The number of variations will always be [# of variations on element A] x [# of variations on element B] (and so on) = [total # of variations]. So, for example, if you have two variations of a header image (A) and two variations of a headline (B), you will have four separate combinations running.
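That multiplication rule can be expressed directly in code (the function name is illustrative):

```python
from math import prod

def total_variations(*element_counts):
    """[# variations of A] x [# variations of B] x ... = total combinations."""
    return prod(element_counts)

# two header images x two headlines = four combinations
combos = total_variations(2, 2)
```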

Because a higher number of variations is run at one time, multivariate testing requires a high volume of site traffic (to compensate for the split of traffic across each variation) to ensure each variation in the test reaches validity.


Statistical significance is a measure used to assess the probability that a test result is a fluke. The industry norm for statistical significance is 95%. This means that when examining test results, you can be 95% confident that the observed difference in conversion rate is real and not down to random chance, making the results of the test valid.
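One standard way to check this for an A/B result is a two-proportion z-test: if the resulting p-value is below 0.05, the difference is significant at the 95% level. This is a textbook sketch, assuming simple conversion counts; commercial testing tools use their own (often more sophisticated) statistics engines:

```python
from math import erf, sqrt

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B result."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # convert |z| to a two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# significant at the 95% level if p < 0.05
p = two_proportion_p_value(200, 10000, 260, 10000)
```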


Statistical power measures how often, when a variation genuinely differs from the original, the test data will pick up on it. If statistical power is high, the chance of a false negative (concluding there is no effect when there is one) decreases.

If a test’s statistical power is 80% or higher (the industry norm), you know that when your test has reached statistical significance the result is highly likely to be true and accurate.

You can read more about statistical significance and power here.


When running a test, you need to decide how much of a change from your baseline conversion rate you want to be able to detect. This is your minimum detectable effect (MDE).

Using a tool such as Optimizely’s Sample Size Calculator, you can work out how much traffic your test (with its MDE) needs to reach statistical significance. For example, if you have a baseline conversion rate of 5% and wish to detect a 10% relative change (so a conversion rate of 4.5% or 5.5%), you will require around 31,000 users per variation.
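A textbook approximation for this calculation (95% significance, 80% power, two-sided test) can be sketched as follows. Note this is a simplified formula, not Optimizely’s own statistics engine, though for the example above it lands close to the ~31,000-per-variation figure:

```python
def sample_size_per_variation(baseline, mde, z_alpha=1.96, z_beta=0.84):
    """Rough per-variation sample size for a two-sided test at
    95% significance (z_alpha) and 80% power (z_beta).
    `mde` is a relative change, e.g. 0.10 for a 10% lift."""
    p1 = baseline
    p2 = baseline * (1 + mde)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ((z_alpha + z_beta) ** 2 * variance) / ((p2 - p1) ** 2)

# baseline 5%, detecting a 10% relative change
n = sample_size_per_variation(0.05, 0.10)
```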

In general, smaller differences (such as 5%) take longer to detect because you need more data from users to confirm that a test reached validity and the small changes were not just random fluctuations. For some businesses, the MDE can be as little as 1% or 2%, which often means a test requires a sample size in the millions.


Personalisation is the technique of tailoring an experience to a specific individual or group of similar individuals based on demographic or behaviour-based data, such as number of site visits, page viewing behaviour, purchase history, or social media interaction.

An example would be Amazon’s ‘recommended for you’ section. Amazon analyses the combination of products you’ve looked at or bought on their site and then suggests other products you might also be interested in. This is an example of ‘behavioural personalisation’ and is just one of a broad range of techniques used to improve the online experience.


In the context of marketing, attribution refers to the process of identifying which events or touchpoints contributed towards a desired outcome or conversion, and then awarding them appropriate credit for their role in that outcome. There are different types of attribution model, each assigning credit in different ways. The key to using any attribution model is to apply it consistently.
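For instance, a linear attribution model (one common model among several) splits the credit for a conversion equally across every touchpoint in the journey. A minimal sketch, with an illustrative function name:

```python
def linear_attribution(touchpoints, conversion_value):
    """Linear model: each touchpoint gets an equal share of the credit.
    Assumes `touchpoints` contains distinct channel names."""
    share = conversion_value / len(touchpoints)
    return {touchpoint: share for touchpoint in touchpoints}

# a £90 sale preceded by three touchpoints: £30 of credit each
credit = linear_attribution(["paid search", "email", "direct"], 90.0)
```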


The conversion funnel is the set of steps that a website visitor completes in order to achieve a desired outcome. We most often refer to the conversion funnel of the primary conversion goal, whether that is a sale, a new lead generated, etc. In analytics tools we can use funnel visualisation reports to measure progression through the conversion funnel.


A website wireframe is a simplified, lo-fi visual representation of a web page. Starting your design process with a simple wireframe allows you to arrange important functionality and content in a structured way and elicit feedback at this level before colour, graphics and styling are applied.

This process helps you focus on designing an experience that will work well for the user and help the business achieve its online goals, without starting debates about minor styling issues.


A macro-conversion is typically your key overarching conversion goal. In terms of conversion rate, it is likely to be your broadest onsite conversion for your most important goal. For an ecommerce site, it would be the conversion rate from all visitors to completed purchases. When we refer to macro-conversions we are not specifically interested in the steps or stages that users go through to achieve the conversion.


A micro-conversion is an action or conversion on your website that is important and worthy of measurement, but less significant than your overarching primary goals, or perhaps a step in the journey towards your macro-conversion.

For example, on an ecommerce site you might decide that configuring a goal for ‘Add to Basket’ is a micro-conversion that indicates progress through the user-journey and may be a sign of increased intent to purchase, but it is not as important as your macro conversion which would be ‘Completed Transactions’.


User-centred design (UCD) refers to a set of processes, which ensure that the needs, wants and limitations of the end user of our websites and interfaces are considered at each stage of the design process. This will include a wide range of research techniques with real-world users such as surveys, user interviews, user testing and sometimes A/B testing. The aim of this approach is to reduce the number of assumptions made by designers and gather feedback throughout the process to tailor the website or interfaces for the end user.


In a CRO context, when people talk about agility they are usually referring to either the speed or flexibility with which a team is able to deploy A/B & MVT tests.


User research focuses on understanding user behaviours, needs, and motivations through observation techniques, task analysis, and other feedback methodologies. This field of research aims to improve the usability of products by incorporating experimental and observational research methods to guide the design, development, and refinement of a product. User researchers often work alongside designers, engineers, and programmers at all stages of the process.

User research is an iterative, cyclical process in which observation identifies a problem space for which solutions are proposed. From these proposals, design solutions are prototyped and then tested with the target user group. This process is repeated as many times as necessary.


P.E.T. is a framework for user experience improvement that reviews a website or interface for Persuasion, Emotion and Trust. It is a system developed by Human Factors International to help support the developing information ecosystem.


Quality Assurance (QA) happens at a number of stages in our design and build process where we review progress and look to prevent issues. Before launching any A/B Test we go through an internal QA process that ensures that we have a robust experiment that will not have any negative impact on the users or their ability to complete their goals online.


Personas are fictional characters designed to allow a business to focus its marketing and online experiences on a limited number of key audiences with defined behaviours, goals or objectives. Good personas should be based on a market segmentation strategy and created from quantitative data and qualitative insights.


A term used in user experience circles to describe either the entire website and the set of actions that a user can carry out, or, more specifically, particular journeys such as the purchase user journey. User journeys describe at a high level the steps that users take to navigate through your website.


A heuristic evaluation is when we use recognised usability principles and standards to evaluate the user experience of our websites. PRWD have developed a set of 50+ usability guidelines that we use to evaluate websites. These are best practice guidelines on their own, but in combination with analytics data and primary user research, a heuristic evaluation can highlight or validate insights from other sources.


Uplift is simply used when a metric has improved over a defined period. For example, a winning A/B test could be said to have created an uplift of X%.
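Uplift is usually quoted as the relative improvement over the control. A quick sketch of the calculation (the function name is illustrative):

```python
def uplift(control_rate, variant_rate):
    """Relative uplift of the variant over the control, as a percentage."""
    return (variant_rate - control_rate) / control_rate * 100

# a control converting at 4% beaten by a variant at 5% is roughly a 25% uplift
lift = uplift(0.04, 0.05)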


Data science is an interdisciplinary field about processes and systems to extract knowledge or insights from data in various forms, either structured or unstructured. It is a continuation of data analysis fields such as statistics, data mining and predictive analytics, similar to Knowledge Discovery in Databases (KDD).


Persuasion is a key tool in the toolbox of any serious CRO professional. We use persuasion to align our website experiences with, and influence, a person’s attitudes, behaviours, beliefs, intentions, or motivations. Persuasive techniques can be applied to appeal to users’ logic and reason or habits and emotions. We use a wide range of persuasive techniques in order to create engaging and compelling websites that benefit the user and our business goals.


Social proof is a psychological phenomenon where people are influenced by the behaviour of others in a given situation. In the context of CRO, it is often used as a persuasive technique to highlight the behaviour of other visitors who are not seen. For example, highlighting a large number of satisfied customers or subscribers may be influential in a user’s decision-making process about a product or service. Ratings and reviews can also model broader user behaviour.


Tests that do not include any major innovation, and instead look to refine or improve existing features, may be referred to as iterative tests. When scheduling experiments, iterative changes can offer benefits in terms of simplicity and speed of implementation, but significant impacts tend to be more limited or rarer with iterative experiments.


Innovative experiments tend to be more radical in nature than iterative experiments. It is more likely that they introduce new elements or larger scale redesigns of a page or interface. They are more likely to have a significant impact, however they tend to be more costly in terms of effort and time to deploy.


The call to action is the main action that you want users to take on your page. This may be the ‘Add to Basket’ button or a ‘Form Submit’ button. The copy that you use on your call to action can be influential and is something that can be tested to good effect.


Landing Page Optimisation refers to the specific activity of conducting A/B & MVT tests on landing pages, usually with a view to increase conversions and ultimately increasing the efficiency of the advertising spend used to drive traffic to the page. This may include testing call to action copy and design, copy, content, promotions, relevancy of categories, imagery and many other factors.


Visitor segmentation is the process of using data to divide your broad visitor base into subsets of customers who may behave in similar or more predictable ways. It might be that mobile users behave differently from desktop visitors, or perhaps free and premium subscribers use your website in distinct ways.

These segments can be used in A/B testing, and over time you may be able to optimise your experience for a number of different segments.


After reviewing user behaviour in analytics, conducting a conversion evaluation, carrying out user research and collecting internal ideas, we typically end up with a long list of potential test hypotheses. It then becomes extremely important that we effectively prioritise the tests that are most likely to deliver a positive impact for a reasonable amount of effort. Using a rating system is a common way to prioritise tests.

We’ll be adding to the list of terms but if there’s a term you’d like to feature here, let us know by emailing dantenaylor@prwd.co.uk.