Chris Todd
Optimisation Strategist

  • Conversion Rate Optimisation

6th Aug 2015

7 min

In Part 1 of A/B Testing Mistakes I uncovered some of the mistakes the UK’s top companies have made when conducting A/B testing: ignoring the importance of user research, optimising websites for conversions rather than usability, aiming for quantity of tests over quality, not to mention the dreaded button colour test.

This week’s post focuses on the approach to and analysis of A/B testing, as this is where most mistakes are made. When done properly, these stages can mean the difference between prolonged success and instant failure.

“What should we test next?”

A/B testing mistakes - what to test next?

It may surprise you that even some of the UK’s most heralded testing companies lack an A/B testing methodology and instead bounce from test to test with no clear understanding of what the future holds.

In my previous post I stated that conducting a CRO test is a lengthy process, and that focus shouldn’t be wasted on quick, unimportant tests that will make very little difference to your business.

So how should you prioritise your tests?

At PRWD we collaborate with a client to create a hypothesis log. Through an in-depth web analytics investigation, a user research project and an expert heuristic evaluation, we develop a list of test hypotheses, which we rank based on potential, importance and the ease with which each can be implemented in a given context. Averaging these scores gives us an overall test priority, which we use to create a test roadmap.
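To make that prioritisation concrete, here is a minimal sketch of how a scored hypothesis log turns into a roadmap. The hypotheses, the 1–10 scale and the straight average are illustrative assumptions, not PRWD’s actual template.

```python
# Illustrative hypothesis log: each entry scores potential, importance and
# ease of implementation, and the average becomes the test priority.
# All hypotheses and scores are invented for this example.

hypotheses = [
    # (hypothesis, potential, importance, ease) on an assumed 1-10 scale
    ("Surface delivery costs earlier in the checkout", 8, 9, 6),
    ("Add customer reviews to product pages",          7, 6, 4),
    ("Rewrite category page headlines",                4, 5, 9),
]

def priority(potential, importance, ease):
    """Average the three scores to give an overall test priority."""
    return round((potential + importance + ease) / 3, 1)

# Highest priority first: this ordered list is the test roadmap.
roadmap = sorted(hypotheses, key=lambda h: priority(*h[1:]), reverse=True)

for hypothesis, p, i, e in roadmap:
    print(f"{priority(p, i, e):>4}  {hypothesis}")
```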

One company we have worked with in the past ran a test plan without any methodology driving it, publishing tests based on best practices they’d found online or changes their competitors had made to their online experience. This leads on to my next point…

“Look at what our competitor has done. Let’s try it!”

Recently we had a project where a client had redesigned their results page based on ideas they’d formed by looking at their competitor’s recently altered website. The new design used a vastly different layout that allowed users to view more products at once, meaning that, in theory, it was easier to compare results and find one that met their requirements. Was their theory proven? When the results came in, conversion rates for the new design were worse than for the previous layout, hence our involvement.

Our user research indicated that the client’s old design was superior to their competitor’s, and as a result the new website they’d spent substantial time and money commissioning was a slight step backwards. Their users preferred being able to see key details like price and availability in a vertical list, as it was easier to compare than when this information was displayed horizontally across the page.

During our insight presentation we pitched a number of initial recommendations that the client had already independently thought of, along with some other great ideas that were in line with our user research insights. So why did they redesign their website with the competitor’s ideas in mind?

They say the grass is always greener on the other side…but this often isn’t the case, especially when a company is A/B testing.

Conversion rate optimisation is a relatively new marketing practice that, when done correctly, combines several disciplines such as user research, methodical strategy and in-depth analysis. These disciplines aren’t new to marketing and have been in place for decades, yet start-ups and established companies alike fail to utilise these tried and tested methods, which only leads to mistakes. Currently, 44% of marketers spend less than 5% of their budget on optimisation.*

A/B testing mistakes - is the grass greener?

“Orders are up. Set it live.”

A company I previously worked with ran a test to see if removing distractions from the cart resulted in an increase in orders. The control featured multiple merchandising zones promoting new, recommended and low-priced products, geared towards increasing average order value. The variation removed these zones entirely.

The test concluded and we found that the variation, with fewer in-cart distractions, had indeed seen an increased number of orders; a rather significant increase. Success!

Unfortunately, the company had spent a large proportion of their budget on launching the testing tool across a large number of their websites and could therefore only afford a basic implementation of the winning test across all sites (an A/B testing mistake to discuss another day). The tests did not track revenue data, and without cross-platform integration, other metrics such as profit margin were not taken into consideration when analysing the results. How could they be sure that this increase in orders wasn’t nullified by a drop in average order value or margin that would affect revenue and overall profit?

A test we ran for Get Revising resulted in a drop in subscriptions to their free service. Although free service subscribers were important to their business, their main source of income came from paid subscribers. The same test resulted in a 185% increase in paid subscribers and an award from Which Test Won in 2014.

Think about which metrics and key performance indicators are important to your business and how your test could affect each of them, positively and negatively. You don’t want to drown in data, but you should always track the metrics that matter to your company.
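As a rough illustration of that point, the sketch below reads a test on four metrics rather than orders alone. Every figure is invented; the point is simply that a variation can win on conversion rate while losing on revenue or margin per visitor.

```python
# Read a test beyond raw order counts: conversion rate, average order value,
# revenue per visitor and margin per visitor. All numbers are invented.

def summarise(name, visitors, orders, revenue, cost_of_goods):
    conversion_rate   = orders / visitors
    average_order     = revenue / orders
    revenue_per_visit = revenue / visitors
    margin_per_visit  = (revenue - cost_of_goods) / visitors
    print(f"{name:>9}: CR {conversion_rate:.2%} | AOV £{average_order:.2f} | "
          f"revenue/visitor £{revenue_per_visit:.2f} | margin/visitor £{margin_per_visit:.2f}")

# The variation takes more orders, but smaller baskets erode the gain.
summarise("Control",   visitors=20_000, orders=600, revenue=48_000, cost_of_goods=30_000)
summarise("Variation", visitors=20_000, orders=660, revenue=46_200, cost_of_goods=30_030)
```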

“Stop the test, it’s seen a decrease!”

A/B testing mistakes - stop the test!

This one is on me.

We recently published a test that saw a three-figure improvement rate in its first week. I’m ashamed to say I got very excited. This was my first test since joining PRWD and it was going to be huge. Two weeks later, the test had levelled out at a reasonable but much less exciting double-figure improvement rate.

I was so eager to impress with my first test that I forgot the golden rule of testing: don’t peek!

Everyone knows that if you flip a coin 100 times you are not going to get heads, tails, heads, tails, heads, tails, despite the 50/50 chance. There will be peaks and there will be troughs in your data, which is why it is important to wait until your test reaches statistical significance before taking any action.
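To put a number on “wait for significance”, here is a minimal sketch of the kind of two-proportion z-test many testing tools run under the hood. The first-week figures are invented; the point is that an uplift that looks dramatic can still fail to clear a conventional 5% significance threshold.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented first-week numbers: a 45% relative uplift on a small sample.
z, p = two_proportion_z_test(conv_a=40, n_a=1_000, conv_b=58, n_b=1_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p > 0.05 here, so keep the test running
```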

My excitement was based on a positive start but what if a test initially sees a decrease against the control?

Exactly the same! Yes, your hypothesis could be wrong and your variation worse than the control, and yes, this could result in a loss of revenue for your company over the duration of the test, but you cannot know this after the first day, and rarely after the first week.

Get to the truth behind your tests or you’ll make the same mistake a huge UK retail company did when they started testing.

This company would regularly end tests or deploy variations based on only days of data in order to maximise their return on investment. When they got to the end of their financial year, they compared their website’s conversion rate against the previous year’s and it was flat. A year of positive tests had resulted in minimal change to their conversion rate. An entire year of testing was potentially wasted because they were not running their tests until they reached statistical validity.
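One way to avoid that trap is to size a test before it starts and commit to that duration. The sketch below uses a standard two-proportion sample size formula; the baseline rate, the minimum uplift worth detecting and the use of scipy are all assumptions for illustration, not a prescription.

```python
# Rough pre-test sizing: how many visitors per variant before a result is
# worth acting on. Baseline rate and minimum detectable uplift are invented.
import math
from scipy.stats import norm

def sample_size_per_variant(baseline, min_relative_uplift, alpha=0.05, power=0.80):
    """Approximate n per variant for a two-sided test of two proportions."""
    p1 = baseline
    p2 = baseline * (1 + min_relative_uplift)
    z_alpha = norm.ppf(1 - alpha / 2)   # significance threshold
    z_beta = norm.ppf(power)            # statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

n = sample_size_per_variant(baseline=0.03, min_relative_uplift=0.10)
print(f"~{n:,} visitors per variant")  # divide by daily traffic to estimate duration
```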

Key takeaway

If you take one learning from these common pitfalls, it should be that optimisation is not a quick fix; it is a long-term process that runs much deeper than “Variation 1 beat the control by X%”. The UK’s top companies don’t become successful overnight – there are dozens of mistakes and subsequent lessons learnt at every stage of the journey.

Hopefully the mistakes listed in this series will help you avoid making them yourself in the future. Don’t forget to go back to Part 1, and if you have any mistakes of your own to share, don’t hesitate to leave a comment or tweet us @PRWD.

*Adobe Digital Marketing Optimization Report 2014