Chris Todd
Optimisation Strategist

  • Conversion Rate Optimisation

23rd Jul 2015

8 min

There is a wealth of experience amongst the PRWD team. Combined, we’ve worked for or with some of the UK’s top companies in industries including, but not limited to, travel, recruitment, sports, construction, media, energy and retail.

One thing we can all agree on is that every company is different. They have their own story, their own goals, and their optimisation strategies are all at different stages of maturity. The one thread that connects them all, however, is that when it comes to optimisation and A/B testing, they have made mistakes and will continue to make them.

75% of Internet Retailing Top 500 companies are currently using an A/B testing tool, but their poor optimisation methodologies are costing them up to $13bn (£8.4bn) a year in lost revenue, a staggering amount when you consider that number is just shy of the GDP of Jamaica.

This post contains some of the horror stories and mistakes we’ve been unfortunate enough to witness, and serves as a guide to what not to do in your optimisation programme. The list was so long that we had to extend this post into a two-part series, so make sure you check out part 2!

“Test that and we’ll see how our customers respond.”

Image: data.path Ryoji.Ikeda – 3 by r2hox

At PRWD we champion customer-driven optimisation. Our belief is that, to get the best return on investment, you have to focus your tests on amplifying what excites your users and alleviating their pain points.

What’s the best way to do this? Ask them.

Many companies do not communicate with their users effectively, preferring instead to rely on analytics to shape potential tests. Web analytics tools are a vital component of a successful optimisation strategy, but they only offer quantitative data, usually about how the current content performs. Rarely will a web analytics tool give you true insight into your users’ minds.

One such example was a multivariate test run by a company to find the best-converting headline for their PPC landing page. They looked at their long-tail keyword data to see which terms were bringing qualified users to the page and, from that, came up with a list of prefixes and suffixes that could be used to build headlines. Using this list they developed half a dozen headlines, which they tested to discover the one that converted users most effectively.

Although the test was conclusive and did unearth some insight into their users, the sheer number of variations meant it took a long time to reach significance. While the company waited for the winning variation to emerge, the page continued performing at a sub-standard level.

If the company had spent time conducting user research, they would have collected qualitative data that could be used to create fewer, more targeted headlines based on a stronger hypothesis. Even if the final uplift had been the same as the quantitative-data-inspired headline achieved, the test would have concluded sooner, meaning the page could have started generating additional revenue earlier and provided a greater return on investment.
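
To make the trade-off concrete, here is a rough sketch of how the number of variations stretches the time needed to reach significance, using a standard two-proportion sample-size estimate. The traffic, baseline conversion rate and uplift figures below are hypothetical, not taken from the test described above.

```python
from statistics import NormalDist

def sample_size_per_variation(baseline, uplift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variation to detect a relative
    uplift over the baseline conversion rate (two-sided z-test)."""
    p1 = baseline
    p2 = baseline * (1 + uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2

# Hypothetical figures: 3% baseline conversion, 10% relative uplift,
# 5,000 visitors a day to the landing page.
daily_visitors = 5000
n = sample_size_per_variation(baseline=0.03, uplift=0.10)

for arms in (2, 6):  # control + 1 headline vs. control + 5 headlines
    days = arms * n / daily_visitors
    print(f"{arms} arms: ~{n:,.0f} visitors each, roughly {days:.0f} days")
```

With the same traffic and the same minimum detectable uplift, the six-arm test runs roughly three times longer than a focused A/B test – and that is before any multiple-comparison correction, which would stretch it further.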

“It doesn’t matter, no-one uses that feature anyway.”

An old colleague of mine went to a presentation where a representative of a large UK company (which is well respected in the optimisation industry) suggested removing elements of a page to see if that gave a positive uplift.

What followed was a series of tests based around removing features from product pages that were deemed unimportant to customers.

  • “Our wish-list functionality is poor. Get rid of it.”
  • “Nobody shares our products on social media. Take the social icons out.”
  • “Product ID numbers are only useful for the call centre. They’re gone!”

All of these tests resulted in slight improvements to the product page, and when the small increases were combined they provided a healthy boost to ‘Add to Basket’ rates.

Success!

Or was it?

With no user research or in-depth data analysis, these tests were built on the assumption that customers didn’t use these features, and the takeaway from the results was that the features were negatively affecting the user journey.

80% of companies believe their customer experience is superior; only 8% of customers agree.*

Firstly, these tests weren’t run for very long, so users who had saved products to the wish-list to purchase at a later date might not have returned to complete their order before the test concluded. Phone orders – where customers might use product ID numbers to build their basket – were also not tracked by the testing tool. And campaign tagging hadn’t been implemented on the social icons, so any sales from shared links would have looked like organic social purchases rather than being attributed to the icons.
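
Campaign tagging on the share links would have made that attribution visible. As a rough illustration – the parameter values and naming convention below are invented, and would depend on the analytics setup – appending UTM parameters to the shared product URL is enough for an analytics tool to credit resulting sales to the social icons rather than to generic organic social traffic:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def tag_share_link(product_url, network):
    """Append UTM campaign parameters to a shared product URL so that
    any resulting sales can be attributed to the on-page share icons."""
    parts = urlparse(product_url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": network,           # e.g. "facebook" or "twitter"
        "utm_medium": "social-share",    # separates icon shares from organic social
        "utm_campaign": "product-share-icons",
    })
    return urlunparse(parts._replace(query=urlencode(query)))

print(tag_share_link("https://www.example.com/product?id=12345", "facebook"))
# https://www.example.com/product?id=12345&utm_source=facebook&utm_medium=social-share&utm_campaign=product-share-icons
```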

Without data or insight, it was assumed that these features offered nothing to the customer experience, and that an increase in ‘Add to Basket’ rate outweighed the perceived minuscule number of sales these features encouraged.

Often the small features that add functionality to your site but don’t necessarily result in direct sales are the ones that make your site easy and enjoyable to use.

“How many tests have you set live this week?”

An optimisation team should be judged on overall impact, not the number of tests they are setting live each month. This is what Paul Rouke calls ‘Vanity versus Sanity metrics’ (he wrote an article for Econsultancy discussing this topic). Below is a list of steps that should be carried out every single time you run a test.

  • Analyse prior user research
  • Investigate your web analytics data
  • Develop a strong hypothesis
  • Brainstorm variation ideas in a workshop
  • Create a wireframe for your variation based on all of the above
  • Build your variation(s)
  • Thoroughly test your deployment
  • Wait for the test to become statistically significant
  • And finally scrutinise and evaluate the end results

Conducting a test is a lengthy process, and it shouldn’t suffer because focus is wasted on quick, unimportant tests that will make very little difference to your business. If a business’s main target is a high volume of tests per month, it will miss the chance to gain genuine, sustainable uplifts in its primary conversion metrics. Running lots of tests simultaneously also makes it far less clear what has and hasn’t influenced user behaviour. Instead, precious time and resources should be focused on a smaller number of tests, validated through extensive research and data analysis, that have the potential for big impact.
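
Two of the steps above – waiting for statistical significance and scrutinising the end results – come down to a simple calculation that is easy to sketch. Below is a minimal example of the standard two-proportion z-test; the visitor and conversion counts are invented purely for illustration.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing the conversion rates of control (A)
    and variation (B). Returns the z statistic and p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Invented example: 25,000 visitors per arm.
z, p = two_proportion_z_test(conv_a=750, n_a=25_000, conv_b=840, n_b=25_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # conventionally "significant" if p < 0.05
```

Even then, a p-value only tells you the difference is unlikely to be noise; whether the uplift is worth acting on still comes back to the research and hypothesis behind the test.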

“Let’s test button colour! Green means go.”

It seems that at some point in time every company has tested the colour of their call to action buttons, but why?

Well, it is easy and it can be done quickly, but mostly it’s because the internet is littered with examples of companies experiencing big gains simply by altering their button colours.

As a consequence, it has become commonplace for companies to test differently coloured calls to action without truly understanding why a specific colour might affect click-through rate.

At a previous company, I was involved in a meeting where button colour was on the agenda. One of the websites we managed had tested changing a call to action from yellow to green, purely on the basis that green is seen to inspire action. The test saw an increase in click-through rate, so the same test was then planned for every website the company managed, with varying degrees of success.

Unfortunately, as is the case with a lot of A/B tests, results tend to be remembered better than the context of their implementation. The initial test mentioned above did see positive results after the button colour was changed. What is forgotten, however, is that the main colour of the website was red, causing the original call to action to blend in with the rest of the site.

The logic behind this kind of test is challenged by a HubSpot case study in which a red button saw 21% more clicks than a green one. Why was red so successful if green inspires action? The context is that the red button stood in stark contrast to the rest of the website, making it stand out. Rarely will a user click on something they don’t see!

Calls to action are about placement, size, shape, message and colour. Changing these elements can have a strong effect on the visibility of a button; however, there is no hard-and-fast rule (or colour) that applies to all.
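
One way to put a number on “stands out” is the contrast-ratio formula from the WCAG accessibility guidelines. The sketch below uses invented hex colours (not taken from either site mentioned above) to show how the same red button scores very differently against a red page and a white one:

```python
# WCAG 2.x relative luminance and contrast ratio for sRGB colours.
def relative_luminance(hex_colour):
    """Relative luminance of a colour given as '#rrggbb'."""
    def linearise(channel):
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_colour.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * linearise(r) + 0.7152 * linearise(g) + 0.0722 * linearise(b)

def contrast_ratio(colour_a, colour_b):
    """Contrast ratio between two colours (1:1 identical, 21:1 black on white)."""
    lighter, darker = sorted(
        (relative_luminance(colour_a), relative_luminance(colour_b)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Hypothetical colours: a red button on a predominantly red page vs. a white page.
print(f"red button on red page:   {contrast_ratio('#cc2222', '#e03c31'):.1f}:1")
print(f"red button on white page: {contrast_ratio('#cc2222', '#ffffff'):.1f}:1")
```

On these made-up values, the button is barely distinguishable from a red background but clearly visible against white – the same colour with a completely different result, which is exactly the point about context.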

We are about to suggest a test hypothesis to a client that focuses on changing the colour of a call to action; however, it is based on user research, as we found users were scrolling past the button in question because it didn’t stand out.

 

Image: no-one is perfect by becca.peterson26

George Bernard Shaw once said that “a life spent making mistakes is not only more honourable, but more useful than a life spent doing nothing”. The companies we’ve worked with (and we ourselves) can call themselves honourable for trying to enact change. Learn from them, and don’t be afraid to make your own mistakes… as long as it isn’t randomly testing button colours.

 

For more testing mistakes, check out the second part of this series. If you have any stories of your own, don’t hesitate to leave a comment or tweet us @PRWD.

*Harvard Business Review