• Conversion Rate Optimisation

31st Jan 2019

6 min

Conversion rate optimisation (CRO) is a complex subject.

There are lots of moving parts and factors in achieving a successful CRO Program.

At the core of any good CRO team should be a culture of testing OR experimentation. Not both.

I say OR because they are two clearly different ways of working.

On the face of it, testing and experimentation sound like the same thing. For one, they have similar definitions:

‘Test’ noun

1. A procedure intended to establish the quality, performance, or reliability of something, especially before it is taken into widespread use.

‘Experiment’ noun

1. A scientific procedure undertaken to make a discovery, test a hypothesis, or demonstrate a known fact.

The key difference between testing and experimentation is that experimentation is used to ‘make a discovery’ or to ‘test a hypothesis’. Testing, on the other hand, is used ‘before it is taken into widespread use’.


Many companies have adopted an experimentation culture: Amazon, booking.com, Uber and Netflix are the most visible advocates of this approach. They are conducting research and hypothesising then qualifying and quantifying their assumptions. They challenge the status quo by asking: ‘What if?’ and ‘Why not?’

Ultimately, this leads to change, innovation and results!


What’s great is these companies aren’t quiet about it. They share the details of their experimentation culture and how it’s changed their internal structures. They also share their results, which at first glance seem rather impressive! This is why these pieces gain such traction with Forbes and other publications: it’s a game-changer.

Delve a little deeper, however, and you’ll see that they also state that between 80 to 90% of their experiments fail.

This is where I believe ‘experimentation’ is a flawed approach. All that wasted time and effort for something to ‘fail’.

How annoying would that feel?

Making a case for testing over experimentation

Picture this: your driving test or your school exams are not experiments to see if you’re ready for the world. They are designed to prove you ARE ready for the world.

This isn’t to say experimentation is a waste of time. We can learn valuable lessons from failure: important lessons that inform decisions going forward. In fact, experiments can often become lucrative ‘tests’.

Here’s an example. Let’s say you believe the introduction of scarcity messaging could increase conversion rate for an ecommerce site. It’s a good hypothesis: it’s based on psychological principles, it fits your demographic, and it should result in an uplift.

Then you must apply it, and the questions start: what should the message say? Where should it appear? How should it look? Who should you target? What font, what size, what colour? Should you use an icon?


Faced with these questions, an ‘experimental’ approach is to throw it against a wall to see what sticks. Once these questions have been answered by an ‘experiment’, it can evolve into a quality ‘test’.
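Once an experiment graduates into a test, you still need a disciplined way to judge the result. As a minimal sketch (the visitor numbers and the choice of a two-proportion z-test are my illustration, not anything this piece prescribes), here is how a scarcity-messaging test might be evaluated:

```python
from math import sqrt

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    return (p_b - p_a) / se                                 # z statistic

# Hypothetical figures: 10,000 visitors per arm,
# 500 control conversions vs 560 with scarcity messaging.
z = ab_test_z(500, 10_000, 560, 10_000)
print(round(z, 2))  # → 1.89 — below 1.96, so not significant at the 5% level
```

Note that even a plausible-looking 12% relative uplift can fall short of significance at this sample size, which is exactly why a test needs to be planned rather than thrown at the wall.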

Unfortunately, I believe experimentation has become a data feed:

Yes, no, this works, this doesn’t, this is right, this is wrong. Conversion went up, conversion went down.

And here lies my biggest concern about experimentation culture:

Experiments affect people.

Your data feed has feelings. Your lab rats aren’t numbers or code; they are real. Humans are emotional beings: someone could be having a bad day and be having a worse one because of your failing experiment. We might learn that users don’t like a certain colour, a certain word or a certain process, but at the expense of what? It affects revenue, credibility and brand reputation, but probably worst of all, a ‘failed’ test is usually a bad experience for the user.

And the wider industry (both tools and practitioners) doesn’t always help the cause. ‘Experimentation engines’ used to be called testing tools. A test was something we ran when we knew everything about the subject and were ready to launch successfully.


Some practitioners emphasise test velocity and cite how experiments help create a more democratic approach to improving an experience.

I’m not against creating a more democratic approach, but I’d rather see a collaborative approach taken to create the best alternative for a test than throw everything (including the kitchen sink) at the user.


But what about the craft of the optimiser? In such a short space of time, the CRO specialist has almost been replaced by automated tools running multiple experiments, with little thought or process behind the decisions they make. Multivariate testing, dynamic traffic allocation and machine learning are helping the digital industry run more and more experiments, but are they improving the customer experience?

A/B testing is no longer a product of carefully calculated research, curated and refined by experts. It has become a commodity of resource and test velocity has become the primary metric. ‘Who cares if we make a difference to users – look at all the data we’re gathering!’ I hear CRO teams cry. Well, if gathering data is your primary concern, then keep doing what you’re doing!

Conclusion


Some of you may be thinking that A/B ‘testing’ is just as bad as experimentation. Fair enough. It’s just that I believe a well-thought-out A/B test should be much more robust than a simple experiment. We should pull data from every angle (including speaking to users through user research) and consult with a wide range of experts (from psychology to data to development and design) to craft a solution that has been vetted by the collective mind of the team. The result of all that is a solid hypothesis log and a well-thought-out roadmap of prioritised testing. That, in my experience, benefits the business and also improves the user experience, which in turn makes people’s lives better.
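To make the hypothesis log and prioritised roadmap concrete: one common way to order a backlog of test ideas is an ICE-style score (Impact, Confidence, Ease). The scheme and the entries below are my illustration under that assumption, not something this piece prescribes:

```python
# Hypothetical hypothesis log: each idea scored 1-10 for
# Impact, Confidence and Ease (a common ICE-style scheme).
hypotheses = [
    {"name": "Scarcity messaging on product pages", "impact": 8, "confidence": 6, "ease": 7},
    {"name": "Simplified checkout form",            "impact": 9, "confidence": 8, "ease": 5},
    {"name": "New hero image on homepage",          "impact": 3, "confidence": 4, "ease": 9},
]

def ice_score(h):
    # Average of the three scores; higher means test sooner.
    return (h["impact"] + h["confidence"] + h["ease"]) / 3

# The prioritised roadmap: highest-scoring hypotheses first.
roadmap = sorted(hypotheses, key=ice_score, reverse=True)
for h in roadmap:
    print(f'{ice_score(h):.1f}  {h["name"]}')
```

The point isn’t the arithmetic; it’s that every entry in the log has been researched and scored by the team before it goes anywhere near a user.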

Let’s stop experimenting on people and ‘test’ to improve people’s lives.