Part one of this series detailed how to set the foundation for a winning conversion optimisation programme. Before you go any further, you need to make sure that you have a testing methodology. This can be very easy to define but it must form the basis for every test moving forward. This article will provide some of the key components and considerations for building your own successful testing methodology.
Using data to drive testing hypotheses
It’s very important when making changes in the Ecommerce world that you combine both qualitative and quantitative data. The quantitative will tell you WHAT’S happening, whereas the qualitative will give you more understanding of WHY it’s happening.
User Research is one of the best ways of collecting qualitative data and can be achieved in many different ways depending on the type of site and the time you have available, e.g. usability testing, surveys, personas (see my article on how personas can help increase conversion rates here) or card sorting. Once the data has been collected, it can be used in two different ways:
To drive experimentation…
There are 3 key rules to using data to drive experimentation:
- Use data to drive the hypothesis, not to determine solutions – What you originally think is the solution may end up wide of the mark, so use the data you have to form your hypothesis, not to dictate what the end result should be
- There needs to be an inherent testing culture that is measured and data-informed – Every test needs to be measured, and goals defining what success looks like must be set before you start
- Experimentation should be accepted as the only way to make changes to your website – Ideally, no change should go live on your website without being tested first, or at the very least monitored closely. You never know what impact changes could have on your customer interactions (more on this later)
…and inform design
Using data to inform design is an interesting concept and something I will go into in more depth at a later stage. Ultimately, there are many different ways you can look at data within your business. ‘Success Events’ data is something many businesses seem to overlook when trying to optimise their websites.
Traditionally, a business’s approach is to look for broken pages and journeys that aren’t converting well. This is an important exercise and I’m not saying that you should stop doing it, as it is integral to creating the best online experience for the user. Alongside this exercise, you should also spend time understanding events that typically lead to an order and work from there to create more of those events. A key example of this is reviews: if customers are more likely to order when they read a review, why not design your page/site around social proofing and customer reviews?
Evidence-led hypothesis prioritisation
Once you have used data to generate your hypotheses and come up with your designs, they then need to be prioritised based on evidence – whether that be insight (through the analytics data you have available, which shows bounce rates, traffic volumes etc.) or customer feedback (via usability sessions, surveys etc.) – not because someone has a ‘good idea’ or because your competition is doing it.
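One common way to put evidence-led prioritisation into practice is a simple scoring framework such as PIE (Potential, Importance, Ease). The sketch below is a hypothetical illustration – the hypotheses and the 1–10 scores are invented, and your own criteria and weightings may well differ:

```python
# Illustrative sketch: ranking test hypotheses with a PIE-style score
# (Potential, Importance, Ease). All names and scores are hypothetical.

def pie_score(potential, importance, ease):
    """Average the three 1-10 ratings into a single priority score."""
    return round((potential + importance + ease) / 3, 1)

hypotheses = [
    # (hypothesis, potential, importance, ease)
    ("Add reviews to product pages", 8, 9, 6),
    ("Redesign homepage hero", 6, 5, 3),
    ("Simplify checkout form", 9, 8, 7),
]

# Highest-scoring hypotheses are tested first
ranked = sorted(hypotheses, key=lambda h: pie_score(*h[1:]), reverse=True)
for name, p, i, e in ranked:
    print(f"{pie_score(p, i, e):>4}  {name}")
```

The point of a framework like this is not the exact numbers but that every score must be backed by evidence – analytics data for ‘potential’ and ‘importance’, development effort for ‘ease’ – rather than opinion.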
There is a saying within conversion optimisation that ‘BEST PRACTICE IS PAST PRACTICE’. You must always remember that what has worked for one competitor may not work for you – each business has different customer bases who will interact differently.
Read why ‘Ecommerce Best Practice is Dying’ here.
Iterate towards redesign
Following on from the same theme of not always thinking in terms of huge change – there may be occasions where you feel a website or page redesign is the best course of action. There are different ways in which you can approach this:
- Very small incremental changes – eventually adding up to a full re-design
- Full redesign – implement your ideal change in one go
- Iterative redesign
The iterative redesign will get my vote 100% of the time. What this means is that you can have a view of the vision/desired end goal of your website or page, but you split the testing and implementation up into manageable and measurable chunks. By doing this you are easily able to identify what’s worked and what hasn’t. If one chunk fails during testing, that’s fine. You can learn from it, tweak the design and go again. You will not have the ability to do this if you go for a full redesign in one go – this is risky and could be very costly to your business. Iterate towards redesign.
More information on a user centred redesign can be found here.
Small changes can make big impacts
There is a misconception within organisations that large changes equal the biggest impacts. This is not always the case. Just as a tiny rock from space can cause a massive change to the earth’s landscape, the smallest change to your site can have a massive impact. Many businesses have traffic constraints that make testing smaller changes too slow to produce a result, but that doesn’t mean it shouldn’t be considered (an explanation as to why small-traffic sites can struggle with testing can be found here).
At a previous business, a number of small changes were made to the checkout area of the website. As these changes were so small, they were implemented without testing, as we presumed they would be a success. Unfortunately, this backfired and it cost the business thousands to reverse the changes. The lesson here was not only to test everything, but also that small changes and tweaks – especially within a sensitive part of the site – can have a big impact in the worst way.
Learn from failure
When it comes to actually testing and analysing results, don’t be scared to fail! Test failure should not be considered a bad thing. All businesses want to make money and find the silver bullet that will make them thousands of pounds, but in conversion optimisation you will learn more from your failures. The likes of Google and Netflix reportedly see test failure rates of around 90%. The more mature your testing programme, the harder it becomes to find ‘winning tests’, so prioritising your testing hypotheses becomes even more important. As long as you analyse your results and understand why your test failed, that’s fine – “fail fast to learn better”!
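Analysing whether a test actually won or failed starts with checking that the result is more than noise. As a hedged illustration (the traffic and conversion numbers below are invented), a simple two-proportion z-test is one standard way to judge whether a variant’s conversion rate genuinely differs from control:

```python
# Illustrative sketch: two-proportion z-test on A/B test results.
# Numbers are hypothetical; a real programme should also fix the
# sample size and significance threshold before the test starts.
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: control converts 200/10,000, variant 245/10,000
z, p = z_test(conv_a=200, n_a=10_000, conv_b=245, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a common convention: p < 0.05 = significant
```

Whether the test wins or loses, this kind of check stops you acting on a random fluctuation and makes the “why did it fail?” analysis meaningful.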
I hope this goes some way to helping you develop a sustainable and effective conversion optimisation programme. If you have any questions, please don’t hesitate to leave them in the comments section.