This is the third in a series of posts aiming to clarify some commonly used phrases in Conversion Optimisation statistics and to debunk a few myths about what you can and can't infer from your test statistics. You can find the first post in the series, which covers confidence intervals and limits, here. Next on the jargon buster list is 1-tailed vs 2-tailed tests.

## What does 1-tail vs 2-tail tests mean and how does it apply to CRO testing?

This is a 1-tail test. Think of tails as the directions we are testing in. If we are testing to one tail, or direction, we are testing to see if the difference (between two website designs, for instance) is either better **or** worse, but not both. The 'tail' part relates to the statistical distribution being used in the test. Above is a picture of a bell curve, which shows the normal distribution. Don't worry about exactly what the normal distribution is; all you need to know is that it's a statistical function commonly used in CRO. Basically, the graph shows the probability of results. If the test result lies in the white part, it is statistically significant. Not sure what statistical significance is? Check out our last blog post, where we go over what this often-used term means.

This is a 2-tail test. If we are testing to two tails, or directions, we are testing to see if the difference (again, between website designs) is either better **or** worse, counting extreme results in both directions.

So imagine you have run a simple split test. The blue part of the graph is where we say 'there is no change in the conversion rate due to the website change', and the tiny white parts on either side allow us to say 'it's an extreme enough result to conclude there's a difference'.
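Those 'tiny white parts' are the critical regions, and their cutoffs can be computed directly. A minimal sketch using Python's standard library (the 5% significance level here is an assumption for illustration):

```python
from statistics import NormalDist

nd = NormalDist()  # the standard normal distribution (the bell curve)
alpha = 0.05       # assumed 5% significance level

# Two-tailed: the 5% of 'extreme' results is split across both tails
two_tail_cutoff = nd.inv_cdf(1 - alpha / 2)
# One-tailed: all 5% sits in a single tail, so the bar is lower
one_tail_cutoff = nd.inv_cdf(1 - alpha)

print(f"two-tailed cutoff: |z| > {two_tail_cutoff:.3f}")  # about 1.960
print(f"one-tailed cutoff:  z > {one_tail_cutoff:.3f}")   # about 1.645
```

Notice the one-tailed cutoff is lower (about 1.645 vs 1.960): with the same data, a one-tailed test reaches significance more easily, which is exactly why the choice of tails matters.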

So a 1-tailed test is when you're only testing in one direction (i.e. does this variation have a better conversion rate than the original?), and a 2-tailed test is when you're testing in both directions (does the variation have a better or worse conversion rate than the original?). Or, put simply: does the variation have any effect on conversion at all?
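To make the difference concrete, here is a sketch of a two-proportion z-test for a simple split test, computing both the 1-tailed and 2-tailed p-values. The visitor and conversion numbers are entirely hypothetical:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def split_test_pvalues(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for a simple A/B split test.

    Returns (z, one_tailed_p, two_tailed_p)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    one_tailed = 1 - normal_cdf(z)             # H1: variation is better
    two_tailed = 2 * (1 - normal_cdf(abs(z)))  # H1: variation is different
    return z, one_tailed, two_tailed

# Hypothetical numbers: 1,000 visitors per side, 50 vs 68 conversions
z, p1, p2 = split_test_pvalues(50, 1000, 68, 1000)
print(f"z = {z:.2f}, 1-tailed p = {p1:.3f}, 2-tailed p = {p2:.3f}")
```

With these made-up numbers the z-score is about 1.71, so the 1-tailed p-value (roughly 0.044) squeaks under a 5% threshold while the 2-tailed p-value (roughly 0.088) does not. Same data, different conclusion, purely from the choice of tails.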

Now, you may be thinking we only want to find the good results, and that if we saw conversion going down we'd simply move on to the next test.

For us, there are 2 arguments against this line of thought:

- At PRWD we look to give our clients more than a 'yes/no'. We strive to give them quality results with actionable learnings. So being able to statistically prove that a variation performs *worse* than the original is valuable: it allows us to ask why that happened, and other test ideas may spring from that. Maybe you notice a trend where, every time you do a certain thing, conversion drops. That would be a valuable insight for any business: not just what makes their customers convert, but what definitely *won't*.
- Assuming time and visitor numbers aren't restrictive, we can think of very few reasons why you would conduct a one-tail test. One of the few examples we can think of is testing whether removing part of a page reduces conversion. We're not really bothered if it increases conversion; we just want to know if it reduces conversion, because if it doesn't, we can reallocate the resource of maintaining that part of the site to something else.
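That removal scenario is a one-tailed test pointing the other way: the only question is whether conversion *dropped*, so all the significance sits in the lower tail. A minimal sketch with hypothetical visitor and conversion numbers:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical split test: original gets 60 conversions from 1,000 visitors;
# the variation (page element removed) gets 52 conversions from 1,000 visitors.
conv_a, n_a = 60, 1000
conv_b, n_b = 52, 1000

pooled = (conv_a + conv_b) / (n_a + n_b)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
z = (conv_b / n_b - conv_a / n_a) / se

# One-tailed, lower tail: H1 is 'removal reduces conversion'
p_lower = normal_cdf(z)
print(f"z = {z:.2f}, one-tailed p = {p_lower:.3f}")
```

Here the p-value comes out around 0.22, well above any conventional threshold, so we failed to detect a drop. One caveat worth keeping in mind: a non-significant result is an absence of evidence for a drop, not positive proof that removal is harmless.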

Basically, a one-tail test classically should only be performed when changing the conversion rate *isn't* really the goal. The medical instance of this (an often-referenced industry when talking about statistical testing) is when a cheaper drug has been developed and a test is conducted to see if it's *as* effective, not more. Here the primary concern is cost reduction, not whether the new drug is *more* effective.

Maybe you think differently. Maybe you think, 'Actually, sometimes we want quick and dirty results to work with', or, 'Our tests require a really small minimum detectable effect; the only way we'll show anything significant is if we continue with 1-tail tests'. What do you think?

I hope this clears up the difference between 1-tail and 2-tail tests. As always, let us know if you have any questions. Alternatively, see here for more on CRO testing.