Effective decision-making is driven by confidence. If you’re sure that you’re right about something, you can move mountains; if your confidence wavers, though, success becomes much harder to reach.
The same goes for landing page optimization. Ultimately, there will come a time when you’ll be asked to make decisions about the outcomes that you generate from testing your landing pages. How confident you are in your ability to analyze the comparative performance of two (or more) landing pages will directly affect your results.
When you’re optimizing your landing pages, the first question to ask is whether the data that you’ve collected are significant.
Significance is reached once your results are very unlikely to have occurred by chance. Just how unlikely? It’s up to you: your confidence level (the probability that the difference you’re seeing reflects a real difference between the pages rather than random noise) is something you choose, and the level you pick will influence how much traffic you need to send to the page and how long the test needs to run.
One way to pick a confidence level is to weigh the risks of declaring a winner too soon against the risks of not declaring a winner at all. If the risks of declaring a winner too soon are small, then your confidence level can be low, such as 70%. If you’d rather be positive you have a winner, pick a higher confidence level (90% or more).
Why do you need to be confident that your data are significant in order to declare a winning landing page? The quick answer is that it’s hard to eyeball whether the results that you’re observing represent true differences, or whether they are just due to pure luck.
Consider a sample split test in which Version A is converting roughly 6.7% of its visitors and Version B roughly 5%.
It’s really tempting to say that Version A is the clear winner here: that’s a 1.7-percentage-point difference in conversion rates, or a 34% increase in conversions so far. But before you declare a winner and start diverting traffic to the leading page, you should ask one question:
Is there a real difference here?
You can apply statistics to this dataset to determine whether the difference that you’ve observed is significant. For this example, draw a 2x2 contingency table: one row for each version of the page, and one column each for visitors who converted and visitors who didn’t.
Next, run a hypothesis test on that table, such as a chi-squared analysis: for each cell, take the squared difference between the observed count and the count you’d expect if both versions performed identically, divide by the expected count, and sum the results to get your χ² value.
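If you’d rather let a script do the arithmetic, here’s a minimal sketch in Python using scipy.stats.chi2_contingency. The visitor and conversion counts are hypothetical placeholders (1,000 visitors per version, converting at roughly the 5% and 6.7% rates above), so the resulting χ² value won’t match the 2.08 used in this walkthrough; swap in your own totals.

```python
# A minimal sketch: chi-squared test on a 2x2 contingency table.
# The visitor/conversion counts below are hypothetical placeholders.
from scipy.stats import chi2_contingency

# Rows: Version A, Version B. Columns: converted, did not convert.
observed = [
    [67, 933],   # Version A: 67 conversions out of 1,000 visitors (~6.7%)
    [50, 950],   # Version B: 50 conversions out of 1,000 visitors (~5.0%)
]

# correction=False matches the plain (uncorrected) chi-squared formula above.
chi2_stat, p_value, dof, expected = chi2_contingency(observed, correction=False)

print(f"chi-squared = {chi2_stat:.2f}, df = {dof}, p-value = {p_value:.3f}")
```

Passing correction=False keeps the calculation in line with the plain chi-squared formula described above; scipy applies Yates’ continuity correction to 2x2 tables by default.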
Then, determine how many degrees of freedom you have. A rule of thumb for determining degrees of freedom when comparing conversion data is to take the number of landing page versions you have minus one. Since we have two versions (A & B), the df = 1.
You now know your χ² value (2.08), your df (1), and your pre-determined significance level (I chose 0.10, or 10%). Next, compare these against a chi-squared critical value table: for df = 1 and a significance level of 0.10, the critical value is 2.71.
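If you don’t have a critical value table handy, you can read the same numbers off the chi-squared distribution itself. This sketch uses scipy.stats.chi2 with the values from this example (χ² = 2.08, df = 1, significance level 0.10):

```python
from scipy.stats import chi2

chi2_value = 2.08   # the test statistic from our example
df = 1              # two versions minus one
alpha = 0.10        # pre-determined significance level (90% confidence)

# Critical value: the chi-squared value we would need to reach significance.
critical_value = chi2.ppf(1 - alpha, df)   # ~2.71 for df=1, alpha=0.10

# p-value: the probability of seeing a statistic this large by chance alone.
p_value = chi2.sf(chi2_value, df)          # ~0.15

print(f"critical value = {critical_value:.2f}, p-value = {p_value:.3f}")
print("significant" if chi2_value > critical_value else "not significant yet")
```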
Our χ² value of 2.08 falls below that critical value, so the p-value lies somewhere between 0.10 and 0.20, which is greater than our pre-determined significance level of 10%. This means that we can’t declare Version A the winner quite yet—more traffic is required before we can confidently divert our visitors to the winning creative.
If we let this test continue by allowing more visitors to interact with each of these pages, we might see Version A start to pull further ahead of Version B. As the gap between the two versions widens and the sample grows, the χ² value will climb and the p-value will shrink, and you’ll be able to divert traffic to the winning creative with confidence.
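To see why more traffic helps, here’s a rough sketch that holds the (assumed) 6.7% and 5% conversion rates steady while scaling up the number of visitors per version; as the sample grows, the χ² statistic climbs and the p-value drops below the 0.10 threshold.

```python
# A rough sketch: the same conversion rates become significant with more traffic.
# The 6.7% / 5.0% rates are assumed for illustration.
from scipy.stats import chi2_contingency

rate_a, rate_b = 0.067, 0.050

for visitors_per_version in (1_000, 2_000, 4_000, 8_000):
    conv_a = round(rate_a * visitors_per_version)
    conv_b = round(rate_b * visitors_per_version)
    observed = [
        [conv_a, visitors_per_version - conv_a],  # Version A
        [conv_b, visitors_per_version - conv_b],  # Version B
    ]
    chi2_stat, p_value, _, _ = chi2_contingency(observed, correction=False)
    print(f"{visitors_per_version:>5} visitors/version: "
          f"chi-squared = {chi2_stat:.2f}, p = {p_value:.4f}")
```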
Once you’ve declared a winner, keep on testing! It’s a great way to glean extra results from your online marketing campaigns, and can represent a huge return on investment. Try introducing a new challenger or test individual page elements against one another using multivariate testing. If you’re starting to build a more robust testing strategy, consider using landing page software to help you manage multiple A/B and multivariate tests without code or statistics.
If you’d like to learn more about online testing, be sure to check out our free Guide to Online Testing, and keep an eye out for our upcoming Guide to Landing Page Analytics!