Why Your Split Test Results Could Be Meaningless
Marketers and business owners have become fond of A/B testing over the past few years. A client recently came to us for consulting help after running a series of three split tests and redesigning their website to improve conversions.
What happened, however, was exactly the opposite. After the redesign went live, conversions dropped dramatically. The site underperformed and the business was in trouble very quickly. They hired us to run an independent audit of the site and the changes to see what went wrong.
When we dove into the test data, it was fairly obvious: the tests were never set up to produce a valid winner.
Problem 1: Not Enough Data
The third & final test was rushed. The client was pushing the agency to go live, so the agency cut the test very short. It saw only about 200 visitors combined across both versions of the site.
If the client averaged only a couple hundred visitors a month, 200 would represent a large portion of their traffic. They don't. The client averages a couple thousand pageviews per week, so 200 visitors to the tested page represents only a small fraction of their actual traffic, nowhere near enough to call a winner.
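To get a feel for how far short 200 visitors falls, here is a rough back-of-the-envelope sample-size sketch using the standard two-proportion formula. The baseline conversion rate (5%) and the lift to detect (5% to 6%) are illustrative assumptions, not the client's real numbers:

```python
# Rough sample-size check for a two-variant conversion test.
# Baseline (5%) and target (6%) rates are made-up illustrations.
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Visitors needed in EACH variant to detect p1 -> p2 (two-sided z-test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

print(sample_size_per_variant(0.05, 0.06))  # ~8,000+ per variant
```

Under these assumptions you would need thousands of visitors per variant; 200 total is off by well over an order of magnitude.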
Problem 2: Testing the Wrong Users
When we looked at the traffic source, almost all of the 200 visitors came from a social media push. While 200 visitors may or may not be enough data (and in our experience it's very low), what was the item being tested? Social button placement.
Most of the visitors who saw the test were already fans & followers of the business, so clicks on those buttons were even lower than they would have been with a general audience. If you click through from Facebook to a website, you are very, very unlikely to immediately jump off to Twitter.
Pay attention to the behaviour you are targeting as well as the visitors you are testing. There’s no point in asking a bunch of Facebook users if they want to press your Facebook Like button.
Problem 3: Testing at Inappropriate Times
Most B2B businesses get the majority of their traffic Monday through Friday, 8 am until 6 pm. Do you know when a really bad time to launch a new A/B test is? Friday at 3 pm.
Yes, the third mistake we discovered was starting a test on a Friday afternoon and ending it Tuesday at noon. A business that gets 80% of its traffic during work hours ran a conversion test that covered only about 17 work hours out of 93 total hours.
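Counting hour by hour makes the mismatch obvious. This sketch tallies how much of a test window falls inside business hours, assuming 8 am–6 pm on weekdays; the dates are hypothetical stand-ins for "Friday 3 pm through Tuesday noon":

```python
# Hour-by-hour tally of work-hour coverage in a test window.
# Assumes business hours are 8am-6pm, Monday-Friday.
from datetime import datetime, timedelta

def work_hour_coverage(start, end):
    """Return (work_hours, total_hours) between start and end."""
    work = total = 0
    t = start
    while t < end:
        total += 1
        if t.weekday() < 5 and 8 <= t.hour < 18:  # Mon-Fri, 8am-6pm
            work += 1
        t += timedelta(hours=1)
    return work, total

start = datetime(2024, 1, 5, 15)  # a Friday, 3pm (illustrative date)
end = datetime(2024, 1, 9, 12)    # the following Tuesday, noon
print(work_hour_coverage(start, end))  # (17, 93)
```

Less than a fifth of the window lands inside the hours when most of the traffic actually arrives.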
Weekend users are not the same visitors you would test Monday through Friday. Maybe they're more passionate about the topic and more likely to engage. Maybe they're overworked and more likely to respond to "time saving" techniques. Whatever the differences, they render your test practically useless.
Problem 4: A/B Testing Is Crap
Many people who are just starting conversion testing think A/B testing is enough: A is the control, B is the change. But that alone doesn't tell you whether your test itself is working. Try an A/A/B test, where two identical copies of the control run alongside the change. If your two A variants produce matching data, you're all set. Keep rocking the test. If the A/A data differs significantly, your testing setup has a problem, and you've found it before wasting time & resources.
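One way to check the two A arms is a simple two-proportion z-test: if the arms are truly identical, a large difference between them signals a broken setup. The visitor and conversion counts below are made-up examples:

```python
# Two-proportion z-test for comparing the two "A" arms of an A/A/B test.
# Conversion counts below are illustrative, not real client data.
from math import sqrt
from statistics import NormalDist

def aa_check(conv_a1, n_a1, conv_a2, n_a2):
    """Two-sided p-value for 'both A arms convert at the same rate'."""
    p1, p2 = conv_a1 / n_a1, conv_a2 / n_a2
    p_pool = (conv_a1 + conv_a2) / (n_a1 + n_a2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a1 + 1 / n_a2))
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Healthy A/A split: a small, chance-level difference
print(aa_check(50, 1000, 52, 1000))  # large p-value, setup looks fine

# Broken A/A split: one arm converts far better than its identical twin
print(aa_check(50, 1000, 80, 1000))  # p < 0.05, something is wrong
```

A low p-value between two identical variants means the difference is in your tooling or traffic split, not your page.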
A/B testing is also the simplest form of conversion testing, and many marketing software tools give you easy access to it. You can also do multivariate testing, bandit testing and various types of usability testing. A/B testing has its uses and is popular for good reason.
Conversion testing is a necessary way to improve and optimise your business. Making decisions on bad data, irrelevant tests or with improper testing tools can be painful. Smart decision-making relies on smart testing. Learn from these mistakes and improve your conversion optimisation testing starting today.