5 Most Typical A/B Testing Mistakes You Should Avoid

One of the best ways to test your ideas and improve your conversion rate is through A/B testing. A/B testing is a simple yet incredibly effective tool that gives you a way to structure your ideas and test them out online. It’s also an extremely cheap way of finding out what works and what doesn’t. Seemingly, it doesn’t get any simpler than putting up two versions of the same item and seeing which one becomes a crowd favorite. But while A/B testing is fairly straightforward, it is not as easy as it appears, because many marketers oversimplify the procedure and call the results as they see them.

There are some important factors you need to take into account each time you run an A/B test, not just to make the test as useful as possible, but also to figure out what really works and what doesn’t. If you act on a false assumption that something works well, you will wind up making a mess of it, and all the hard work and effort you’ve put into your project will be wasted. Worst of all, you will have gained nothing from it. To help you avoid that, we have put together a list of the most common A/B testing mistakes. Keep reading to find out more.

 

1. Cutting the A/B test short too early in the process

The biggest mistake you can make as a tester is to call it quits too early in the A/B testing process. Yes, based on the data you have gathered so far, version 1 seems to be beating version 2, but is that really the case? You would never call it quits at 50% confidence, because that proves nothing; you might as well skip the test altogether. OK, so how about 75%? While that’s better, it is still not enough to end your test just yet.

Even if the confidence is around 80% or more, the data you are relying on may still be misleading, because the sample size is simply not representative enough. Even if a particular variation seems to be winning by a landslide, keep in mind that time is a powerful factor, and a leading variation might end up losing by a landslide in the end. Many marketers end their tests at 90% confidence, but our advice is to hold out until the numbers reach 95% or more.

While you may be in favor of a particular design, or you may be rooting for the leading variation just so you can end the test early and head out to lunch, you need to suppress your wishes and expectations. A/B testing is a scientific experiment, and it should be treated as such. Your goal is to find out what really works, and why. Sometimes the truth is not all that convenient. It takes time, patience, and a certain degree of objectivity on your part. Aim for about 350-400 conversions per variation or more before ending the test; your data will be more accurate, and you will be able to draw the right conclusions from it.
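To make the confidence figures above concrete, here is a minimal sketch, in Python, of a two-proportion z-test, a standard way to check whether the gap between two variations is statistically significant. The function name and the sample numbers are our own, for illustration only:

```python
import math

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: z statistic for variation B vs. A.

    conv_* are conversion counts, n_* are visitor counts.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical example: 350 vs. 420 conversions out of 10,000 visitors each.
z = z_test(conv_a=350, n_a=10000, conv_b=420, n_b=10000)

# 95% confidence (two-sided) corresponds to |z| >= 1.96
significant = abs(z) >= 1.96
```

At 95% confidence you would only call a winner once |z| reaches roughly 1.96; in practice, an off-the-shelf statistics library is safer than rolling your own math.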

 

2. Running tests without a hypothesis

Running random A/B tests that are not based on a particular hypothesis or theory you are trying to prove will get you nowhere. Remember, you are conducting a scientific experiment, and at the core of every experiment is something you can measure and quantify. It’s all about the numbers. You may come up with a random test that shows version A is superior to version B, but without a proper hypothesis and measurable results, the test is useless, because you won’t be able to learn anything from it. You won’t know what worked and why, and that is the question A/B testing should answer.

Putting together a strong hypothesis means starting from the limited data and evidence you have, then gathering further evidence through testing, so that the accumulated data either proves or disproves your hypothesis. If you pit two variations against one another in a random test, one will surely emerge the winner, and you will know that one is better than the other. But you will have no idea why, because it was a random test without a hypothesis. Even if a hypothesis fails, you will know what doesn’t work, or what is irrelevant, so you can exclude it from the next hypothesis and the next batch of tests.

 

 

3. Discarding a failed test

Another common mistake is to discard a test just because it hasn’t produced a lift. Most people would give up and go on to run tests on another page, for example. That would be a mistake: once your test has failed, the first thing you should do is examine the data it produced. Often the test failed not because the idea was bad, but because your theory was flawed and needs to be strengthened. After you have done that, run the test again and learn from the data you gather. Adjust your theory, run another test, and so on. You get the idea.

Tests will not always produce results, simply because it is hard to predict human behavior and to account for all of the variables. Only through multiple tests can you gather all the pieces necessary to complete the puzzle. Just because a test has failed to produce the desired results the first time doesn’t mean it should be abandoned, or that the data gathered can’t be used constructively. If you dismiss your first failed test, you may save some time and money at first, but it will hurt you in the long run, because iterative testing is where real results are found. That’s the angle you should pursue, even if it seems counterintuitive and costly.

 

4. Running tests with overlapping traffic

One of the ways to cut down on the cost of testing and speed up data gathering is to run multiple tests at the same time. However, such an approach will produce erroneous results if the pages you are testing have overlapping traffic; it will lead to results that are misleading at best. If you are running the same test on multiple pages while measuring the same parameters, you are not going to gather the most accurate data, because some overlap in traffic is bound to happen in situations like that.

There are ways of getting around this problem. Run multiple tests using multi-page experiments if you want to save time and money. Also, one of the things you need to watch out for when doing A/B testing that involves overlapping traffic is distribution. Keeping your distribution even is key: make sure the traffic split between pages A and B, C and D, or any other pair is always 50/50.
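One common way to keep the split even and deterministic is hash-based bucketing, where each visitor is assigned to a variation based on a hash of their ID together with the experiment name. Here is a minimal sketch in Python; the function and experiment names are our own, hypothetical examples:

```python
import hashlib

def assign_variation(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into variation A or B (50/50).

    Including the experiment name in the hash means two overlapping
    experiments bucket the same user independently of each other.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same user always sees the same variation within an experiment:
assert assign_variation("user-42", "headline-test") == assign_variation("user-42", "headline-test")
```

Because the assignment is a pure function of its inputs, no per-user state needs to be stored, and over a large number of visitors the split converges on 50/50.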

 

5. Not running tests constantly

If you are not running tests all the time, you need to seriously rethink your strategy. Finding a test that works and improves your conversion rate is great, but you should not leave it at that and rest on your laurels. A successful A/B test is all the more reason to keep testing. Take all of your successful tests and analyze them carefully to determine what made them so productive. Use that knowledge to devise new hypotheses and create new tests that will help you gather even more valuable data and improve your conversions even further.

All this testing helps you improve your testing skills and learn to put together better theories, so you make the most of your time and money. Also, through continuous testing you will learn more about your audience and its behavior, which will come in handy when you are looking to improve other aspects of your business, such as marketing, social media activity, and ads. You can waste time guessing what might work and trying out random things, hoping to get lucky, or you can take matters into your own hands and start testing everything you implement from now on.

Conclusion

Testing is a time-consuming activity, but it is well worth the effort, because you finally get to take control and pull all the strings when it comes to your own business. Some people out there might get lucky, or find the right niche at the right time and score big without putting any thought into it, but they are few and far between. We recommend that you take a more methodical, scientific approach to running your business, and that you keep using it from now on. You won’t be sorry.

How have you used A/B testing in your marketing? What was your biggest success? Tell us about it in the comments below.

The post 5 Most Typical A/B Testing Mistakes You Should Avoid appeared first on GetResponse Blog – Online Marketing Tips.
