Five A/B Testing Mistakes You’re Making

By Lauren Beerling, Associate Director of Performance Media – Paid Search, Collective Measures

Any digital marketer knows that running an A/B test is a surefire way to improve the performance of a paid media campaign. It’s as easy as looking at which ad in your test performs the best, right? Well, yes and no. Comparing performance is certainly the purpose of an A/B test, but what if your test is flawed and leads you to a false result? You’ll end up wasting time and money without getting any closer to the performance improvement you promised your client.

Running an airtight A/B test is always important, but in this unprecedented time of Covid-19, it’s more important than ever. We’re living in a “new normal,” meaning that what was working for your campaigns before the pandemic is probably not working for you right now. Testing – and testing well – is essential if you’re going to lead your business to success amidst a fluctuating market and shifting consumer behavior.

So, how do you know if your test is flawed? Here are five A/B testing mistakes you might be making.

1. You’re testing too many things

It’s a mistake to write two completely different versions of an ad and test them against each other. To run a true A/B test, you must test just one variable at a time. If you test multiple variables, you may end up with a clear winner, but you will have no idea exactly why one ad outperformed the other. To move forward effectively, you need to be able to isolate the variable that led to increased performance.

2. You don’t have a hypothesis for your test

It’s a mistake to run two ads against each other without a hypothesis. You want to make sure your test is designed to prove something specific. For example: are you looking to optimize toward click-through rate? Articulate that ahead of time so you have something tangible to benchmark success against; something you’re working to either prove or disprove. If you skip this step and simply look for whichever metrics tell the most positive story, you could end up running with the ad that won on more metrics overall but lost on click-through rate, the very metric you set out to improve, which would ultimately negate your efforts.

3. You’re running too many ad variations

It’s a mistake to run too many ad variations for the volume of traffic you have. The standard best practice is to run between two and four variations at any given time. If you test more than four variations without adequate traffic, the test will either return statistically insignificant results or take too long to reach statistical significance. Ideally, you want your test to reach significance within two to four weeks so you can learn and optimize quickly. If it takes longer than four weeks, efficiency diminishes as you continue to spend money on the test without conclusive results.
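To get a rough sense of how much traffic a given number of variations demands, you can work backward from a standard two-proportion sample-size formula. The sketch below is illustrative only: the 2% baseline click-through rate, the 0.4-point lift worth detecting, and the 5,000 daily impressions are hypothetical numbers, not figures from this article.

```python
# Rough sample-size check before launching an A/B test.
# Assumptions (illustrative): 2% baseline CTR, a 0.4-point absolute lift
# we care about detecting, 95% confidence, 80% power.
from scipy.stats import norm

def visitors_per_variation(p_baseline, p_variant, alpha=0.05, power=0.80):
    """Approximate impressions needed per variation (two-proportion z-test)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # power requirement
    variance = p_baseline * (1 - p_baseline) + p_variant * (1 - p_variant)
    effect = p_variant - p_baseline
    return (z_alpha + z_beta) ** 2 * variance / effect ** 2

n = visitors_per_variation(0.020, 0.024)
num_variations = 4
daily_impressions = 5_000  # hypothetical traffic level

total_needed = n * num_variations
print(f"~{n:,.0f} impressions per variation, ~{total_needed:,.0f} total")
print(f"~{total_needed / daily_impressions:.0f} days at {daily_impressions:,} impressions/day")
```

The takeaway: every extra variation multiplies the traffic you need, so at modest impression volumes, four variations can easily push a test well past the two-to-four-week window.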

4. You’re missing the mark on timing

There are two mistakes that are frequently made when it comes to timing. The first is comparing dissimilar time periods. If you’re going to run a test and compare it to previous performance, make sure you’re comparing similar time periods. For example, you don’t want to compare peak seasonality to low seasonality. If you do, your results will be skewed, looking either artificially high or artificially low.

The second mistake is cutting a test short. When early test results show a negative impact, you instinctively want to turn off the test, especially in the era of Covid-19. But saving money now might cost you money in the long run. When you’re tempted to turn off a test early, remember that it’s not uncommon to see poor performance early in a test’s lifecycle while the platform is still learning. Also remember it’s crucial to reach statistical significance before making any decisions. If you fail to wait, you might be turning off something that, had it reached statistical significance, would have driven more revenue for the business.
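If you want a quick gut-check before pausing an “underperforming” ad, a simple two-proportion z-test on click-through rate can tell you whether the gap is real yet. The counts below are hypothetical, purely to show the idea of testing rather than eyeballing the difference.

```python
# A minimal significance check before turning off an "underperforming" ad.
# Click and impression counts are hypothetical examples.
from scipy.stats import norm

def two_proportion_z(clicks_a, imps_a, clicks_b, imps_b):
    """Pooled two-proportion z-test; returns z statistic and two-sided p-value."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = (p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * norm.sf(abs(z))
    return z, p_value

# Early results: ad B looks worse, but is the difference real yet?
z, p = two_proportion_z(clicks_a=180, imps_a=9_000, clicks_b=150, imps_b=9_100)
print(f"z = {z:.2f}, p = {p:.3f}")
if p >= 0.05:
    print("Not significant yet - keep the test running.")
```

In this made-up case the p-value comes out around 0.08, so the gap could easily be noise; turning the test off now would be exactly the premature call this section warns against.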

5. You’re launching a new campaign with a test

It’s a mistake to launch a new campaign with a test. When you launch a campaign – for example, one supporting a new initiative or product with media – platforms are in a learning phase as they acquire historical data. If you run a test during this time, your results are likely to be skewed by inadequate data. To run an effective A/B test, you want at least a month of solid baseline performance so the test can return reliable and actionable results.

Ultimately, it’s easy to get caught up in the fast pace of advertising, especially during this time of heightened anxiety. However, A/B testing best practices hold true and are arguably more important than ever. Taking the time to design your A/B test clearly from the outset, and waiting long enough to reach statistical significance, will save you time and money in the long run. And it will deliver the efficient, effective performance results you’re constantly working toward, which is the whole point!

