What is A/B Testing? A Guide for Beginners
- Emjae Williams
If you’re running marketing campaigns, sending emails, or trying to generate leads from your website, you’re producing valuable content for your followers on a regular basis.
But do you know how well that content is performing?
Are you satisfied with the results you’ve been seeing? If you’ve been trying to figure out why some content pieces seem to be so much more successful than others, you might want to consider running an A/B test.
This valuable tool takes the guesswork out of your content strategy and gives you the data you need to make the best decisions for your business. Not sure what A/B testing is or how to run one? We’re going right to the basics with this guide so you’ll be able to run your own tests in no time!
What is A/B Testing?
A key component of a successful marketing strategy is knowing which elements work well and leveraging that information. A/B testing is a statistical method that gives marketers this insight and eliminates the need to guess what works best. With A/B testing, you run a randomized experiment using two similar pieces of content that you share with different groups, then monitor to see which one yields the desired results.
This is often done with emails, where different subject lines or angles might be used to test which results in a higher open rate. Or you might test two ads for the same offer with slightly different copy to see which version leads to a higher click rate.
The content piece that produces the better results is considered the ‘winning’ sample and is used in future campaigns or to create other marketing materials. If you find that both samples produce the same or similar results, it might mean you need to make them more distinct in order to properly test them.
Of course, this doesn’t mean that the ‘losing’ sample is bad. In fact, the insight from its performance can be crucial in helping you decide how to best communicate with a smaller segment of your audience or what to avoid doing in the future.
What Can You A/B Test?
Now that you have a sense of what A/B testing is, you might be wondering what elements of your marketing campaign would be good candidates for this type of analysis.
Here are some examples:
- Websites: With A/B testing, you can test out different website designs to see which version attracts and engages more visitors.
- Marketing emails: You can use A/B testing to test email subject lines, content, and send times to uncover which versions have higher open and click-through rates.
- Ad campaigns: You can run A/B tests on your ad copy, headlines, images, and targeting options to see which version of your ad obtains the most clicks, impressions, and conversions.
- Call-to-action buttons: Using A/B testing, you can assess different text, color, size, and placement of your CTA buttons to see which one gets the most clicks and conversions.
- Product features or pricing pages: You can also use A/B testing to test different versions of a product-features or pricing page to see which one results in the most views, sales, and registrations.
Why You Should A/B Test
Without A/B testing, you’re essentially flying blind. You’re sharing content and hoping that you’ve crafted the right message for the audience you’re trying to attract. The main reason to conduct A/B testing is to gather insight into your audience and how they respond to your content. This arms you with data you can then use to improve the content you’re producing, the frequency with which it’s shared, and even the platform it’s published on.
In the initial content production stages you’re using general information to guide you, but A/B testing allows you to fine-tune your approach. If you run an A/B test on an email, sending it to one group at 10 am and the other at 3 pm, you can use the open rates of each to determine the best time to send your emails. Similarly, you can make minor changes to subject lines or the email preview to see which produces a better response from your subscribers.
How Do You Perform an A/B Test?
A/B testing sounds fairly simple, but it needs to be executed properly in order to produce reliable results. You’re working with a number of uncontrolled variables, such as time, software, and people, so there is plenty of room for error. This is where proper planning can help. Here are some key steps you should follow to run a successful A/B test and produce accurate results.
Step 1. Decide What Variable You Want to Test
The first step is to know exactly what you’ll be testing. For each A/B test you run, focus on only one thing. This variable could be the subject line or use of personalization for one set of emails, or the copy used for your calls to action. While you can test multiple variables for one piece of content, be sure to test each at a different time. If you try to test multiple variables at once, you won’t be able to tell which one was truly more effective.
Additionally, by narrowing down exactly what you want to test, you’re better able to decide how to create the variations. If you want to test how effective personalization is at increasing your open rates, you know that one set of emails will need to include personalization and the other will not. Similarly, if your focus is on how your copy affects your click rates, you’ll concentrate on creating two different sets of call-to-action copy.
Step 2. Identify Which Metric to Focus On
You also need to know what you want to measure. Is it your click rates? Your open rates? The number of new subscribers? By being clear on the metric, you know exactly what data to focus on when deciding which version is most effective.
In some cases, especially if you have existing data, it helps to have an actual goal in mind or even a hypothesis. For instance, you might have noticed that certain words affect your open rate negatively and you intend to run an A/B test to see if this is true. Your hypothesis might be that using the word ‘burnout’ in your subject line reduces your open rate by 3%. Your goal will be to identify which subject line results in a higher open rate.
Step 3. Set Up a Control and a Challenger
By completing the first two steps you’ve identified your variables and your desired outcome. Now you’ll be ready to decide what your ‘control’ and ‘challenger’ are. For your control you’ll create your content as you normally would.
Going back to our example of trying to increase open rates by testing subject lines, you would use the typical format of your subject line with the inclusion of the word ‘burnout’. For instance: Ten Proven Ways to Prevent Burnout as a Creative.
Your challenger is where you’d start to make adjustments based on the hypothesis you have. In this case your subject line might look something like this: Ten Ways to Fuel Your Creative Energy.
Step 4. Divide Your Sample Evenly If Needed
How you split your sample is determined by the type of content you’re testing and the tool you use. For emails, you typically divide your sample equally so each group is fairly similar, but you can also opt to have it split randomly by your A/B testing tool.
For other content that you have less control over, such as a landing page or ads, your sample will be split randomly.
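If your tool doesn’t handle the split for you, a random split is simple to do yourself. Here’s a minimal Python sketch, assuming your contacts are just a list of email addresses (the addresses and seed below are purely illustrative):

```python
import random

def split_sample(contacts, seed=42):
    """Randomly split a contact list into two equal-sized groups."""
    shuffled = contacts[:]                # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical contact list for illustration
contacts = [f"subscriber{i}@example.com" for i in range(1000)]
group_a, group_b = split_sample(contacts)
print(len(group_a), len(group_b))         # 500 500
```

Fixing the seed makes the split reproducible, which is handy if you ever need to re-send to the same groups.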
Step 5. Choose Your Sample Size
Much like with choosing how to divide your sample, you’ll determine the actual sample size based on the tool you use and what content you’re testing. For emails, you can typically send your control and challenger to a small subset of your email list. Once a specific target has been reached, the ‘winner’ will be sent to the remaining contacts.
Web pages and ads are a lot different since you don’t have a set number of people you expect to see them. Therefore, your sample size will be determined by how long the content is being shared or how much money is spent on the ad.
No matter what method is used, you want to ensure that you allow your test to run for long enough to achieve conclusive results.
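Most tools will suggest a sample size for you, but if you want a rough sanity check, the standard two-proportion formula gives a ballpark figure. This Python sketch is only an approximation, and the 20% baseline open rate and 25% target below are hypothetical numbers:

```python
import math
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate contacts needed per group to detect a change from
    rate p1 to rate p2 with a two-sided two-proportion z-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96 for 95% confidence
    z_beta = z.inv_cdf(power)            # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical: detect a lift from a 20% open rate to a 25% open rate
print(sample_size_per_group(0.20, 0.25))  # about 1,091 contacts per group
```

Notice how small differences demand large samples: detecting a one-point lift instead of five would require tens of thousands of contacts per group.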
Step 6. Determine How Significant Your Results Should Be
Remember that earlier step where you identified the metric to focus on? This is where it becomes particularly important. You need to determine how significant your results should be in order to choose the ‘winner’, or better-performing content. This is where statistical significance comes into play. If it’s been a while since you took a statistics class, here’s a quick refresher.
Statistical significance speaks to how likely it is that your results are due to error or chance. The higher your statistical significance, the more reliable your results, since this means they were unlikely to have come about randomly or through error.
Remember, the output from your test is going to be used to determine your marketing strategy, how you budget your ad spend, and how you communicate with your audience. Therefore, you want to be as certain as possible that the data guiding these decisions is accurate. Usually you want at least a 95% confidence level, but you can go as high as 99%.
Calculating your statistical significance and confidence levels can be quite an involved process, but thankfully there are handy tools that can take care of that for you.
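For the curious, the calculation most of those tools perform under the hood is (or closely resembles) a two-proportion z-test. Here’s a minimal Python sketch with hypothetical numbers:

```python
import math
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing two conversion rates. Returns a p-value;
    below 0.05 corresponds to at least 95% confidence that the
    difference isn't down to chance."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the assumption that both versions perform equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical: 30 opens out of 1,000 for A vs. 70 out of 1,000 for B
print(two_proportion_z_test(30, 1000, 70, 1000))  # ~0.00004, well below 0.05
```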
Step 7. Pick an A/B Testing Tool
Many of the popular digital marketing tools on the market can be used to run A/B tests. Facebook Ads Manager, Google Optimize, HubSpot, ActiveCampaign, Adobe Target, and Visual Website Optimizer are just a few examples of software that can run A/B tests for emails, web pages, or ads.
When choosing a tool, you want to consider how you’ll be using it, what types of content or campaigns you’ll be testing, affordability, and ease of use. Another crucial feature to focus on is how the data is gathered and shared. These numbers are the most important output you’ll need, so you want to choose a tool that provides detailed reports in an easy-to-understand format.
Step 8. Test Versions A and B at the Same Time
Your test needs to run both your control and challenger at the same time. That means you can’t send email A today and email B next week, or run each ad days apart. They need to be tested under the same conditions, with the only differences being the altered element and the actual individuals who see the content.
The only exception to this rule is when the test relates to your timing. If you’re trying to find the right time or day to reach your audience then naturally you’ll share your content at different times. In this instance, however, the only difference between the control and challenger would be the time.
Step 9. Focus the Analysis On Your Primary Goal
Once you’ve run your test and start to gather your results, you’ll be flooded with data. While all of it is relevant, you need to prioritize the metric you set out to measure. If your primary goal was to find out what works best for your open rate, then that needs to be the focus of your analysis. That will be the determining factor in which of the two was successful.
That isn’t to say you should discard the remaining data. It can be used to help you better understand your audience and even improve your content further. The important thing to remember is that this additional data is just nice to have, not the main focus of the test.
Step 10. Measure Your Results with an A/B Testing Calculator
At this stage, you have all the data and you’re poring over the numbers. So how do you actually measure your results and determine whether they’re substantial enough to justify changes to your strategy?
Tools like HubSpot’s or SurveyMonkey’s A/B testing calculators can take the guesswork out of it. Using these tools, you’ll input the number of people who received each version and how many of them took action. This produces the conversion rates for each version and gives a clear indication of which performed better.
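As a quick illustration of the arithmetic those calculators run on your inputs, here it is with hypothetical numbers:

```python
# Hypothetical inputs, mirroring what an A/B testing calculator asks for
recipients_a, actions_a = 2000, 60    # version A
recipients_b, actions_b = 2000, 140   # version B

rate_a = actions_a / recipients_a
rate_b = actions_b / recipients_b
lift = (rate_b - rate_a) / rate_a     # relative improvement of B over A

print(f"A: {rate_a:.1%}, B: {rate_b:.1%}, lift: {lift:.0%}")
# A: 3.0%, B: 7.0%, lift: 133%
```

A good calculator will pair these rates with a significance check like the z-test sketched in Step 6 before declaring a winner.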
Step 11. Use Your Results to Guide Your Next Action
Now that you have solid data, you can confidently use it to determine what changes, if any, your strategy needs. Note that A/B testing isn’t always a one-time activity. You can test your winning content against another challenger to gain even more insight, until you’re satisfied that what you’re producing will deliver optimum results.
And if you aren’t satisfied with the results, you can always start over with completely new sets of content. The great thing is that even if the results are unsatisfactory, they still provide you with valuable information you can use.
How to Interpret A/B Test Results
We’ve spoken quite a bit about how valuable the information from your test is, but how do you properly interpret it? Once again, you need to focus on your primary goal. If the metric you were focusing on was open rates, then you’ll look at those first. That is the number you’ll plug into your A/B testing tool.
Next you’ll look at the differences in conversion rates. You might have seen a 3% conversion rate for email A but a 7% conversion rate for email B with a 95% confidence level. These results are considered to be statistically significant and you can expect that using email B as your model for future emails should result in a higher conversion rate.
You can also look further into the other insights: audience demographics such as age, gender, location, and device type, or the time of day your emails were opened. All of this information gives you a wider view of who your audience is and what might work for them.
Common A/B Testing Mistakes
Even seasoned marketers make errors with A/B testing, which can negatively impact their results and, by extension, their strategy. Here’s a look at some of the most common mistakes and the steps you can take to avoid making them.
Not Allowing the Test to Run Long Enough
A/B tests are typically done through a specific platform, and these platforms provide data in real time. This can be a great benefit, as long as you’re patient. It’s easy to see the initial performance of the test and end it prematurely because you want to make a quick decision. The problem is that you aren’t allowing the test to run long enough to show you the big picture. If you end your test after only a few hours, you won’t have given it enough time to gather meaningful results.
To avoid this, decide in your planning stage how long you want your tests to run. If you decide on 24 hours, do nothing for those 24 hours, no matter how the content is performing.
There’s also the issue of people not setting aside an adequate period of time to run their tests. Remember, different types of content need to be tested under different circumstances. Your ads can’t be tested for the same period of time as your emails or landing pages, for instance. Audience size matters too: a group of 35,000 generates far more data in a given window than a group of 50, so smaller audiences generally need more time to produce significant results.
Testing Too Many Variables at Once
There’s a reason it’s called A/B testing: you’re testing element A against element B. While multivariate tests exist, they’re a completely different form of testing and are done under different conditions. When you run an A/B test with too many variables, the results can’t be trusted, because too many sources of error or random chance might have influenced the outcome. If you send your emails at different times, that could be what drives the open rate, not the subject line. If you change both the design of the call-to-action button and the copy, you can’t be sure which made the difference.
This is why it’s crucial to know your goal and use that to guide how your test is conducted. If you want to focus on open rates your variable should relate to that. If you’re trying to get more website visits, you should only have a single variable that relates to that and nothing else. When you do this, you can more confidently rely on the results.
Testing Too Soon
This might sound a bit confusing but bear with me. The more traffic you have, the bigger your audience is, the more people you can include in your test, and the more reliable your results will be.
This isn’t to say you shouldn’t test your content when you’re just getting started, but you can’t rely too heavily on the data you get. You’ll need to retest as your numbers grow. The other caveat is that testing too soon might be driven by a sense of desperation to see better numbers, which can skew your test. This sets you up to be impatient while the test is running, and you might fall into the trap of ending it too soon, leaving you with inconclusive or unusable data.
The best way to avoid this mistake is simply to be patient. Wait until your original content has had a chance to perform, then decide if there is room for improvement. Give yourself time to grow your audience and attract your ideal targets so the data is actually relevant to you. Chances are, with enough time, your campaigns will begin to pick up speed and you won’t need to run a test; if they don’t, you can then make the call.
Ready to Use A/B Testing to Improve Your Marketing Strategy?
There’s no doubt that A/B testing is a vital part of any successful marketing strategy, but it needs to be executed well. This means identifying your goal, your primary metric, your variables, and the tools you’ll use.
If you’ve properly planned your test using the steps outlined above, gathering and interpreting the results should be straightforward. Input the data into your calculator and decide if the difference is significant enough to make changes to your content.
If it is, take time to really look at the data and interpret your results. Then use your findings to give your marketing strategy a boost.
And before you know it, you and your business will be reaping the benefits of A/B testing.