Iterative Marketing Podcast Episode 46: How To Run An Effective A-B Test

A/B testing is the core of experimentation. With the right execution, it not only provides uplift in click-through rate and conversions but also serves as an audience insight generator. This podcast explores how six things — sample size, random sample, controls, duration, statistical confidence, and testing for insight — can make an A/B test effective and beneficial to all departments in an organization.

In this episode, we discuss how to run an A/B test…


What is an A/B Test (2:59 – 4:29)

  • The testing of two different versions of the same content to determine which results in a better outcome
  • A/B tests are important to Iterative Marketing because they are the core of experimentation
  • Can apply to any medium (print, banner ads, direct mail, email, etc.)
  • Tools for A/B testing (Optimizely, Convert, Google Optimize) are becoming more user-friendly. Many testing tools are embedded in platforms like Marketo and Pardot.

Why A/B Testing Is Important (4:30 – 6:06)

  • Bases decisions about how to allocate marketing resources on data rather than gut feelings or personal preference
  • Helps multiple departments find out definitively what the audience prefers

Six Things That Make an A/B Test Work (6:07 – 7:06)

  • Sample size
  • Random sample
  • Controls
  • Duration
  • Statistical confidence
  • Testing for insight

1) Sample Size (7:07 – 9:42): The number of times you need to present version A or version B to determine a clear winner

  • Sample size calculators can help you determine how big an audience you need to achieve 90% or 95% confidence.
  • Do not attempt a test if you are not going to have a big enough sample. Determine this BEFORE you start the A/B test so you do not waste resources; a rough version of the calculation is sketched below.
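For illustration, here is a minimal sketch (in Python) of the kind of estimate those calculators make when comparing two conversion rates. The baseline rate, expected lift, and 80% power figure are illustrative assumptions, not numbers from the episode.

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, expected_rate, confidence=0.95, power=0.80):
    """Approximate impressions needed per variant for a two-sided test of proportions."""
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # e.g. 1.96 at 95% confidence
    z_beta = NormalDist().inv_cdf(power)                       # e.g. 0.84 at 80% power
    variance = baseline_rate * (1 - baseline_rate) + expected_rate * (1 - expected_rate)
    effect = abs(expected_rate - baseline_rate)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Hypothetical example: 3% baseline conversion rate, hoping to detect a lift to 4%.
print(sample_size_per_variant(0.03, 0.04))  # about 5,300 impressions per variant
```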

2) Random Sample (9:43 – 11:42): The sample must not only be large enough; it must also be split randomly between version A and version B.

  • Many testing tools handle the random split for us; a simple sketch of the idea follows below
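As a rough illustration of what those tools do under the hood, here is a minimal sketch of random-but-sticky variant assignment. The user ID, experiment name, and A/B labels are hypothetical, not details from the episode.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "episode-46-test") -> str:
    """Bucket a visitor into A or B; hashing keeps the split random but sticky per visitor."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("visitor-42"))  # the same visitor always sees the same version
```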

3) Control (12:32 – 17:10): The efforts put in place to make sure the thing being tested is the only thing that’s different between the experience of those getting version A and those getting version B.

  • Test only one variable at a time so you know which change is producing the result
  • Design version A and version B as exact replicas in layout, font size, color, etc., except for the one variable being changed, so the thing being tested is isolated
  • Run version A and version B at the same time so that breaking news, weather, or other outside factors do not change the outcome of the test
  • Make sure your audience has not seen either version before the test starts

4) Duration (17:11 – 18:49): How long to run an A/B test

  • In our experience, do not run a test longer than 90 days because too many factors may impact the result
  • If the test relies on browser cookies, keep in mind that cookies are not reliable for more than a few weeks
  • A test should be run long enough to factor in various business cycles
  • Ex: Running a test Thurs-Mon favors weekend habits, while running it Mon-Thurs favors weekday habits.

5) Statistical Confidence (18:50 – 21:35): A statistical calculation that helps us determine whether an A/B test result is repeatable or simply the result of chance

  • We have an easy-to-use A/B confidence calculator on our website. Simply plug in your impressions or sessions along with your clicks or conversions to find the statistical significance; a sketch of the math behind such a calculator appears below.
  • Usually expressed as a percentage, representing the probability that the difference you observed is real rather than chance.
  • Marketers usually strive for 95% confidence, although we have taken the results of a test with 90% confidence as usable information, or as a good working hypothesis until a better test can be run.
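For illustration, here is a minimal sketch of the kind of calculation a confidence calculator runs: a two-sided z-test comparing two conversion rates. The impression and conversion counts in the example are illustrative assumptions, not data from the episode.

```python
import math
from statistics import NormalDist

def ab_confidence(impressions_a, conversions_a, impressions_b, conversions_b):
    """Return the confidence (%) that versions A and B truly perform differently."""
    rate_a = conversions_a / impressions_a
    rate_b = conversions_b / impressions_b
    pooled = (conversions_a + conversions_b) / (impressions_a + impressions_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
    z = (rate_b - rate_a) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided z-test of proportions
    return (1 - p_value) * 100

# Hypothetical example: A converts 200 of 10,000 impressions; B converts 245 of 10,000.
print(f"{ab_confidence(10_000, 200, 10_000, 245):.1f}% confidence")  # roughly 97%
```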

6) Testing for Insight (21:36 – 26:12): Learning more about our audience beyond gaining an increase in click-through rate or conversions.

  • The best A/B tests test the psychographics of an audience segment to gain insight that can be applied to multiple departments in an organization.
  • To get started, brainstorm a hypothesis for how you expect your audience to act and why. Then, build an A/B test to validate or invalidate that hypothesis.
    • Ex: A bad hypothesis would be — “The headline, ‘Don’t make these three massive mistakes’ will result in more conversions than the headline, ‘Use these three tips to amp-up your results.’”
    • This hypothesis is not audience-specific; it applies only to this one piece of content, so the result teaches us little about the audience.
    • Ex: A good hypothesis would be — “Mary (our persona) will be more likely to convert when presented with an offer that limits her risk because Mary prefers avoiding risk over new opportunity.”

For more information on the charity in this episode, please visit American Foundation for Suicide Prevention.

The Iterative Marketing Podcast, a production of Brilliant Metrics, ran from February 2016 to September 2017. Music by SeaStock Audio.

Learn more about Iterative Marketing and listen to other episodes on Apple Podcasts, YouTube, Stitcher, SoundCloud and Google Play.
