Get ready to brush up on your least favorite class from high school. Statistical methods for marketers are easier than you think, and they are one of our most valuable assets in experimentation.
Episode Show Notes
Introduction To Statistics in Marketing
(0:00 – 3:38) Introduction To Iterative Marketing Podcast: Welcome to the Iterative Marketing Podcast, where, each week, hosts Steve Robinson and Elizabeth Earin provide marketers and entrepreneurs with actionable ideas, techniques, and examples to improve marketing results.
The topic of this episode is the importance of statistics in marketing, focusing on two methods: confidence (or statistical significance) and sample size. These techniques are crucial for successful marketing experiments and ensuring that the results are actionable. Ignoring these methods can lead to invalidated experiments or misleading results, which can negatively impact marketing efforts.
The resources discussed on the show can be found at brilliantmetrics.com, which includes a blog and a LinkedIn group for community interaction.
Statistical Confidence in Marketing
(3:38 – 4:37) Definition of Confidence in Marketing: In marketing experiments, confidence measures the likelihood of obtaining the same results if the experiment were repeated. This is essential when testing various aspects of marketing campaigns, such as the effectiveness of different ad creatives, subject lines in email campaigns, or calls-to-action on landing pages. A high confidence level indicates that the observed results are likely to be consistent and not just a random occurrence.
(4:37 – 6:20) Confidence Levels and Goals: Confidence levels are expressed as percentages. Marketers typically aim for a confidence level between 90% and 100%. A common benchmark is 95%, although the desired level may differ depending on the organization’s priorities. Choosing a lower confidence level may speed up the experimentation process, but it may also increase the risk of making incorrect decisions. Conversely, a higher confidence level may provide more certainty, but it may slow down the rate of iteration.
(6:20 – 6:48) Importance of Setting Confidence Goals: Establishing a confidence goal before starting an experiment is crucial to ensure that the results are statistically significant and not due to chance. By setting a predefined goal, marketers can avoid the temptation to declare a winner prematurely or to settle for a lower confidence level that could lead to inaccurate conclusions and ineffective marketing decisions.
(6:48 – 7:33) Monitoring Confidence Levels Throughout Experiments: It is important to consistently monitor and calculate confidence levels during marketing experiments. This practice helps maintain a high level of certainty in the results and reduces the likelihood of drawing incorrect conclusions based on insufficient or unreliable data. By keeping an eye on confidence levels, marketers can ensure that their experiments provide actionable insights and lead to more informed marketing decisions.
(7:33 – 8:45) Calculating Confidence Levels with Online Tools: Calculating confidence levels can be complex, as the formulas involved are mathematically challenging. However, there are numerous statistical confidence calculators available online that can help simplify the process. It is essential to pick a tool you are comfortable with and remain consistent with its use, as different calculators may yield slightly varying results due to the complexity of the underlying math and the existence of multiple formulas.
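For readers who want to see what those online tools are doing under the hood, most of them are based on a two-proportion z-test. The sketch below is a minimal version using only the Python standard library; the counts are made-up illustrative numbers, and real calculators may use slightly different formulas (one-tailed tests, sequential corrections), which is why different tools yield slightly different results.

```python
import math

def confidence(control_trials, control_successes, variant_trials, variant_successes):
    """Two-sided two-proportion z-test; returns confidence = 1 - p-value."""
    p1 = control_successes / control_trials
    p2 = variant_successes / variant_trials
    # Pooled success rate under the null hypothesis (no real difference)
    pooled = (control_successes + variant_successes) / (control_trials + variant_trials)
    se = math.sqrt(pooled * (1 - pooled) * (1 / control_trials + 1 / variant_trials))
    z = abs(p2 - p1) / se
    # Two-tailed p-value from the standard normal distribution
    p_value = math.erfc(z / math.sqrt(2))
    return 1 - p_value

# Hypothetical example: 2.0% vs 2.6% click-through over 5,000 trials each
print(f"{confidence(5000, 100, 5000, 130):.1%}")  # ≈ 95.5%, just past a 95% goal
```

Note how sensitive the result is: a handful more or fewer conversions would drop this example back below the 95% goal, which is exactly why the episode stresses setting the goal before the test starts.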
Managing Experiments
(8:45 – 13:17) Hitting Confidence Goal by Increasing Trials
- To achieve the desired confidence level, increase the number of trials by running the experiment for a longer duration.
- Running the experiment too long can introduce external factors that may affect the results, so strike a balance between increasing trials and avoiding external influences.
Recommended Experiment Duration
- Ideally, an experiment should run between 2 and 8 weeks to achieve reliable results.
- Running the experiment too long may lead to data contamination as users clear cookies or switch devices. Keeping the experiment duration appropriate helps maintain data integrity.
Potential Issues with Short Experiments
- Reaching the confidence level too quickly (e.g., within a few days) may result in unreliable results due to the lack of a representative sample or the presence of anomalies during the test period.
- To obtain accurate results, the experiment should cover each day of the week at least twice. This ensures that any day-of-the-week effects are accounted for, resulting in more reliable conclusions.
Importance of Time Period in Experiments
- Special time periods, such as holidays or unique events, can skew experiment results. Be mindful of these factors when analyzing data.
- To make informed decisions based on the experiment, ensure the data is representative and reliable. This may involve extending the experiment duration or accounting for any special circumstances that could impact the results.
Charity Outreach
(13:17 – 14:01) Charity Break: Wildlife Conservation Society
Calculating Sample Size for Marketing Experiments
(14:01 – 14:40) Sample Size Definition: Sample size refers to the estimated size of an audience needed to achieve a statistically significant or high confidence result in a marketing experiment. Although discussed after confidence levels, sample size calculation is performed before running the experiment to ensure reliable and actionable insights from the data collected.
(14:40 – 18:21) Information Required for Sample Size Calculation
- Past success rate: Determine the historical success rate for the control in the experiment. This could be the conversion rate for a landing page, click-through rate for a banner ad, or open rate for an email campaign.
- Smallest detectable change: Decide on the minimum change that would be considered significant for your experiment. This value will influence the required sample size.
Understanding the Impact of Sample Size
- Larger sample sizes: Improve the ability to measure small changes accurately and minimize the effect of outliers on the results.
- Smaller sample sizes: May be more prone to distortion by outliers, leading to less reliable and actionable insights.
Example: Calculating Sample Size for Detecting a 10% Change in Click-Through Rate
- Control: Assume a 2% click-through rate based on historical data.
- Desired change: Aim for a 10% or greater improvement in the click-through rate.
- Test validity: To achieve statistically valid results, the variant must achieve a click-through rate of 2.2% or higher (2% × 1.10).
Selecting the Desired Change
- Define the desired change based on the degree of improvement or alteration necessary for the experiment to provide valuable insights.
- Adjust the desired change to manage the required sample size, ensuring that the experiment yields statistically valid results.
Importance of Choosing the Right Sample Size
A carefully selected sample size is crucial for obtaining reliable results from your marketing experiments. Balancing the desired change and the required sample size will help ensure that your experiments yield actionable insights while maintaining statistical validity.
Combining Desired Change and Success Rate
- Calculate the sample size by considering the desired change (e.g., 10%, 20%, 50% lift) and the current success rate (e.g., conversion rate, click-through rate).
- These two factors will help determine the appropriate sample size for the experiment.
Example: Calculating Sample Size for a 10% Lift
- Success rate: 2% conversion rate
- Desired change: 10% lift
- Result: 95,000 trials are needed for reasonable confidence in the experiment’s validity
Defining Trials
Trials represent the units being measured, such as sessions, clicks, or impressions, depending on the goal of the experiment.
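As a sanity check on calculator output, the classical fixed-horizon formula for a two-proportion test can be sketched in a few lines. This is a simplified approximation that assumes a two-sided 95% confidence goal and 80% statistical power (a parameter the episode does not discuss); Optimizely's calculator uses a different, sequential method, so its numbers will not match this exactly.

```python
import math
from statistics import NormalDist

def sample_size(base_rate, lift, alpha=0.05, power=0.80):
    """Trials needed *per variation* for a two-sided two-proportion test.

    base_rate: control success rate (e.g. 0.02 for a 2% click-through rate)
    lift:      relative minimum detectable effect (e.g. 0.10 for a 10% lift)
    """
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    # Standard normal quantiles for the chosen confidence level and power
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    pbar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * pbar * (1 - pbar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# The episode's example: 2% success rate, 10% minimum detectable lift
print(sample_size(0.02, 0.10))  # on the order of 80,000 trials per variation
```

The key intuition survives any formula choice: the required sample size grows with the square of the shrinking detectable change, which is why detecting a 10% lift on a 2% rate demands tens of thousands of trials.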
Using a Sample Size Calculator
(18:21 – 20:59) Finding a Reliable Calculator
- Optimizely offers a reliable sample size calculator that can be used online.
- Other sample size calculators are available, but Optimizely is a trusted option.
Preparing for the Test
- Use the sample size calculator before running the experiment.
- Enter the two required values: conversion rate (from past data) and minimum detectable effect (10-20% if unsure).
- The calculator will provide the necessary sample size (number of trials or attempts).
Assessing the Feasibility of the Experiment
- Compare the calculated sample size with the expected traffic to determine if the experiment is realistic and worthwhile.
- Consider the time needed to achieve the required sample size based on the number of daily impressions or sessions.
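A quick way to run that feasibility check is to divide the total trials the test needs by the traffic expected per day. The numbers below are hypothetical placeholders; the 2-to-8-week window is the duration guideline from earlier in the episode.

```python
# Hypothetical feasibility check: how long until the experiment finishes?
sample_per_variation = 80_000   # from a sample size calculator
variations = 2                  # control + one variant
daily_trials = 2_000            # expected sessions/impressions per day

days_needed = sample_per_variation * variations / daily_trials
print(f"~{days_needed:.0f} days (~{days_needed / 7:.1f} weeks)")
# 160,000 / 2,000 = 80 days — well past the 8-week guideline,
# so either accept a larger minimum detectable change or boost traffic
```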
Sample Size as a Guideline
- The sample size serves as a guideline rather than an exact requirement.
- If the calculated sample size is close to the expected traffic, it may still be worthwhile to run the experiment.
- Results with a wide margin between variations can sometimes reach confidence with a slightly smaller sample size.
- If the calculated sample size is far from the expected traffic, it may not be worth running the experiment.
Handling Low Impressions or Sessions in Marketing Experiments
(20:59 – 22:39) Utilize Paid Media
- If the experiment lacks enough impressions or sessions, consider investing in paid media to boost the volume.
- The insights gained from a well-executed experiment can be worth the investment in paid media.
Focus on Insightful Tests
- Ensure that the experiments being conducted focus on valuable insights rather than superficial aspects like button color.
- Meaningful insights can help improve the effectiveness of marketing campaigns.
Accepting Limitations
- In cases where paid media is not a viable option, it may be necessary to accept the limitations and avoid running experiments with insufficient impressions or sessions.
- Running experiments without enough data can waste time and resources without providing valid results.
The Risks of Ignoring Sample Size and Confidence Calculators
(22:39 – 23:58) Wasted resources and time: If you don’t use a sample size calculator before running your experiment, you may end up investing time and resources in an experiment that is unlikely to produce meaningful results. This may occur because you’re not aware of the minimum sample size required for a statistically significant outcome.
Opportunity cost: Due to the limited number of experiments you can run concurrently on the same landing page, ad, or other marketing elements, ignoring sample size calculations can lead to occupying valuable experimental slots with potentially unsuccessful tests. This prevents you from conducting other experiments that could have yielded more useful insights.
Unreliable results: Without a sufficient sample size, your experiment’s results may be skewed by outliers or random chance, leading to incorrect conclusions and potentially detrimental decisions for your marketing strategy.
(23:58 – 24:55) Making decisions based on fluke data: Without using a confidence calculator, you may be tempted to base your decisions on individual data points or trends that appear significant but are actually just random fluctuations. This can lead to misguided actions and misplaced focus on aspects that may not have a real impact on your marketing performance.
Introducing invalid insights: By not testing for confidence levels, you risk incorporating unreliable or invalid insights into your marketing strategy. This can hinder your progress, as you may end up chasing after false leads and making adjustments that do not improve your marketing results or even harm them.
Join Us Next Time
(24:55 – 26:07) Conclusion: This week we discussed the statistical methods that make marketing experiments valuable. Join us next week as we jump back into buyer journey states with a deep dive into the See state.
Have a great week and we’ll see you next time. This concludes this week’s episode. For notes and links to resources discussed on the show, sign up to the Brilliant Metrics newsletter.
Iterative Marketing is a part of the Brilliant Metrics organization. If you would like more information on the marketing services provided by the expert team at Brilliant Metrics, reach out today for a free discovery call.
The Iterative Marketing Podcast, a production of Brilliant Metrics, ran from February 2016 to September 2017. Music by SeaStock Audio.
Learn more about Iterative Marketing and listen to other episodes on Apple Podcasts, YouTube, Stitcher, and SoundCloud.