Marketers can iterate on advertising and other paid media by running experiments and then mining the data to identify the most effective path to their audience.
Episode Show Notes
Introduction: Continuous Improvement in Paid Media
(0:00 – 2:13) Introduction to Iterative Marketing Podcast: Welcome to the Iterative Marketing Podcast, where, each week, hosts Steve Robinson and Elizabeth Earin provide marketers and entrepreneurs with actionable ideas, techniques, and examples to improve marketing results.
This episode centers on continuous improvement in paid media. We’ll cover experimentation, media that is and isn’t suitable for testing, keys to success, and how to get the most out of your experiments.
The resources discussed on the show can be found at brilliantmetrics.com, which includes a blog and a LinkedIn group for community interaction.
Two Strategies for Paid Media
(2:13 – 3:25) Direct Response Media vs Awareness Media: Dividing your media into these two categories makes it easier to understand and to formulate strategy:
One: Direct Response Media
- Direct response ads ask the audience to take action.
- These ads work well for experimentation since different aspects of the ads can be adjusted while keeping the creative constant.
- Testing reveals which approaches achieve the highest response rate, which in turn informs how you communicate with consumers going forward.
Two: Awareness Media
- Awareness or brand ads aim to increase recognition of a brand or message and don’t typically ask the audience to take immediate action.
- Though awareness ads are not as directly measurable as direct response ads, the lessons learned from direct response experiments can be applied to awareness ads to boost their effectiveness.
(3:25 – 4:34) Applying Experimentation to Different Advertising Methods: Successful experimentation is not limited to digital advertising. Traditional advertising can also be tested, provided it has a strong call to action and a method of measuring that action, such as a phone number or a vanity URL.
Experimentation Strategies
(4:34 – 10:28) Ad Placements: Direct placement ads, where you negotiate a specific spot in a specific publication, offer a level of certainty about where your ad will appear and who it will reach. Programmatic advertising, on the other hand, can use algorithms to optimize ad placement in real-time across a wide range of sites, potentially reaching the same audience at a lower cost. By comparing the performance of these two methods, you can determine which approach is more effective for your specific goals.
Targeting Methods: Different targeting methods can also be tested. For instance, contextual targeting, which places ads based on the content of the page, can be compared with behavioral targeting, which uses information about the viewer’s behavior to place ads. Lookalike audiences, which target people who are similar to your existing customers, and site whitelisting, where ads are only placed on pre-approved sites, can also be tested.
Creative Sizes and Ad Formats: The size and format of your ad can also have a significant impact on its effectiveness. For example, some sizes and formats may be more effective for mobile viewers, while others may be better suited to desktop viewers. Similarly, certain sizes and formats may perform better in certain industries or contexts. By testing different sizes and formats, you can identify which ones are most effective for your specific goals.
Animation vs. Static Ads: Another worthwhile experiment is testing animated ads against static ads. Animated ads tend to be more engaging and can convey more information, but they can also be more expensive to produce. Static ads are simpler, cheaper, and faster to produce, and can carry more direct messaging. Testing both formats reveals which is most effective for your specific goals and audiences, supporting more informed decisions about ad budget allocation and design.
(10:28 – 12:12) Channel Experimentation: Finally, it’s possible to experiment across different channels. By comparing the performance of your ads on different platforms, such as Twitter versus Facebook or display ads versus video ads, you can determine where your ads are most effective and where your advertising budget is best spent.
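As a minimal sketch of how such a cross-channel comparison might be tallied in Python (the channels and figures below are hypothetical, not from the episode):

```python
# Minimal sketch: tallying hypothetical results from a cross-channel
# experiment by cost per conversion. All figures are illustrative.

campaigns = {
    "facebook_display": {"spend": 5000.00, "conversions": 125},
    "twitter_display":  {"spend": 5000.00, "conversions": 90},
    "video_preroll":    {"spend": 5000.00, "conversions": 110},
}

for channel, stats in campaigns.items():
    cost_per_conversion = stats["spend"] / stats["conversions"]
    print(f"{channel}: ${cost_per_conversion:.2f} per conversion")

# The lowest cost per conversion suggests where budget works hardest,
# assuming comparable audiences, frequency, and creative across channels.
```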
Setting Up Experiments in Paid Media
(12:12 – 14:25) The Distinction Between Testing Creative and Media: When testing creative content, the A/B test (two versions, A and B, distinguished by a single change) is often used, because the single change gives clear insight into which variable affected the outcome. However, when testing media, meaning the ways you reach your target audience, A/B tests may be less practical. In this context, multivariate tests, where you test multiple variables simultaneously, can be more effective.
Utilizing Multivariate Testing for Media: When dealing with multiple variables in media experiments, such as different methods of targeting or different audience segments, multivariate testing can be more practical and efficient than A/B testing. In a client scenario where there were multiple third-party audiences to test, switching from A/B testing to multivariate testing enabled faster and more effective identification of the most beneficial audience lists.
Ensuring Statistical Significance in Multivariate Testing: A key challenge in multivariate testing is ensuring each variant has enough data to achieve statistical significance. If the audience size is not big enough to provide statistically significant results, one solution can be to group similar variables together to increase the overall data volume.
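As a rough illustration of that significance check, here is a minimal two-proportion z-test in Python. The audience lists and counts are hypothetical, and this is one standard way to test significance, not necessarily the method used on the show:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant A's response rate
    significantly different from variant B's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts for two audience lists in a multivariate media test.
z = two_proportion_z(conv_a=48, n_a=12000, conv_b=30, n_b=11500)
print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the 95% level

# Here z is roughly 1.85, short of 1.96, so these lists would be
# candidates for grouping with similar lists to gain data volume.
```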
Charity Outreach
(14:25 – 15:12) Charity Break: Lighthouse Youth Center
Ensuring the Success of Your Media Experiments
(15:12 – 18:01) Potential Impact of Time: Running different versions of an experiment at different times can skew the results due to the mere exposure effect: repeated exposure to a message over time increases the viewer’s trust in, and response to, that message.
Importance of Concurrent Experiments: Running your experiments concurrently helps ensure that no single group sees the message more than another, helping to control variables and maintain consistency.
The Effect of Timing on Response Rates: Various temporal factors can impact your response rates, such as fiscal year ends, holiday periods, days of the week, and summer vacations. Accounting for these in your experiment design is crucial to avoid skewed results.
Ideal Experiment Duration: Depending on industry specifics and other factors, a duration of two to eight weeks has proven effective for media experiments.
(18:01 – 18:51) Impact of Frequency on Response Rates: If you are comparing two different channels, such as direct versus programmatic placements, the frequency with which each channel reaches the audience can affect response rates. High-frequency exposure might lead to higher response rates simply due to increased exposure, not because the channel is inherently more effective. To ensure a fair comparison and maintain the validity of your experiment, it is worth setting frequency caps where possible. This ensures that your ads are delivered at the same frequency to the audience, regardless of the medium used, helping to control for this variable in your experiment.
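One way to check for this effect in the data, sketched below with hypothetical impression records, is to stratify results by exposure frequency so channels are only compared against each other within the same frequency bucket (a supplement to frequency caps, not a replacement):

```python
from collections import defaultdict

# Hypothetical records: (channel, times this user saw the ad, responded?).
impressions = [
    ("direct", 2, True), ("direct", 5, False), ("direct", 2, False),
    ("programmatic", 2, True), ("programmatic", 2, False), ("programmatic", 5, True),
]

# (channel, frequency) -> [responses, total users]
buckets = defaultdict(lambda: [0, 0])
for channel, freq, responded in impressions:
    buckets[(channel, freq)][0] += int(responded)
    buckets[(channel, freq)][1] += 1

# Compare channels only within matching frequency buckets.
for (channel, freq), (responses, total) in sorted(buckets.items()):
    print(f"{channel} @ frequency {freq}: {responses}/{total} responded")
```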
(18:51 – 19:57) Challenges of Comparing Performance of Different Content Over Time: The landscape of ad exposure changes over time, making direct comparisons of different content challenging. Increased brand awareness from past campaigns can lead to higher response rates for new content, and other ads running between campaigns can influence the audience’s response. Because of these factors, direct comparisons of performance metrics between different pieces of content can be misleading.
Market conditions are another significant factor that can affect response rates. Different times of the year (summer vs. fall) can influence how people interact with ads. Seasonality, changing trends, and varying consumer interests can impact the success of an ad campaign. These temporal elements must be considered when assessing the performance of different campaigns.
(19:57 – 22:00) Being Mindful of Time, Timing, and Frequency: Be mindful of the role time plays in your media experiments, including the timing and frequency of ad displays. Control these elements as much as possible to ensure the validity of your experiments.
Choosing the Right Metrics: Choose the correct metrics for determining the success of your media campaigns. Don’t only optimize based on click rate. The action the consumer takes after the click also matters. The nature of the audience interaction with the ads, including factors like conversion rates and engagement rates, can tell a more complete story.
Mobile vs. Desktop: Mobile click-through rates are typically higher than desktop due to factors such as ‘fat thumb’ clicks. Looking deeper into the data, including examining engagement and conversion rates, can reveal that not all clicks lead to meaningful interactions. The higher click-through rate on mobile devices may not necessarily translate to higher conversion rates or meaningful engagements.
Looking Beyond Surface Data: Don’t just look at surface-level metrics; dig deeper into the data. High click rates don’t necessarily mean that your campaign is successful if they don’t lead to conversions or help your business achieve its goals.
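A minimal sketch of that deeper look, using hypothetical mobile and desktop figures:

```python
# Minimal sketch: why click-through rate alone can mislead.
# Figures are hypothetical, illustrating the mobile 'fat thumb' pattern.

segments = {
    "mobile":  {"impressions": 100_000, "clicks": 900, "conversions": 9},
    "desktop": {"impressions": 100_000, "clicks": 400, "conversions": 20},
}

for device, s in segments.items():
    ctr = s["clicks"] / s["impressions"]   # click-through rate
    cvr = s["conversions"] / s["clicks"]   # conversions per click
    print(f"{device}: CTR {ctr:.2%}, post-click conversion rate {cvr:.2%}")

# Mobile "wins" on CTR (0.90% vs 0.40%) but converts far worse per click
# (1.00% vs 5.00%), so optimizing on clicks alone would favor the weaker
# segment.
```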
Applying Experiment Learnings
(22:00 – 24:18) Brand and Direct Response:
- General Application: You can apply what you learn from direct response experiments to your brand awareness ads. This is because these ad types typically target the same audiences and use similar channels. The insights gathered can help refine the targeting for both types of ads.
- Buyer’s Journey: Adapting your ad strategy to where your potential customer is in their buyer’s journey can be effective. You can target different ads and messages depending on their progress through the journey, including brand awareness ads without a call to action and direct response ads with specific calls to action.
- Small Audiences: For smaller audiences in the later stages of the buyer’s journey, you can apply what you learn from the broader audiences in the early stages of the journey. This strategy is necessary when the later stage audiences are too small to gather enough data for isolated optimization.
(24:18 – 26:02) Other Marketing Efforts:
- Updating Personas: The insights gained from media targeting experiments can help you refine your buyer personas. This updated understanding of your target audience can improve future targeting efforts.
- Influencing Other Marketing Efforts: Information learned from media experiments can also guide other marketing activities, like PR efforts. Knowing which publications effectively reach your target audience can direct your PR strategy.
- Central Repository: It’s important to store the information gathered from these experiments in a central location, like within your detailed personas. This can be used to understand media consumption habits and preferences of your personas.
(26:02 – 27:07) Key Takeaways:
- Conduct experiments with Direct Response ads to understand what works best for your audience.
- Do not rely heavily on experimentation for brand awareness ads due to their different objectives.
- Apply lessons learned from direct response experiments to awareness advertising.
- Consider experimenting across different channels, placements, formats, and targeting methods within the same channel.
- When presented with multiple options to reach your audience, always run an experiment.
- Choose between A/B testing and multivariate testing based on your campaign’s complexity and available resources.
- Document and update your personas with identified media consumption habits from the experiments.
- Use the updated personas to improve your ad targeting and other marketing efforts, such as PR.
- Always aim to take what you learn from your experiments and apply it to future strategies.
Join Us Next Time
(27:07 – 28:31) Conclusion: In this episode, we discussed the importance and process of running media experiments to maximize the effectiveness of your advertising efforts. Regular media experiments help to continually refine your marketing strategy, ensuring that your ads are as effective as possible. By updating your personas with insights from these experiments, you’re able to create a more targeted and effective marketing strategy across all your efforts.
Next episode, we will be taking an in-depth look at the Do state, where buyers are committed to purchasing. We’ll discuss goals, content strategy, targeting, and measurement techniques relevant to this important buying stage.
Have a great week and we’ll see you next time. This concludes this week’s episode. For notes and links to resources discussed on the show, sign up for the Brilliant Metrics newsletter.
Iterative Marketing is a part of the Brilliant Metrics organization. If you would like more information on the marketing services provided by the expert team at Brilliant Metrics, reach out today for a free discovery call.
The Iterative Marketing Podcast, a production of Brilliant Metrics, ran from February 2016 to September 2017. Music by SeaStock Audio.
Learn more about Iterative Marketing and listen to other episodes on Apple Podcasts, YouTube, Stitcher, and SoundCloud.