If you work in advertising, you know about reach and frequency. Every day, you're looking for ways to reach your target audience with just the right frequency to make an impact. The concept of ‘effective frequency planning’ is at the heart of every campaign you run, whether you do it consciously or let your media buying algorithm do it for you.
On paper, it’s simple: reach and frequency are two sides of the same coin. They help you grow your customer base without driving them up the wall with too much repetition. But in practice, things get complicated real fast.
An impression on one channel (say, TV) doesn't necessarily have the same effect as an impression on a different channel (say, Instagram). As multi-touch attribution practitioners know, a consumer's response depends not just on the number of touchpoints with the brand, but also on their exact sequence along the journey, as well as their proximity to an upcoming shopping trip or purchase decision. To say nothing of people's demographic and lifestyle attributes, or their familiarity with your brand.
In recent years, third-party cookies, pixels and mobile identifiers have offered marketers some level of understanding (and, dare I say, control) over their campaigns' reach and frequency - at least in walled gardens like Facebook and on the open web. For a moment, the holy grail of effective frequency seemed within reach. Was it ever? And how will the death of the cookie (for real this time) affect your ability to measure reach and frequency in the future?
But first, let’s settle an old argument.
It’s time to give the three-hit theory a rest
It’s been more than 50 years since the three-hit theory first appeared, and what started out as a relatively innocent position paper by Herbert Krugman, a public opinion researcher at General Electric, produced the closest thing we’ve had to a holy war in the advertising industry.
The original idea had the merit of being relatively simple to understand: The first exposure elicits curiosity on the part of the consumer, the second an opportunity to gauge the ad’s relevance to their lives, and the third exposure is the first and only necessary reminder. “I stop at three,” Krugman explained, “because there is no such thing as a fourth exposure psychologically; rather fours, fives, etc., are repeats of the third exposure effect.”
But the three-hit theory has been accepted as gospel by the marketing community for too long. Even in today's fast-changing consumer marketplace and highly fragmented media landscape, many planners blindly use three exposures as their default setting. And while that’s happening, real-life testing stays on the back burner.
That’s got to stop, and the death of the cookie might be the perfect incentive for marketers to finally question their long-held assumptions regarding optimal frequency—and start figuring out what truly works best for their brand.
No theory, only practice
Every market is different, every product category triggers different behaviors, and every brand requires a different advertising treatment. Procter & Gamble, for instance, determined that online search queries were a key sales driver for its brands, and the company decided to divert massive funds from what it considered excess frequency on other channels to reach more consumers on Google. Others argue that in a world where attention is increasingly divided and more purchases are driven by emotion, 10+ exposures are now necessary for brands to make an impact.
If you’re confused, you’re in good company. WARC put out a call for papers on the topic a few years ago and came to the conclusion that “there simply isn’t a one-size-fits-all answer” to the question.
It’s time to stop looking for answers in other people’s homework. With cookies in limbo, marketers are now rolling up their sleeves and turning to data clean rooms for their measurement needs. They’re striking new data collaboration partnerships with media companies to access detailed exposure logs, online and offline (including TV), and matching those logs to their first-party outcome data (like new signups or sales transactions) securely and without infringing upon anyone’s privacy. That’s way more than was ever achieved with cookies alone.
The net result is a much clearer and fuller understanding of every piece of data that goes into the measurement puzzle. And with the right experimentation mindset in place, advertisers and their agencies can finally put their assumptions to the test and find a balance of reach and frequency that’s optimal for their brand across all channels, publishers and devices.
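To make that experimentation mindset concrete, here is a minimal sketch of the kind of analysis this enables once exposure logs have been matched to first-party outcomes at the user level. All data and names here are hypothetical; the idea is simply to bucket users by cross-channel exposure count and compare conversion rates per frequency bucket, rather than assuming three exposures is the magic number:

```python
from collections import Counter, defaultdict

# Hypothetical matched data: one row per (user, channel) ad exposure,
# plus the set of users who later converted (e.g. signed up or purchased).
exposures = [
    ("u1", "tv"), ("u1", "social"), ("u1", "social"),
    ("u2", "tv"),
    ("u3", "social"), ("u3", "tv"), ("u3", "social"), ("u3", "tv"),
    ("u4", "tv"), ("u4", "tv"),
]
converters = {"u1", "u3"}

# Total cross-channel exposure count per user.
freq = Counter(user for user, _ in exposures)

# Bucket users by frequency: frequency -> [users, conversions].
buckets = defaultdict(lambda: [0, 0])
for user, n in freq.items():
    buckets[n][0] += 1
    buckets[n][1] += int(user in converters)

# Conversion rate at each observed frequency level.
for n in sorted(buckets):
    users, convs = buckets[n]
    print(f"{n} exposures: {convs}/{users} converted ({convs / users:.0%})")
```

In practice you would run this over millions of matched records inside a clean room, segment by channel and audience, and look for the point where additional exposures stop lifting conversion, which is your brand's actual effective frequency rather than a borrowed rule of thumb.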
Interoperability is essential for effective reach and frequency today
In the wake of the cookie era, you can’t expect every media company to use the same ID scheme and provide an immediate connection to the way you identify your own first-party data. In fact, many publishers are currently making a point of adopting distinctive identity schemes to differentiate their audiences from their competitors’ and maximize their monetization potential. Complexity in the identity space is growing by the minute, and most experts (like Matthew Birkby, a recent guest on the InfoSum podcast) agree that the industry is unlikely to coalesce around a single solution anytime soon.
So you’re going to need a reliable system to match identities across datasets.
There are different ways to accomplish that feat, but most of them involve mapping tables, ID graphs, and the actual sharing of those tables and graphs between trading partners. In today’s environment, interoperability is essential for effective cross-platform measurement, but it must not come at the expense of users’ privacy.
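As a simplified illustration of the matching step, here is a sketch of two parties intersecting datasets on a salted hash of a shared key, so raw identifiers are never exchanged. This is a toy example with hypothetical data, not how any particular clean room works; production systems layer on far stronger privacy controls than plain salted hashing:

```python
import hashlib

def hashed_key(email: str, salt: str) -> str:
    """Normalize and hash an identifier so raw emails are never shared."""
    normalized = email.strip().lower()
    return hashlib.sha256((salt + normalized).encode()).hexdigest()

# A salt agreed between partners out of band (hypothetical).
SALT = "campaign-shared-salt"

# Publisher-side exposure counts, keyed by hashed email.
publisher = {hashed_key(e, SALT): views
             for e, views in [("Ann@example.com", 3), ("bob@example.com", 5)]}

# Advertiser's first-party customer list, hashed with the same salt.
advertiser = {hashed_key(e, SALT) for e in ["ann@example.com", "carol@example.com"]}

# The intersection yields matched users without either side
# revealing the raw identifiers behind its dataset.
matched = {h: views for h, views in publisher.items() if h in advertiser}
print(f"matched users: {len(matched)}")
```

Note how normalization matters: "Ann@example.com" and "ann@example.com" only match because both sides lowercase before hashing. Getting that kind of detail consistent across partners with different ID schemes is exactly the interoperability problem described above.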
At InfoSum, we anticipated the need for privacy-safe ID interoperability early on, and created a solution we call Identity Bridge to perform ID matching across multiple datasets without moving data around, or modifying any of the partners’ original datasets. We can’t wait to show you how to use it to take your reach and frequency game to the next level.