MediaPost TVBlog Interview — The 3R’s of Advertising Effectiveness

December 11, 2012

The following is a repost of David Goetzl’s November 27 TVBlog post:


There’s no sense for advertisers to ever be fully satisfied with the effectiveness of their messaging and tactics. So, they’ll always be in search of more data and the finish line will keep moving.

But here’s at least one enticing landing spot: a gauge of how many consumers saw an ad on one of four screens (TV, PC, tablet, smartphone) and what it prompted them to do (or not) in real time — with the data coming in an easily digestible form, ripe for swift action.

David Goetzl Interview on the 3R’s of Ad Effectiveness

Randall Beard says Nielsen’s on that path with its “3R framework,” an umbrella term for its efforts to measure how reach and resonance lead to reaction. (It could be called 4Rs with an equation where reach + resonance + reaction = revenue.)

Helping in the kitchen are the 3Vs of Big Data, where there’s more volume and variety of information coming at greater velocity.

“There’s way more data, way faster that’s coming at advertisers and agencies,” said Beard, Nielsen’s global head of advertiser solutions. “And one of the big challenges is to have a simplified (platform) that brings it all together in a way that they can easily operationalize it.”

Beard will lead a webinar next week with details on how Nielsen envisions opportunities in a 3R playing field. In advance, he took some time to offer some thoughts:

–In the “reaction” area, the Nielsen Catalina Solutions service looks to connect media consumption with purchase behavior. There are multiple research providers looking in that direction with TV advertising. What makes the offering different?

“The audience data is from the Nielsen people meter data set — supplemented by set-top-box data,” he said. “Most of the other players in the space are just simply using set-top-box data … The second thing is that we’re cross-platform, so we have this service not only with TV, but with online, with mobile, with print. So you can identify the most responsive buyer behavior group and then execute that across platforms.”

–If the same ad runs on live TV and online, which is more effective?

The data is not for the exact same ad on both platforms, but 2011 research found that “breakthrough” — the percentage of consumers remembering an ad — for 15- and 30-second spots was about 50% higher on online platforms than on TV. Hypotheses Beard offered include online platforms offering more of a lean-forward experience and usually a lighter ad load.

–Nielsen, of course, doesn’t determine what the currency is in a particular market. But can any of its “reaction” tools – Nielsen Catalina, Buyer Insights – offer the basis for one should advertisers want to trade on the data?

Beard indicated Nielsen believes it plays more of an advisory role, but there are opportunities for sort of one-off deals.

“We’re trying to bring data to the advertisers, agencies and media companies that they can use to be smarter about the way people plan, buy, execute and ultimately optimize the advertising,” he said.

He said that, working with NBC, Nielsen has found the same ads have more “resonance” in its Olympic programming than when they run elsewhere, and NBC has used that to demonstrate effectiveness in the sales process.

–One argument networks make is there is value in ads viewed as they zip by in fast-forward mode via DVRs. Logos might be seen or there may be some reinforcement if a viewer has seen the ad before in full. Has Nielsen developed any insight here through its research?

Not discretely. But it has found that ad recall is 30% lower for time-shifted viewing — whether an ad is skipped or not with a DVR — versus live TV.

“If you know that there’s a difference there, you can certainly assume that at least some part of a lower score in a DVR’d program is because of fast-forwarding,” Beard said. “How much? Couldn’t say.”

–One coveted metric in the “reaction” area would be: did an ad prompt a purchase the next day? (Helpful to consumer packaged goods and telecom marketers, among others.) Can a viewing-purchase link be available in a sort of overnight fashion à la the cornerstone Nielsen TV ratings?

Not yet. It takes time to connect the dots.

But in a “resonance” sphere – both with TV and digital ads – results come in much faster.

–But clearly the quicker the better – especially if there’s an emphasis on providing real-time insight to allow adjustments.

That’s a “big opportunity,” Beard said, across all platforms. That would give an advertiser with, say, three ads running a chance to determine certain effectiveness gauges for each and do some editing mid-flight.

That not only can improve “resonance,” but save money.

If a 15-second spot is doing just as well as a 30-second one, why stick with the longer spot?

“Why spend money on 30s?” Beard said. “Move all your spending to 15s … there’s lots of opportunities for advertisers, in particular, to measure ‘reach’ and ‘resonance’ in as close to real time as possible and then make smart choices about how they allocate their spending, or improve their advertising to get a better reaction outcome.”

Get free updates of Randall Beard’s Blog by RSS reader

Get free updates of Randall Beard’s Blog by e-mail

How Prediction Learning Curves Can Improve Your Ad Effectiveness

December 3, 2012

In his fascinating new book “The Signal and the Noise,” New York Times political blogger Nate Silver discusses the concept of the “Prediction Learning Curve.”

The prediction learning curve maps the relationship between effort and prediction accuracy. Not surprisingly, the relationship strongly resembles the well-known Pareto principle: roughly the first 20% of effort (time, money, attention) yields about 80% of the prediction accuracy.


Or, as Silver says:

“…getting a few basic things right can go a long way….The first 20% often begins with having the right data, the right technology, and the right incentives.”

What does this have to do with digital advertising? Optimizing your digital advertising in-flight is, essentially, a prediction exercise. The goal is to take existing data, using the right technology and incentives, and make changes to your plans based on a predicted outcome that is better than the current one. And, as with the prediction learning curve, the first 20% of effort can have a huge impact on digital ad effectiveness.

Right Data, Right Technology, Right Incentives

Data – Real-time online ad-effectiveness data is here now. Brands can measure brand recall, likeability, persuasion and more on a daily basis across display and online video, even for smaller campaigns.

Technology – Today’s ad-effectiveness technology holds out a small amount of ad inventory and then serves single-question polls either to consumers who have just seen the ad (the exposed group) or to a statistically comparable group not exposed to the ad (the control group). This test/control design enables advertisers to understand the single-variable impact of individual creative units, site performance and frequency of exposure.

Incentives – Collaborative optimization tools enable agencies and digital publishers to work together to deliver a better result for their advertiser clients. Agencies measure ad performance by digital publisher. Digital publishers know that either their ads perform well on their sites or their sites get dropped from the campaign, so they are motivated to work with the agency to improve performance.
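The test/control arithmetic behind this design can be sketched in a few lines. This is a minimal illustration, not Nielsen’s actual method; the function name and all survey counts below are invented:

```python
# Hypothetical sketch of test/control brand-lift arithmetic.
# All respondent counts are invented for illustration.

def brand_lift(exposed_yes, exposed_n, control_yes, control_n):
    """Compare ad recall between an exposed group and a matched control group.

    Returns (exposed rate, control rate, absolute lift, relative lift).
    """
    exposed_rate = exposed_yes / exposed_n
    control_rate = control_yes / control_n
    absolute = exposed_rate - control_rate
    relative = absolute / control_rate if control_rate else float("nan")
    return exposed_rate, control_rate, absolute, relative

# Say 420 of 1,200 exposed respondents recall the brand,
# versus 300 of 1,100 in the unexposed control group.
exp_rate, ctl_rate, abs_lift, rel_lift = brand_lift(420, 1200, 300, 1100)
```

Because the two groups differ only in ad exposure, the gap between the two rates can be read as the single-variable impact of the ad itself.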

So What’s the 20% to Improve my Digital Advertising?

Four basic factors can deliver a dramatic improvement in your digital ad effectiveness—with only ~20% of the effort:

1.  Creative Rotation – Most advertisers don’t copy-test their digital advertising. They blindly run multiple creative units without any real understanding of the differences in effectiveness across them. The first opportunity is to identify your bottom-performing 20% of creative and reallocate that media weight to the top-performing 80%.

2.  Site Rotation – Similarly, ad performance differs across websites. So the second opportunity is to quickly assess ad performance by site, identify the bottom-performing 20% of sites, and rotate out of these and into your higher-performing sites.

3.  Exposure Frequency – Unlike TV, digital advertisers have the ability to cap their exposure frequency—e.g. 2, 3, 4, etc. exposures per consumer. The question for advertisers is this: at what frequency should I cap? The opportunity is to quickly identify where frequency of exposure yields little or no incremental ad performance, and then cap your digital ad exposure at this frequency.

4.  Collaborative Optimization – The last opportunity is to get key players in the advertising eco-system to collaborate toward a common objective of improving advertising performance. Everyone has a motivation to do so.
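Opportunities 1 through 3 above boil down to two simple rules: drop the bottom performers, and stop buying frequency once incremental lift flattens. A minimal sketch, with invented function names, scores and lift curves:

```python
# Hypothetical sketch of creative/site rotation and frequency capping.
# Performance scores and lift-by-frequency numbers are invented.

def reallocate(scores, drop_fraction=0.2):
    """Rank units by performance; return (names to keep, names to drop)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    n_drop = max(1, int(len(ranked) * drop_fraction))
    return ranked[:-n_drop], ranked[-n_drop:]

def frequency_cap(lift_by_freq, min_gain=0.01):
    """Return the last frequency whose next exposure still adds min_gain lift.

    lift_by_freq[i] is cumulative ad lift after i + 1 exposures.
    """
    for freq in range(2, len(lift_by_freq) + 1):
        if lift_by_freq[freq - 1] - lift_by_freq[freq - 2] < min_gain:
            return freq - 1
    return len(lift_by_freq)

# Five creative units scored on, say, breakthrough; drop the bottom 20%.
creatives = {"A": 0.32, "B": 0.28, "C": 0.12, "D": 0.30, "E": 0.09}
keep, drop = reallocate(creatives)          # drops the weakest unit

# Lift plateaus after the third exposure, so cap frequency there.
cap = frequency_cap([0.10, 0.16, 0.19, 0.195, 0.197])
```

The same `reallocate` rule applies unchanged to sites: score each site, drop the bottom 20%, and shift that weight to the rest.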

Advertisers can now evaluate digital ad performance across agencies. Agencies have a powerful motivation to improve performance, as advertisers now have performance-specific metrics across agencies. Agencies need to perform—or risk losing business.

Publishers also have good reason to collaborate with agencies, as they know that agencies will pull advertising from their sites if they underperform.

Importantly, there are now technology platforms that enable all three parties to collaborate in real time to optimize ad performance.

Moving Up the Prediction Learning Curve

These four factors represent the 20% effort in the Digital Advertising “prediction learning curve.” Let me say it differently: if you do these and only these four things, you can expect to achieve a significant improvement in your digital ad performance.

Or, as Nate Silver says:

“getting a few basic things right can go a long way.”

The choice is yours—to execute digital the way many advertisers do today, or to move up the prediction learning curve and deliver improved results for your brand.
