
At Least TV Measurement Is Getting More Accurate

I may be the only person who went down this path, but as I watched the political polling industry collapse on itself a few weeks back, all I could think of was the parallels to the mass of questionable information being churned out daily about the media and entertainment industries.

You know, all those polls, surveys, projections and expert opinions that wind up embedded in hundreds of PowerPoint slides as if they were facts.

I have a general rule of thumb on all that: if a stat does not come directly from a company’s quarterly report or similar financial statement, it’s probably not true.

Now “not true” spans a wide range, from “completely fabricated” (and there is a decent amount of that) to “fairly educated guess,” but they’re still just guesses and predictions, not facts.

I understand the desire for these sorts of guesses (at TV[R]EV, we are often taken to task for not providing enough of them), but I am always amazed at how many people desperately want to put their faith (not to mention money) in a survey that polled fewer than a thousand people to understand the behavior of fifty million.
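For context, the textbook precision of such a survey is easy to compute. Here is a minimal Python sketch of the standard 95% margin-of-error formula for a simple random sample (the function name is mine, and the formula assumes perfectly random sampling, which is exactly the assumption that fails in practice):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample of size n.

    Assumes a perfectly random sample with full response -- the
    assumption that real-world surveys (and political polls) violate.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person survey, projected onto 50 million viewers:
print(f"{margin_of_error(1000):.1%}")  # roughly 3.1% either way
```

Note that population size barely matters here: on paper, 1,000 respondents can describe 50 million people to within about three points. The catch is the fine print in the docstring, which is why the math looks fine and the predictions still miss.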

It’s not that the people creating these stats are lying or trying to trick you (until they are); it’s that consumers don’t fit into very neat boxes, and so extrapolating their preferences out to everyone else in a similar demographic leads to exactly the sort of inaccuracy we just witnessed in the political arena.

And those are the good research studies, the ones asking people to confirm a specific fact (“Do you subscribe to Disney+?”) versus the ones asking people “How many streaming services do you think you’ll subscribe to this year?” which would seem to have all the accuracy of “Guess which card I’m holding in my right hand.”

One more rant before I get to the point here: the problem with many of these specious stats is that they get repeated in multiple articles and reports, each one citing the one that came before it, until it becomes impossible to do the forensics necessary to determine a particular stat’s provenance. It’s often assumed that if Report Y provides a footnote with a link to Respected Publication X, then that stat must be true.

Think Wikipedia but without editors.

Now for the pivot to TV measurement

One area where there’s been a lot of skepticism is TV ratings. For as long as I’ve been in the business, people have been griping about Nielsen’s panel ratings. They’re often the same people who will accept, as gospel, a survey of 400 people on their willingness to pay $20 to stream a first-run movie. But I digress.

Nielsen has heard them, and buried in the noise around the election was a revolutionary announcement: Nielsen will now be including ACR data from VIZIO’s Inscape and set-top box data from Dish and DirecTV in its measurement of addressable advertising. And not just addressable advertising on OTT either: the new platform will allow them to measure linear addressable too, even the new-style smart TV-based linear addressable that Nielsen and Project OAR are bringing to the market.

That’s a huge development, especially because it’s likely the start of Nielsen opening itself up to alternative sources of measurement data with much bigger samples. Not strict 1:1 metrics, but galaxies closer than we were before… at least for households.

That’s been the problem with measuring TV: unlike digital, TV is frequently a group activity, and determining who was actually watching can be tricky when you’re measuring households. That’s where Nielsen’s People Meters come into play, providing a backup for probabilistic measurement systems and a further check on the stats.

And that’s just linear.

The other area of TV measurement where we’re seeing movement is ad-supported streaming or AVOD.

For a long time now, advertisers have been content to get their stats from whoever sold them the ads, whether that was a DSP or SSP, a distributor like Roku or the app itself.

That made more sense than it may initially seem: the ads were all served up programmatically to audiences watching them on demand, so, absent outright fraud, there wasn’t much question of whether they ran or not. And streaming TV audiences are much more likely to watch ads to completion, given both the low ad loads and the difficulty of changing the channel. (Whether they were actually paying attention to the spots is another matter, but the TV was on and the ad was playing.)

As streaming continues its rapid growth and ad budgets boom, advertisers are now demanding something more than just blind faith. (As one executive told me, “When we’re spending ten million dollars we have very different standards than when we’re spending two hundred thousand.”)

The solution has been to attach pixels to the streaming ads, which can be read by third-party measurement services like iSpot that use ACR to track what viewers are watching and which ads they’ve seen.
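Mechanically, an ad pixel is just a tiny image request that the player fires when a spot starts, carrying identifiers the measurement service can later match against its ACR viewing data. A hedged sketch in Python (the domain, parameter names, and function are all hypothetical for illustration, not iSpot’s actual API):

```python
from urllib.parse import urlencode

def impression_pixel(campaign_id, creative_id, device_id):
    """Build the 1x1 tracking-pixel URL an ad player requests when a spot starts.

    Hypothetical example: real measurement vendors define their own
    domains and parameter names. The device identifier is what gets
    matched against ACR data to verify what was actually on screen.
    """
    params = urlencode({
        "c": campaign_id,     # which campaign the ad belongs to
        "cr": creative_id,    # which specific spot ran
        "d": device_id,       # matched later against ACR viewing data
        "e": "impression",    # the event being reported
    })
    return f"https://pixel.example-measurement.com/i?{params}"

print(impression_pixel("cmp42", "cr7", "dev123"))
```

Because the request goes to the measurement service’s servers rather than the seller’s, the advertiser gets a count that the party selling the ad never touches, which is the whole point.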

Here again, it’s not 1:1 measurement (not everyone is watching on a smart TV), but it’s independent verification for advertisers, many of whom have been burned by Facebook and other self-reporters and want a third party involved.

It’s unlikely we’ll ever get to 1:1 measurement for television, but I have not talked to many advertisers or programmers who regard that as a problem. Their attitude is that if we get most of the way there, that’s going to give them a level of accuracy that’s close enough to perfect and much, much better than anything they’ve had to rely on in the past.