The Confusion Between Data and Measurement
If you read enough about what is going on in our industry today, you will see that Data is an all-powerful presence. It seems to command every conversation and rule over every project. As statistician W. Edwards Deming once quipped, “In God we trust. All others must bring data.”
But in our effusive fealty to Data, we sometimes give it more unfettered control over decision-making than it deserves. Not all data is relevant, and not all data is good and pure. It needs to be assessed, put into context and framed by measurement. While there is no measurement without some form of data, data alone is unusable unless it is placed within a structured measurement framework.
I knew you would ask for definitions. In the media world, data is essentially a raw, structureless collection of different types of quantities that has the potential to be used to draw conclusions. Measurement enables us to draw accurate conclusions from the data by creating a logical structure for analysis. Data needs logical and accurate measurement to be useful. Measurement without data is irrelevant.
"There are a lot of companies and agencies in the market cobbling together raw data or portions of data and mistaking that for actionable intelligence," noted Sean Muller, CEO of the TV measurement company iSpot. "In order to use data effectively, it's critical that the inputs into a model are cleaned, normalized and contextualized properly or entire framework will be off. Doing things like attribution using data that is not measurement grade or that is ad-hoc in nature is like building a house on the beach without preparing for the weather and tide patterns. Things will crumble pretty quickly," he added.
Today, there are all types of data being thrown off from devices, culled from the internet, collected by businesses and otherwise amassed through various means. Data takes on many forms and sizes. There is attitudinal data that is more qualitative (like a series of open-ended comments) and data that looks like a bunch of unrelated numbers (MAC addresses coupled with viewing levels).
In the old days of television measurement, companies such as Nielsen relied on small representative samples to project the full population of television households and viewers. They took their viewing data and created measurement using edit rules and algorithms that formatted and contextualized their dataset into a projectable larger behavioral context. And yes, it was fairly accurate … as long as your network had enough meters in the sample to get statistically reliable projections.
In 1985, Nielsen’s sample size of 5,500 represented approximately 84.9 million homes (or just less than 0.01 percent of all TV households) when media options were less fragmented. But as omni-channel options expanded and the number of viewing opportunities increased, a sample this small just didn’t cut it anymore. Currently, Nielsen estimates 119.9 million U.S. TV households with a sample of 40,000, or just over 0.03 percent of total U.S. TV homes. To Nielsen’s credit, it continues to expand its sample to include more national meters, portable people meters and the incorporation of local meters into the national sample. All are steps in the right direction, but it might still not be enough for the tiny members of our media club.
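For readers who want to check the sample-to-universe ratios above, here is a minimal sketch (the figures are the ones cited in this article; the function name is my own):

```python
# Sketch: verify the cited Nielsen sample sizes as a share of U.S. TV households.

def sample_pct(sample_size: int, universe: int) -> float:
    """Return sample size as a percentage of the total universe."""
    return sample_size / universe * 100

# 1985: 5,500 metered homes out of ~84.9 million TV households
pct_1985 = sample_pct(5_500, 84_900_000)    # just under 0.01 percent

# Today: 40,000 homes out of ~119.9 million TV households
pct_now = sample_pct(40_000, 119_900_000)   # just over 0.03 percent

print(f"1985: {pct_1985:.4f}%   today: {pct_now:.4f}%")
```

Even after a roughly sevenfold expansion of the panel, the sample remains a tiny fraction of the universe it projects, which is the nub of the reliability complaint from smaller networks.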
Today, ACR (Automatic Content Recognition) measurement in TV is shaking things up. Nielsen recently bought Gracenote, in part to help layer in a much broader sample set from TV makers, collecting information from the glass of TVs regardless of which device or service is delivering content and ads to the home. But the challenge is how to add that specific data source into the traditional and profitable metric of GRPs. Independent companies like Inscape, which licenses glass-level viewing detection from over 11 million VIZIO SmartTVs, are offering datasets that are being incorporated into new systems used by networks, agencies and brands.
“The world is awash in data but without context, it isn’t useful. And without proper measurement you can’t find proper context,” said Allison Stern, co-founder of Tubular Labs, which, she says, is the only company to measure social video on a truly global scale. “In Tubular’s case we organize a massive amount of data about social video and then using measurement we can find trends around content types, and the velocity, engagement rates, and even audience quality that helps people make decisions.”
The advancement of these new data sources, coupled with the exploration of new measurement metrics, is a healthy evolution for media from a delivery scorecard to business outcome attribution. Measurement companies such as iSpot contribute to that effort, along with companies that offer cross-platform planning tools such as Omnicom, VideoAmp and 4C. All of this is at the center of the new addressable advertising consortium, Project OAR, which currently has nine networks signed on. All of these efforts successfully move data from its raw form into a projectable measurement that can be applied to business intelligence.
Header photo by Markus Spiske on Unsplash