Some day... buy me a drink and ask me my thoughts on the couple of thousand boxes around Toronto that measure the listening habits of millions of listeners and decide the fate of people at every level of a radio station. PPM kills talk radio, and here's the 'fix'. If I were in a position to use it, I'd have it in my rack in seconds. It's not cheating; it's just 'fixing' the inherent flaw in PPM design. We can work on measuring morning listening in the shower later.
"It is likely that PPM will never be perfect. It will never capture all listening, but missing 20%, 50%, or more should be unacceptable."
NB: This is obviously an American report, but still worth the read...
(Via Radio InSights)
Rumors of double-digit rating gains using Voltair, a new processor, are fueling growing confusion over its implications.
Is it really true that a new processor can boost ratings, and by double digits?
And if true, what does it say about the accuracy of Nielsen PPM rating estimates?
Much of the confusion is due to a lack of data.
Stations using the processor are reluctant even to admit they're using one, let alone share ratings information.
But it’s pretty hard to hide.
Every month now we see growing numbers of stations with pretty suspicious rating spikes. Are we to believe that all these stations just got smarter...and all at once?
And how many "best book ever" stories are we going to read in Nielsen’s monthly rating reports before somebody admits that maybe the box is behind many of these records?
Are we to believe that it's just coincidence that the success stories started appearing soon after Voltair's launch?
Given the lack of hard data, Harker Research decided to do something about it.
We set out to find stations about to install Voltair that would be willing to share ratings data with us so we could see for ourselves what impact Voltair could have.
What we are about to share is the first verified pre/post Voltair data, and what we found was just as amazing as all the rumors.
Two stations agreed to share their Voltair experiences with us.
We were able to track ratings as Voltair was installed: we established a pre-Voltair baseline of ratings, and then we saw what happened to the ratings after the launch.
We were also able to confirm that these markets were pretty quiet over this time, and that the two stations we followed changed nothing other than installing Voltair. These stations were also the first in their markets to get Voltair.
Like all other Voltair clients, the two stations insisted on anonymity, so we won’t be identifying the stations, markets, or formats. Both formats are highly competitive mainstream formats.
Station A was our first look at Voltair, so we looked at full week numbers.
For Station B we did something different. We looked at Voltair’s impact on morning drive.
Station B plays very little music in morning drive, and it has been reported that PPM is particularly cruel to talk shows.
Since the encoding process does a poor job with spoken word, we wanted to see what impact Voltair would have on morning numbers. We’ll report what happened with morning drive, but we’ll also touch on full week numbers for Station B.
Notably, Voltair increased both stations' market share by more than twenty percent, but how that growth came about differed between the two stations.
First, a little background.
Voltair is designed to process station programming to create more consistent encoding. More consistent encoding increases the likelihood that decoders will be able to identify the stations to which panelists are exposed.
So Voltair should have minimal impact on cume. All it takes is one successful code identification for a station to get credit for a cume listener.
The ratings benefit of Voltair should come primarily by increasing time-spent-listening (TSL).
If Voltair improves the encoding/decoding process, then listening normally missed by the meters will be picked up.
This might manifest itself as longer listening spans, but it could also increase TSL by increasing the number of episodes of listening, what Nielsen calls Occasions.
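To make the cume-versus-TSL distinction concrete, here is a toy simulation. The per-minute detection probabilities and the "two decoded minutes per quarter-hour" crediting threshold are invented for illustration; Nielsen's actual crediting rules are proprietary and certainly differ. The point is only the asymmetry: almost any detection rate earns cume credit, while credited quarter-hours (and therefore TSL and Occasions) scale with how reliably the code gets through.

import random

def credited_listening(occasions, minutes_per_occasion, p_detect,
                       min_detected_minutes=2, trials=10000):
    """Toy model: average credited quarter-hours for one panelist-week,
    plus how often the panelist earns any credit at all (a cume proxy)."""
    total_qh, any_credit = 0, 0
    for _ in range(trials):
        qh = 0
        for _ in range(occasions * (minutes_per_occasion // 15)):
            # Credit a quarter-hour when enough of its 15 minutes decode.
            detected = sum(random.random() < p_detect for _ in range(15))
            if detected >= min_detected_minutes:
                qh += 1
        total_qh += qh
        any_credit += qh > 0
    return total_qh / trials, any_credit / trials

# Four 30-minute occasions per week; the detection rates are made-up
# stand-ins for "hard to encode" (0.10) versus "better encoded" (0.25) audio.
for p in (0.10, 0.25):
    avg_qh, cume_rate = credited_listening(4, 30, p)
    print(f"p_detect={p:.2f}  credited QH/week={avg_qh:.1f}  cume credit={cume_rate:.0%}")

Under these made-up numbers, raising the detection rate barely moves cume (the panelist was almost always credited at least once anyway) but roughly doubles credited quarter-hours, which is the asymmetry described above: the payoff shows up in TSL and Occasions rather than cume.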
Station A
A 2011 analysis by Arbitron (since taken down) concluded that top performing stations in PPM didn’t have longer listening spans than weaker stations. Instead, leading stations had more Occasions of listening.
So to see the impact of Voltair, we began by looking at its impact on Occasions of listening.
Both stations dramatically increased Occasions. Station A increased full week Occasions by 26% and grew market share by a similar percentage.
Station B’s morning drive Occasions increased by a whopping 61%!
Full week TSL for Station A rose by 33%, but Station B’s full week TSL increased very little. Almost all the growth was in morning drive.
Station B saw its full week cume increase slightly more than Station A.
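As a rough back-of-the-envelope (the inputs below are invented placeholders, not either station's actual figures), the arithmetic shows why a gain in Occasions flows almost one-for-one into quarter-hours and share when occasion length, cume, and the rest of the market hold steady:

WEEK_MINUTES = 7 * 24 * 60

def aqh(cume, occasions, mins_per_occasion):
    # Average quarter-hour audience: total listening spread across the week.
    tsl_minutes = occasions * mins_per_occasion   # time spent listening
    return cume * tsl_minutes / WEEK_MINUTES

before = aqh(cume=100_000, occasions=4.0, mins_per_occasion=45.0)
after = aqh(cume=100_000, occasions=4.0 * 1.26, mins_per_occasion=45.0)
print(f"AQH before: {before:.0f}, after: {after:.0f} ({after / before - 1:+.0%})")

Share is just a station's average quarter-hour audience divided by the market total, so if the market total is roughly unchanged, a 26% gain in Occasions becomes roughly a 26% gain in share, which is essentially what Station A saw.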
Station B’s talky morning show attracts a different audience from the rest of the day. We suspect that PPM was not catching the morning listeners, so the station was losing much more than just a little TSL.
Station B
Consequently, when the station started getting more credit in morning drive, it gained Occasions, quarter-hours, and cume.
While this is the most definitive published analysis of Voltair to date, it comes with several qualifications.
We’ve only looked at two stations and two formats. We can say with confidence that the two stations’ gains were a direct result of Voltair.
However, as we have repeatedly emphasized, results will vary across formats and content.
The quality of PPM encoding depends on content. Some content encodes well, other content not so well. Voltair’s potential for improving a station's ratings ultimately depends on how poorly PPM is encoding the station's content.
Some stations will benefit more than others. Based on Station B’s morning gains, stations with talk formats should be particularly concerned about PPM’s "blind spots."
Some formats may not benefit at all, so don’t assume that your experience will be similar to either station.
The most important lesson here is that the rumors about Voltair helping improve ratings are true. This is the clearest evidence that PPM as implemented is flawed.
PPM does not capture all listening, so it underestimates radio listening, potentially by a lot for some formats.
The solution is for Nielsen to admit that the problem exists and agree to a timetable to fix it.
It is likely that PPM will never be perfect. It will never capture all listening, but missing 20%, 50%, or more should be unacceptable.
We invite additional Voltair users to confidentially share their experiences so that we can build a clearer picture of PPM and its weaknesses. Collectively, we can finally get answers to the PPM questions that were never asked as PPM rolled out.