Fusion and other futures
Nielsen Media Research
Currently, in the United States, multiple parties are pursuing fusion research. As a result, competing ideas have emerged about how best to develop, measure, and use fusion.
Where did fusion come from?
For many years, at least in the United States, fusion was a dirty word; it was considered 'making up the numbers'. When the novelty of using daypart optimizers during the planning process began to wear off, many agencies in the States began looking for the next opportunity to introduce concepts of optimization to the planning process. The natural next step was to use decision-support systems, such as genetic algorithms, to, if nothing else, at least increase the automation of the multimedia planning process.
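To make the idea concrete, the kind of decision support described here can be sketched as a stripped-down evolutionary search over media plans. This is not any agency's actual optimizer: the vehicles, reach weights, budget, and the simple square-root diminishing-returns curve are all invented for illustration, and crossover is omitted for brevity (selection and mutation only).

```python
import random

random.seed(0)

# Hypothetical media vehicles with assumed relative reach per unit of spend;
# none of these figures come from the text.
VEHICLES = ["prime_tv", "daytime_tv", "magazines", "radio"]
REACH_WEIGHT = [1.0, 0.6, 0.8, 0.5]
BUDGET_UNITS = 10  # total units of spend the plan may allocate

def fitness(plan):
    """Toy reach score with diminishing returns; over-budget plans score -1."""
    if sum(plan) > BUDGET_UNITS:
        return -1.0
    return sum(w * units ** 0.5 for w, units in zip(REACH_WEIGHT, plan))

def mutate(plan):
    """Shift one unit of spend onto (or off) a randomly chosen vehicle."""
    child = plan[:]
    i = random.randrange(len(child))
    child[i] = max(0, child[i] + random.choice([-1, 1]))
    return child

def evolve(generations=60, pop_size=30):
    # Start from random allocations, then repeatedly keep the fitter half
    # and refill the population with mutated copies of the survivors.
    pop = [[random.randint(0, 3) for _ in VEHICLES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(pop_size - len(survivors))]
        pop = survivors + offspring
    return max(pop, key=fitness)

best = evolve()
print(dict(zip(VEHICLES, best)), round(fitness(best), 2))
```

The point of the sketch is only that a plan becomes a chromosome and reach becomes a fitness function; real optimizers of the period layered far richer audience models on the same skeleton.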
At first, it was thought that multimedia optimization could be conducted by evaluating existing single-source databases such as PMB in Canada or MRI in the United States. But comparative analyses concluded that the program, network, and daypart audiences measured through survey research in the single-source databases did not correlate sufficiently with the television ratings being produced by People Meter panels. That is, if you ask people when and what they watch on television, their answers correlate reasonably well with the data collected from meters. But optimization is a game of inches, improving your efficiency by 5 to 10%. The survey-based television data did not correlate with the actual television data closely enough to conclude that an optimized plan was actually, say, 6% better than the non-optimized multimedia plan. With that, the industry concluded that if multimedia optimization were to work, it would have to use a set of fused databases, each of which accurately reflected the audiences produced by the print and television currency databases.
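The comparison being described is, at bottom, a correlation check between two rating sources. A minimal sketch of such a check, using invented ratings for ten hypothetical programs (the figures below are illustrative, not the actual PMB, MRI, or People Meter data):

```python
import statistics

# Hypothetical ratings for ten programs: what respondents claim in a survey
# versus what a People Meter panel records. All figures are invented.
survey_rating = [5.1, 3.2, 7.8, 2.4, 6.0, 4.5, 8.2, 1.9, 5.5, 3.8]
meter_rating  = [4.6, 3.9, 7.1, 3.0, 5.2, 5.1, 7.5, 2.6, 4.8, 4.4]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(survey_rating, meter_rating)
print(round(r, 3))
```

Even a correlation this high leaves program-level errors of a rating point or more, which is exactly the problem the text describes: when the claimed gain from optimization is only 5 to 10%, that residual noise swamps the signal.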