'Burst' versus 'Drip'

Colin McDonald throws some light on the relationship between frequency and advertising effectiveness

Colin McDonald

IN AN IDEAL WORLD, advertisers would like both to boost the frequency with which ads are exposed in each home and to ensure continuity. Unfortunately, budget limitations force a compromise between these objectives, so there is often a real question about where to put the emphasis.

This is still a decision that goes mostly by hunch, since there is remarkably little general evidence. Several factors will affect the decision: in particular, the status and position of the brand, its competition, and its creative history. Common sense suggests that a new brand, a relaunch, or an unfamiliar advertising idea may have to 'shout louder to be heard', and therefore need an initial high frequency. Conversely, a campaign which has built up its 'affectionate franchise' in consumers' minds needs continuity, not necessarily at very high levels, to reinforce it. A number of the brands in the 'longer and broader effects' section of the IPA Effectiveness Awards (PG Tips, Andrex, Oxo) have shown the benefits of continuity.

A high frequency will increase response, but the problem is at what point that extra response becomes wasteful. In a famous experiment, Zielske (1) compared two groups who were mailed 13 advertisements, one week apart and four weeks apart respectively. In the '1-week' group, awareness rose rapidly up to week 13 and then fell steeply; in the '4-week' group, awareness rose in the week after an ad was sent, fell away again during the next four weeks, and then rose a little more, describing a rising but diminishing curve (see Exhibit 1). Zielske used this to demonstrate how 'forgetting' works, without stating a preference. But Simon (2), re-analysing the same data, concluded that the '4-week' schedule was twice as effective in terms of what really matters, the total amount of 'recall' delivered (measured as the number of weeks × the appropriate recall rate). Assuming that recall is a sensible measure of effect, and allowing for the specific product, ads etc used in this experiment, this looks like a piece of quantified evidence on the side of continuity; according to Simon, writing in 1979, it was then the only such evidence: 'until a wide range of data are produced, the results from these data must have some claim on our belief' (2).
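Simon's 'weeks × recall rate' arithmetic can be sketched with a toy forgetting model. Everything in the sketch below is an illustrative assumption, not Zielske's data: weekly recall decays by a fixed factor, each mailing adds a fixed boost, and recall is capped at 100% of the sample. The point is only that once recall saturates, a concentrated schedule wastes exposures that a spread-out one does not:

```python
# Toy model (illustrative assumptions only, not Zielske's data):
# recall decays each week, each mailing adds a fixed boost,
# and recall cannot exceed 100% of the sample.

DECAY = 0.7    # assumed fraction of recall retained from one week to the next
BOOST = 60.0   # assumed recall points added by each mailing
CAP = 100.0    # recall cannot exceed 100% of the sample

def cumulative_recall(mailing_weeks, horizon):
    """Sum weekly recall over the horizon: Simon's 'weeks x recall rate'."""
    recall, total = 0.0, 0.0
    for week in range(horizon):
        recall *= DECAY                        # forgetting between weeks
        if week in mailing_weeks:
            recall = min(CAP, recall + BOOST)  # reminder effect of a mailing
        total += recall
    return total

burst = cumulative_recall(set(range(13)), 52)       # 13 ads, weekly
drip = cumulative_recall(set(range(0, 52, 4)), 52)  # 13 ads, four-weekly
print(f"burst: {burst:.0f}, drip: {drip:.0f}")
```

With these (arbitrary) parameters the spread-out schedule delivers substantially more cumulative recall, because the weekly schedule keeps pushing recall against the ceiling; change the boost or decay and the gap narrows or reverses, which is exactly why no single rule of thumb emerges.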

Since then, there has been more evidence pointing towards continuity, at least for 'ongoing' brands and messages. Most response functions are 'concave-downward', which means that the first exposure in a series has the greatest 'effect' (3,4,5). This can be linked to the idea of advertising as reminding: most advertising is not about learning a new message; one new exposure, on occasion, may be enough to re-invigorate something already learned and even to trigger the next purchase (4, pp 234-5). Jones's recent work with single-source data from the Nielsen Household Panel (5) has again found that one exposure 'has the greatest influence', which leads him to argue for continuity ('drip-feeding'). He relates this to competition: the price of high concentration is that 'plentiful gaps' form in schedules, because advertisers cannot afford to advertise all year round; these gaps then give competitor brands the opportunity to erode the advantage gained during the bursts.
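The 'concave-downward' shape can be illustrated with a simple probabilistic sketch. The single-exposure response probability p below is an arbitrary assumption, not a figure from the literature; the sketch only shows why, under such a curve, the first exposure carries the largest increment:

```python
# Illustrative concave-downward response curve: if each exposure
# independently 'converts' with assumed probability p, the chance of
# having responded after n exposures is 1 - (1 - p)**n.

def response(n, p=0.3):
    """Probability of response after n exposures (toy model, p assumed)."""
    return 1 - (1 - p) ** n

# Marginal effect of each successive exposure: largest first, then shrinking.
increments = [round(response(n) - response(n - 1), 3) for n in range(1, 5)]
print(increments)  # → [0.3, 0.21, 0.147, 0.103]
```

Each exposure adds less than the one before, which is the arithmetic behind arguments that the first exposure does most of the work and later ones are best spread out over time.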

None of this should be taken as gospel for all conditions. There have been reported cases where concentration appears to have been necessary: eg Roberts (6) discusses a relaunched brand measured on AGB's Superpanel (with single-source data simulated by fusion with BARB) and tentatively concludes from this and other cases that bursts are more effective 'especially in very heavily advertised categories', and that they seem to act on new trial rather than repeat purchasing. This does seem to support the commonsense view that something new may need more concentration, to cut through the noise, than something well-known.

The weight of competitive advertising (the 'share of voice') is plainly an important factor, as the 1974 AdTel study demonstrated (7, Chapter 5). The ideal may well be to maintain advertising which is both continuous and at a level sufficient to overcome the competition. The big successes can afford this, at a level which is relatively low in comparison with their market share but still high against the competition (4,8). Others may have to compromise.

There is no simple rule of thumb. It depends on the brand's history and particular circumstances, the type of product, and the type of response wanted.


1. HUBERT A ZIELSKE (1959). The Remembering and Forgetting of Advertising. Journal of Marketing, January 1959, 239-43. See also NAPLES (1979). Effective Frequency, Chapter 2.

2. JULIAN L SIMON (1979). What Do Zielske's Data Really Show About Pulsing? Journal of Marketing Research, vol 16, August 1979, 415-20.

3. JULIAN L SIMON & JOHAN ARNDT (1980). The Shape of the Advertising Response Function. Journal of Advertising Research, vol 20 no. 4, August 1980, 11-28.

4. JOHN PHILIP JONES (1986). What's in a Name? Lexington Books.

5. JOHN PHILIP JONES. When advertisements work: new proof that advertising triggers sales. Lexington Books (awaiting publication).

6. ANDREW ROBERTS (1994). Measuring Advertising Effects through Panel Data. Paper given at the European Advertising Effectiveness Symposium, Brussels, June 9-10.

7. MICHAEL J NAPLES (1979). Effective Frequency: the relationship between frequency and advertising effectiveness. Association of National Advertisers, New York.

8. JOHN PHILIP JONES (1992). How Much is Enough? Lexington Books, 315. See also SIMON BROADBENT (1989). The Advertising Budget, 114 ff, and COLIN McDONALD (1992). How Advertising Works, 45-6.

This is the first in a regular series of short résumés of the state of our knowledge about various topics that puzzle people in advertising and media. Next month: Response functions.
