Regional newspaper readership research - getting the questioning right

Michael Brown

The results of readership surveys are highly sensitive to the way in which the questions are asked, and for the national newspapers and magazines, violent controversy has for many years been the name of the game - much of it in Admap's pages. In the case of regional newspaper readership research, though very similar technical problems exist, they provoke less public debate - which does not mean that they are not of equal concern to JICREG, which plays an advisory role in this area. Michael Brown here describes experimental work carried out to determine how far changes in the prompt-aids used could serve to reduce the level of respondent error.

THE REGIONAL PRESS in Britain offers not only amazing depth and diversity, evidenced by approximately 1,200 daily and weekly titles, but also two features seldom encountered in other markets with highly developed local print media: a representative industry body, setting and controlling standards of audience measurement; and a database covering all co-operating titles, mixing survey-based readership estimates with modelled data for those titles unable to provide acceptable measurements.

The Joint Industry Committee for Regional Press Research, JICREG, sets out in its Survey Guidelines the principles of readership research design that a publisher and his research agency should follow, if the resultant data are to prove acceptable for inclusion on the JICREG database. These guidelines are not, however, mandatory, and the data from surveys which depart from the guidelines, to a greater or lesser degree, are considered on their merits.

As of 1993, the JICREG Research Guidelines largely reflected the research approach followed by the National Readership Survey prior to 1984. However, the changes instituted by the NRS in 1992 (in parallel with the introduction of Computer Assisted Personal Interviewing) prompted JICREG's technical sub-committee to re-examine its requirements. The sub-committee further agreed that any changes subsequently introduced to the Research Guidelines should be based on factual evidence, rather than on opinion alone.

To aid this reappraisal, Research Surveys of Great Britain proposed a two-stage plan: a thorough review of existing evidence, followed by the collection of new data. The review was to focus on the questions of particular interest to the technical sub-committee - the prompts best used in readership estimation, to aid accurate identification of different titles and minimise confusion between them; the measurement of frequency of reading; the ordering of the questions making up a readership survey interview; and the inter-relationship of media list length (the number of titles covered in a survey) and 'order effect' (the dependence of readership estimates on the position of the publication concerned within the questionnaire). The plan was accepted and this article presents a necessarily abbreviated account of the design of the experimental survey and of its results.

Readership research results are well known to be acutely sensitive to a whole host of features of the particular design and technique used and, within the confines of a realistic budget, it would clearly have been quite impossible to investigate every aspect of the existing Research Guidelines that was open to change and respecification.

However, in our view the potential differences in research accuracy attributable to the precise wording of survey questions, or to their order within the interview, were likely to pale into insignificance in comparison with the effects of confusion, on the part of survey respondents, between one title and another.

Optimal prompting is critically important and our consequent recommendation to JICREG was for the research to concentrate on just two areas: the actual extent of title confusion between regional weekly newspapers; and the apparent effects on accuracy of employing coloured mastheads as prompts (rather than black-and-white ones), and of presenting confusable titles in groups (rather than singly). At the time of the research, the Guidelines recommended, but did not require, the use of coloured mastheads: however, they clearly laid down the presentation of titles one by one, as had been National Readership Survey practice until the introduction of its Extended Media List questionnaire in 1984.

In experimental readership research of the type with which we are here concerned, two problems frequently recur: doubts about the generalisability of the results, and the absence of any fully accepted 'yardstick' measure against which the existence of biases in some other estimate may be assessed. In the present case, we were careful to research more than one area and adopted, as our criterion measure, the recognition of complete issues of newspapers as ones previously read. We would not, however, claim that this measure is itself absolutely error-free.
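
By way of illustration only, the sketch below (in Python, with invented records rather than survey data, and field names of our own devising) shows how a prompted readership claim might be set against the issue-recognition criterion, so that apparent over-claiming and under-claiming can be counted.

    # Hypothetical illustration: invented records, not survey data.
    # Each record pairs one respondent's prompted claim to read a title
    # with the 'yardstick' of recognising a complete issue as read.
    records = [
        {"claimed": True,  "recognised": True},   # consistent claim
        {"claimed": True,  "recognised": False},  # apparent over-claim
        {"claimed": False, "recognised": True},   # apparent under-claim
        {"claimed": False, "recognised": False},  # consistent non-claim
    ]

    over_claims = sum(r["claimed"] and not r["recognised"] for r in records)
    under_claims = sum(r["recognised"] and not r["claimed"] for r in records)

    print(f"Apparent over-claims:  {over_claims}")
    print(f"Apparent under-claims: {under_claims}")

Since, as noted, the recognition measure is not itself error-free, such counts indicate relative rather than absolute bias.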

Within each of two survey areas, three parallel samples of 100 respondents were drawn, using a random location design; it should be noted that, given the questionnaire and analysis method employed, the effective bases were not respondents, but observations - the number of titles for which data were obtained, summed across respondents. The survey was conducted in March of this year.
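
To make the distinction between respondent and observation bases concrete, here is a minimal sketch using invented figures (the survey's actual per-respondent title counts are not reported here):

    # Illustrative only: invented figures, not the survey's data.
    # Each entry is the number of titles for which data were obtained
    # from one respondent in a sub-sample.
    titles_per_respondent = [6, 4, 5, 7, 3]  # continued for all 100 respondents

    respondent_base = len(titles_per_respondent)
    observation_base = sum(titles_per_respondent)  # the effective base

    print(f"Respondent base:  {respondent_base}")
    print(f"Observation base: {observation_base}")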

In each area, a different set of prompt aids was used with each sub-sample: single title cards with black-and-white logos, then the usual practice in JICREG-approved research; single title cards with logos in colour (the recommended practice); and grouped, reduced-size black-and-white logos.
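
Putting the two survey areas and the three prompt treatments together, the design comprises six cells of 100 respondents each; the following sketch simply enumerates them (the condition labels are ours, paraphrasing the description above).

    # Illustrative enumeration of the experimental design cells;
    # condition labels paraphrase the article, they are not official names.
    areas = ["Area R", "Area O"]
    prompt_conditions = [
        "single cards, black-and-white logos",          # then-usual practice
        "single cards, coloured logos",                 # recommended practice
        "grouped, reduced-size black-and-white logos",  # experimental treatment
    ]

    for area in areas:
        for condition in prompt_conditions:
            print(f"{area} | {condition} | n = 100 respondents")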

The media lists covered were as in Exhibit 1.

EXHIBIT 1. MEDIA LISTS COVERED

Area R

TEST TITLES
one paid weekly
three free weeklies

OTHER TITLES
one paid weekly
three free weeklies, including one fictitious title
all national dailies

Area O

TEST TITLES
three free weeklies