Are the Opinion Polls Ready for 1997?[1]

John Curtice

CREST and Strathclyde University

Introduction

The 1992 general election was a disaster for the opinion poll industry. Four polls published on polling day on average reported a one point Labour lead. When the ballot boxes were opened just hours later, the Conservatives proved to be as much as eight points ahead. The pollsters were widely seen to have got it wrong.

The ensuing five years have seen an extensive debate about why the industry got it wrong, including an official inquiry established by the Market Research Society (see especially Crewe 1992; Curtice 1996; Jowell et al. 1993; Moon 1993;  MRS 1994; Worcester 1996). A clear consensus that emerged from this debate was that although there was some evidence of late swing, enough indeed according to the MRS inquiry to contribute between a fifth and a third of the total discrepancy (MRS 1994, p. xi), there had also been weaknesses in the ways in which the polls were conducted.

The 1997 election will therefore be a crucial test for the polling industry. Opinion polls have come to dominate the reporting of election campaigns in Britain (Harrison 1992; Harrop & Scammell 1992; Semetko et al. 1994), but the continued willingness of most of the media to suspend their disbelief in polls is unlikely to survive another débâcle. The industry needs to re-establish its credibility by demonstrating that it can produce final polls which are reasonably close to the eventual outcome.

This paper considers whether the industry is likely to achieve that goal. It examines how far the methods of the opinion polls have changed since 1992 and whether these changes are likely to be sufficient to overcome the weaknesses identified as the cause of the polls' problems on that occasion. It argues that the developments which have occurred since 1992 have resulted in as methodologically pluralist an industry as at any time in its (relatively short) history. This may make it more difficult for the industry to establish its collective credibility, but it does mean that the election will provide an important test of the relative effectiveness of the diversity of techniques now in use, and thus of which techniques should be used in future.

The problems of 1992

We begin by looking at what have been identified as the reasons for the polls' failure in 1992. The MRS inquiry identified two main problems (MRS 1994). The first was the failure of the polls to identify the 'hidden Tories'. There was considerable evidence to suggest that those who refused to say how they would vote or said they did not know how they would vote were in fact disproportionately Conservative inclined. In cross-section surveys those who failed to give an indication of how they would vote were more likely to report that they had voted Conservative in 1987. Most importantly, panel surveys showed that those not indicating a vote intention during the campaign were disproportionately likely to report having voted Conservative on polling day.

All the polls, however, had in effect assumed that the distribution of party support amongst those who did not give a voting intention was just the same as among those who did. They simply excluded the 'don't knows' and 'won't says' in response to the voting intention questions and calculated each party's support as a percentage of all those who did give a voting intention.
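The arithmetic can be made concrete with a minimal sketch; the counts below are invented for illustration, not taken from any actual poll:

```python
# Sketch of the standard 1992-era headline calculation: the 'don't
# knows' and 'won't says' are simply dropped, so each party's share is
# taken over declared intentions only. All counts are invented.
responses = {"Con": 380, "Lab": 400, "LibDem": 160, "Other": 30,
             "Don't know": 90, "Refused": 40}

declared = {party: n for party, n in responses.items()
            if party not in ("Don't know", "Refused")}
total_declared = sum(declared.values())

shares = {party: round(100 * n / total_declared, 1)
          for party, n in declared.items()}
print(shares)  # implicitly assumes non-declarers split just like declarers
```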

But of course such item refusal might only be the tip of the iceberg. If Conservative supporters were more reluctant to say how they would vote, perhaps they were also less willing to participate in polls at all. The MRS inquiry was unable to establish any clear evidence that this was indeed the case, but suggested that the balance of probability lay in its being so.

According to the theory of quota sampling, however, such differential refusal to participate should not be a problem at all. If someone with a certain set of social characteristics refuses to participate then s/he will be replaced by someone more willing but who otherwise has the same characteristics. So long as like is replaced with like, no harm should be done.

But this only holds true so long as the quota controls which are being used are sufficiently strongly associated with vote to ensure that, for example, reluctant Conservatives are replaced with more willing ones. But it was by no means clear that this was true of the quota controls which were commonly in use in 1992. These were age, sex, social grade and employment status. Age and sex are only weakly associated with vote. Meanwhile, the most strongly correlated of the four, social grade, suffers from a high level of unreliability in its measurement (O'Brien & Ford 1988).

This argument in fact already brings us to the second of the two main problems identified by the MRS inquiry. This is that there were inadequacies in the operation of the quota controls. Apart from the possible impact of differential refusal, it was also suggested that the controls may not have been sufficient to counteract the possibility that those who are successfully contacted in the relatively short fieldwork period of a typical quota poll might be more Labour-inclined than the population in general. Although the inquiry found that there were weaknesses in the empirical analysis undertaken by the principal advocates of this claim that there was an 'availability bias' (Jowell et al. 1993), it was not able to dismiss the theory entirely either.

But while it might have been difficult to ascertain the specific impact of differential refusal or availability bias, the inquiry did uncover clear evidence that, for whatever reason, the controls used did not ensure that interviewers contacted a representative sample of the population. It found that there was statistically significant variation between the campaign polls in the recorded level of both council tenants and car owners. Yet the true incidence of both characteristics of course changed little over the short three-week period of an election campaign.

As well as expressing doubts about the strength of the quota controls, the inquiry also argued that they were not sufficiently accurate. Most polls relied upon the National Readership Survey in deciding what their quota targets (and any post-interview weighting targets) should be. But the inquiry's report demonstrated that the social profile of the nation painted by the NRS was noticeably more downmarket than that portrayed by the 1991 Census. That this did indeed result in the use of quota targets which produced a skewed sample was indicated by the fact that on average the campaign polls overestimated the proportion of council tenants in the adult population and underestimated that of persons with two or more cars.

Despite its extensive analysis of the problems with the polls in 1992, the MRS inquiry was however very cautious in its recommendations as to how they might be remedied. For example, it called for further exploration of better ways of estimating the voting preferences of those who fail to give a voting intention. It also welcomed the possibility of further experiments with secret ballot techniques as a means of reducing refusal. But in neither case did it make a firm recommendation.

Equally, even though it made some fundamental criticisms of quota sampling, the report sat on the fence on the debate about the use of quota rather than random sampling. True, it clearly called for improvements to the operation of quota sampling with the use of better sources of data for setting quotas and target weights, together with the use of variables more strongly associated with voting behaviour in setting quotas (and weights). But it fell short of expressing a view as to whether such changes might be sufficient to remedy the apparent defects in quota sampling or whether the industry should consider a switch to random sampling. Again it simply called for further research and experimentation into the merits of the two approaches.

How the polls have changed since 1992

Despite the cautious tone of the MRS report, there have in fact been significant changes in the way in which opinion polls are conducted compared with 1992, with many of them introduced since the report's publication in 1994. But the different companies have changed their methods at varying speeds and in different ways. As a result what in 1992 was for the most part a consensus about the major architecture of poll methodology has now significantly fractured.

The political climate has undoubtedly encouraged this process. Since the pound fell out of the ERM in September 1992 the polls have consistently recorded large and at times record Labour leads over the Conservatives. These leads have given the industry further reason to examine whether or not it might be exaggerating Labour's strength. At the same time they gave the industry a climate in which each company has been relatively free to conduct its own experiments and publish diverging results without placing too much strain on the industry's collective credibility. It might be a little embarrassing for one poll to say that Labour's lead is as little as 17 points while another suggests it could be as high as 35, but as the political import of those two figures is largely the same, a comfortable Labour victory, the divergence has not come under close media scrutiny. In a more even political climate, in contrast, such divergence would likely have seen the polls accused of being 'all over the place'.

Differential refusal

Of the problems identified by the MRS inquiry, differential refusal, especially item refusal, posed least challenge to the industry's existing practices. Unsurprisingly it was also the nettle which the industry proved most willing to grasp. One possible solution that attracted the industry's early attention was the use of so-called 'secret ballots'. Instead of asking respondents to say to an interviewer how they would vote, they were invited to complete a mock ballot paper and place it in a sealed envelope. It was suggested that the increased anonymity would help reduce refusal in general and perhaps that of Conservative supporters in particular, especially if one of the reasons for their reticence was the so-called 'spiral of silence' whereby respondents might be reluctant to express what they considered to be unfashionable views (Noelle-Neumann 1986).

A number of companies experimented privately with these secret ballots within months of the general election. ICM concluded that not only did use of the secret ballot have a beneficial impact on the number of people who refused to declare their voting intention but that it also had a significant impact on the distribution of reported preferences (ICM 1992). As a result they introduced the secret ballot into their regular monthly polls for the Guardian from September 1992. Similar experiments by NOP and MORI concluded however that use of the secret ballot did not have any impact on the distribution of reported vote; they thus decided against its introduction.

In the first nine months after ICM introduced their secret ballot technique their average poll ratings were clearly more pro-Conservative than those of their rivals, apparently justifying the company's decision. But then over the next nine months the difference disappeared, suggesting that the technique did not have as much impact as it at first appeared (MRS 1994, pp. 10–12). In any event, the debate was subsequently overtaken by ICM's decision in November 1995 to switch to telephone polling, of which more anon.

The objective of the secret ballot was to reduce differential refusal. If that could not be done, an alternative strategy was to estimate its extent and make an allowance for it accordingly. This has in practice proved to be the path down which most of the industry has decided to tread. Within months of the publication of the MRS inquiry's final report just after the 1994 European elections, the use of techniques to estimate the likely political preferences of the 'don't knows' and 'won't says' became widespread.

ICM were again one of the first companies to make a change. In one of a number of experiments that they made at the time of the 1994 European Election, they estimated the likely political preferences of those who said they were certain to vote but failed to say how, on the basis of their report of how they had voted in 1992. ICM had recontacted the respondents to their final 1992 election poll and had found that amongst those who had not said for whom they would vote when they were first interviewed, 60% ended up voting for the party they said they had voted for in 1987. This finding was further corroborated by similar data from the British Election Study.

ICM first applied this 60% rule as one of a number of adjustments that the company made to a poll it conducted after the European election polls closed on 9 June but published before the votes were counted on 12 June. Its result was thus publicly verified against the actual outcome. In the event the poll gave a highly accurate estimate of the Labour lead over the Conservatives, but only after the 60% rule and a number of other adjustments were made. In their absence Labour's strength would have been overestimated.

ICM's application of the 60% rule since the Autumn of 1994 has regularly resulted in a narrowing of Labour's estimated lead, as former Conservative supporters have continued to prove more likely to say they do not know how they would vote in an election tomorrow (Sparrow & Turner 1995). Even so, the method still leaves open the question of what to do about those respondents who refuse to say how they voted last time as well as how they would vote in an election tomorrow.

Similar although not identical adjustments have also been adopted by the other companies. Although not incorporated in the figures which were published by their client, The Times, MORI also began in Autumn 1994 to produce figures in which those who did not say how they would vote were assumed to be likely to vote for the party they said they voted for in 1992. In contrast to ICM, however, they assumed a 100% rather than a 60% likelihood that they would do so; they also included those who say they would not vote in their calculation. Meanwhile, when NOP recommenced conducting regular published polls in July 1995 they adopted the practice of assuming that those who did not say how they would vote would do so in line with their party identification or, if they did not identify with a party, with whichever party they said would best handle the economy. Since January 1997 Gallup have also been assigning a party to those who do not declare a voting intention, on the basis of who the respondent says would make the best prime minister or, failing that, which party they think would best handle the economy. (Between Autumn 1994 and January 1997, Gallup believed the problem was being dealt with by the adjustment they were then making to cope with apparent pro-Labour bias more generally; see further below.)
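The common shape of these adjustments can be sketched in a few lines. The 60% rate below follows ICM's published finding; the counts are invented, and the other companies' variants (MORI's 100% rate, NOP's use of party identification) would change the reallocation rule rather than the arithmetic:

```python
# Sketch of an ICM-style reallocation: a fraction of those giving no
# current voting intention is credited to the party they recall voting
# for at the previous election. All figures are invented.
REALLOCATION_RATE = 0.6  # ICM's 60% rule; MORI in effect use 1.0

declared = {"Con": 360, "Lab": 430, "LibDem": 150}
# recalled 1992 vote of those who gave no current intention
recall_of_nondeclarers = {"Con": 80, "Lab": 30, "LibDem": 20}

adjusted = dict(declared)
for party, n in recall_of_nondeclarers.items():
    adjusted[party] += REALLOCATION_RATE * n

total = sum(adjusted.values())
print({party: round(100 * n / total, 1) for party, n in adjusted.items()})
```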

The industry has then paid considerable attention since 1992 to the potential problem of differential refusal. While attempts to overcome it have not apparently been successful, the industry has developed techniques for estimating its possible impact. Note, however, that no two companies have adopted the same solution. And they have varied in whether they choose to headline the figures which make an allowance for differential refusal. So, even on this, the relatively more straightforward problem identified by the MRS inquiry, the reaction of the industry has generated methodological pluralism.

Weaknesses in quota sampling

The MRS inquiry's finding with potentially the most far-reaching consequences, however, was its claim to have found weaknesses in the implementation of quota sampling. By the beginning of the 1980s all the major pollsters had opted for quota sampling after a series of elections in which random samples had usually proved less successful than quota polls at anticipating the outcome (Worcester 1991). Quota polls had the advantage not only of being relatively cheap but also of being capable of implementation in a short period of time close to polling day, thereby providing an insurance against a repeat of the débâcle of 1970, when the errors in the polls were largely blamed on late swing. The implication of the MRS's inquiry into the débâcle of 1992, however, was that there were potential problems with quota sampling too.

How might these problems be solved? One possibility was clearly to improve the implementation of quota sampling. The MRS inquirys findings pointed to two areas that needed attention. One was the need to set accurate quotas (and target weights), the other to set adequate ones.

So far as setting accurate quotas (and target weights) is concerned, some companies at least have reduced the extent of their reliance on the National Readership Survey (NRS), in particular by making use of some of the major government surveys. Thus NOP have set their quotas with reference to the General Household Survey while MORI have used the Labour Force Survey. Gallup also set some of their quotas with reference to projections of the Office for National Statistics. But none have been able to end their dependency on the NRS entirely; this is particularly true in the case of social grade, whose distribution is not measured by government surveys.

Moreover there is some variance between the companies in the social profile for which they are aiming. At the time of writing NOP's social profile in particular is noticeably more downmarket than that of its three rivals. NOP is currently aiming for 18% of persons in social grade AB, whereas the other companies aim for 21–22%. In addition, NOP's target for council tenants is currently 21% and for those without a car 27%, whereas the other companies are aiming for 18% and 23–24% respectively. At least part of the explanation for this discrepancy may be that NOP have adjusted the figures available from the General Household Survey in line with the undercoverage of certain sections of the population identified in the 1991 Census Validation Survey. Whether they are wise to do so, given that the same sections of the population are also somewhat less likely to appear on the electoral register, may be open to question (Smith 1993). The discrepancy certainly suggests that despite the use of a wider range of sources than in 1992, the setting of quotas and target weights is still subject to some uncertainty about what the correct distribution of some social characteristics should be.

What about the adequacy of the quotas as a means of ensuring that samples are representative of the general population? In 1992 no company set quotas on anything other than age, sex, working status and social grade (MRS 1994, p. 32), of which only social grade is reasonably strongly correlated with vote. In truth there has been little change in that situation. MORI now quota on housing tenure rather than social grade, but both they and NOP otherwise still use just sex, age and working status. ICM now do quota on tenure as well as social grade but, as we shall see later, in their case and Gallup's the role of quotas in their methodology is very different now from what it was in 1992.

Tightening quotas however is not necessarily the only mechanism which need be used to circumscribe the choice interviewers have about whom they might interview. Another possible mechanism is to control more carefully where interviewers are allowed to interview. This might indeed be as effective as controlling more tightly the kinds of individuals that interviewers interview. For example, it is sometimes suggested that interviewers are reluctant to walk up long leafy lanes at the end of which might be found some of the most affluent people in our society and that they are able to fill their quota of AB interviewees without having to do so (Kellner 1996a). Moreover, it is well known that voting behaviour is associated not only with the characteristics of the individual respondent but also with those of the area in which they live (Butler & Stokes 1974; Heath, Jowell & Curtice 1985). Middle-class individuals, for example, are more likely to vote Conservative if they live among predominantly middle-class neighbours than working-class ones.

In 1992 all companies told their interviewers in which parliamentary constituency they had to conduct their interviews, but beyond that they often had discretion about where to interview. As parliamentary constituencies typically contain nearly 70,000 voters, that discretion was clearly considerable. In their 1997 election polls, however, MORI will be selecting a stratified (using MOSAIC type) sample of census enumeration districts (a far smaller unit) and requiring their interviewers to work in these. In theory this technique should help considerably in ensuring that interviewers interview in a representative sample of places. Its success in practice cannot as yet be evaluated as the technique has not been used in the companys regular non-election monthly polls for The Times.

Of course even if a poll does not necessarily succeed in contacting a wholly representative sample, the researcher may still hope to remove any bias by reweighting the sample after interviewing is finished. Here there has been a clear move within the industry towards widening the range of variables on which this is conducted, and particularly towards the inclusion of more variables known to be associated with vote. In 1992, the most extensive range of weights was applied by NOP, which included age, sex, social grade, working status, car ownership, and housing tenure. NOP themselves have not changed this list, but other companies have expanded theirs to include all or most of NOP's and sometimes more. Thus MORI's list is now similar to NOP's as a result of their dropping telephone ownership but adding car ownership. In addition they have added unemployment alongside full or part-time working status, have retained region (which is not used by NOP), and are considering adding self-employment. ICM, which did not weight by either tenure or car ownership in 1992, now do so on both and have also added whether or not the respondent has taken a foreign holiday in the last three years, a variable which they find to be more strongly correlated with vote than any other. Gallup have also added car ownership and housing tenure to their list and still also weight by region.
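The companies' precise weighting algorithms are not described here, but weighting a sample to several marginal targets at once is conventionally done by rim weighting (iterative proportional fitting). A generic sketch, with invented targets and respondents:

```python
# Generic rim-weighting (iterative proportional fitting) sketch:
# respondent weights are scaled against each marginal target in turn
# until the weighted sample matches all of them. Data are invented.
def rake(respondents, targets, n_iterations=20):
    weights = [1.0] * len(respondents)
    for _ in range(n_iterations):
        for var, target_shares in targets.items():
            # current weighted distribution of this variable
            totals = {}
            for person, w in zip(respondents, weights):
                totals[person[var]] = totals.get(person[var], 0.0) + w
            grand_total = sum(totals.values())
            # scale each respondent so this marginal hits its target
            for i, person in enumerate(respondents):
                weights[i] *= (target_shares[person[var]] * grand_total
                               / totals[person[var]])
    return weights

sample = [{"tenure": "owner", "car": True},
          {"tenure": "council", "car": False},
          {"tenure": "owner", "car": False},
          {"tenure": "council", "car": True}]
targets = {"tenure": {"owner": 0.8, "council": 0.2},
           "car": {True: 0.75, False: 0.25}}
print([round(w, 3) for w in rake(sample, targets)])
```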

But are these changes sufficient to ensure that quota sampling will avoid the pro-Labour bias that was apparent in 1992? Evidence that they may not be enough comes from the distribution of recalled 1992 vote recorded in quota polls. These have commonly found that Labour won the last general election quite comfortably! This can be seen in Table 1, which shows the recall vote recorded in quota polls conducted between March and May 1996.

TABLE 1: RECALL 1992 VOTE IN QUOTA POLLS

          Con   Lab   LibDem   Other
           %     %      %        %
Gallup    38    45     14        3
MORI      39    44     14        3
NOP       38    44     13        5

Source: Curtice & Sparrow (1997)

This would appear to be prima facie evidence of continuing pro-Labour bias. However, it is often argued that such results can be accounted for by faulty memory. In particular, respondents have a tendency to say that they voted in the past for the party which they support now (Himmelweit et al. 1978). So with Labour riding so high at the time in terms of current voting intentions, it is perhaps not so surprising that a plurality of respondents claimed to have voted Labour in 1992.

However, we are in the fortunate and unusual position of being able to assess the validity of this argument. The British Election Panel Study (BEPS) has been reinterviewing a group of about 2,000 respondents on a regular basis since 1992. These respondents were asked how they had voted immediately after the 1992 election and again in the spring of both 1994 and 1995. Similarly, the British Household Panel Study (BHPS) has also asked on two separate occasions a group of nearly 7,500 voters how they voted in 1992; the first occasion was in the Autumn of 1992, the second in the Autumn of 1995 (Kellner 1996b).

Both surveys indeed reveal some evidence of selective memory. But equally they also demonstrate that this selective memory is not sufficient to account for the discrepancy between the election result and the recall figures in Table 1. Despite the party's poor showing at the time in current voting intentions, 85% of those in the BEPS survey who said in 1992 that they voted Conservative in 1992 continued to give the same answer in 1995. Even amongst Liberal Democrat voters, usually the group most likely to misremember their previous behaviour, as many as 73% gave a consistent answer. Curtice & Sparrow (1997) estimate on the basis of the BEPS data that an accurate poll should still have been recording a two-point Conservative lead in recalled 1992 vote during the middle of the current parliament, while the BHPS sample points to almost exactly the same result, i.e. a Conservative lead of one point.

So there is still reason for uncertainty about whether the changes which have been made to quota sampling have wholly overcome the difficulties of the 1992 election. But perhaps there is another stratagem which could be used to overcome the remaining difficulties. If indeed the polls are interviewing too many people who voted Labour in 1992 then perhaps they should reweight their samples by recall 1992 vote so that it more closely matches the actual outcome in 1992. After all, how voters voted last time is a more powerful predictor of how they will vote next time than any of the social characteristics by which the pollsters regularly quota or weight their polls. Weighting by past vote would thus seem to offer an excellent mechanism for ensuring that their polls are representative of the political colour of the nation.

Indeed, doing precisely this was the second major innovation introduced by ICM to their polls after the 1994 European elections, an innovation which was also apparently crucial to their ability to produce figures close to the actual result in the poll they conducted immediately after the European election. It might be difficult to determine why there was apparently still a pro-Labour bias in the polls (it might be availability bias, inaccurate or inadequate quotas, differential refusal to participate in polls, interviewing in the wrong places, or any combination of these), but this technique apparently offered the prospect of eliminating the impact of any or all of them with simple calculations. Gallup quickly followed in ICM's footsteps in the Autumn of 1994 by producing adjusted figures which were simply the outcome after they had weighted their figures by past vote, believing that this step would help deal with the possible problem of differential item refusal as well. Unlike ICM, however, Gallup chose not to make these figures the ones that were headlined in the reports of their client, the Daily Telegraph.

But it will already be apparent to the reader that there is potentially a problem here too (Kellner 1996a). If the polls weight their figures by recall vote so that they match the distribution of support at the last election, might they not be in danger of underestimating Labour's strength? After all, both BEPS and BHPS still suggest that some voters have a faulty memory, and as a result when Labour has a large lead in current voting intentions some people will claim to have voted Labour in 1992 who did not actually do so. Thus downweighting Labour's recall vote so that it matches Labour's actual vote in 1992 is likely to overcompensate for any tendency for Labour supporters to be over-represented. Perhaps it might be better to weight a poll so that recall vote matches a figure which makes reasonable allowance for the likely impact of selective memory upon the distribution of recall vote?

This indeed is the practice adopted by NOP when they restarted regular polling in 1995. They have weighted their figures so that the distribution of recall vote makes the Conservatives and Labour even. But clearly the choice of target distribution is at least as uncertain as that for any of the social characteristics which the pollsters use. Indeed, although ICM decided in late 1996, in the wake of the evidence on selective memory, no longer to weight their recall vote to the actual outcome, they then opted to adopt a target of a five-point Conservative lead rather than no lead at all. There can even be some disagreement about what target might be implied by the same source of data; whereas Curtice & Sparrow (1997) suggest that the BHPS data point to a one-point Conservative lead, Kellner (1996b) suggests they indicate a one-point Labour lead. Moreover, the most recent date for which there is any estimate of the impact of selective memory on the distribution of recall vote is the Autumn of 1995. There can be no guarantee that this estimate will continue to remain valid in an election held 18 months later, especially if there were to be a significant Conservative recovery in current vote intentions.
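A sketch makes plain how much turns on the choice of target. The recall shares and both targets below are stylised rather than actual poll or election figures; weighting to the raw 1992 result corrects the recall distribution hard, while a memory-adjusted target (such as NOP's even Con/Lab split) corrects it more gently:

```python
# Sketch of past-vote weighting: each recall group is scaled so that
# the weighted distribution of recalled 1992 vote hits a chosen target.
# All shares here are stylised, not actual poll or election figures.
sample_recall = {"Con": 0.38, "Lab": 0.44, "LibDem": 0.13, "Other": 0.05}

def past_vote_weights(target):
    return {p: round(target[p] / sample_recall[p], 2) for p in sample_recall}

target_raw_1992 = {"Con": 0.43, "Lab": 0.35, "LibDem": 0.18, "Other": 0.04}
target_adjusted = {"Con": 0.40, "Lab": 0.40, "LibDem": 0.15, "Other": 0.05}

print(past_vote_weights(target_raw_1992))  # downweights Labour recallers hard
print(past_vote_weights(target_adjusted))  # gentler, memory-adjusted correction
```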

So despite its promise, weighting by recall vote still has its pitfalls. Perhaps then quota sampling cannot be fixed at all. That appears to have been the conclusion reached by no fewer than two companies, ICM and Gallup. Both will enter the 1997 election using some variant of random sampling rather than quota sampling. Equally dramatically, both have switched from face-to-face to telephone interviewing. Indeed the two changes are undoubtedly interlinked. It is the switch to telephone interviewing that has made the changeover to random sampling possible.

Until recently at least, telephone sampling has been widely believed to suffer from one significant problem. This was that it was apparently impossible to contact a politically representative sample of the population by telephone. The problem was not simply that those who did not have access to a telephone were more likely to vote Labour, but that they continued to be more likely to vote Labour even after one had controlled for a variety of other social characteristics (Husbands 1987; Miller 1987). Moreover previous attempts to overcome this problem by weighting by recall vote had proved unsuccessful (Clemens 1986).

However, the continued rise in telephone ownership means that this problem has become progressively less important. With only 7% of the population now not contactable by telephone, the impact on the overall estimate of party support of any particular pro-Labour sympathies amongst this group, above and beyond what is correctable by weighting, must inevitably be small. On the other hand telephone interviewing, and especially computer-assisted interviewing, has a number of potential advantages. Interviewer discretion about where and whom to interview can largely be eliminated because it is a computer which chooses at random which telephone number to ring. Samples no longer need to be geographically concentrated, and those people at the end of long drives are as accessible as those at the end of short ones. Equally, telephone interviewing also helps to counteract availability bias as it is possible to try the same telephone number on more than one occasion in order to attempt to make contact. Moreover, in contrast to face-to-face interviewing, this can be done a number of times within a short space of time, thus helping to make random sampling feasible within a relatively short fieldwork period.

Even so there are still a number of issues that have to be addressed in the use of telephone sampling. Perhaps the most obvious is determining who should be interviewed when the telephone is answered. We cannot of course assume that those who pick up the telephone are a random sample of the population. In fact both Gallup and ICM have retained some of the practices of quota sampling in order to determine who should be interviewed. True, Gallup's interviewers are instructed to interview the person whose birthday will be next, but only amongst those who are present at the time of the call. Moreover, each interviewer has a maximum quota of persons with a given set of characteristics who can be interviewed. ICM, however, found that attempts to implement the birthday rule resulted in an unacceptably high refusal rate. Their interviewers are simply given a quota of characteristics (age, sex, social grade, tenure and working status) and within that have freedom to choose whom to interview within a household. In truth quasi-random rather than random sampling might be a more accurate description of the two companies' methods.
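The 'next birthday' rule itself is simple enough to sketch. A toy illustration of within-household selection among those present (names and dates invented; Gallup's quota caps are not modelled here):

```python
from datetime import date

# Toy sketch of the 'next birthday' rule: among household members who
# are present, interview the one whose birthday comes soonest.
def days_until_birthday(month, day, today):
    nxt = date(today.year, month, day)
    if nxt < today:
        nxt = date(today.year + 1, month, day)
    return (nxt - today).days

def select_respondent(present, today):
    """present maps name -> (birth_month, birth_day) for those at home."""
    return min(present, key=lambda name: days_until_birthday(*present[name], today))

print(select_respondent({"Ann": (12, 3), "Bill": (6, 20)},
                        today=date(1997, 4, 1)))  # -> 'Bill'
```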

One clear implication of this is that while both companies are placing far less reliance on the adequacy of quotas, both still need to be able to set accurate quotas. Similarly, they still need accurate targets against which to assess the representativeness of their samples and set weights if necessary. Random telephone sampling clearly does not enable the pollster to escape from all the problems of quota sampling.

Equally, we also have to bear in mind that both companies undertake their surveys over only a few days, and may therefore still suffer to some degree from availability bias. Although both companies do attempt to ring each telephone number on more than one occasion (ICM five times; Gallup three) not only are they doing so within a limited period of time, but when they do ring not all members of the household will necessarily be present. Any pro-Labour availability bias may therefore be less than in the case of quota sampling, but not necessarily eliminated. Thus, consideration might still be given to the efficacy of weighting by past vote. Indeed, ICM have retained past vote weighting in their telephone polls; Gallup in contrast have dropped it.

In practice, however, the decision whether or not to weight by past vote appears to be less critical for these polls. For in contrast to the quota polls, these random polls report a recall 1992 vote much closer to the actual outcome anyway. For example, ICM polls conducted at the same time as the quota polls quoted in Table 1 reported a three-point Conservative lead, quite close to what would be expected from the evidence of BEPS and BHPS. The evidence that random telephone sampling can avoid the over-representation of Labour voters that still appears to be apparent in quota surveys is certainly encouraging.

Compared with its response to the problem of differential refusal, the polling industry has then been slower to respond to the problems which emerged from the 1992 election about the implementation of quota sampling. One company has indeed only changed its methods just weeks before the 1997 election has to take place. But in the event the response which has emerged has transformed the polling industry. All the companies have made at least one potentially important change to either their sampling techniques or to the procedures they use subsequently to overcome any remaining biases. And two have for the most part dispensed with quota sampling entirely. The industry which will enter the 1997 election is very different from that which endured the débâcle of 1992.

Conclusion: are the polls ready for 1997?

One thing at least is certain about the performance of the polls in the 1997 general election. If they should get it wrong again, then at least it will not be for the same reasons that they got it wrong in 1992. All the pollsters have made at least some changes in response to what went wrong in 1992. If there were any sense before 1992 that the industry had become complacent in the wake of what it believed was the relative success of its final campaign polls since 1974, that mood has certainly been dispelled.

But at the same time the pollsters have all made rather different changes. The industry has become methodologically pluralist, just as it was encouraged to do by the MRS report. So, if all the polls do get it wrong in 1997 as well, they are likely to do so for different reasons. True, despite their different methodologies they may all get it right. But, rather than providing evidence of the industrys collective failure or success, the performance of the polls in the 1997 election is more likely to prove a critical test of the relative merits of the different practices adopted by the various companies.

Indeed, the 1997 election will provide the first significant re-examination since at least 1983 of some of the oldest debates in the methodology of political opinion polling. It will enable us to assess the merits of telephone interviewing versus face-to-face interviewing, and also the relative advantages of quota and random sampling. The débâcle of 1992 is far from being the only reason why these debates are worth revisiting. The advent of random digit dialling and the spread of telephone ownership have at least raised the possibility that the previous disadvantages of random sampling and telephone interviewing are now capable of being overcome, or at least are now no more serious than the difficulties which apparently attend face-to-face quota polls. The new methodological pluralism of the polling industry which has followed in the wake of 1992 means that the performance of the polls in the 1997 election should provide what might in any event have been a timely reassessment of these issues.

The election will also provide a test of two other issues. First, if quota polls can match or outperform random polls, how far does it matter which methods are used? Do quota polls now only work if they are at least weighted to some estimate of past vote and an allowance made for differential refusal, as NOP would appear to believe, or do we simply need to set better quotas and pay close attention to where interviewers conduct their interviews, which seems to be the strategy that MORI have opted to adopt? Secondly, does weighting by past vote help or hinder? Here, as we have seen, we have both an adherent and a non-adherent within both the random telephone and quota face-to-face camps. As a result we have the prospect of being able to disentangle the impact of past-vote weighting from that of sampling and interviewing method.

So are the polls ready for 1997? If by that we mean an industry which is secure in the knowledge that it has all the solutions to what went wrong in 1992, then the answer must be no. But if we mean an industry which is well placed to use the 1997 election to learn how it might best place itself on a firmer footing in future, then the answer might just be yes.

Endnotes

1. This paper won the David Winton Award for the best technical paper at the Market Research Society Conference 1997.

Acknowledgments

I am grateful to Bob Wybrow and Rory Fitzgerald at Gallup, Nick Sparrow at ICM, Nick Moon at NOP, and Roger Mortimore at MORI for freely and obligingly providing information on their current methods. Responsibility for any errors and omissions lies with the author.

References

Butler, D. & Stokes, D. (1974). Political change in Britain. London: Macmillan.

Clemens, J. (1986). The telephone poll bogeyman: a case study in election paranoia. In: Crewe, I. & Harrop, M. (Eds.), Political communications: the General Election campaign of 1983. Cambridge: Cambridge University Press.

Crewe, I. (1992). A nation of liars? Opinion polls and the 1992 election. Parliamentary Affairs 45, 4: 475-95.

Curtice, J. (1996). What future for the opinion polls? The lessons of the MRS inquiry. In: Rallings, C., Farrell, D., Denver, D. & Broughton, D. (Eds.), British elections and parties yearbook 1995. London: Frank Cass.

Curtice, J. & Sparrow, N. (1997). Do quota polls tell the truth? CREST Working Paper No. 51. London and Oxford: ESRC Centre for Research into Elections and Social Trends.

Harrison, M. (1992). Politics on the air. In: Butler, D. & Kavanagh, D. The British General Election of 1992. London: Macmillan.

Harrop, M. & Scammell, M. (1992). A tabloid war. In: Butler, D. & Kavanagh, D. The British General Election of 1992. London: Macmillan.

Heath, A., Jowell, R. & Curtice, J. (1985). How Britain votes. Oxford: Pergamon.

Himmelweit, H., Biberian, M. & Stockdale, J. (1978). Memory for past vote: implications of a study in recall. British Journal of Political Science 8, 4: 365-84.

Husbands, C. (1987). The telephone study of voting intentions in the June 1987 General Election. Journal of the Market Research Society 29, 4: 405-11.

ICM (1992). Results of tests to improve voting intention polls. London: ICM.

Jowell, R., Hedges, B., Lynn, P., Farrant, G. & Heath, A. (1993). The 1992 British election: the failure of the polls. Public Opinion Quarterly 57, 2: 238-63.

Kellner, P. (1996a). Spiral of truth. British Journalism Review 

Kellner, P. (1996b). Why the polls still get it wrong. The Observer, 15 September.

Market Research Society (1994). The opinion polls and the 1992 General Election. London: The Market Research Society.

Miller, W. (1987). The British voter and the telephone in the 1983 election. Journal of the Market Research Society 29, 1: 67-82.

Moon, N. (1993). Why did the polls get it wrong in 1992? "I don't know" and "I won't tell you". London: NOP Research Paper.

Noelle-Neumann, E. (1986). The spiral of silence. Chicago: University of Chicago Press.

O'Brien, S. & Ford, R. (1988). Can we at last say goodbye to social class? An examination of the usefulness and stability of some alternative methods of measurement. Journal of the Market Research Society 37, 4: 357-83.

Semetko, H., Scammell, M. & Nossiter, T. (1994). The media's coverage of the campaign. In: Heath, A., Jowell, R. & Curtice, J. with Taylor, B. (Eds.), Labour's last chance: the 1992 election and beyond. Aldershot: Dartmouth.

Smith, S. (1993). Electoral registration in 1991. London: HMSO.

Sparrow, N. & Turner, N. (1995). Messages from the spiral of silence: developing more accurate market information in a more uncertain political climate. Journal of the Market Research Society 37, 4: 357-83.

Worcester, R. (1991). British public opinion: a guide to the history and methodology of political opinion polling. Oxford: Basil Blackwell.

Worcester, R. (1996). Political opinion polling: 95% expertise and 5% luck. Journal of the Royal Statistical Society Series A 159, 1: 5-20.