Viewpoint – Survey research: two types of knowledge

I shall argue here that, in the UK, there is a major divide in the kinds of knowledge held by survey experts in research agencies and in academia, and that this works to the detriment of survey research. As befits a Viewpoint article, I shall perhaps portray this divide over-starkly, but I think it is right to do this – there is a serious point to be made.

To my mind, those of us working in agencies who claim survey expertise are strong on practice and weak on theory, while academic survey experts show exactly the opposite qualities. To borrow Gilbert Ryle’s terminology, agency practitioners are strong on knowing how while academics are strong on knowing that.[1] In the agencies we know how to write questionnaires, design samples, collect data, and report results efficiently and quickly. But we are often hazy in knowing that the accuracy of our results should be assessed in such and such a way according to advanced statistical theory. Furthermore, most of us know very little about the latest theories and findings on how questions are (mis-)answered – those concerning response order and question order effects, for example. On the other hand, survey specialist[2] academics have vices and virtues that are the mirror images of ours. They know the theory and the published findings, they have a rigorous framework for assessing survey error,[3] and they can point to many weaknesses in the surveys we run. But – in my experience at least – many academic survey experts would be hard put to write a usable 45-minute interview questionnaire in two days flat, let alone swiftly set up and implement a survey that delivers acceptable results to a reasonable timescale. And with this practical exiguity sometimes comes a raft of unrealistic expectations about the sorts of data a survey can reasonably be expected to collect.