Rethinking data analysis – part two: some alternatives to frequentist approaches

Ray Kent
University of Stirling


The goal of any science is to make predictions, but these can be of various kinds. A forecast, which is usually short term, is a prediction of events in a system that is not amenable to manipulation or intervention, as in a weather forecast. A prediction might be of the outcome, or likely outcome, of a deliberate act or intervention, such as the effect on sales of a manufacturer's decision to raise prices. The prediction might also be hypothetical – for example, of the 'if X happens then Y will result' variety.

In the frequentist tradition, prediction is of two kinds. One is to construct a model based on observed patterns of relationships between variables and then use the model to make a statistical prediction of how much one or more variables assigned the status of dependent variable will change as a result of changes in one or more variables assigned the status of independent variable. The other kind of prediction involves statistical inference. Frequentist inferential techniques assume that a random sample of observations has been taken and use probability in a distributional sense – for example, using a theoretical sampling distribution to calculate the probability of obtaining a result in a random sample from a population in which the null hypothesis is true. The null hypothesis is given, and it is the data that are evaluated by asking, 'How probable are these data, assuming that the null hypothesis is true?' So, while frequentists talk about testing a hypothesis, the mathematics they use actually evaluates the probability of the data.
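The frequentist logic just described – evaluating the probability of the data on the assumption that the null hypothesis is true – can be illustrated with a minimal sketch. The coin-tossing scenario and the function name below are illustrative choices of ours, not taken from the article: the null hypothesis is that a coin is fair, and the p-value is the probability, under that null, of observing a result at least as extreme as the one obtained.

```python
from math import comb

def one_sided_binomial_p(successes, n, p0=0.5):
    """Probability, under the null hypothesis that the true success
    rate is p0, of observing at least `successes` successes in n
    independent trials (an exact one-sided binomial p-value)."""
    total = 0.0
    for k in range(successes, n + 1):
        # Binomial probability of exactly k successes under the null
        total += comb(n, k) * (p0 ** k) * ((1 - p0) ** (n - k))
    return total

# Example: 60 heads in 100 tosses of a coin assumed fair under the null.
# The p-value answers 'how probable are data at least this extreme,
# assuming the null hypothesis is true?' – it says nothing directly
# about the probability that the null hypothesis itself is true.
p_value = one_sided_binomial_p(60, 100)
```

Note that the calculation takes the null hypothesis as given and evaluates the data against it, which is precisely the point made above: the mathematics assesses the probability of the data, not of the hypothesis.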