Relying on the numbers just doesn't add up

Rory Sutherland

Decision-making systems based on computer models and statistical inference can be just as prone to error as any that preceded them.

During the O.J. Simpson trial, the prosecution made much of the fact that Simpson had a record of violence towards his wife. In response, Simpson's legal team argued that, of all women subjected to spousal abuse, only one in 2,500 was subsequently killed by the abusive husband. It was hence implied that, since the ratio of abusers to killers was so high, any evidence about the accused's prior violent behaviour was insignificant.

This sounds plausible at first. However, there is another way of considering the statistics. According to the German academic and author Gerd Gigerenzer, the task is not to predict whether a husband with a record of violence will go on to murder his wife: Simpson's wife inarguably had been murdered. Instead, we should ask the question backwards: given that a battered wife has been murdered, what are the odds that the husband was responsible for the killing? Gigerenzer calculates that "the chances that a batterer actually murdered his partner, given that she has been first abused and then killed, is about 8 in 9 or approximately 90%".
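The reversal can be made concrete with natural frequencies, in the style Gigerenzer favours. The 1-in-2,500 figure comes from the article; the second number, that roughly 5 in 100,000 women are murdered in a given year by someone other than their partner, is an assumption supplied here to match the order of magnitude behind Gigerenzer's 8-in-9 result, not a figure stated in the article.

```python
# Natural-frequency sketch of Gigerenzer's reversed question.
# From the article: of battered women, 1 in 2,500 is later
# murdered by the abusive partner.
# Assumed for illustration: about 5 in 100,000 women are murdered
# by someone other than their partner in a given year.

population = 100_000                       # imagine 100,000 battered women
killed_by_partner = population // 2_500    # 40 of them
killed_by_other = 5                        # assumed base rate

# Among battered women who were murdered, the share killed by the partner:
p_partner_given_murdered = killed_by_partner / (killed_by_partner + killed_by_other)

print(f"murdered by partner: {killed_by_partner} of "
      f"{killed_by_partner + killed_by_other} murdered battered women")
print(f"P(partner | battered and murdered) = {p_partner_given_murdered:.2f}")
```

Out of 45 murdered battered women, 40 were killed by the partner: 40/45, or about 8 in 9, the figure Gigerenzer quotes.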