I've just come out of a dispiriting advertising research briefing, and I wish I could time-travel back to 1974. The briefing was dispiriting because we're in danger of testing an early-stage, pre-production TV advertising execution with qualitative research, when we should be using the research to understand how the execution works in the hearts and minds of our target audience, and to provide objective feedback on its strengths and weaknesses. The research should be an aid to decision making, not a judge and jury handing down a 'Go/No Go' verdict. It's this idea of 'testing' that is so dangerous. And I've said all this.

While my colleagues and clients were all vigorously nodding their assent to the points I was making, I could tell (it's something in their eyes) that it was one of those 'yes, but' moments. Really, they're hoping that this research project will make the difficult decision about whether to proceed with this execution for them, and not with them.

The reason I'd like to escape to 1974 is that, back then, a British market researcher called Alan Hedges wrote a terrific short book taking a critical look at the uses of research in advertising. He called it Testing to Destruction. Writing with the sort of confidence, clarity and common sense that only comes from thinking profoundly about a subject, Hedges exposes a tendency among marketing, advertising and research professionals to use research inappropriately to test, and mostly to destroy, early advertising execution. His central thesis is that it is simply not possible to test the effectiveness of a pre-production TV commercial in a laboratory situation like a focus group. "We are not testing the advertising," he says on page 55, "since we do not have, and cannot have, such a machine." Instead, he argues very persuasively that the most important contribution research can make to what he memorably calls the 'selling effectiveness' of advertising comes at the planning stage, before anyone has even begun to think about particular advertising ideas.

Forty-two years later, many things have moved on, even improved, but not the way that early advertising ideas are put through the grinder. It seems to me that, too often, the perspective we adopt and the approaches we take to testing advertising execution are fundamentally the same as they were back then, and so still fundamentally flawed. The issues that Alan Hedges worried about haven't been addressed. At least not in my own dispiriting experience.

Transported back to 1974, I would make it my mission to track down Mr Hedges and break the time traveller's code of conduct, never to interfere in the past, by meddling in it. I know I could change the future of advertising research for the better, and make the future me less dispirited, as a result. I'd do this by making one simple suggestion: change the title. I'd point out to the author that by placing 'testing' front and centre in his title, he's making it more difficult to "strike the word 'testing' from our advertising research vocabulary, because it gives a quite misleading impression of the proper aims and possible achievements of the operation" (page 12). I think this is more than just semantics. The offending word puts too much focus on the process and not enough on the flawed thinking and wrong-headed motivation behind it.

I'd also tell him that I've worked with plenty of people over the decades (client-side and agency-side) for whom testing to destruction is a positive approach, because to them it means being uncompromisingly rigorous. In fact, this perspective on testing may not sound like a bad thing at all, especially to the ultimate decision-maker responsible and accountable for committing a significant budget to a rough sketch of an idea at an early stage.

My alternative title would highlight what I call 'premature evaluation': how doing the right thing (the evaluation) in the wrong way, and at the wrong time, ultimately leads to disappointment and frustration. It's a condition that can afflict anyone in marketing, advertising or research, at any time in their career. It can creep up on you under the guise of rigorous testing and leaving no stone unturned. I've even seen it tarnish otherwise strong client-agency relationships as one early idea after another is rejected, prematurely, in the evaluation.

So my new title would be The problem of premature evaluation, to pathologise what I believe should be regarded as abnormal research practice. I'd say this stands more than a fighting chance of helping Alan Hedges' little book become better known, more widely read and much more likely to be acted on. After all, who wants to be seen as a premature evaluator, in the future or at any time?