A client recently briefed us on a project. As she outlined the questions she wanted answered, a feeling of déjà vu crept over us. We'd worked on this brand for some years. And a quick look through the files showed all her questions had been answered by recent research.

Within minutes of being briefed, we were able to get back to her with definitive answers. But far from being delighted by our speedy response, she was horrified. Her job was to commission research. She had an annual target number of projects to complete. So now that we had the answers to her questions, she would have to find a whole new set to ask.

Clients and agency people talk about ‘accountability’ and ‘effectiveness’, but, in reality, both sides behave in ways which make accountability difficult and effectiveness less likely. Let's start with the briefing process. Common sense, and now empirical evidence, tells us that the clearer the objectives of a campaign, the more likely it is to be successful. Setting hard targets for behavioural change and business results is particularly important.

Yet many briefs are still vague about what communications are expected to achieve. Objectives, when set at all, are usually based on intermediate measures (awareness levels or response rates) rather than on business measures like sales or profit. Precise, timed targets are rarer still.

And there is a similar lack of rigour when it comes to evaluation. Performance metrics are chosen for ease of measurement, not for their importance to long-term brand health.

So advertising continues to be assessed in terms of ad awareness, simply because it is an easy metric, even though its link with sales and profit is often tenuous. Direct response rates are popular barometers of effectiveness, even though they capture only very short-term effects, don't necessarily represent incremental business, and are driven by factors besides marketing. Online activity is assessed by online responses, which are easy to measure, even though the evidence shows that, for most brands, the real payback takes place offline.

Direct response activity becomes favoured over brand-building, because direct responses are easy to count. Rational communication is preferred to more emotional approaches, because it's easier to ask someone if they remember a proposition than to measure the warmth they feel towards your brand.

Poorly thought-out incentives can be dangerous. We had one client whose annual bonus was based entirely on one advertising pre-test score! Not surprisingly, his focus was on this score, regardless of wider business effects - an example of what Charles Channon called ‘organisational validity’: evaluation according to what is good enough internally to justify your decision. Even when good evaluation systems are in place, there is a tendency to use them to confirm existing beliefs. So billions continue to be poured into trying to grow brands by increasing loyalty, even though Ehrenberg showed this to be more or less impossible.

But the biggest problem is probably the failure to learn. Our research client was a particularly bad example - she had all the answers she needed, but wasn't interested in them. But most companies have a problem in this area. Data gets lost, research reports are forgotten, and personnel come and go with astonishing rapidity. It's hardly surprising that firms seem incapable of learning from their mistakes.

We read an interesting article discussing the rise of data analytics to guide the transfer fees of professional footballers - led by managers like Arsenal's Arsene Wenger. Football had long used measures such as the number of tackles per match to assess players - because this was the readily available data. But by the mid-2000s, it became clear that these numbers had no correlation with match outcomes. Subsequent analysis discovered the numbers that did differentiate players and correlate with match results. So clubs changed their evaluation approach to measure and value new things like players' ‘high-intensity output’ (in lay terms, their ability to make repeated sprints).
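To make the analogy concrete, the analysis the clubs ran amounts to a simple correlation check. The sketch below (written in Python, with invented data and invented effect sizes, since we have no access to the clubs' actual figures or methods) shows how one might test whether an easy-to-count metric like tackles per match actually tracks match outcomes, compared with a metric like high-intensity sprints:

# Hypothetical sketch: test whether an 'easy' metric actually predicts results.
# All figures below are invented for illustration; a real analysis would use
# per-match records across many players, clubs and seasons.
import numpy as np

rng = np.random.default_rng(0)
n_matches = 200

# Invented per-match figures: tackles (easy to count, but assumed here to have
# no real link to outcomes) and high-intensity sprints (assumed to have one).
tackles = rng.poisson(lam=20, size=n_matches)
sprints = rng.normal(loc=50, scale=10, size=n_matches)

# Toy outcome score: in this invented model it depends on sprints plus noise,
# and not on tackles, mirroring the pattern the article describes.
outcome = 0.1 * sprints + rng.normal(scale=1.0, size=n_matches)

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    return np.corrcoef(x, y)[0, 1]

print(f"tackles vs outcome: r = {pearson_r(tackles, outcome):+.2f}")  # near zero
print(f"sprints vs outcome: r = {pearson_r(sprints, outcome):+.2f}")  # clearly positive

The point is procedural rather than footballing: correlate a candidate metric against the outcomes you actually care about before letting it drive decisions, instead of defaulting to whatever is easiest to count.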

Oh that our industry were similarly open to new learning when it comes to evaluation and effectiveness.

This article originally appeared in the September 2011 issue of Admap.