Recently we've been helping some of our clients assess their latest ad campaign. It's a great little campaign, which seems to have helped boost sales and market share, but evaluation is complicated by the sheer number of media used. The bulk of the budget was spent on traditional media, particularly TV and outdoor, but the remainder went on a mix of digital channels, mobile messaging and PR stunts. Working out the contribution of each is a challenge.

At the first meeting, our clients presented a detailed review of each strand. And something immediately struck us as odd. Traditional media, which accounted for almost three-quarters of the budget, were dismissed in about 15 minutes. Then nearly two hours was devoted to the smaller, newer media. In fact, it almost seemed that the less money was spent on a channel, the more attention it got.

One reason was that there was simply more data on the newer, digital channels. Slide after slide was presented, crowded with figures on the number of views, clicks, likes, shares, tweets, followers, comments, and uploads. Dwell times and conversion metrics were analysed in exquisite detail. But for TV, only one number was presented: the cost. This is a clear example of the data tail wagging the evaluation dog. Rather than focusing on what was important (i.e. the media where most money was at stake), we found ourselves focusing on what was easy to measure.

The other problem with this baffling array of metrics was that it was impossible to compare like with like. How does one compare tweets with likes? And how do those metrics relate to click-through rates? In the absence of any common currency, there was no way to assess relative performance, other than through the sheer volume of data. So it would have been very easy to come away from that meeting with the impression that digital and mobile channels were the main sales drivers, and that traditional media were just a sunk cost. However, some weeks later we saw another analysis which gave a rather different picture.

Rather than getting lost in the many different metrics available for each channel, the new analysis simplified things. After quite a lot of digging, the analysts had managed to work out the total impressions achieved by each element of the campaign, and these were then combined with the spend figures to calculate a cost per impression. This made comparison immensely simpler, although paradoxically it required a lot more work, as these metrics were not routinely reported in a comparable way.
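To make the arithmetic concrete, here's a minimal sketch of that common-currency calculation. The channel names and the spend and impression figures below are invented purely for illustration; they are not our clients' data.

```python
# A sketch of the "common currency" comparison: hypothetical spend
# and impression figures, not real campaign data.

campaign = {
    # channel: (spend in GBP, total impressions)
    "TV": (1_500_000, 60_000_000),
    "Outdoor posters": (600_000, 45_000_000),
    "Facebook": (150_000, 12_000_000),
    "YouTube": (100_000, 2_000_000),
    "Geo-targeted mobile": (80_000, 300_000),
}

# Cost per thousand impressions (CPM) puts every channel on one scale.
cpm = {
    channel: spend / impressions * 1000
    for channel, (spend, impressions) in campaign.items()
}

# Rank channels from cheapest to most expensive per impression.
for channel, cost in sorted(cpm.items(), key=lambda kv: kv[1]):
    print(f"{channel:<22} £{cost:,.2f} per 1,000 impressions")
```

Once everything is expressed in the same unit, ranking the channels is trivial; as the analysts found, the hard part is assembling comparable impression figures in the first place.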

The results of this comparative analysis were a bit of an eye-opener. Suddenly the less shiny traditional poster elements of the campaign were revealed as delivering by far the greatest number of impressions, and in a very cost-effective way. Meanwhile, some of the more innovative elements of the campaign – such as geo-targeted mobile comms – were revealed as delivering tiny levels of reach at eye-watering cost.

Of course, exposure analysis is no substitute for full econometric modelling, but it is a good place to start, and it can be very revealing. Did you know that YouTube can be twice as expensive as TV? Did you know that advertising on London bus sides for just one week can deliver more impressions than a three-month Twitter campaign? Did you know that mobile messaging is 120 times more expensive than Facebook advertising? Our clients didn't, and neither did we.

The clear lesson here, of course, is that unless you take the time to compare apples with apples and pears with pears, true comparative analysis is impossible, and dangerously subjective judgements creep in. So let's not allow our desire to be doing groovy new stuff to blur the true picture of the media fruit bowl.