Les Binet and Sarah Carter get a little bit angry about some of the nonsense they hear around them… like 'attribution fraud'.

Years ago, when the internet was young, a digital strategist held forth at a party. "The thing about digital marketing," he said, "is that we know everything about customers and precisely how they respond to our ads. Online, everything is measurable."

Back then, evaluating online advertising seemed simple: you counted click-through rates. The higher the rate, the more effective the ad. This is a simple and obvious way of measuring effectiveness – one with a long pedigree. It is essentially the same way direct response advertising has been evaluated since the early 1900s. And for all this time, the method has been deeply flawed.

First, it overstates advertising's short-term, direct effects. Some people who clicked on your banner ad would have come to your website anyway, by another route. Last-click attribution is a bit like a retailer attributing sales to each of his shop doors. Second, the method ignores longer-term, indirect effects. Some people who didn't click on your banner ad will have remembered it, and it may have influenced their purchases later on.

As click-through rates have plummeted, and Google has hoovered up much of the last-click business, online advertisers now pay more attention to indirect effects. Last-click attribution has fallen out of favour, replaced with analysis of the whole online journey.

But this analysis is still fairly crude. Instead of attributing each sale to the last click, the credit is distributed across the various 'touchpoints' along the way. It's as if our retailer now admits his sales are driven not just by the doors to his shop but also by the various Tube routes leading to those doors.
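
To make the contrast concrete, here is a toy sketch in Python (the journeys and channel names are invented, and the 'linear' rule shown is just one of several weighting schemes in use): last-click hands all the credit to the final touchpoint, while multi-touch spreads it along the journey.

```python
# Toy sketch of the two attribution rules discussed above.
# Journeys and channel names are hypothetical.
from collections import defaultdict

# Each journey is the ordered list of touchpoints preceding one sale.
journeys = [
    ["tv", "search", "banner", "search"],
    ["banner", "email", "search"],
    ["tv", "banner"],
]

def last_click(journeys):
    """Credit each sale entirely to the final touchpoint."""
    credit = defaultdict(float)
    for journey in journeys:
        credit[journey[-1]] += 1.0
    return dict(credit)

def linear_multi_touch(journeys):
    """Spread each sale's credit evenly across all its touchpoints."""
    credit = defaultdict(float)
    for journey in journeys:
        share = 1.0 / len(journey)
        for touchpoint in journey:
            credit[touchpoint] += share
    return dict(credit)

print(last_click(journeys))          # only 'search' and 'banner' get credit
print(linear_multi_touch(journeys))  # 'tv' and 'email' now get a share too
```

Note that TV, which never gets the last click, earns credit only under the second rule – yet both rules still see nothing beyond the tracked journey.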

This is clearly still nonsense. When we buy something online, it's the result of many encounters with category and brand, often over several years, even if we're not fully aware of it. The 'customer journey' measurable with short-term cookie data is just the last part of that process.

In fact, online behaviour is largely driven by offline influences – hardly surprising, as most of our lives are spent offline. Conversely, a lot of online marketing works by making people buy things offline. Again, not surprising when you realise most retail sales (88% in the UK, 93% in the US) still take place in bricks-and-mortar shops. So any attribution method failing to integrate online and offline data is inevitably flawed.

The old dream that 'everything is measurable online' rings rather hollow today. We now realise it's not always clear how many people actually see the ads we run, or how many of them are even human, let alone in the target audience. And this scepticism extends to evaluation too. Recently, the CEO of Quantcast said it's time to expose overly simplistic attribution practices, calling them 'attribution fraud'.

So if online metrics aren't enough on their own – and they're not, because of the interactions between the offline and online worlds – how should we proceed? Some favour test-and-control: take two randomly sampled groups of people, but expose only one of them to your online ad. This is essentially the logic behind the traditional regional ad test, although digital technology allows some more sophisticated ways of doing it. Google favours this approach.
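
A minimal sketch of that logic, with invented numbers (the 4% baseline conversion rate, lifted to 5% by the ad, is an assumption for illustration): because assignment is random, the difference between the two groups isolates the ad's incremental effect.

```python
# Minimal test-and-control sketch; all figures are made up.
import random

random.seed(1)
population = range(20_000)

# Randomly assign half the audience to see the ad (test), half not (control).
test = set(random.sample(population, 10_000))

def buys(exposed):
    # Hypothetical behaviour: 4% baseline conversion, 5% if exposed to the ad.
    return random.random() < (0.05 if exposed else 0.04)

test_rate = sum(buys(True) for p in population if p in test) / 10_000
control_rate = sum(buys(False) for p in population if p not in test) / 10_000

# The gap is the ad's incremental effect, free of the
# "would have bought anyway" bias that plagues attribution.
print(f"test {test_rate:.2%}  control {control_rate:.2%}  lift {test_rate - control_rate:.2%}")
```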

The alternative is to build statistical models which combine online and offline data. These are often very similar to the econometric models that offline advertisers have been using for years. And indeed, they often show that offline advertising is a major influence on online behaviour.
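
For a flavour of what such a model looks like, here is a bare-bones sketch in Python – the weekly figures are invented, and a real econometric model would add seasonality, price, distribution, adstock and much more:

```python
# Bare-bones sketch: regress sales on offline (TV) and online (search) activity.
# All figures are invented weekly observations.
import numpy as np

tv     = np.array([ 0, 50, 80,  0, 60, 90, 20,  0, 70, 40], dtype=float)
search = np.array([10, 20, 35, 15, 30, 40, 18, 12, 33, 25], dtype=float)
sales  = np.array([100, 180, 240, 110, 200, 260, 135, 105, 225, 170], dtype=float)

# Ordinary least squares: sales ~ b0 + b1*tv + b2*search
X = np.column_stack([np.ones_like(tv), tv, search])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(f"base sales {coef[0]:.1f}, TV effect {coef[1]:.2f}, search effect {coef[2]:.2f}")

# In practice, TV also drives searches, so part of the 'search' coefficient
# is really an indirect TV effect – one reason such models so often find
# offline advertising behind online behaviour.
```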

But these techniques are not easy to automate. They require skilled researchers with deeper statistical knowledge than most data analysts possess. So, as online marketing matures, it finds itself paradoxically reinventing the market research industry it sought to make redundant.