I’ve recently been reading an outstanding book, ‘Thinking, Fast and Slow’ by Daniel Kahneman. The book is about ‘human rationality and irrationality’ and how the brain can often jump to the wrong conclusions. One reason for this, says Kahneman, is ‘because people are prone to apply causal thinking inappropriately, to situations that require statistical reasoning’. I find this nowhere more apparent than in the world of digital media, and particularly in the subject of attribution. The industry is forever trying to credit the media, the creative, the website and so on for the success or failure of a campaign.

The ability to credit a campaign with success is fundamental to a marketing agency’s business model. Without it, agencies would not be able to prove their successes to clients and justify their fees. Clients in turn would not be able to make business cases to their seniors for bigger budgets (and bonuses). Accreditation is one of the most important factors in Google’s success and Facebook’s failure: Google’s advertising model allows marketers to apply credit to media spend in a way that no other publisher has achieved. Accreditation has also created an environment where anything that is difficult to measure or attribute value to comes under much greater scrutiny from the industry.
The true causal effect of an online sale, for example, depends on a great many factors, ranging from media effectiveness, to time of day, to the availability and pricing of the product, to the mood of the customer and so on. I was recently asked by a colleague to explain exactly how effective a particular creative asset was in driving sales in an online campaign. After analysing a specific data set, I came to the conclusion that isolating ‘creativity’ as a causal factor of the campaign was virtually impossible: there were so many other influences on performance to consider (see Fig A), many beyond one’s control.
Causation is very difficult to prove in the creative world because creativity can rarely be isolated as a factor, at least statistically. This is why the industry still relies so heavily on panel research to understand creative effectiveness.
If creativity is the most subjective factor to evaluate against performance, then search, arguably the most measurable marketing channel we have, should surely be the online activity where performance can most safely be assumed causal rather than merely correlated. Unfortunately not. For many brands, over 50% of attributed sales come from branded keywords. Branded searches, and therefore branded-keyword sales, are driven by a multitude of factors, including above-the-line marketing activity, existing brand awareness, word of mouth, navigational searches, media spend and so on. This means that for many brands, over 50% of their search budget isn’t truly measurable.
The digital media methodology in which causality is a preoccupation is attribution. This ‘philosophy’ presupposes that correlation is causation: that because we can see large numbers of ‘views’ preceding ‘clicks’ (or ‘clicks’ preceding ‘clicks’), and because these two data sets correlate, display (or another channel) must somehow have caused or driven search volume or improved traffic to a website. I have previously questioned this assumption, and in many of the cases where this scenario played out, I discovered the real reason search volumes increased alongside display was that the brand had been running more traditional media activity at the same time. In fact, when evaluating what caused an uplift in campaign performance, I rarely see a greater correlation than that between media spend and traffic volume. The trouble, of course, is that increased spend is often the result of a new campaign, a new creative execution, a new product or a key seasonal period. These multiple associated factors will always confound causal explanations of successful performance.
When it comes to attribution and display, it is of course statistically very likely that if you are serving millions of impressions you will see a correlation between search and display; often more likely than display being the actual ‘cause’ of the uplift. In every display campaign I run, I recommend a ‘true lift’ study, testing against a held-out control group as standard. This is one way of getting a more accurate view of performance.
The fact is that we cannot completely isolate the online world from its big brother. Macro factors (see Fig A) are often so dominant as to render the evaluation of online media performance redundant. This is not to undermine the role of digital media, but to appreciate its place in our ecosystem, often as a great facilitator of, or complement to, other activity; what I have often called the greatest marketing ‘condiment’ that has ever existed. I have also wanted to illustrate that questioning and interrogating data is fundamental to what we do, and that assuming a causal relationship to performance, whilst tempting, can often be misguided.