In the previous article, I covered the need for an accurate attribution model as THE tool for getting a reading on your return on ad spend. I’ve alluded to how bias can be an accuracy-killer when it comes to measuring performance. Let’s dig deeper into some of those challenges, starting with bias.
Let’s start with the one we’re most familiar with: Confirmation bias. That is, our tendency to select the truths that support our existing beliefs.
“Ha, I told you the new ad copy would result in more sales, and there it is!”
“I’ve been using Facebook Ads for years to get the best conversion rates, so we should run a campaign with Facebook.”
Confirmation bias is hard-wired into our brain. It’s part of being human. However, as with anything, it has the potential to cloud marketing decisions. Keeping confirmation bias top of mind helps us think critically and market more effectively. Confirmation bias often affects how we interpret KPIs and metrics, and even which metrics we choose to pay attention to or ignore when making decisions.
Correlation bias is another big one (https://en.wikipedia.org/wiki/Illusory_correlation). It’s the tendency to perceive a relationship between an event and an outcome even where none exists. A form of correlation bias is built into attribution models like Last Click: “It’s the last ad they clicked on before buying, so it must have influenced the purchase decision.” Correlation bias blurs the line between correlation and causation and can greatly distort marketing choices if left unchecked.
Platform bias is the inherent tendency of an ad platform to take credit (full or partial) for a sale because it was involved in the customer journey. It’s a type of correlation bias built into ad platforms.
Take, for example, Facebook view-through conversions. These are journeys in which a customer saw an ad and then, without clicking on it, went on to purchase. Facebook will attribute any purchase the customer makes within a day of an ad view, and within 28 days of an ad click. Regardless of what else happens between the ad exposure and the purchase, Facebook takes all the credit. But Facebook is not alone. Every ad platform you employ, from Google Display and Search Ads to Facebook, Outbrain, Taboola, etc., wants to take credit for a sale so it can inflate its reported RoAS, in turn driving you to allocate more of your ad budget to it. This is hardly a secret among digital marketers, but it’s worth discussing in the context of measuring RoAS. There’s no doubt that ad platforms like Facebook and Google provide valuable analytics that help inform marketing decisions; just remember their interests aren’t exactly aligned with yours.
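To make the window logic concrete, here’s a simplified sketch of how a platform decides whether to claim credit for a purchase. The 1-day view and 28-day click windows match the Facebook behaviour described above; the function name and dates are illustrative, not any platform’s actual API.

```python
from datetime import datetime, timedelta

# Simplified attribution windows: 1 day for an ad view, 28 days for a click.
VIEW_WINDOW = timedelta(days=1)
CLICK_WINDOW = timedelta(days=28)

def platform_claims_credit(touch_type, touch_time, purchase_time):
    """Return True if the platform would attribute the purchase to its ad."""
    elapsed = purchase_time - touch_time
    if touch_type == "view":
        return timedelta(0) <= elapsed <= VIEW_WINDOW
    if touch_type == "click":
        return timedelta(0) <= elapsed <= CLICK_WINDOW
    return False

# A purchase 3 days after merely viewing an ad falls outside the view window...
print(platform_claims_credit("view", datetime(2023, 5, 1), datetime(2023, 5, 4)))   # False
# ...but the same 3-day gap after a click is still claimed by the platform.
print(platform_claims_credit("click", datetime(2023, 5, 1), datetime(2023, 5, 4)))  # True
```

Note the asymmetry: a single click buys the platform a 28-day claim on every subsequent purchase, no matter what other channels touched the customer in between.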
Platform bias is the key contributing factor to double-counting, which occurs when two or more platforms take credit for the same sale. If a journey to purchase involved both a paid search ad and a Facebook ad, both AdWords and Facebook record the conversion. This becomes instantly visible when you sum up sales across all ad platforms: the total is always much higher than the actual sales recorded (in your e-commerce platform, for example) for the same period. The double-counting problem is exacerbated with each additional ad platform that participates in the customer journey. Double-counting manifests in a similar way when multiple agencies are in the mix: each agency manages its own budget, the left hand doesn’t know what the right is doing, and the conversions reported by each agency rarely add up to the real sales figures.
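The quick sanity check described above is simple arithmetic. Here’s a sketch using made-up numbers; plug in your own platform dashboards and e-commerce totals:

```python
# Hypothetical figures: conversions each platform claims credit for in one period.
platform_reported = {"Google Ads": 120, "Facebook": 95, "Outbrain": 30}
actual_sales = 180  # conversions actually recorded in your e-commerce platform

claimed = sum(platform_reported.values())   # 245 claimed conversions
overcount = claimed - actual_sales          # 65 conversions counted more than once
overcount_rate = overcount / actual_sales   # ~36% inflation over reality

print(f"Platforms claim {claimed} conversions vs {actual_sales} actual "
      f"({overcount_rate:.0%} over-reported)")
```

If the gap between claimed and actual conversions grows as you add platforms, that’s double-counting at work, not a sudden surge in performance.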
Finally, we come to the technical issue at the core of the attribution challenge. Measuring performance of our marketing is nearly impossible when data is siloed and fragmented. Without a single source of truth about the customer journey, we cannot accurately measure the performance of a multi-channel strategy. Traffic analytics, sales data, and campaign data tend to live in different systems. This split is further segmented by ad platforms and agencies. Only by unifying data can we understand which ad content touched a customer, how much we paid for those ads, and to what extent they influenced subsequent behaviours (i.e. purchase, signup for email, etc…).
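As a minimal sketch of what “unifying data” means in practice, the snippet below joins ad touches and sales records on a shared customer identifier to rebuild one journey per customer. All field names and records here are hypothetical; real pipelines would pull from each platform’s export or API.

```python
# Siloed inputs: ad touchpoints from various platforms, sales from e-commerce.
ad_touches = [
    {"customer": "c1", "platform": "Google Ads", "type": "click", "cost": 0.80},
    {"customer": "c1", "platform": "Facebook",   "type": "view",  "cost": 0.05},
    {"customer": "c2", "platform": "Facebook",   "type": "click", "cost": 0.60},
]
sales = [{"customer": "c1", "revenue": 49.00}]

# Unify into one record per customer: every touch plus total revenue.
journeys = {}
for touch in ad_touches:
    j = journeys.setdefault(touch["customer"], {"touches": [], "revenue": 0.0})
    j["touches"].append(touch)
for sale in sales:
    j = journeys.setdefault(sale["customer"], {"touches": [], "revenue": 0.0})
    j["revenue"] += sale["revenue"]

for customer, j in journeys.items():
    spend = sum(t["cost"] for t in j["touches"])
    print(f"{customer}: {len(j['touches'])} touches, "
          f"spend {spend:.2f}, revenue {j['revenue']:.2f}")
```

Once the data lives in one place, each purchase is counted exactly once, and spend can be weighed against the full sequence of touches rather than whichever platform shouted loudest.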
More often than not, marketing teams sit in their own little silos. Worse still, the way teams are measured often puts them in competition with each other! Audit your team’s incentives and understand whether the current setup creates conflicts of interest. Think of the customer journey holistically instead of as a fragmented mish-mash of paid search, social ads, organic, email, etc. Encourage synergies between teams and reward them for actually delivering on those synergies.
At this point, you may be wondering if all hope is lost. Is accurate RoAS a Holy Grail attainable only by big brands with massive budgets and sophisticated (read: expensive) tools? Stay tuned. In the next article, I’ll discuss viable solutions to these challenges so you can take your agencies to task, start trusting ad platforms again, and develop marketing strategies with confidence.