Ad Technology

Dismantling Conspiracy Theories About Voice Data and Targeted Ads

I can’t go a week without hearing that some acquaintance of mine believes his devices are listening to him and using voice data to send targeted ads his way.

In the days before walled gardens and closed ad platforms, an ad targeting expert would have had little trouble convincing someone that some other targeting method was at work, and that the ad they saw for Milk-Bone five minutes after a conversation about doggie treats was merely a coincidence.

Open platforms encouraged discussion of how ads were targeted. Many methods of targeting were standardized. For instance, there were several flavors of geotargeting, involving registration information or IP mapping. Digital media buyers, whether they worked for agencies or directly for advertisers, knew how their ads were targeted and could compare and contrast the methods used.
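To illustrate how transparent IP-based geotargeting once was, here is a toy sketch of the basic mechanism. The CIDR ranges and regions below are made up for the example; real systems licensed commercial IP-to-location databases, but the underlying lookup was simple enough for a media buyer to reason about.

```python
import ipaddress

# Toy lookup table mapping CIDR blocks to regions. These ranges are
# documentation/example addresses, not real geolocation data.
GEO_TABLE = {
    "203.0.113.0/24": "New York",
    "198.51.100.0/24": "Chicago",
}

def geotarget(ip: str) -> str:
    """Return the region an IP maps to, or 'unknown' if unmapped."""
    addr = ipaddress.ip_address(ip)
    for cidr, region in GEO_TABLE.items():
        if addr in ipaddress.ip_network(cidr):
            return region
    return "unknown"

print(geotarget("203.0.113.42"))  # falls in the New York block
print(geotarget("192.0.2.1"))     # not in the table
```

Because the mapping was a deterministic table lookup like this, an ad buyer could check it, compare vendors and explain exactly why a given user saw a given ad.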

But in the age of Big Data, Real-Time Bidding and closed platforms, ad targeting methods are not always 100 percent transparent. Whereas a digital ad expert used to be able to perform a bit of analysis and definitively answer a question about how an ad was targeted, they can’t do the same today in all cases.

In many ways, what’s kept behind the curtain gives people reason to suspect that owners of closed platforms aren’t completely forthright about how ads are targeted. And this provides ample fodder for oddball conspiracy theories. I long for the days when these theories could be easily debunked: “Hey, you forgot that search you did for dog treats a few days ago,” or “You went to the Milk-Bone website and forgot about it.”

With ads often targeted to concepts and interests, and with the details of the ad targeting obscured, there’s no way to say with certainty that, for instance, analysis of ambient conversations in a room isn’t being used to target ads. We have to take an ad seller’s word for it if they operate a closed platform.

There’s no shortage of potential listening devices in the average American’s home. Televisions, smartphones, voice assistants and other Internet-connected gadgets have long been suspected of gathering voice data. Some have been discovered by technologists doing deep forensic analysis. Others have tipped off consumers through sudden changes to privacy policy language granting device manufacturers the ability to collect and analyze voice data.

I can’t say that targeting by voice data happens regularly as a matter of course. But that’s just the point. Whereas the ad tech world of only a decade ago would have allowed me to disprove it, the nature of closed platforms today precludes my ruling it out from the outside. Truth be told, if Amazon, Google, Facebook or any other operator of a walled garden were to incorporate voice data as part of its targeting regimen and decide not to disclose that, advertisers might never know.

My point here is not to promote outlandish conspiracy theories. It’s to say that the usual methods by which we’d eliminate the possibility of data being surreptitiously incorporated into ad targeting schemes no longer exist. Targeting is both highly nuanced and highly personalized to each advertiser. Owners of walled gardens are less likely to pull back the curtains and show how the secret sauce is made. Nor are advertisers incentivized to dig deeply on ad targeting, as they might discover something that wouldn’t pass muster if presented to executive management. Ignoring the problem keeps tactical arrows from being removed from the marketer’s quiver.

The lack of transparency is almost certainly hurting digital advertising. We continually see new technological leaps forward in the area of keeping nosy marketers away from consumers’ digital identities. Continued opacity results in an ever-accelerating arms race.

We should consider turning back, lest we lose the trust of the consumer entirely.