Advertising Technology

Poisoned Cookies

Andrew Casale

The cookie is the linchpin of any successful display advertising campaign. The cookie allows marketers to collect data that they can in turn leverage to target their advertising to qualified prospects. Without the cookie, digital advertising just wouldn't be as effective. We can say this with confidence because we see the problems inherent in mobile advertising, where most devices are not enabled for third-party cookies. Display is fortunate to have the cookie on its side.

We are all familiar with the role of the cookie. What is probably less well known are the methods of fraud that allow online scammers to profit at the expense of the buy side. There has been an increase in industry chatter about some of the more common fraud and traffic-generation schemes, including browser plug-ins that replace legitimate publisher display ads with their own, pixel jacking, and ad stacking, to name a few. And then there is a phenomenon called cookie bombing: a method of fraud in which fraudulent profiles are created for the sole purpose of attracting the attention of major advertisers. The bad actor injects these profiles with cookies to create a false impression of value and purchasing intent, raising bid prices on ‘users’ who are then directed towards an invented site. While cookie bombing needs to be stopped, there is another variant of this fraud that is potentially far more disruptive.

There is an ongoing trend in which cookie bombing is directed not just at invented, robotic profiles but at the general internet at large, manipulating the intent of real users. Mixed in with the cookies from a user's legitimate browsing, this fraud becomes much more difficult to track. Suddenly users find themselves receiving advertisements for things they have no interest in. We are seeing this on a limited scale, and it is a concerning trend.

We began to notice this through the everyday operation of our platform. We allow publishers at large to apply to bring their impressions to us, but we are particular about who we accept: we receive thousands of applications and accept only the small minority that meet our standards. Our editorial group reviews each site that attempts to join the platform, and on certain sites they have found pixels being fired at users without their knowledge. A common pattern is that many of these “publishers” are pirates, publishing pirated content to audiences, commonly minors, that aren't very valuable to marketers. Knowing these users are not valuable, the operators manipulate their profiles by injecting them with intent, creating the impression that they have purchasing power and turning, for instance, the profile of a minor downloading pirated content into that of a corporate executive shopping for a new luxury car. This in turn makes their media more sought after by the bidding algorithms accessing their impressions across the exchanges they belong to.
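To make the mechanism concrete, here is a minimal sketch, in TypeScript, of how an ordinary third-party segment pixel attaches an intent segment to a visitor's cookie profile, and how the very same call, fired invisibly from a pirate page, could attach a segment the visitor never earned. The endpoint, query parameter and segment name are hypothetical placeholders, not any particular vendor's API.

    // An ordinary segment pixel is just a 1x1 image request. When the
    // browser fetches it, the data platform reads (or sets) its cookie
    // and adds the segment named in the query string to that profile.
    // "dmp.example.com" and the "seg" parameter are placeholders.
    function fireSegmentPixel(segmentId: string): void {
      const img = new Image(1, 1);
      img.src = `https://dmp.example.com/pixel?seg=${encodeURIComponent(segmentId)}`;
      img.style.display = "none";
      document.body.appendChild(img);
    }

    // Legitimate use: fired from a car configurator after real engagement.
    fireSegmentPixel("luxury-auto-intender");

    // The abuse described above: the same call dropped silently on a
    // pirate streaming page, tagging every visitor as a high-value buyer.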

The trouble is that as these poisoned cookies traverse the rest of the web, the users carrying them continue to see the wrong targeted ads. And the marketers paying for the media have no way to know, because at first glance these look like real, valuable prospects.

There are core components of ad tech that can be exploited, and we need to do everything possible to keep online scammers from taking advantage. If we don't, legitimate publishers who do attract the kind of audience worthy of a premium will see their value diluted. So what are we going to do?

To get rid of this issue before it becomes an epidemic, there are a few possible solutions. One is that the industry needs another form of verification, specifically data verification. A fired pixel does not guarantee that true intent triggered it; we need to know that intent is actually happening. Data verification would come down to the source of the data: how, when and where it originated.
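As a rough illustration of what that could look like, the TypeScript sketch below attaches a signed provenance record to each intent signal, recording how it was collected, when, and on which domain, with an HMAC a buyer or verification vendor can check. The field names and signing scheme are assumptions for illustration, not an existing industry standard.

    import { createHmac, timingSafeEqual } from "crypto";

    // One possible shape for a verifiable intent signal: every data point
    // carries how it was collected, when, and on which page, plus a
    // signature from the party that collected it.
    interface IntentSignal {
      segmentId: string;        // e.g. "luxury-auto-intender"
      collectionMethod: string; // "onsite-pixel", "crm-upload", ...
      originDomain: string;     // where the pixel actually fired
      collectedAt: number;      // Unix timestamp (ms)
      signature: string;        // HMAC over the fields above
    }

    function sign(signal: Omit<IntentSignal, "signature">, key: string): string {
      const payload = `${signal.segmentId}|${signal.collectionMethod}|${signal.originDomain}|${signal.collectedAt}`;
      return createHmac("sha256", key).update(payload).digest("hex");
    }

    // A buyer (or a verification vendor) can then reject signals whose
    // signature does not match or whose origin looks wrong for the segment.
    function verify(signal: IntentSignal, key: string): boolean {
      const expected = Buffer.from(sign(signal, key), "hex");
      const provided = Buffer.from(signal.signature, "hex");
      return expected.length === provided.length && timingSafeEqual(expected, provided);
    }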

Awareness is a priority. Most buyers would not assume that an audience segment in a retargeted pool could be falsely represented, but it can be. Second to awareness, anyone buying media on the back of data needs, at some point, to prioritize a mechanism that screens that data for abnormalities that could ultimately harm their campaigns. This may sound futuristic, but it is a solution.
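One plausible shape for such a screen, sketched in TypeScript below, is to flag a segment when most of its intent events trace back to a single domain, or when they arrive in bursts too tight to reflect real browsing. The thresholds are arbitrary placeholders, not industry benchmarks.

    // Screen a segment's raw intent events before buying against it.
    interface SegmentEvent {
      originDomain: string;
      collectedAt: number; // Unix timestamp (ms)
    }

    function screenSegment(events: SegmentEvent[]): string[] {
      const warnings: string[] = [];
      if (events.length === 0) return warnings;

      // 1. Source concentration: does one domain supply most of the events?
      const byDomain = new Map<string, number>();
      for (const e of events) {
        byDomain.set(e.originDomain, (byDomain.get(e.originDomain) ?? 0) + 1);
      }
      const topShare = Math.max(...byDomain.values()) / events.length;
      if (topShare > 0.8) {
        warnings.push(`one domain supplies ${(topShare * 100).toFixed(0)}% of events`);
      }

      // 2. Burstiness: did a large batch of events land in a tiny window?
      const times = events.map((e) => e.collectedAt).sort((a, b) => a - b);
      const spanMs = times[times.length - 1] - times[0];
      if (events.length > 100 && spanMs < 60_000) {
        warnings.push("100+ events collected inside one minute");
      }

      return warnings; // an empty array means nothing obviously wrong
    }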

Lastly, we need to find a more sophisticated method of recording intent than a very basic, static pixel. Perhaps a smart pixel with a built-in defense against duplication: if someone simply scrapes a pixel off a website to reuse at a later date, it won't work, thanks to an automated expiration date.
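A minimal sketch of how such a pixel might work, in TypeScript: each pixel URL carries the site it was issued to, a timestamp, and a signature over both, so a scraped copy stops working once its window passes and a tampered one fails validation. The TTL, parameter names and endpoint are illustrative assumptions, not a description of any existing product.

    import { createHmac } from "crypto";

    const PIXEL_TTL_MS = 15 * 60 * 1000; // assumed 15-minute validity window

    // Issue a pixel URL bound to one site and one moment in time.
    function issuePixelUrl(siteId: string, key: string, now = Date.now()): string {
      const sig = createHmac("sha256", key).update(`${siteId}|${now}`).digest("hex");
      return `https://adserver.example.com/pixel?site=${siteId}&ts=${now}&sig=${sig}`;
    }

    // Validate a pixel hit on the server side.
    function validatePixelHit(siteId: string, ts: number, sig: string, key: string, now = Date.now()): boolean {
      if (now - ts > PIXEL_TTL_MS) return false; // expired: a scraped-and-reused pixel
      const expected = createHmac("sha256", key).update(`${siteId}|${ts}`).digest("hex");
      return expected === sig; // a tampered site or timestamp fails here
    }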

As industry professionals, we need to be thinking about the longevity of the industry, and protecting it from fraudulent activity should be at the top of our list.


Andrew Casale is a contributor to The Makegood and vice president of strategy at Casale Media, a media technology company that helps brands profit from online display.