Fortune 500 brand ads seen next to porn & racial slurs, with IAS & DoubleVerify in the mix
A new study underscores the urgent need for more effective brand safety tools to ensure that brands can reliably prevent their ads from appearing alongside harmful content.
A handful of the world's top brands have been observed advertising on webpages with offensive content, a new report finds / Adobe Stock
A report published today reveals that ads for hundreds of the world’s biggest brands – including Meta, Microsoft, Procter & Gamble, Amazon, Disney, Nestle, Mercedes, Walmart, Marriott and many others – have appeared on webpages containing racial slurs, explicit sexual content and violent imagery.
The research, produced by ad quality firm Adalytics, sampled publicly available data to analyze ad placements, and their source code, on the open web.
According to the report, webpages featuring pornographic images and offensive titles including ‘gag penis,’ ‘N*****,’ ‘horse cock,’ ‘decapitation,’ ‘big black n**** dicks,’ ‘dildo’ and ‘Super Mario 3 masturbation’ were observed running ads for major brands.
Many of the campaigns were managed by ad agency holding companies, including Publicis, IPG, WPP and Omnicom.
The Drum has reviewed more than 50 screenshots of such examples produced by Adalytics. In one example, Amazon advertised back-to-school products in a banner ad on a disturbing page titled ‘L**** ALL BLACK PEOPLE.’ In another case, an HP ad for a desktop computer appeared next to a wiki page titled ‘Child pornography.’ A video ad for Apple’s Safari browser popped up on a page about anal sex.
These examples all appeared on Fandom.com, an entertainment-focused wiki platform where users can self-publish and collaborate on content. Ads for major brands also popped up next to potentially objectionable content across more than 25 other domains, though not all of these examples were included in the full report.
A Fandom spokesperson shared a statement with The Drum in response to the report’s findings: “We do not condone the posting of inappropriate and racially insensitive material anywhere on our [user-generated content] platform – this content is not allowed per our guidelines and it won’t be tolerated. Ensuring brand safety on our platform is of the utmost importance to us and we take matters like this very seriously. We also welcome the efforts of any company attempting to protect the rights of our advertisers.”
The spokesperson noted that the report “identified inappropriate content on old, extremely low trafficked wikis” and said that’s why “it was not flagged via our current moderation systems or Google’s Ad Server, both of which monitor our active wikis.” However, they noted that, in response to the observations made in Adalytics’ report, Fandom has “added additional safety measures to proactively turn off ads on low trafficked wikis that don’t trigger flag thresholds.”
The spokesperson acknowledged that the report sheds light on “an industry-wide problem” and urged publishers “to be vigilant and proactive in this area and to work with key vendors’ tools and resources to ensure industry standards are not only met but exceeded.”
According to Adalytics, multiple brands that were exposed had paid for pre-bid and post-bid brand safety tech as well as keyword blocking, yet their ads were still found in unsuitable contexts. Some brands also said they block all user-generated content. Due to limitations of the study’s methodology, Adalytics was unable to determine whether any post-impression corrections or removals of specific ads took place.
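To illustrate why keyword blocking alone can fall short, consider a minimal sketch in Python (the term list, matching logic and examples here are hypothetical, not any vendor’s actual implementation): a literal blocklist catches exact page-title matches but misses masked or creative spellings of the same terms.

```python
import re

# Hypothetical blocklist; real vendors maintain far larger, categorized
# term lists (e.g. aligned to IAB/Garm risk categories).
BLOCKED_TERMS = {"decapitation", "dildo"}

def is_blocked(page_title: str) -> bool:
    """Naive keyword check of the kind a simple pre-bid filter might run."""
    normalized = " ".join(re.findall(r"[a-z0-9*]+", page_title.lower()))
    return any(term in normalized for term in BLOCKED_TERMS)

print(is_blocked("Super Mario 3 decapitation"))  # True: literal match caught
print(is_blocked("d3capitation compilation"))    # False: obfuscation slips through
```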
A global media leader at one affected Fortune 500 company, who spoke with The Drum on the condition of anonymity, said that their brand had employed all the necessary safeguards but was still exposed. The report’s observations, they said, are especially distressing against the backdrop of an intense election cycle in the US, when the risks of advertising alongside political misinformation feel especially acute.
Lou Paskalis, chief strategy officer at media watchdog Ad Fontes Media, tells The Drum that he is “shocked” that Adalytics, “a company of [just a few] people, can so easily find ads of major advertisers in the most provocative and clearly unsafe environments.” Adalytics is finding this information “so fast,” he says, “that I wonder who’s minding the store when it comes to ensuring that advertisers’ ads end up in suitable placements.”
It’s not the first time Adalytics has rocked the boat with explosive claims about the safety of the online advertising ecosystem. In May, the company alleged that supply-side platform (SSP) Colossus was mis-declaring user IDs in ad exchanges – a claim that Colossus responded to with a lawsuit. Just a month prior, Adalytics outed Forbes for operating a secretive, spammy, ‘made for advertising’ subdomain, where brands like Disney, JPMorgan Chase, Johnson & Johnson and United Airlines were unknowingly transacting.
The adtech providers at the heart of the drama
In its new report, Adalytics noted that the source code for many of the ads includes tags from ad verification vendors – including Integral Ad Science (IAS), DoubleVerify and, in a smattering of cases, Oracle Moat (though Oracle’s ad business, which includes Moat, will shutter in September).
For example, Adalytics saw tags referencing ‘maxDoubleVerifyBrandSafety’ and ‘integralBrandSafety.’ IAS JavaScript from ‘adsafeprotected.com’ and DoubleVerify JavaScript from ‘doubleverify.com’ were also observed – code that could be related to viewability, monitoring, fraud detection or brand safety functionality.
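As a rough illustration of how such tags can be spotted, the following Python sketch fetches a page and checks its HTML source for references to verification-vendor domains. This is hedged: the report’s actual methodology is not public, and the domain list here simply mirrors the two mentioned above.

```python
import re
import urllib.request

# Vendor domains referenced in the report's observations.
VENDOR_DOMAINS = ["adsafeprotected.com", "doubleverify.com"]

def vendor_tags(url: str) -> list[str]:
    """Fetch a page and list verification-vendor domains found in its source."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return [d for d in VENDOR_DOMAINS if re.search(re.escape(d), html)]

# Example usage (hypothetical URL):
# print(vendor_tags("https://example.fandom.com/wiki/Some_Page"))
```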
These adtech providers are supposed to scan and block inappropriate content for advertisers – on the pre- and post-bid sides – ensuring that ads don’t appear in undesirable environments. The study’s observations, however, imply that some ad verification partners may be operating inadequate technology or failing to adhere to clients’ parameters – as well as brand safety standards set forth by industry trade groups like the Interactive Advertising Bureau (IAB) and the Global Alliance for Responsible Media (Garm).
Categories of content deemed the riskiest, according to IAB and Garm frameworks, include explicit sexual content, hate speech, terrorism, death, injury, military conflict, obscenity and profanity, illegal drugs, online piracy and more.
As it stands, both IAS and DoubleVerify are ‘Brand Safety Certified’ by the Trustworthy Accountability Group (Tag), an initiative established by three trade bodies: the IAB; the American Association of Advertising Agencies (4A’s); and the Association of National Advertisers (ANA). Tag’s Brand Safety Certified Program, per its website, aims to ensure “transparency, choice and control for buyers – enabling them to buy advertising inventory with confidence and creating a brand safety framework for sellers that increases the value of certified sellers’ inventory.”
DoubleVerify is also accredited for a handful of brand suitability, ad viewability and fraud avoidance measures by the Media Rating Council (MRC).
One top brand marketer who spoke to Adalytics said the study’s observations indicate that “the brand safety and verification technologies that claim to have been providing URL- and page-level protection have been nothing short of insufficient …” They emphasized the urgent need for full URL transparency and more robust brand safety and verification technologies.
The observations made in the report should also raise bigger questions about the incentive structures of the adtech ecosystem, said a Fortune 500 media leader who spoke to The Drum on the condition of anonymity. Verification vendors like IAS and DoubleVerify, they said, are typically paid on a per-impression basis, meaning they stand to make money even when a brand’s ad runs alongside inappropriate content. And if they blocked more content, fewer impressions might be served, ultimately hurting their bottom lines.
The role of AI in ad verification and brand safety
Both IAS and DoubleVerify use machine learning and AI to detect and classify online content, part of their brand safety infrastructure designed to prevent ads from appearing near risky content.
IAS’s website says the firm classifies content at scale with the help of proprietary natural language processing tools for context comprehension. The company, which says it processes more than 280bn digital interactions every day, claims its technology is “42% more accurate than the next best provider at classifying online content.”
DoubleVerify’s website, meanwhile, says the company uses “leading artificial intelligence (AI) technology to provide advertisers with the most accurate classification while ensuring the broadest coverage and protection at scale.” The firm promises protection for advertisers “from programmatic avoidance to post-bid monitoring and blocking.”
In theory, then, it’s possible that automated systems have inaccurately classified some webpages or keywords, enabling ads to run alongside offensive content.
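One plausible failure mode, sketched below purely as a hypothesis (this is not any vendor’s documented behavior): crawl-based classifiers can only label pages they have already seen, so a system that ‘fails open’ on uncrawled, low-traffic URLs – like the low-trafficked wikis Fandom described – would let ads serve there by default.

```python
# Hypothetical cache-based classifier that fails open on unseen pages.
CLASSIFICATION_CACHE = {
    "fandom.com/wiki/Popular_Page": "safe",
    "fandom.com/wiki/Flagged_Page": "blocked",
}

def classify(url: str) -> str:
    # Low-traffic pages may never have been crawled; defaulting unknown
    # URLs to "allow" means ads can serve on them unchecked.
    return CLASSIFICATION_CACHE.get(url, "allow-unclassified")

print(classify("fandom.com/wiki/Obscure_Offensive_Page"))  # allow-unclassified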
In its final recommendations, Adalytics advocates for increased transparency around the use of AI in the adtech ecosystem. It suggests that advertisers would do well to demand detailed URL-level data from DSPs, media agencies and verification providers to enable independent evaluations of the brand safety solutions being used in their programmatic supply chains.
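Were such URL-level logs available, an independent audit could be straightforward. The sketch below assumes a CSV placement log with ‘campaign’, ‘url’ and ‘page_title’ columns – all hypothetical names – and flags rows whose page titles contain blocked terms.

```python
import csv

FLAGGED_TERMS = {"decapitation", "dildo"}  # illustrative, not a real list

def audit_placements(log_path: str):
    """Yield (campaign, url) pairs whose page titles contain flagged terms."""
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            title = row["page_title"].lower()
            if any(term in title for term in FLAGGED_TERMS):
                yield row["campaign"], row["url"]

# for campaign, url in audit_placements("placements.csv"):
#     print(campaign, url)
```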
Some industry leaders, however, brush off the role that AI could have played in the pattern observed by Adalytics. “Suggesting that AI will solve this or is to blame for this is obfuscating the problem,” says Jay Friedman, CEO of ad agency Goodway Group. “We don’t need AI to determine whether or not some of those pages are bad. If you get a few people in a room and say, ‘Think of the worst possible words and contexts that any advertiser could find themselves next to,’ almost any people in our industry could come up with 80%+ of the things that were found in that report to have gotten through [the cracks].”
He says: “I don’t think anybody at DoubleVerify or IAS would say that those [webpage titles or types of content] are OK under any circumstances.” The next question we need to ask ourselves, he suggests, is, “Why is the technology not doing what it was sold to do?”
Verification providers have a duty to be more transparent with advertisers, he says. “Be clear … about what works, what doesn’t and why. One of the things that’s been said for a long time is, ‘We can’t tell you exactly how it works because then people could get around it.’ Well, by not telling us how it works, people are still getting around it, so let’s throw that [justification] out.”
He also advises that brands and agencies should exercise caution before “teaming up with a large public company provider” because while it may “feel like a safe choice reputationally … the evidence suggests otherwise.” Goodway Group, as a default, does not work with IAS, DoubleVerify or Oracle unless clients “insist,” Friedman says.
DoubleVerify on the counterattack: ‘report engineered for specific outcome’
DoubleVerify disputes many of the claims in Adalytics’ report, which it says “misrepresented” its classification system and presented “entirely manufactured” results.
In a lengthy blog post published this morning, the adtech company wrote: “This latest report is part of a concerning trend where third-party research lacks the information, knowledge, and understanding necessary to evaluate the nuances of media verification.” DoubleVerify claims that “the results in this report are entirely manufactured, from the omission of client campaign setup information to the methodology itself, where the researcher arbitrarily searched for racist terms.”
DoubleVerify went on to explain that advertisers may employ a range of brand safety tactics, like pre-bid avoidance, post-bid monitoring and blocking, and that there is no way for Adalytics to have known what combination of approaches a given advertiser was employing – an apparent justification for why some brand messages may have appeared alongside unsavory content.
Plus, the company said, adjustments to brand safety parameters are often made on the brand and agency side – not on the verification provider’s side. It also noted that, in instances of brand messages appearing on sites like Fandom, it’s possible that such a domain would have appeared on a brand’s ‘exceptions’ list – allowing ads to run there – which could supersede blocked keywords or other content avoidance parameters.
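Taken at face value, that explanation implies a precedence rule in which a domain-level exceptions list overrides keyword blocks. A minimal sketch of that logic follows (hypothetical; DoubleVerify has not published its actual decisioning code, and the lists here are illustrative).

```python
from urllib.parse import urlparse

EXCEPTIONS = {"fandom.com"}            # assumed advertiser-maintained allowlist
BLOCKED_KEYWORDS = {"decapitation"}    # illustrative keyword block

def should_serve(url: str, page_title: str) -> bool:
    domain = urlparse(url).netloc.removeprefix("www.")
    if domain in EXCEPTIONS:
        return True  # exceptions list wins, even over blocked keywords
    return not any(k in page_title.lower() for k in BLOCKED_KEYWORDS)

# An excepted domain serves despite a blocked keyword in the title:
print(should_serve("https://www.fandom.com/wiki/X", "decapitation wiki"))  # True
```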
The firm also said that Adalytics has fundamentally failed to either understand or acknowledge the differences between the kinds of DoubleVerify code that might appear behind an ad. Bits of DoubleVerify code, it said, may be related to either advertiser services or publisher services – the latter being a suite of capabilities that aid publishers, like, say, the verification of video views. In a number of screenshots that the company was able to review from Adalytics, “all of the tags observed … are associated with our publisher services and have nothing to do with the advertiser,” it wrote.
No clients have complained about DoubleVerify’s content categorizations, the company wrote, and it noted that all the Adalytics-produced screenshots it has reviewed were “classified accurately for customers and partners.”
The firm said that the new report is “engineered to achieve a specific outcome.” It added that Adalytics’ research “has been repeatedly debunked” – an inaccurate claim, though some players in the adtech industry have challenged the validity of previous findings from Adalytics.
IAS declined The Drum’s request for comment on the grounds that it had not been given the opportunity to review the full report before it was published.
A blame game going nowhere?
Adalytics has refrained from offering conjecture about how ads for major brands may have appeared in such offensive contexts, despite the guarantees of safety and suitability made to many brands by their media agencies, DSPs and ad verification partners.
Whether the issue represents a failure on the brand side to set appropriate guardrails, a technical shortcoming on the part of verification providers or something else entirely remains unclear.
One marketer who spoke to Adalytics posited that the issue might stem from misconfigurations on the advertisers’ side, suggesting that some brands could have invested in brand safety tech but employed lax content settings or failed to exclude specific keywords from their campaigns.
Ad Fontes Media’s Paskalis, for his part, believes that ensuring media quality and brand safety should, first and foremost, be the responsibility of the advertiser in question – not the verification partners.
“It’s the damn client’s fault,” he says. “My very first boss in the industry [when I worked client-side] said to me … ‘Sir, don’t expect what you don’t inspect.’ The number one problem is the client simply not doing due diligence.”
Modern brands, Paskalis says, tend to offload all of the work of ad verification and brand safety to third-party providers – often to their own detriment. “This gets outsourced to ad verification partners and nobody looks at it,” he says. “Nobody does a monthly audit because they’re too busy. They’ve got other problems. Their CMO doesn’t care. So there is no inspection. It [needs to start] with the client doing the due diligence, calling the agency and saying, ‘What the ‘f’ is going on here?’”
But others, like the media leader who spoke to The Drum on the condition of anonymity, say that the buck should ultimately stop with the verification providers – and that it’s not the job of either the advertiser or the publisher to ensure media quality and brand safety.
In any case, the world doesn’t run on print, TV and radio ads alone anymore. And in an increasingly fragmented media ecosystem, layered on top of a complex adtech supply chain, getting quality, verification and brand safety right is often a tall order – one that may require careful collaboration across various stakeholders.
Ultimately, Paskalis says: “Brands want to do the right thing and would never knowingly – even in pursuit of almighty performance – run their ads on these sites. Marketers, either by errors of omission or commission, are incurring risks here that they would never accept normally.”