A marketer’s guide to deepfakes
Synthetic media is ushering us into a world where it will be increasingly difficult to discern fact from fiction. It’s also presenting new risks (and opportunities) for the advertising industry.
In late March, Pope Francis shocked the world with his inimitable drip.
Images depicting the pontiff strolling in a white Balenciaga puffer jacket (which gave one the impression of the Michelin Man attending the Met Gala) went viral on social media, alongside a handful of similar images in which the pope was shown wearing stylish sunglasses, gloves, slacks and sneakers – garb not typically associated with the Catholic Church’s highest office.
Many people were convinced that the images were real. But eventually, word began to spread that they had been created using Midjourney, an AI model that generates digital images from text-based prompts. In an interview with BuzzFeed News, the man behind the images – a 31-year-old named Pablo Xavier who declined to share his last name – admitted that he was tripping on psilocybin when he decided to give the pope an AI-generated fashion makeover.
The Balenciaga pope images began making headlines less than a week after AI-generated images depicting former US president Donald Trump getting arrested, crying in a courtroom and lifting weights in prison also went viral.
While social media users are giggling at these and similar images, their proliferation raises a range of concerns and risks for the marketing industry – and for society at large – that are becoming increasingly urgent.
A brief history of deepfakes
The images of a dripped-out Pope Francis and a convicted Donald Trump are two recent examples of deepfakes: media that has been manipulated or generated entirely using AI to depict a real person engaging in some kind of activity that never took place.
The history of deepfakes can be traced back to 1997, when Christoph Bregler, Michele Covell and Malcolm Slaney – three researchers who all currently work at Google – created a program called Video Rewrite, which could alter video footage of a real person to match up with synthetic audio, making it look like they said something they didn’t.
One short clip featured in a paper that accompanied the launch of Video Rewrite showed John F. Kennedy uttering five very simple but highly significant words: “I call upon Chairman Khrushchev…”
In 2014, computer scientist Ian Goodfellow (who, incidentally, currently works for DeepMind, an AI company owned by Google) developed the first generative adversarial network, or GAN – an AI model that pits two neural networks against each other, a generator that produces images and a discriminator that tries to distinguish them from real ones – ushering in a new era of synthetic media. (Goodfellow is now lovingly known as the ‘GANfather’ in the AI world.)
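To make the adversarial idea concrete, here is a minimal, illustrative PyTorch sketch of a GAN training step. The network sizes, image shape and hyperparameters are placeholders chosen for illustration, not Goodfellow’s originals.

```python
# Minimal GAN sketch: a generator learns to produce images that a
# discriminator cannot distinguish from real ones. Illustrative only.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # e.g. flattened 28x28 grayscale images

# Generator: maps random noise to a fake image.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
# Discriminator: scores how likely an image is to be real.
D = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Train the discriminator to separate real images from generated ones.
    fake = G(torch.randn(n, latent_dim))
    d_loss = bce(D(real_batch), real_labels) + bce(D(fake.detach()), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the (just-updated) discriminator.
    g_loss = bce(D(fake), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Each network only improves by beating the other, which is why the images such systems produce keep creeping toward photorealism.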
The term ‘deepfakes,’ however, did not truly enter mainstream consciousness until Vice editor Samantha Cole published an article in 2018 about a Reddit user under the name ‘Deepfakes’ who was running a page devoted to AI-generated celebrity porn videos.
The following year, a report titled ‘The State of Deepfakes’ published by DeepTrace Technologies – an organization devoted to mitigating the risks of deepfakes – found that “non-consensual deepfake pornography” made up some 96% of all deepfake videos found online. The rise of deepfakes, it seems, was being driven by demand for porn. (Earlier this month, a deepfake app ran ads across Meta-owned platforms Instagram, Facebook and Facebook Messenger showing what appeared to be Emma Watson and Scarlett Johansson engaged in sexually provocative acts.)
Much has changed since that original DeepTrace report was released. For one thing, generative AI has been catapulted into the zeitgeist, thanks largely to the phenomenal rise of ChatGPT, which, following its November 2022 launch by AI research lab OpenAI, rapidly became the most popular consumer AI product in history. Other models, such as OpenAI’s DALL-E 2 and Stability AI’s Stable Diffusion, have also become immensely popular. And many of these models are free to use.
“We're now in a moment where accessibility is meeting functionality and realism,” says Henry Ajder, a generative AI and synthetic media expert who contributed to the 2019 DeepTrace report. “And that's leading to this real tectonic shift in the [synthetic media] landscape.”
Not so long ago, photographs, video recordings and audio clips were generally regarded as irrefutable evidence that an event took place. That’s beginning to change. “What deepfakes and generative AI and synthetic content have done is they've introduced plausible deniability into a space that previously felt pretty sacrosanct in terms of reliability,” Ajder says.
That’s unsettling for several obvious reasons. Humans have always been able to manipulate and deceive by telling lies, but we’re now entering an era in which more or less anyone with an internet connection can create AI-generated images that look – at least to the casual observer – completely real.
The proliferation of generative AI and deepfakes has sparked a conversation among experts about potential modes of detection – mechanisms that could be deployed to ensure that all synthetic media is either clearly labeled or otherwise easily identifiable as such. But Ajder admits that detection is “really challenging.” Part of the problem, he says, is that platforms like Midjourney “are evolving incredibly quickly.” Bad actors can also clean up deepfakes in a kind of post-production process to make them more difficult to detect.
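To make ‘detection’ concrete: one common approach treats it as an ordinary image-classification problem. The sketch below – a hypothetical, heavily simplified PyTorch example, with placeholder data paths – fine-tunes an off-the-shelf network to label images as real or synthetic.

```python
# Hypothetical deepfake-detector sketch: fine-tune a pretrained classifier
# on folders of real and synthetic images. Paths and layout are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes a layout like data/train/real/*.jpg and data/train/fake/*.jpg
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the training data
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The catch – and one reason Ajder is skeptical of a silver bullet – is that a classifier like this tends to latch onto the telltale artifacts of whichever generators produced its training data, so each new model release, or a round of post-production cleanup, can quietly invalidate it.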
“It's not going to be the case in my view that you're going to have a silver bullet-style detection system which can be authoritatively relied on,” Ajder says.
Among the experts in this space, there seems to be a growing feeling that the deepfake genie has been let out of the bottle. Just as the release of ChatGPT gave the world a startling glimpse into what the future of AI will look like, a new wave of deepfakes – including the images of the Balenciaga pope – is showing us that very soon, we could be living in a world where it’s close to impossible to discern the fake from the real.
So how should we proceed? And what can marketers do to maximize the benefits and minimize the risks surrounding the rise of deepfakes?
The role of marketers
The marketing industry is known for quickly and enthusiastically jumping onto new technological trends, sometimes with disastrous results. (Just look at how many brands bet big on crypto in recent years.) That phenomenon now seems to be playing out as a flood of brands rushes in to capitalize on the collective craze surrounding generative AI and deepfakes.
“Advertisers were some of the early adopters in this space,” says Ajder. “Marketers have … played into and embraced the virality effect that these tools can make – they do have a ‘wow’ factor to them.”
Mountain Dew created a deepfake Bob Ross; Lay’s created a deepfake Lionel Messi; a deepfaked Elon Musk sat in a bathtub à la Margot Robbie in The Big Short to explain an obscure securities law for a real estate investing company; the list goes on.
As is often the case, generative AI and deepfakes are reshaping pop culture faster than the law can keep up. The traditional media landscape has long-established laws in place to prevent brands from non-consensually using a celebrity’s name, image or likeness, but what happens when that celebrity’s face is a digital rendering created by AI? This presents legal challenges “that we don’t have existing legal nuances for yet,” says attorney Brenda Leong of BNH.AI, a law firm that specializes in the legal landscape taking shape around AI.
Leong compares deepfakes to stalking: When you tease stalking apart into individual actions – sitting in a parking lot or driving to someone’s house, say – they may not be technically illegal in and of themselves; it’s only when they’re added together that a pernicious picture starts to come into focus. Similarly, she says, “individually creating a video or representing [someone’s] speech might or might not be illegal as a freestanding action, but when you put them together with other things, or with intent, it's going to have new impacts that we haven't had to deal with” before the relevant technologies became available.
She also believes that deepfake videos in particular could cause much more “emotional distress” (a legal phrase often invoked in the deliberation of lawsuits), potentially landing brands in much hotter legal waters if they were to use AI-generated videos of celebrities without their consent. “It's one thing to see [a celebrity's] picture on the side of an offensive product,” she says. “It's a whole other thing to see [them] speaking and supporting something very offensive. And so [their] claim [of emotional distress] is probably going to be stronger because it's a deeper deception.”
She’s careful to clarify that this is just informed speculation and that as far as she’s aware, these kinds of cases have yet to be adjudicated. But she says she “would expect that those aspects of privacy harm and reputational harm that have to do with distress and public perception are going to be impacted because people respond more strongly to video and spoken [words] than they do to just an image.”
That’s not to say brands won’t find ways to leverage this technology that are both ethically and legally sound. While there are dangers that come with the use of deepfakes, they also present some valuable opportunities that are difficult to ignore. For example, imagine that a brand eventually works out a deal with Dua Lipa to use her AI-generated image and likeness in a series of deepfake video ads. Both parties explicitly agree to what the deepfaked Dua Lipa will do and say in the ads. With the stroke of a pen, that company has just saved many hours of expensive production time, and Ms. Lipa has just landed herself some passive income. It’s plausible that many such deals will arise in the future.
According to Dan Gardner, co-founder and CEO of Code and Theory, consensual deepfake deals with celebrities could be “like influencer [marketing] on steroids … it just opens up doors, because time and ability isn't the limiter anymore.” The new limiting factors, he says, will be “rights, ownership and execution.”
What advice can Leong, the lawyer specializing in AI, offer to brands interested in launching a celebrity deepfake campaign? “The best advice is to get the person's permission,” she says. “Pay them for their image … [If], for whatever reason, that’s not desirable or possible, be very clear that it's fake. Don't try to make it look like [it’s] real … make it spoofy, so to speak.”
She’s careful to add: “This is not legal advice to do that or legal advice about how to do that.”
For more on the latest happenings in AI, web3 and other cutting-edge technologies, sign up for The Emerging Tech Briefing newsletter here.