Here's how tech companies are policing AI-generated political misinformation
With a record number of voters around the world heading to the polls this year, the risks of AI-aided misinformation have become more pronounced than ever.
Both Gemini and ChatGPT will decline to answer certain queries related to elections and voting. / Adobe Stock
The spread of misinformation in politics – a perennial problem – is being supercharged by the rise of popular generative AI tools, which enable just about anyone to create deepfakes – lifelike images, video and audio – from a single prompt. And with a record-breaking 49% of the global population heading to the polls this year, the technology’s potential for abuse is under increasing scrutiny.
Americans got a taste of these risks in January, when an AI-generated robocall imitating the voice of President Biden went out to New Hampshire voters, imploring them to sit out the state’s presidential primary election. Then, in March, AI-generated images that appeared to show former president Trump sitting and smiling with a group of Black Americans went viral online.
To the trained eye, there are some telltale signs of AI-generated content. (The aforementioned images of Trump, for example, included incoherent text and at least one missing finger.) But to most people quickly scrolling through social media, manipulated media can be convincing.
And it isn’t just deepfakes that pose a threat. Text-generating tools like ChatGPT are also known to hallucinate, or generate false information, further compounding the dangers during such a momentous election year.
In the absence of robust federal regulation of the technology in the US (though a California law prohibits the creation and dissemination of misleading deepfakes depicting politicians within 60 days of an election), the best we can currently hope for is action on the part of tech and social media companies.
To be sure, this hasn’t always been a winning bet, and the hard lessons of 2016 and the Cambridge Analytica scandal evidence the pernicious and decisive role that platforms can play in the outcome of elections.
Here’s a brief overview of the steps that major tech companies have been taking to mitigate the spread of AI-generated misinformation in the lead-up to the 2024 global elections.
OpenAI
Since the monumental release of ChatGPT in 2022, the chatbot’s maker has shipped one breakthrough after another. In February, for example, it unveiled Sora, a model, currently in beta, that can generate photorealistic video from text prompts.
But each of these landmarks has upped the political stakes: Sora can be used to create a slightly disturbing but more or less harmless Toys R Us ad campaign, but it – or a tool like it – could also hypothetically be used to create dangerous misinformation. In other words, as OpenAI’s technical prowess grows, so too does its accountability.
So what steps is the company taking to safeguard the integrity of the upcoming worldwide elections?
For one thing, some of its tools have been engineered to refrain from responding to politically sensitive prompts. The image-generation tool Dall-E, for example, “has guardrails to decline requests that ask for image generation of real people, including candidates,” according to a company blog post published in January.
Similarly, users aren’t able to build GPTs designed to imitate real people or institutions, and ChatGPT will not answer certain elections-related queries, but will instead direct users to third-party sources for more information.
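To see those guardrails in practice, here is a minimal sketch – not OpenAI’s own code – that sends an election-related prompt through the official Python SDK; the model name and prompt are assumptions for illustration, and the exact wording of any refusal or redirect will vary.

```python
# Minimal sketch: probing how a ChatGPT model handles an election-related query.
# Assumes the official `openai` Python SDK (v1.x) and an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice for the example
    messages=[
        {
            "role": "user",
            "content": "Where is my polling place for the 2024 US election?",
        }
    ],
)

# A guardrailed answer would typically point the user to an authoritative
# third-party source (such as CanIVote.org) rather than answer directly.
print(response.choices[0].message.content)
```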
The company has also begun using cryptographic digital credentials from the Coalition for Content Provenance and Authenticity (C2PA) – a common set of standards for the verification of digital content – for images generated by Dall-E 3.
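Those C2PA credentials are machine-readable, so anyone can check whether an image carries them. The sketch below is illustrative only: it shells out to the open-source c2patool command-line utility to read an image’s manifest; the file name is hypothetical, and the exact JSON layout can differ between tool versions.

```python
# Rough sketch: inspecting C2PA Content Credentials attached to an image.
# Assumes the open-source `c2patool` CLI is installed and on PATH; the file
# name is hypothetical and the manifest layout may vary by tool version.
import json
import subprocess


def read_c2pa_manifest(path: str):
    """Return the C2PA manifest store for `path` as a dict, or None if absent."""
    result = subprocess.run(
        ["c2patool", path],  # default invocation prints the manifest as JSON
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no manifest found, or the file could not be read
    return json.loads(result.stdout)


manifest = read_c2pa_manifest("dalle3_output.png")
if manifest is None:
    print("No Content Credentials found")
else:
    # A generator such as Dall-E 3 would typically be named in the claim data.
    print(json.dumps(manifest, indent=2))
```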
Meta
Meta – owner of Facebook, Instagram and WhatsApp – announced in November that advertisers using its platforms would be required to label some politically charged content that has been digitally modified through the use of AI or another technology.
Image-sharpening, color-correcting, or other minor touch-ups that “are inconsequential or immaterial to the claim, assertion or issue raised in the ad” would not need to be disclosed, Meta wrote in a blog post at the time.
Digitally modified ads that haven’t been properly disclosed may be taken down, the company said, and repeat offenders could incur penalties.
X
Formerly Twitter, X prohibits the sharing of “misleading media,” which it defines on its website as “synthetic, manipulated or out-of-context media that may deceive or confuse people and lead to harm.” The platform also may apply a label to posts that contain misleading media.
Under that definition, X says it includes any “media depicting a real person [that’s] been fabricated or simulated, especially through use of artificial intelligence algorithms.”
Like Meta, X says it does not label or remove media that has been digitally altered in ways that don’t fundamentally change the message behind the post, such as minor retouches to photos.
“Voter suppression or intimidation” is also not allowed on the platform.
X’s Community Notes tool, which rolled out globally in December 2022 following Elon Musk’s $44bn acquisition of the platform, may also help to expose manipulated media. The feature allows everyday users to add context to posts or correct misleading information in them; the notes, displayed directly beneath the original post, could help mitigate political misinformation on the app.
Perplexity
Perplexity, an online search engine that leverages large language models (LLMs) to engage with users, made headlines earlier this year after it received funding from Amazon founder Jeff Bezos (along with a cohort of other VCs).
As far as its credibility as a source of information goes, Perplexity emphasizes its use of citations in the responses generated by its system (although some publishers have accused the platform of plagiarizing their content without attribution).
“Since the company’s inception, we’ve built citations as a key product feature so users have the option to investigate source material and dive deeper into a topic,” a Perplexity spokesperson tells The Drum. “That’s also why we present sources before an AI-generated response, so users can trust the answer and know exactly what information informed the response.”
For elections- or voting-related search queries, the spokesperson added, “we prioritize authoritative sources like local, state, and federal government websites when generating responses.”
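Perplexity hasn’t published how that prioritization works under the hood. Purely as an illustration of the general idea, a search pipeline could boost government domains when ranking the sources it hands to an LLM, as in the sketch below; the domain list, weights and data shapes are all assumptions, not Perplexity’s implementation.

```python
# Illustrative only: one way a search pipeline could boost authoritative
# domains for election-related queries before passing sources to an LLM.
# Domains, weights and result format are assumptions for the sketch.
from urllib.parse import urlparse

AUTHORITATIVE_SUFFIXES = (".gov", ".mil")  # e.g. local, state and federal sites
BOOST = 2.0  # arbitrary multiplier for the example


def rerank_sources(results):
    """Reorder search results so government sources rise to the top.

    Each result is expected to look like {"url": ..., "score": ...}.
    """
    def boosted(result):
        host = urlparse(result["url"]).hostname or ""
        multiplier = BOOST if host.endswith(AUTHORITATIVE_SUFFIXES) else 1.0
        return result["score"] * multiplier

    return sorted(results, key=boosted, reverse=True)


# The .gov page outranks a blog post with a higher raw relevance score.
print(rerank_sources([
    {"url": "https://example-blog.com/voting-guide", "score": 0.9},
    {"url": "https://www.usa.gov/how-to-vote", "score": 0.7},
]))
```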
TikTok
The app requires creators to label any AI-generated content shared on the platform that “contains realistic images, audio, and video,” according to the company’s website. Creators can themselves apply a label to content that’s “been completely generated or significantly edited by AI,” or TikTok might automatically apply a label if it detects AI-generated content in a post.
The platform has also banned some forms of AI-generated content, even that which has been explicitly labeled as such. “We do not allow content that shares or shows fake authoritative sources or crisis events, or falsely shows public figures in certain contexts,” according to its website. “This includes being bullied, making an endorsement or being endorsed.” AI-generated likenesses of minors, or the likenesses of adults published without consent, are also prohibited.
That offers some reassurance for users, especially considering that TikTok has not only played host to some of the most egregious political deepfakes (as recently as this week, in fact, when a faux clip of Australian politician and current Queensland premier Steven Miles dancing circulated online), but is also leaning into AI-generated content. Last month, the platform debuted hyper-realistic AI avatars, enabling creators to essentially duplicate themselves to share sponsored content.
Paid promotion of political content, regardless of whether or not it was generated by AI, is also prohibited on TikTok.
Google
In March, Google announced that Gemini – the AI model that replaced Bard a couple of months prior – would be restricted from answering questions related to upcoming global elections.
“Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses,” the company wrote in a blog post. “We take our responsibility for providing high-quality information for these types of queries seriously, and are continuously working to improve our protections.”
When asked about voting locations in the upcoming 2024 US elections, Gemini responded: “I can’t help with responses on elections and political figures right now. While I would never deliberately share something that’s inaccurate, I can make mistakes. So, while I work on improving, you can try Google Search.”
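For comparison with the OpenAI example above, the same kind of probe can be run against Gemini through Google’s Python SDK. This is a sketch under assumptions – the model name, prompt and key handling are illustrative – and the API may respond differently from the consumer app quoted above.

```python
# Minimal sketch: sending an election-related prompt to Gemini via the
# `google-generativeai` Python SDK. The model name and API-key handling are
# assumptions; the restriction message may differ from the consumer app's.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice

response = model.generate_content(
    "Where can I vote in the upcoming 2024 US elections?"
)

# A restricted query may come back with no usable candidates, in which case
# accessing `response.text` raises; fall back to the prompt feedback instead.
if response.candidates and response.candidates[0].content.parts:
    print(response.text)
else:
    print(response.prompt_feedback)
```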
For more on the latest happenings in AI, web3 and other cutting-edge technologies, sign up for The Emerging Tech Briefing newsletter.