Fake images made to show Trump with Black supporters highlight concerns around AI and elections
WASHINGTON — At first glance, images circulating online showing former President Trump surrounded by groups of Black people smiling and laughing seem to be ordinary campaign photos, but a closer look is telling.
Odd lighting and too-perfect details provide clues to the fact they were all generated using artificial intelligence. The photos, which have not been linked to the Trump campaign, emerged as Trump seeks to win over Black voters who polls show remain loyal to President Biden.
The fabricated images, highlighted in a recent BBC investigation, lend further weight to warnings that the use of AI-generated imagery will only increase as the November general election approaches. Experts said the photos highlight the danger that any group — Latinos, women, older male voters — could be targeted with lifelike images meant to mislead and confuse, and demonstrate the need for regulation of the technology.
In a report published this week, researchers at the nonprofit Center for Countering Digital Hate used several popular AI programs to show how easy it is to create realistic deepfakes that can fool voters. The researchers were able to generate fake images of Trump meeting with Russian operatives, Biden stuffing a ballot box and armed militia members at polling places, even though many of these AI programs say they have rules to prohibit this kind of content.
The center analyzed some of the recent deepfakes of Trump and Black voters and determined that at least one was originally created as satire but was now being shared by Trump supporters as evidence of his support among Black voters.
Social media platforms and AI companies must do more to protect users from AI’s harmful effects, said Imran Ahmed, the center’s CEO and founder.
“If a picture is worth a thousand words, then these dangerously susceptible image generators, coupled with the dismal content moderation efforts of mainstream social media, represent as powerful a tool for bad actors to mislead voters as we’ve ever seen,” Ahmed said. “This is a wake-up call for AI companies, social media platforms and lawmakers — act now or put American democracy at risk.”
The images prompted alarm on both the right and left that they could mislead people about the former president’s support among Black people. Some in Trump’s orbit have expressed frustration at the circulation of the fake images, believing that the manufactured scenes undermine Republican outreach to Black voters.
“If you see a photo of Trump with Black folks and you don’t see it posted on an official campaign or surrogate page, it didn’t happen,” said Diante Johnson, president of the Black Conservative Federation. “It’s nonsensical to think that the Trump campaign would have to use AI to show his Black support.”
Experts expect additional efforts to use AI-generated deepfakes to target specific voter blocs in key swing states, such as Latinos, women, Asian Americans and older conservatives, or any other demographic that a campaign hopes to attract, mislead or frighten. With dozens of countries holding elections this year, the challenges posed by deepfakes are a global issue.
In January, voters in New Hampshire received a robocall that mimicked Biden’s voice telling them, falsely, that if they cast a ballot in that state’s primary they would be ineligible to vote in the general election. A political consultant later acknowledged creating the robocall, which may be the first known attempt to use AI to interfere with a U.S. election.
Such content can have a corrosive effect even when it’s not believed, according to a February study by researchers at Stanford University examining the potential impacts of AI on Black communities. When people realize they can’t trust images they see online, they may start to discount legitimate sources of information.
“As AI-generated content becomes more prevalent and difficult to distinguish from human-generated content, individuals may become more skeptical and distrustful of the information they receive,” the researchers wrote.
Even if it doesn’t succeed in fooling large numbers of voters, AI-generated content about voting, candidates and elections can make it harder for anyone to distinguish fact from fiction. That erosion of trust can lead people to discount legitimate sources of information, undermining faith in democracy and widening political polarization.
While false claims about candidates and elections are nothing new, AI makes it faster, cheaper and easier than ever to craft lifelike images, video and audio. When released onto social media platforms like TikTok, Facebook or X, AI deepfakes can reach millions before tech companies, government officials or legitimate news outlets are even aware of their existence.
“AI simply accelerated and pressed fast forward on misinformation,” said Joe Paul, a business executive and advocate who has worked to increase digital access among communities of color.
Paul noted that Black communities often have “this history of mistrust” with major institutions, including in politics and media, which makes Black communities more skeptical of public narratives about them and of fact-checking meant to inform the community.
Digital literacy and critical thinking skills are one defense against AI-generated misinformation, Paul said. “The goal is to empower folks to critically evaluate the information that they encounter online.”
Brown and Klepper write for the Associated Press.