Images of former President Donald Trump posing with Black supporters circulated on social media over the weekend. Although the images were shared by real Trump supporters, the photos themselves were fake, created with generative AI.
One image, created by conservative radio show host Mark Kaye and his team, showed Trump with his arms around a group of Black women. Kaye shared the image on social media, where he has more than one million followers, according to the BBC.
Another image, created by a user identified as “Shaggy,” featured the former president sitting on a porch with a group of young Black men. That photo was also posted on social media, where it received thousands of likes and 1.4 million views.
A Trump campaign official told The Hill that the campaign had no involvement in the creation or sharing of the images. However, the former president has been actively courting Black voters, who could be a critical voting bloc in the general election.
“Often, a lot of successful approaches for political candidates have not been facilitated by their campaigns at all. Social media has enabled a user to have some impact, especially if they already have a large following. Trump and other politicians have been known to share content regardless of its basis in truth or reality, so these efforts are not new,” said Dr. Julianna Kirschner, lecturer in the Annenberg School for Communication and Journalism at the University of Southern California.
AI content is more of the same, but the barrier to entry is much lower. Anyone with a modicum of understanding of AI tools can potentially create similar content.
“This relatively benign example of Trump with supposed Black supporters will be an expedited blurring of the line between truth and falsehood,” added Kirschner. “In prior years, Trump and other candidates could make claims and expect their followers to believe them. Now, they can create ‘evidence’ to support such claims.”
Breaking No Rules—At Least Not Yet
There are currently no regulations governing how such manipulated images or videos can be employed in a political campaign. It is likely lawmakers will have to act sooner rather than later.
“Election security is a bigger issue than ever because generative AI and deepfakes threaten our civil discourse, the democratic process, and election integrity,” warned technology industry analyst Susan Schreiner of C4 Trends. “Generative AI is a potent new tool for disinformation campaigns to sway voters, deliberately spread false information, and enable organized coordination and incitement of violence and related problems.”
Such AI-created content is especially concerning because the technology is now so realistic: the results can look and sound entirely authentic.
“In today’s climate, nefarious actors could easily convey a dangerous false narrative that hurts the reputation or support for a candidate,” Schreiner continued. “They could spread false claims that seek to erode trust in public institutions, such as claims of voter fraud, claims that your vote won’t count in a primary, misrepresenting the date of an election, or creating a false narrative by misrepresenting a candidate’s support and position on certain issues. It’s become easy to spread falsehoods with the perception that it’s the truth!”
Thus, the greater danger is that deepfakes could stretch the truth so far that rivals can be framed in a bad light.
“This is especially concerning when that framing is based on lies,” said Kirschner. “As AI improves over time, it will be hard for people to discern the difference between a real and a fabricated event. It will not matter what policies and ideals candidates stand behind anymore. The political landscape will then be based on who can harness AI better than their opponent.”
Media ethics will become more critical in the coming years, because misleading AI content should not be the predominant way that political discourse is carried out.
“Disclosures of AI usage must be provided, as we currently do with revealing the sponsor of political advertisements. As we learn more about advances in AI, people will need to be educated in media and digital literacy, so they can navigate this new political reality,” Kirschner continued.
Not Just a U.S. Problem
Though deepfakes could certainly play a role in the 2024 election, the problem is only going to get worse as the technology improves. Moreover, this isn’t just about the race for the White House in November.
“Globally, more voters will head to the polls than ever in history, representing 49% of the world’s population,” said Schreiner. “This is likely to be the most consequential election year, with at least 64 countries, plus the European Union, expected to hold national elections in 2024.”
Fortunately, many of the world’s leading tech companies—including Microsoft, Meta, Google and others—have voluntarily pledged to adopt shared practices for detecting and labeling AI-generated deepfake content aimed at misleading voters in elections.
“A total of thirteen firms signed onto the pledge, signaling their intent to monitor their platforms for deceptive election-related deepfakes and provide swift, proportional responses when such content is identified,” Schreiner noted. “However, the measures agreed to are non-binding, leaving some questioning whether this amounts to meaningful progress or merely virtue signaling in an era of heightened regulatory scrutiny.”