Since generative AI went mainstream, the amount of fake content and misinformation spread via social media has grown dramatically.

Today, anyone with access to a computer and a few hours to spend studying tutorials can make it appear that anyone has said or done just about anything.

While some countries have passed laws attempting to curtail this, their effectiveness is limited by the ability to post content anonymously.

And what can be done when even candidates in the US presidential election are reposting AI fakes?

To a large extent, social media companies are responsible for policing the content posted on their own networks. In recent years, we’ve seen most of them implement policies designed to mitigate the dangers of AI-generated fake news.

But how far do they go, and will it be enough? And is there a risk that these measures themselves could harm society by impinging on rights such as free speech?

Why Is AI-Generated Fake Content a Big Problem?

We’ve seen a huge increase in the use of AI to create fake information with the aim of undermining trust in democratic processes, such as elections.

AI-generated deepfakes can appear highly realistic. Video and audio content has been widely used to damage reputations and manipulate public opinion. The vast reach of social media makes it possible for this fake content to go viral very quickly, reaching a great many people.

For example, this year, thousands of registered Democratic voters in New Hampshire, USA, received robocalls urging them to abstain from voting. The voice, purporting to be that of President Joe Biden, told recipients that the upcoming state primary would be an easy victory and that they should instead save their vote for future elections that would be more closely contested.

And in Bangladesh, deepfaked videos of two female opposition politicians wearing swimming costumes in public caused controversy in a society where women are expected to dress modestly.

This is just the tip of the iceberg – researchers estimate that more than half a million deepfaked videos were in circulation on social media in 2023. And with access to the technology widening, it’s a problem that’s only going to get worse.

What Are Social Media Networks Doing About It?

Most of the big social media companies have said that they have implemented measures designed to protect against the rising tide of fake content and disinformation.

Meta – owner of Facebook and Instagram – employs a mix of technological and human-based solutions. Algorithms scan every piece of uploaded content, and anything flagged as AI-generated is automatically labeled as such with an “AI Info” tag warning that the content may not be everything it purports to be.

The company also employs humans and third-party fact-checking services to manually check and flag content. And it prioritizes reputable and trusted sources when recommending content in users’ feeds on the basis that established news organizations are less likely to allow their reputation to be damaged by publishing fake content.

X (formerly Twitter), on the other hand, takes a user-driven approach. It relies on a system called Community Notes, which allows eligible users who sign up as contributors to flag and annotate content they believe is misleading. X has also, on occasion, banned users from the platform who were found to be misrepresenting politicians. Its Synthetic and Manipulated Media Policy states that users must not share synthetic (AI-generated) content that may deceive or confuse people.

YouTube – owned by Google – states that it actively removes misleading or deceptive content that poses a risk of harm. With “borderline” content, which may not explicitly break the rules but still poses a risk, it takes steps to reduce the likelihood that it will appear in lists of videos that it recommends users to watch. As with Meta, this is policed through a combination of human reviewers and machine learning algorithms.

And TikTok, owned by ByteDance, has adopted Content Credentials – a content-provenance technology developed by the C2PA industry coalition – to detect AI-generated images, video, and audio, and to automatically apply warning labels when such content appears in users’ feeds. Users must also self-certify any uploads containing realistic-looking AI-generated video, images, or audio, confirming that the content is not intended to mislead or cause harm.

Is It Working And What Else Can Be Done?

Despite these efforts, AI-generated content clearly designed to mislead is still widely distributed on all of the major platforms, so it’s clear that there is some way to go.

While the technological and regulatory solutions implemented by the networks and governments are essential, I think it’s unlikely that they alone will solve the problem.

I believe education will ultimately be more important if we don’t want to live in a “post-truth” world where we can no longer trust what we see with our eyes.

Developing the critical thinking skills necessary to determine whether the content is likely to be real or is probably designed to deceive us will be a key part of the puzzle.

The fight against fake content and disinformation is an ongoing battle that will require collaboration between content providers, platform operators, legislators, educators, and ourselves as users and consumers of online information.

It’s certainly worrying that even more sophisticated AI tools, capable of more convincing fakery and deception, are sure to be on the horizon. This means that developing effective methods to counter these risks will be one of the most pressing challenges facing society in the coming years.
