The Superior Electoral Court in Brazil has outlined rules around the use of artificial intelligence in political campaigning ahead of municipal elections set to take place in October this year.
Voted on and approved by the majority of the Court yesterday (27), the rules follow a series of public hearings and consultations on the theme. Among them is a provision that the use of deepfakes – false content using the voice or likeness of a real person – is strictly forbidden.
“Synthetic content in audio or video format, or a combination of both, which has been digitally generated or manipulated, even with authorization, to create, replace or alter the image or voice of a living, deceased or fictitious person cannot be used to harm or to favor a candidacy”, says the resolution voted on by the Court, in relation to deepfakes.
In addition, any material that has been “fabricated or manipulated” through use of AI should be explicitly labeled as such, according to the new rules.
According to Carolina Jatobá, a digital law professor at the University Center of Brasília, the relationship between technology and democracy “has always been complex”, but the latest advances take the debate on deepfakes to a new level.
Jatobá stresses that deepfakes can fuel polarization and undermine trust in the election process, causing irreparable damage even before the falsehoods are exposed. “This ability to manipulate the truth in real time poses a significant challenge for defenders of democracy,” she adds.
The accountability issue
The rules set out by Brazil’s Superior Electoral Court also cover the use of chatbots and avatars as channels to support campaigning: according to the Court’s resolutions, communication taking place through these tools cannot simulate a conversation between a candidate and a real person.
In addition, the Court is calling for greater accountability from digital platforms in regulating content shared during elections. The rules voted on yesterday also address the responsibility of technology companies that fail to take down risky content in the run-up to the elections.
According to the new rules, such cases include hateful behavior or speech, such as the promotion of racism, homophobia, and Nazi, fascist or hateful ideologies targeting a person or group based on prejudice of origin, race, sex, color, age or any other form of discrimination.
Major technology companies such as Meta and Google have argued that the responsibility for disseminating disinformation, especially AI-generated content, should fall on political parties and candidates.
According to digital law expert Jatobá, “perfect artificial counterintelligence” can only be offered by those who create the technology – and, until that happens, the public must remain vigilant and critical of the content they consume.
“Ultimately, defending democracy against deepfakes will require a multifaceted approach that involves governments, technology companies and individuals around the world,” she argues.
The broader scenario
The risks posed by deepfakes meant that regulating AI beyond the elections has become a priority for Brazilian politicians. Leaders across both the Lower and Upper Houses of the Congress – the Federal Senate and the Chamber of Deputies – have expressed concerns over the advances of AI.
Dozens of bills focusing on artificial intelligence have been awaiting analysis by the Brazilian Congress since 2019, though the majority were submitted in 2023. Themes of the proposed legislation include the use of AI to manipulate the likeness of deceased individuals, intellectual property for art created with the technology, specific sanctions for crimes perpetrated with AI, and regulation for autonomous vehicles.
However, the main bill under discussion in Brazil, authored by the president of the Senate, Rodrigo Pacheco, establishes a framework for the development and application of AI systems, as well as the rights of those affected by the technology and the risks involved. The bill is expected to be voted on in April 2024.