Technology, AI and social media were developed for the common good, to make our lives easier in one way or another. They can be, and are, used for good – there is no doubt about that. However, they can also be abused – and this is precisely what we have seen across contemporary cases of genocide.

The Yazidi genocide

Back in 2013–2014, when Daesh was advancing in Syria and Iraq, and before some of the worst atrocities perpetrated by the terror group against Yazidis and other minority groups, Daesh members mastered social media platforms, technology and various forms of AI-enhanced software. Daesh focused on social media and AI for propaganda purposes – to recruit new fighters and supporters – but also to spread propaganda against Yazidis, Christians and other ethnic and religious minorities. Online magazines, such as Dabiq, were used by Daesh to spread hatred against these communities before some of the most horrific atrocities – including the attack on Sinjar in August 2014.

In 2024, after the commemoration of the 10th anniversary of the Yazidi genocide, hate speech targeting the Yazidi community skyrocketed, sending waves of fear across the community. Fearing what was to come, many Yazidis started leaving the internally displaced people (IDP) camps and heading to Sinjar, even though the security situation there was highly concerning.

The Rohingya genocide

Social media have been widely used to incite ethnic and religious hatred and tensions in Myanmar, including against the country’s Rohingya Muslim minority. According to in-depth research by Amnesty International, “In the months and years leading up to the atrocities, Facebook’s algorithms were intensifying a storm of hatred against the Rohingya which contributed to real-world violence.” Actors linked to the Myanmar military and radical Buddhist nationalist groups flooded the platform with anti-Muslim content, posting disinformation “claiming there was going to be an impending Muslim takeover, and portraying the Rohingya as ‘invaders’.”
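A minimal sketch may help illustrate the mechanism Amnesty International describes. The names, weights and fields below are hypothetical, not Facebook’s actual system; the point is that a feed ranked purely on predicted engagement will, by construction, amplify whatever provokes the strongest reactions – including inflammatory content:

```python
# Hypothetical sketch of engagement-optimized feed ranking. The names,
# weights and fields are illustrative assumptions, not Facebook's actual
# system; they show how optimizing for engagement can amplify outrage.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_reactions: float  # model-estimated engagement signals
    predicted_shares: float
    predicted_comments: float

def engagement_score(post: Post) -> float:
    # A purely engagement-driven objective: nothing here penalizes
    # hateful content, so whatever provokes the strongest response wins.
    return (1.0 * post.predicted_reactions
            + 2.0 * post.predicted_shares      # shares spread content furthest
            + 1.5 * post.predicted_comments)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Posts with the highest predicted engagement appear first in the feed.
    return sorted(posts, key=engagement_score, reverse=True)
```

Under such an objective, amplification of inflammatory material is not a malfunction but the expected outcome of the optimization, since nothing in the score penalizes harm.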

In response – though only in 2018, after some of the worst atrocities against the community had already been perpetrated – Facebook removed 18 Facebook accounts, one Instagram account and 52 Facebook Pages in Myanmar, and banned 20 individuals and organizations from the platform, including Senior General Min Aung Hlaing, commander-in-chief of the armed forces, and the military’s Myawady television network. The stated aim was to prevent them from using Facebook to further inflame tensions.

The Uyghur genocide

The Uyghur communities have been skillfully targeted by all three horsemen – technology, AI and social media. Artificial intelligence and data-driven mass surveillance systems, with genetic information and facial recognition capacities, have been widely used to identify and target Uyghur Muslims and other Turkic minorities in Xinjiang before they were ultimately sent to so-called “re-education camps.” Among others, law enforcement has been using an app to collect personal information, report on activities or circumstances deemed suspicious, and prompt investigations of people the system flags as problematic. Suspicious behavior was understood broadly, to include lawful, everyday, non-violent conduct such as “not socializing with neighbors, often avoiding using the front door.”
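To make concrete how such a flagging pipeline operates, here is a minimal, hypothetical sketch based only on the behaviors reported above. The field names, rules and logic are illustrative assumptions, not the actual app:

```python
# Hypothetical sketch of a rule-based "suspicious behavior" flagger,
# modeled loosely on the public reporting described above. All field
# names, rules and thresholds are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class PersonRecord:
    name: str
    socializes_with_neighbors: bool
    uses_front_door_regularly: bool
    flags: list[str] = field(default_factory=list)

def screen(record: PersonRecord) -> bool:
    """Apply blanket rules that treat lawful, everyday behavior as
    suspicious; any single match queues the person for investigation."""
    if not record.socializes_with_neighbors:
        record.flags.append("not socializing with neighbors")
    if not record.uses_front_door_regularly:
        record.flags.append("often avoiding using the front door")
    return bool(record.flags)  # True -> an investigation is prompted
```

The design point is that such a system criminalizes ordinary life by construction: any rule match, however innocuous the behavior, generates an enforcement lead.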

Furthermore, social media and bots have been widely used to suppress any information about the atrocities – and to target anyone speaking about them – replacing such content with CCP-approved posts promoting Xinjiang as a dream holiday destination.

The Tigray genocide

In the case of the atrocities against Tigrayans during the Tigray war in Ethiopia, the region was subjected to over two years of internet and communication shutdowns. These deliberate shutdowns prevented communities from communicating with family and friends, accessing potentially life-saving information, and letting the world know what was happening in the region.

Russia’s war on Ukraine

Russia, in its war against Ukraine, has been manipulating information and spreading disinformation online as a means of information warfare, including through the use of AI and deepfakes. Josep Borrell, the then Vice-President in charge of coordinating the external action of the European Union, explained that “Disinformation and propaganda outlets are today a weapon of the Kremlin. And this weapon is a weapon – it hurts, it kills. It kills the capacity of the people to understand what is going on, and, as a consequence, the position of governments and the decisions of international organizations. (…) Russia has built networks and an infrastructure to mislead, to lie and destabilize in an industrial manner.”

These are only a few examples of the strategic use and misuse of technology, AI and social media in the commission of atrocity crimes.

What does this mean for our preparedness and capacity to monitor, analyze and respond to atrocity crimes? In short, we are behind. We are behind the perpetrators in recognizing the significance and the power of the three horsemen, and behind in addressing the risks they pose in exacerbating atrocity crimes.

As it stands, the key framework for the analysis of atrocity crimes – now 10 years old – is outdated, as it does not address these developments in the use of technology, AI and social media in the commission of atrocity crimes. This gap means that, by default, our capabilities to identify early warning signs and risk factors are significantly diminished. And this is before we even consider that only a few countries in the world monitor and analyze early warning signs and risk factors of atrocity crimes – the fundamental work of prevention.

Technology, AI and social media can and must play a role in atrocity prevention. It is crucial to identify best practices for how the power of these three can be harnessed to ensure we give prevention a chance.
