Having recently passed the Artificial Intelligence Act, the European Union is about to bring into force some of the world’s toughest AI regulations.

Potentially dangerous AI applications have been designated "unacceptable" and will be illegal except under specific conditions for government, law enforcement, and scientific study.

As was true with GDPR, this new EU legislation will create new obligations for anyone who does business within the 27 member states, not just the companies based there.

Those responsible for writing it have said that the aim is to protect citizens’ rights and freedoms while also fostering innovation and entrepreneurship. But the 460-odd published pages of the Act contain a lot more than that.

If you run a business that operates in Europe or sells to European consumers, there are some important things you need to know. Here's what stands out to me as the key takeaways for anyone who wants to be prepared for potentially significant changes.

When Does It Come Into Force?

The Artificial Intelligence Act was adopted by the EU Parliament on March 13 and is expected to become law soon, once it is approved by the European Council. It will take up to 24 months for all of it to be enforced, but enforcement of certain aspects, such as the newly banned practices, could begin in as little as six months.

As was the case with GDPR, this delay is intended to give companies time to ensure they're compliant. After this deadline, they could face significant penalties for any breaches. These are tiered, with the most serious reserved for breaches of the "unacceptable uses" ban. At the top end are fines of up to 30 million euros or six percent of the company's global turnover, whichever is higher.
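To make the "whichever is higher" rule concrete, here is a minimal sketch of how the top-tier cap would play out. The function name and the turnover figures are mine, purely for illustration; they are not part of the Act.

```python
def max_fine(global_turnover_eur: float) -> float:
    """Upper bound on a fine for breaching the 'unacceptable uses' ban:
    a fixed cap of 30 million euros, or six percent of global turnover,
    whichever is higher."""
    return max(30_000_000, 0.06 * global_turnover_eur)

# A company with EUR 1 billion in global turnover:
print(max_fine(1_000_000_000))  # 60000000.0 -> the six-percent figure applies

# A smaller company with EUR 200 million in turnover:
print(max_fine(200_000_000))    # 30000000 -> the fixed cap applies
```

In other words, the fixed 30-million-euro cap only bites for companies whose global turnover is below 500 million euros; above that, the percentage term dominates.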

Potentially even more damaging, though, would be the impact on a business’s reputation if it’s found to be breaking the new laws. Trust is everything in the world of AI, and businesses that show they can’t be trusted are likely to be further punished by consumers.

Some Uses Of AI Will Be Banned

The act states that “AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.”

In order to do that, the EU has prohibited the use of AI for a number of potentially harmful purposes. Those specifically listed include:

· Using AI to influence or change behaviors in ways that are harmful.

· Biometric classification to infer political and religious beliefs or sexual preference or orientation.

· Social scoring systems that could lead to discrimination.

· Remotely identifying people via biometrics in public places (facial recognition systems, for example).

There are some exemptions. There’s a list of situations where law enforcement organizations can deploy “unacceptable” AIs, including preventing terrorism and locating missing people. There are also exemptions for scientific study.

Given the Act's stated human-centric aim, it's good to see that limiting the ways AI could cause harm has been put at the heart of the new laws.

However, some of the wording is ambiguous and open to interpretation. Could the use of AI to target marketing for products like fast food and high-sugar soft drinks be considered influencing behaviors in harmful ways? And how do we judge whether a social scoring system will lead to discrimination in a world where we're used to being credit-checked and scored by a multitude of government and private bodies?

This is an area where we will have to wait for more guidance or information on how enforcement will be applied to understand the full consequences.

High-Risk AI

Aside from the uses deemed unacceptable, the Act sorts AI tools into three further categories: high, limited, and minimal risk.

High-risk AI includes use cases like self-driving cars and medical applications. Businesses involved in these or similarly risky fields will find themselves facing stricter rules as well as a greater obligation around data quality and protection.

Limited and minimal-risk use cases could include applications of AI purely for entertainment, such as in video games, or in creative processes such as generating text, video or sounds.

There will be fewer requirements here, although there will still be expectations regarding transparency and ethical use of intellectual property.

Transparency

The Act makes it clear that AI should be as transparent as possible. Again, there’s some ambiguity here—at least in the eyes of someone like me who isn’t a lawyer. Stipulations are made, for example, around cases where there is a need to “protect trade secrets and confidential business information.” But it’s uncertain right now how this would be interpreted when cases start coming before courts.

The act covers transparency in two ways. First, it decrees that AI-generated images must be clearly marked to limit the damage that can be done by deception, deepfakes, and disinformation.

It also covers the models themselves in a way that seems particularly aimed at big tech AI providers like Google, Microsoft and OpenAI. Again, this is tiered by risk, with developers of high-risk systems becoming obliged to provide extensive information on what they do, how they work and what data they use. Stipulations are also put in place around human oversight and responsibility.

Requiring AI-generated images to be marked as such seems like a good idea in theory, but it might be difficult to enforce, as criminals and spreaders of deception are unlikely to comply. On the other hand, it could help establish a framework of trust, which will be critical to enabling effective use of AI.

As far as big tech goes, I expect this will likely come down to a question of how much they are willing to divulge. If regulators accept the likely objections that documenting algorithms, weightings, and data sources is confidential business information, then these provisions could turn out to be fairly toothless.

It’s important to note, though, that even smaller businesses building bespoke systems for niche industries and markets could, in theory, be affected by this. Unlike the tech giants, they may not have the legal firepower to argue their way in court, putting them at a disadvantage when it comes to innovating. Care should be taken to ensure that this doesn’t become an unintended consequence of the act.

What Does This Mean For The Future Of AI Regulation?

Firstly, it shows that politicians are starting to make moves when it comes to tackling the huge regulatory challenges thrown up by AI. While I'm generally positive about the impact I expect AI to have on our lives, we can't ignore that it also has huge potential to cause harm, deliberately or accidentally. So any application of political will toward addressing this is a good thing!

But writing and publishing laws is the relatively easy part. It’s putting in place the regulatory, enforcement and cultural frameworks to support the change that takes real effort.

The EU Act is the first of its kind, but it's widely expected to be followed by further regulation across the globe, including in the USA and China.

This means that for business leaders, wherever they are in the world, taking steps to ensure they’re prepared for the changes that are coming is essential.

Two key takeaways from the Act are that every organization will have to understand where its own tools and applications sit on the risk scale and take steps to ensure that its AI operations are as transparent as possible.

On top of that, there's a real need to stay informed on the ever-changing regulatory landscape of AI. Law moves at a relatively slow pace, so you shouldn't be taken by surprise!

Above all, though, I believe the real key message is the importance of building a positive culture around ethical AI. Ensuring that your data is clean and unbiased, your algorithms are explainable and any potential for causing harm is clearly identified and mitigated is the best way to make sure you’re prepared for whatever legislation might appear in the future.
