23 July 2023

The Brussels Effect meets AI: What to Expect from the EU’s AI Act?

As the development of artificial intelligence (AI) continues to accelerate, governments around the world are under increasing pressure to regulate the technology. The most tangible legislative proposal on the topic so far is the draft AI Act of the European Union (EU), a landmark proposal which is on track to be adopted by the end of 2023 or early 2024. This article summarises the current state of play of the EU negotiations and the likely direction of the legislative proposal.

Key provisions of the draft AI Act

The EU’s draft legislation aims to foster AI development and innovation and mitigate potential risks in a responsible manner. It seeks to lay out a regulatory framework that builds trust with the EU citizenry and sets out explicit guidelines for AI-related activities for the public and private sectors. This approach is already attracting global attention and will likely inform other countries’ approaches to AI regulation – though it is yet to be seen how strong the so-called ‘Brussels effect’ will be in the AI domain.

One of the core elements of the legislative proposal is that it classifies AI systems into four risk categories:

  • The first category (unacceptable risk) is banned outright and covers systems such as social scoring and AI that manipulates behaviour in ways harmful to fundamental rights.
  • The second category (high-risk) covers AI used in sensitive domains such as law enforcement, critical infrastructure, and essential private and public services; these systems are subject to strict requirements, including data governance and human oversight.
  • The third category (limited risk) includes AI systems such as chatbots (e.g., ChatGPT) and imposes transparency and disclosure requirements on them.
  • The fourth category (minimal or no risk) faces no additional obligations under the Act, so long as the system remains classified as minimal risk.

The AI Act allows activities to be classified and re-classified across risk categories over time, enabling the legislation to adapt to the evolving landscape of AI systems. Furthermore, it establishes a European Artificial Intelligence Board to facilitate EU Member States’ adherence to the incoming regulations. However, it remains to be seen what enforcement capabilities this board will possess.

Legislative process: EP adds amendments, and negotiations to follow

On 11 May 2023, the IMCO and LIBE committees of the European Parliament (EP) adopted a draft report on the legislation, which includes significant amendments compared to the original proposal. The draft report includes a ban on predictive policing, a number of additions to the list of stand-alone AI systems categorised as high-risk, and a strong and inclusive role for the new AI Office. The EP committees have also pushed for stronger alignment with the EU’s GDPR framework, as well as the introduction of specific provisions on general-purpose artificial intelligence.

This draft negotiating mandate was endorsed by the whole EP at the plenary session on 14 June 2023. As the next step, negotiations between the EP and the Council will begin.

A balancing act?

Proponents of the draft legislation argue that it is an essential tool for promoting the safe use and development of AI and that it will eventually boost public trust in AI systems. Critics, on the other hand, have warned that the AI Act will stifle innovation, particularly through its stringent requirements on high-risk systems. With other major players in AI yet to propose concrete legislation, some have also voiced concerns that the AI Act will leave European developers lagging behind their international competitors.

Such considerations are reflected in the EU negotiations: the EP has so far endorsed stricter provisions, while Member States like France have argued that the EP’s position could harm the EU’s ability to compete with the US or China. Sam Altman, the CEO of OpenAI (the developer of ChatGPT), has recently toured Europe, paying visits to various Member States (including France, Spain, Germany, and Poland), and pledged to comply with EU rules and to establish a European headquarters in a Member State.

The EU’s AI Act will likely influence regulatory and policy approaches across the globe: it sets out a legal framework for regulating AI and provides a case study that can inform other countries’ decisions on how to regulate the technology. In an increasingly interconnected world, even if the EU’s approach is not widely replicated, it could still serve as an enduring benchmark for regulatory frameworks in AI.
