Dec 13, 2023

EU agrees on AI Act and sets the first rules for AI in the world

Following almost three years of work and a three-day ‘marathon’ trilogue, negotiators from the Council presidency and the European Parliament (EP) have reached a provisional agreement on the world’s first comprehensive artificial intelligence (AI) law, known as the AI Act.


The AI Act, originally presented by the European Commission in April 2021, fits into the EU’s “rights-driven” regulatory agenda and aims to balance protecting fundamental rights with stimulating investment and innovation in AI in the EU. The act achieves this through a ‘risk-based’ approach, applying stricter rules to higher-risk applications and use cases of AI.


As the first legislative proposal of its kind in the world, the AI Act is expected by EU leaders to leverage the so-called Brussels effect, setting global standards for AI regulation and promoting the EU’s approach to tech regulation worldwide.


The main elements of the provisional agreement are the following: 


  • Definition of AI 


The regulation adopts the main elements of the recently revised OECD definition of AI and, with minor differences in wording, defines an AI system as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.


  • Classification of AI systems 

Applications banned because of the threat they pose to democracy and citizens’ rights include: biometric categorisation based on sensitive characteristics; untargeted scraping of facial images to create facial recognition databases; emotion recognition in workplaces and educational institutions; social scoring based on personal characteristics; and AI systems that manipulate human behaviour or exploit vulnerabilities.


High-risk AI systems, such as those used in vehicles, education, voting systems, critical infrastructure, emotion recognition, biometric identification, law enforcement, and public services, face several obligations. Among the most important requirements are fundamental rights impact assessments and conformity assessments, registration in the public EU database, implementation of risk management and quality management systems, data governance, transparency, human oversight, accuracy, robustness, and cybersecurity.


  • Law enforcement exceptions 

Exceptions for law enforcement have been established at the insistence of Member States, including the use of real-time biometric identification systems in public spaces, subject to strict conditions such as prior judicial authorisation and restriction to specific lists of serious crimes.


  • General-purpose AI systems and foundation models 

General-purpose AI (GPAI) systems that can be used for many different purposes are subject to lighter obligations than high-risk systems, chiefly transparency requirements such as technical documentation, training data summaries, and copyright and IP safeguards. High-impact models with systemic risks face additional requirements, including model evaluations, risk assessments, adversarial testing, and incident reporting. Models above a certain threshold of computing power used in training (reported to be 10^25 floating-point operations) will automatically be categorised as “systemic”. For generative AI systems, such as chatbots, users must be informed when they are interacting with AI, and AI-generated content must be labelled and detectable.
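

To make the compute-based tiering concrete, here is a minimal Python sketch of how the reported presumption could be expressed. The 10^25 FLOP figure reflects contemporaneous reporting on the provisional agreement; the constant and function names are hypothetical illustrations, not anything defined in the act itself.

```python
# Illustrative sketch only: the AI Act states this presumption in legal text,
# not code. The 10**25 FLOP figure is the threshold reported for the
# provisional agreement; all names here are hypothetical.

# Training-compute threshold (floating-point operations) above which a
# general-purpose AI model is presumed to carry systemic risk.
SYSTEMIC_RISK_FLOP_THRESHOLD = 10**25

def is_presumed_systemic(training_compute_flops: float) -> bool:
    """Return True if a GPAI model's training compute meets or exceeds
    the reported systemic-risk presumption threshold."""
    return training_compute_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD

# Example: a model trained with 3e25 FLOPs would be presumed "systemic"
# and face the additional obligations (model evaluations, adversarial
# testing, incident reporting) described above.
print(is_presumed_systemic(3e25))  # True
print(is_presumed_systemic(1e24))  # False
```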


After intense debates between the Council and the EP, the provisional agreement sets specific rules for foundation models, defined as large systems capable of competently performing a wide range of distinctive tasks, such as generating video, text, and images, conversing in natural language, computing, or generating computer code. The agreement creates stricter rules for ‘high-impact’ foundation models, i.e. foundation models trained on large amounts of data and with advanced complexity, capabilities, and performance well above the average. Most importantly, foundation models must comply with certain transparency obligations before their commercialisation starts.


  • Governance 

The AI Act assigns the enforcement and supervision of the most advanced AI models to a new AI Office within the Commission. The AI Office will rely on the support of a scientific panel of independent experts, which will advise specifically on GPAI models, contribute to the development of methodologies for evaluating the capabilities of foundation models, advise on the designation and emergence of high-impact foundation models, and monitor possible material safety risks related to foundation models.


The AI Board, comprising Member States’ representatives, will serve as a coordination platform and an advisory body to the Commission, and will give Member States an important role in implementing the regulation, including the design of codes of practice for foundation models. An advisory forum for a wide range of stakeholders, including industry representatives, SMEs, start-ups, civil society, and academia, will provide technical expertise to the AI Board.


  • Sandboxes 

The provisional agreement promotes regulatory sandboxes and real-world testing for AI development. It also requires providers of foundation models and generative AI systems to give downstream providers that build high-risk applications on top of them all the information necessary to comply with the AI Act’s obligations. This should streamline requirements for SMEs building on these models and fuel innovation.


  • Penalties 

Fines will be set as a percentage of global annual turnover (GAT) or a fixed amount, whichever is higher: up to 7% of GAT or €35 million for violations involving banned AI applications, up to 3% of GAT or €15 million for most other violations of the act’s obligations, and up to 1.5% of GAT or €7.5 million for supplying incorrect information. For SMEs and start-ups, fines will be subject to proportionate caps so as not to disproportionately hurt smaller companies.
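

A short Python sketch of this tiered cap, under the stated assumptions: the fine levels follow the reported provisional agreement, the “whichever is higher” rule applies to large firms (SMEs get proportionate caps instead), and the tier labels and function name are hypothetical.

```python
# Illustrative sketch only, not legal advice: fine levels as reported for the
# provisional agreement; tier names and function name are hypothetical.

# (fixed cap in euros, share of global annual turnover) per violation tier
FINE_TIERS = {
    "banned_application":    (35_000_000, 0.07),
    "other_obligation":      (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for a violation tier: the fixed amount or the
    turnover-based amount, whichever is higher (large firms; SMEs and
    start-ups are instead subject to proportionate caps)."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# Example: a firm with €2 billion global annual turnover
print(max_fine("banned_application", 2_000_000_000))   # 140000000.0 (7% > €35M)
print(max_fine("incorrect_information", 100_000_000))  # 7500000.0 (fixed cap applies)
```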


  • Enforcement 

Enforcement will be shared between Member States and EU authorities. At the EU level, the AI Office and the AI Board will oversee national authorities’ enforcement and coordinate policy, while market surveillance authorities at the national level will enforce the AI Act in their respective countries. Any individual can file a complaint about non-compliance, giving EU citizens a potential avenue for redress.


  • Open-source software 

Free and open-source software will be excluded from the scope of the legislation unless it constitutes a high-risk system, a prohibited application, or a system that poses a risk of manipulation.


Next steps 


Formal adoption 

Discussions to finalise the details of the regulation will continue at the technical level, and the incoming Belgian presidency of the Council is expected to submit the final text to Coreper ambassadors for approval. The final act is expected to be adopted in early 2024.


Implementation

Most provisions will apply two years after the AI Act enters into force, with shorter deadlines for certain provisions, such as the six-month period after which the prohibitions on banned applications take effect, while the governance infrastructure is set up in the meantime.
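

A small Python sketch of this date arithmetic, purely for illustration: the entry-into-force date used below is an assumption (the real date depends on publication in the EU Official Journal), and the helper function is hypothetical.

```python
# Illustrative sketch only: computes hypothetical application dates from an
# assumed entry-into-force date.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day if needed."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

entry_into_force = date(2024, 6, 1)  # assumed date, for illustration only
print("Prohibitions apply from:", add_months(entry_into_force, 6))     # +6 months
print("Most provisions apply from:", add_months(entry_into_force, 24)) # +2 years
```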


Reaching an agreement on such a consequential piece of legislation is a major achievement by the EU, led by the Spanish presidency, which was able to close this dossier despite fierce debates and last-minute position changes. With this agreement, the EU has become the first actor in the world to formally regulate AI and to develop a future-proof rulebook for this technology. Hopes are high among EU legislators that the Brussels effect will again take hold in the realm of AI, cementing the EU as a global regulatory trend-setter.


Significant opportunities are likely to emerge in the legal and policy arenas in the coming months, as AI companies of all sizes seek to understand how to comply with their obligations under the AI Act.
