EU ambassadors have approved the world’s first comprehensive rulebook for Artificial Intelligence (AI), solidifying the political agreement reached in December. The AI Act, a flagship bill designed to regulate AI based on its potential to cause harm, faced significant opposition from key European players, most notably France, which sought a more lenient regulatory approach for powerful AI models like OpenAI’s GPT-4.
The journey towards consensus on the AI Act was far from smooth. The then-Spanish presidency faced cold receptions from Paris, Berlin, and Rome, with EU countries attempting to influence the text as technical work progressed on the law’s preamble. Despite these efforts, the operative parts of the text were locked, limiting the space for concessions.
As the Belgians assumed the presidency in January, they presented a ‘take-it-or-leave-it’ scenario to member states, resisting significant changes. France engaged in back-room manoeuvring, aiming to gather opposition and secure concessions or even reject the provisional agreement.
However, on January 24, the Belgian presidency presented the final version of the text, leaked exclusively by Euractiv, leading to reservations from member states due to insufficient time for comprehensive analysis. The Committee of Permanent Representatives lifted these reservations on February 2, securing the adoption of the AI Act.
France, Germany, and Italy, primarily concerned with powerful AI models supporting General Purpose AI systems, advocated for a lighter regulatory regime, emphasizing codes of conduct over hard rules. The compromise, however, adopted a tiered approach, imposing horizontal transparency rules for all models and additional obligations for the most capable models posing systemic risks.
The balance tilted against France as Berlin threw its support behind the text, isolating the German Digital Minister in his opposition. Italy, lacking a leading AI startup to defend, also chose not to oppose the AI Act despite internal discontent.
France ultimately agreed to support the text with strict conditions, seeking to ensure competitive AI model development, balance transparency and protection of trade secrets, and avoid overburdening companies with high-risk obligations.
What is the act about?
The new rules establish obligations for providers and users depending on the level of risk posed by an artificial intelligence system. While many AI systems pose minimal risk, they still need to be assessed.
Unacceptable risk AI systems are systems considered a threat to people and will be banned. They include:
- Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
- Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
- Biometric identification and categorisation of people
- Real-time and remote biometric identification systems, such as facial recognition
Some exceptions may be allowed for law enforcement purposes. “Real-time” remote biometric identification systems will be allowed in a limited number of serious cases, while “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes and only after court approval.
AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:
1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.
2) AI systems falling into specific areas that will have to be registered in an EU database:
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Assistance in legal interpretation and application of the law
All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.
General purpose and generative AI
Generative AI, like ChatGPT, would have to comply with transparency requirements:
- Disclosing that the content was generated by AI
- Designing the model to prevent it from generating illegal content
- Publishing summaries of copyrighted data used for training
As the AI Act progresses, EU countries retain the opportunity to influence its implementation, with around 20 acts of secondary legislation to be issued by the Commission. The AI Office, responsible for overseeing AI models, will be significantly staffed with seconded national experts.
The next steps involve the European Parliament’s Internal Market and Civil Liberties Committees adopting the AI rulebook on February 13, followed by a plenary vote provisionally scheduled for April 10-11. Formal adoption will be complete with ministerial endorsement, and the AI Act will enter into force 20 days after publication in the official journal. Bans on prohibited practices apply after six months and obligations for AI models after one year; all other rules take effect after two years, except the classification of AI systems subject to third-party conformity assessment, which is delayed by an additional year.
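The staggered timeline above can be sketched as a small date calculation. The publication date used here is purely hypothetical (the official journal date was not yet known at the time of writing); only the offsets (20 days, then 6, 12, 24 and 36 months) come from the text.

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months, clamping the day if needed."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

# Hypothetical publication date in the official journal -- for illustration only.
publication = date(2024, 6, 12)
entry_into_force = publication + timedelta(days=20)

milestones = {
    "entry into force": entry_into_force,
    "prohibited-practice bans apply": add_months(entry_into_force, 6),
    "AI model obligations apply": add_months(entry_into_force, 12),
    "most remaining rules apply": add_months(entry_into_force, 24),
    "third-party conformity assessment classification": add_months(entry_into_force, 36),
}
for label, d in milestones.items():
    print(f"{label}: {d.isoformat()}")
```

With the assumed June 12 publication, the Act would enter into force on July 2, with the prohibited-practice bans applying from the following January 2.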