The EU takes the lead and adopts the first major AI legislation   

On May 21, 2024, the Council of the EU voted on and approved the Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (AI Act). This vote came two months after the EU Parliament approved the draft legislation, which had been proposed by the EU Commission in 2021. From a procedural point of view, the next steps are for the document to be signed by the presidents of the EU Parliament and of the Council and then published in the EU's Official Journal. The regulation will enter into force twenty days after publication and will apply two years after its entry into force, with exceptions for specific provisions.

The adoption of the AI Act marks an important step in both the legal and the technology sectors. This regulation is the first attempt to put AI into a comprehensive legislative framework, and it signals the clear intent of the EU to be a pioneer in the field. This new set of rules will have a major impact both within the EU and abroad, and it is likely to serve as a model for future AI legislation developed in other parts of the world. As per the recitals, the purpose of the regulation is "to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems in the Union, in accordance with Union values, to promote the uptake of human-centric and trustworthy artificial intelligence while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter, including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation." The length of the document is a clear indication of the complexity of the issues being addressed – over 400 pages in total, comprising 180 recitals and 113 articles.

The document is divided into 13 Chapters: General Provisions; Prohibited AI Practices; High-risk AI Systems; Transparency Obligations for Providers and Deployers of Certain AI Systems; General-purpose AI Models; Measures in Support of Innovation; Governance; EU Database for High-risk AI Systems; Post-market Monitoring, Information Sharing and Market Surveillance; Codes of Conduct and Guidelines; Delegation of Power and Committee Procedure; Penalties; and Final Provisions. At the centre of the text lies the definition of an AI system: "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

The main features of the AI Act may be summarized as follows:   

Classification of AI systems

The AI Act places AI systems into several categories, each subject to different rules and limitations. AI systems considered to pose an unacceptable level of risk are explicitly banned. These include systems that deploy cognitive behavioural manipulation or social scoring, as well as AI used for predictive policing based on profiling and systems that use biometric data to categorise people according to specific characteristics such as race, religion, or sexual orientation. High-risk AI systems are permissible but subject to a number of requirements and obligations. AI systems of limited risk are subject only to light transparency obligations. General-purpose AI models not posing systemic risks are subject to limited requirements, for example with regard to transparency, while those posing systemic risks must comply with stricter rules.

Transparency and protection of fundamental rights

The Council reports that before a high-risk AI system is deployed by certain entities providing public services, a fundamental rights impact assessment will need to be carried out. The regulation also provides for increased transparency regarding the development and use of high-risk AI systems. High-risk AI systems, as well as certain users of high-risk AI systems that are public entities, will need to be registered in the EU database for high-risk AI systems, and users of an emotion recognition system will have to inform natural persons when they are being exposed to such a system.

As per Art. 50 of the AI Act, providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use.   

Penalties

The recitals to the AI Act explicitly state that Member States should take all necessary measures to ensure that the provisions of the Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement, and to respect the ne bis in idem principle. In order to strengthen and harmonise administrative penalties for infringement of the Regulation, upper limits for the administrative fines for certain specific infringements are laid down. Fines for violations range from 7.5 million euros or 1.5% of worldwide annual turnover to 35 million euros or 7% of worldwide annual turnover, depending on the type of violation.

Fostering innovation

The AI Act sets the goal of supporting innovation, respecting freedom of science, and not undermining research and development activity. To ensure a legal framework that promotes innovation and is future-proof and resilient to disruption, Member States should ensure that their national competent authorities establish at least one AI regulatory sandbox at national level, to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service.

New governing bodies

To ensure proper enforcement, several governing bodies are set up: an AI Office within the Commission to enforce the common rules across the EU; a scientific panel of independent experts to support the enforcement activities; an AI Board with member states' representatives to advise and assist the Commission and member states on the consistent and effective application of the AI Act; and an advisory forum for stakeholders to provide technical expertise to the AI Board and the Commission.

The latest version of the regulation, which has not yet been published in the Official Journal, can be found here.