Artificial intelligence (“AI”) is one of the fastest-growing technology sectors, with the potential to revolutionise numerous industries, including healthcare, transport and manufacturing. Yet society remains reluctant to place its trust in the development of AI. As part of its commitment to digitalisation and, more specifically, to legislation in this field, in April 2021 the European Commission (“the Commission”) published its official proposal for a regulation on AI. The aims of the proposed Draft AI Regulation (“the Regulation”), also referred to as the Artificial Intelligence Act, include increasing people’s confidence in AI and encouraging businesses to invest in its development by ensuring that AI is safe, ethical and under human control. Its importance is enhanced by the fact that, if adopted, it will be the world’s first piece of legislation specifically designed to regulate artificial intelligence.
The Regulation is relevant for both start-ups and established businesses, as well as for governments and their respective bodies. This article examines the most important aspects of the proposed text and thus provides a glimpse into the envisaged regulatory framework for AI.
The Regulation envisages a broad definition of an AI system: “‘Artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. The Commission has aimed to provide a technology-neutral and adaptable definition. Annex I, to which the definition refers, contains an extensive list of techniques and approaches, for which the Regulation envisages a mechanism of adaptation in view of future technological developments.
The Regulation will apply to a very broad range of subjects, comprising both public and private bodies, including persons, companies and agencies. Notably, its territorial scope is broader than might be expected: it turns not only on whether the provider or user of an AI system is located in the European Union, but also on whether an AI system has any impact within the European Union.
Consistent with its aims of increasing society’s confidence and ensuring the safe and ethical application of AI, the draft Regulation puts forward a “risk-based” approach, distinguishing between unacceptable, high-risk and low/minimal-risk uses of artificial intelligence.
All AI practices deemed unacceptable are prohibited. These include, amongst others, real-time remote biometric identification systems used in publicly accessible spaces for law enforcement purposes (e.g. facial recognition systems, subject to narrow exceptions), systems that evaluate or classify people’s trustworthiness (e.g. social scoring systems), and AI systems that deploy subliminal techniques beyond a person’s consciousness in order to materially distort that person’s behaviour in a manner that causes, or is likely to cause, that person or another person physical or psychological harm.
The Regulation is specifically focused on high-risk AI systems (“HRAIS”), which are subject to extensive technical, compliance and monitoring obligations. They are divided into two groups:
(i) AI systems that are products, or parts of products, already subject to certain EU safety legislation; and
(ii) AI systems which have been designated by the Commission as high-risk. Examples of HRAIS can be found in many industries, including healthcare, transport and aviation.
Finally, for low-risk AI systems the Regulation mostly encourages self-regulation, through voluntary application of some of the requirements applicable to HRAIS or through codes of conduct. It nevertheless imposes certain transparency requirements on AI systems which are low risk but fall within one of the following:
(i) interact with humans;
(ii) apply emotion recognition or biometric categorisation; or
(iii) generate deep fakes.
Non-compliance with the obligations arising under the Regulation exposes companies to substantial penalties, with the highest fines envisaged reaching EUR 30 000 000 or 6% of a company’s total worldwide annual turnover, whichever is higher.
For businesses, the key question is whether they fall within the Regulation’s scope and how they will be affected by it. While the text is only a draft, currently at the stage of first reading before the European Parliament and the Council, its core ideas and main provisions are unlikely to be significantly amended. Considering the hefty penalties envisaged under the Regulation, businesses in the AI and AI-adjacent sectors would be well advised to consider early impact assessments and to stay up to date on any further developments concerning the AI Regulation.