“Europe should become the global centre for trustworthy artificial intelligence (AI).” This is the ambitious target set by the European Commission (the “Commission”) when presenting its proposal for an Artificial Intelligence Act on 21 April 2021, in which it intends to regulate the use of AI, not the technology itself. The proposed AI Regulation is primarily a prohibition law: it bans the use of AI systems in certain application scenarios or makes their use subject to technical and organisational requirements. The proposal does not regulate civil-law issues arising from the use of AI (e.g. liability, attribution of declarations of intent, creation of intellectual property).
Brussels aims to be at the forefront of regulating AI and hopes to have a similarly far-reaching impact in this field as with the General Data Protection Regulation (EU) 2016/679 (GDPR), which very quickly became the norm for many of the world’s largest companies.
Shaping the rules for AI is an important priority for Commission President Ursula von der Leyen and a key element of the ambitious European data strategy. The proposal is the result of several years of preparatory work in Brussels: the foundation for the European AI strategy was established in April 2018 and confirmed in the February 2020 White Paper on AI.
Below we set out the key points of the proposed AI Regulation and predict the impact of the project, the legislative process and international aspects.
Definition of “AI systems”
The proposed AI Regulation is based on a broad definition of “AI systems”. Under Article 3(1) and Annex I, software already qualifies as an “AI system” if it has been developed using any of the following techniques:
- Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
- Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
- Statistical approaches, Bayesian estimation, search and optimisation methods.
Purpose of the Regulation
The Commission’s aim is to create a framework for strengthening trust in AI: “Trust is a must and not an optional extra.” The importance of the proposed AI Regulation cannot be overstated, since a balance must be found between protecting fundamental rights and promoting innovation. Not only must potential risks caused by AI be prevented; at the same time, incentives for innovation must be created or maintained. The drawbacks of unbalanced regulation would be enormous: inadequate protection would allow the use of AI that threatens fundamental rights, while excessive regulation could hamper research and innovation capacity in the EU, especially as the proposed AI Regulation is to apply to all economic and industrial sectors.
It is to be welcomed that the proposed AI Regulation, like the Commission’s 2020 White Paper, is founded on a risk-based approach. The prohibition criteria are linked to the security risks posed by the AI system. The higher the potential hazards, the stricter the requirements imposed on the AI system. Below we set out the individual criteria by risk category:
Under Article 5 of the proposed AI Regulation, the use of AI is to be prohibited in the following areas in particular:
- Use of real-time remote biometric identification systems in publicly accessible spaces for purposes of law enforcement, unless strictly necessary for specific purposes (e.g. targeted searching for victims of crime, prosecution of a perpetrator);
- Use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- Use of certain AI systems that exploit vulnerabilities of a specific group of persons due to their age or physical or mental disability;
- Certain forms of detrimental or unfavourable treatment of persons related to social behaviour or predicted personal or personality characteristics.
In addition, the proposed AI Regulation introduces the concept of high-risk AI systems used in the following areas:
- Critical infrastructure (e.g. transport) that could endanger citizens’ lives and health;
- Education or vocational training, where an individual’s access to education and professional life could be adversely affected (e.g. assessment of examinations);
- Safety components of products (e.g. an AI application for robotic assisted surgery);
- Employment, workers management and access to self-employment (e.g. CV analysis software for recruitment procedures);
- Important private and public services (e.g. credit check, thus preventing citizens from obtaining a loan);
- Prosecution of criminal offences, which could interfere with basic human rights (e.g. assessment of the reliability of evidence);
- Migration, asylum and border control (e.g. verification of authenticity of travel documents);
- Administration of justice and democratic processes (e.g. application of legislation to specific facts).
High-risk AI systems are to meet strict requirements under Article 8 et seq. before they may be placed on the market:
- Appropriate risk management systems (Article 9)
- High quality of the data used in the AI system, in particular to avoid discrimination (Article 10)
- Technical documentation of the AI system and its purpose (Article 11)
- Documentation and logging of operations, in particular to enable the traceability of AI results (Article 12)
- Transparency and provision of information to users (Article 13)
- Appropriate human oversight to minimise risks (Article 14)
- High level of accuracy, robustness and cybersecurity (Article 15)
These are abstract technical requirements which require further clarification in practice and by the courts. From a technical point of view, it will have to be clarified whether the strict requirements for AI training data to be representative and free of errors (see Article 10(3)) can be met at all. It should also be borne in mind that, for some AI methods already in use, the strict requirements for the transparency and accountability of decisions are difficult or virtually impossible to implement for technical reasons. The proposed AI Regulation, if implemented, will thus have a large impact on the technical design of processes and business models.
For certain AI systems with a low perceived level of risk, only transparency obligations are imposed, for example where there is a risk of user manipulation when using chatbots. In particular, in certain situations users must be informed that they are communicating or interacting with an AI system.
Moreover, the vast majority of AI systems currently used in the EU are likely to continue to be permitted to be used in accordance with existing laws.
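The tiered logic set out above can be summarised as a simple lookup from risk tier to the applicable regime. The following sketch is purely illustrative: the tier names and one-line summaries are our shorthand for the proposal’s structure, not terms defined in the draft text.

```python
# Illustrative summary of the proposal's risk-based approach.
# Tier labels ("unacceptable", "high", "limited", "minimal") are our
# shorthand, not terminology taken from the draft Regulation.
RISK_TIERS = {
    "unacceptable": "prohibited (Article 5)",
    "high": "strict requirements under Article 8 et seq. before market placement",
    "limited": "transparency obligations only (e.g. chatbot disclosure)",
    "minimal": "permitted under existing laws",
}

def applicable_regime(tier: str) -> str:
    """Return the one-line regime summary for a given risk tier."""
    return RISK_TIERS[tier]

print(applicable_regime("unacceptable"))  # prints "prohibited (Article 5)"
```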
Focus on AI compliance
The proposed AI Regulation provides for fines of up to EUR 30 million or 6% of total worldwide annual turnover for breaches of the prohibition criteria. Companies should therefore deal with the planned requirements at an early stage in technical, organisational and legal terms.
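The fine cap described above is the higher of the two amounts, mirroring the GDPR’s approach. A minimal sketch of that arithmetic, assuming the “whichever is higher” reading of the draft’s penalty provision:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for breaching the prohibition criteria:
    the higher of EUR 30 million or 6% of total worldwide annual
    turnover (assumption: the two caps are alternatives, whichever
    is higher, as under the GDPR penalty model)."""
    FLAT_CAP = 30_000_000
    return max(FLAT_CAP, 0.06 * worldwide_annual_turnover_eur)

# A company with EUR 2 billion turnover faces a cap of EUR 120 million;
# below EUR 500 million turnover, the flat EUR 30 million cap applies.
print(max_fine_eur(2_000_000_000))
```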
Robot and machine manufacturers are also increasingly considering the issue of AI compliance. The proposed AI Regulation is complemented by a proposal for a new EU Regulation on Machinery Products to replace the existing Machinery Directive 2006/42/EC. The new Regulation on Machinery Products aims to ensure the safe integration of AI systems into machinery as a whole. The aim of this holistic regulatory approach is to allow companies to carry out a single conformity assessment.
The requirements of the proposed AI Regulation pose major challenges for companies using AI. The extent to which the terms currently listed in Article 3 of the proposed AI Regulation (in particular “user”, “provider”, “software”, “AI system”) can be further clarified, thus reducing remaining interpretative ambiguities, is likely to be of crucial importance to the success of the proposed AI Regulation and thus to the status of Europe as a hub of AI innovation.
As part of the EU legislative process, the drafts will now go through the European Parliament and the Council. We expect that the proposed AI Regulation will see significant changes before it becomes law.
- The European Parliament is likely to push for a stricter position on the exceptions contained in the proposal. Last week, a cross-party group of 40 MEPs sent a letter to the President of the Commission, calling for a total ban on the use of facial recognition and other forms of biometric surveillance in public places. A second letter called for a stronger emphasis on non-discrimination and a ban on predictive policing. MEPs also want to prohibit automatic recognition of sensitive features such as gender, sexuality and origin.
- At Council level, the law-enforcement exemptions to the prohibitions are expected to be well received by security-conscious countries such as France. Last year, a group of 14 Member States led by Denmark (Belgium, the Czech Republic, Finland, France, Estonia, Ireland, Latvia, Luxembourg, the Netherlands, Poland, Portugal, Spain and Sweden) published a position paper calling for a more flexible framework with voluntary labelling schemes. On the other hand, there are countries such as Germany with a stronger focus on data privacy.
- In the coming months, many interest groups will try to influence the legislative process and the regulatory content of the draft legislation.
The Commission has avoided providing a timeframe for the adoption of the new legislation. However, experience has shown that it can take from 18 months to two years for a regulation to be adopted and enter into force.
In parallel, the Commission is working on the implementation of a data strategy to reduce barriers and increase data exchange (a proposal to regulate data exchange will be presented later this year). A special liability initiative is expected to be published at the end of this year.
Foreign policy dimensions of the proposed AI Regulation
On 2 December 2020, the Commission and the High Representative of the Union for Foreign Affairs and Security Policy published a joint communication on a new EU-US agenda for global change. One of the priorities is to recommend that the EU and the US conclude a transatlantic AI agreement to strengthen multilateralism and regulatory convergence in the digital economy.
Indeed, European and US political leaders are increasingly worried by China’s use of AI and its far-reaching technological ambitions. So far, however, there has been no agreement between the US and the EU on uniform rules for AI. The Commission has made it clear that it intends to advance its digital sovereignty agenda, to promote the EU approach to AI as a middle course between the US and China, thereby strengthening the EU’s global influence.
- Dr Jens Peter Schmidt, Partner, NOERR
- Dr Dr Claus Zimmermann, Associated Partner, NOERR
- Dr David Bomhard, Senior Associate, NOERR
- Marieke Merkle, Associate, NOERR
- Giovanna Ventura, Legal Adviser, NOERR
Compliments of Noerr – a member of the EACCNY.