In April 2021, following its 2020 White Paper on AI, the European Commission released a proposal for a regulation laying down harmonised rules on artificial intelligence (the “Regulation”).
The Regulation builds on various resolutions adopted by the European Parliament in relation to AI and contributes to the EU’s objective of becoming a global leader in secure, trustworthy and ethical artificial intelligence. It aims to ensure, on the one hand, legal certainty to facilitate investment and innovation in AI and, on the other hand, the safety of AI systems placed on the market and respect for fundamental rights and European values.
Although the proposal may generate concerns amongst stakeholders, it contains specific measures to reduce the regulatory burden on SMEs and start-ups.
In this newsflash, we provide an overview of the key points of the Regulation, including its material and territorial scope, the rules applicable to providers and users of AI systems, measures for start-ups, the relationship with related legislation and the substantial sanctions for violations.
Scope and definitions
The Regulation has a very broad material and territorial scope. An “artificial intelligence system” is defined, in short, as software that can generate outputs, such as content, predictions, recommendations or decisions, influencing the environments it interacts with and that is developed with machine-learning approaches, logic- and knowledge-based approaches, and/or statistical approaches, Bayesian estimation, and search and optimisation methods. The first annex to the Regulation clarifies the aforementioned approaches and methods.
Like the GDPR, the Regulation has extraterritorial effect as it applies to providers and users of AI systems located in third countries when the output produced by the system is used in the EU.
In general, the Regulation also resembles the regulatory framework applicable to product safety as it relies on harmonised standards and conformity assessments and provides different rules for providers, importers, distributors and users.
Generally prohibited AI practices
The Regulation follows a risk-based approach. At the top of the pyramid are prohibited AI practices, such as AI systems that (i) use subliminal techniques or exploit vulnerabilities of a specific group in such a way as to cause, or be likely to cause, persons physical or psychological harm, (ii) are used by public authorities for social scoring (under certain conditions) and (iii) entail “real-time” remote biometric identification in publicly accessible spaces for the purpose of law enforcement, unless an exception applies.
High-risk and other AI systems
The second level in the pyramid consists of so-called “high-risk” AI systems, such as those that are intended to be used in the context of a critical infrastructure, for recruitment purposes, to determine access to educational institutions, to establish credit scores, for certain law enforcement purposes, in the context of migration, asylum and border control management, etc.
Such high-risk AI systems are not prohibited per se but are subject to substantial requirements and legal obligations with regard to risk management, data governance, technical documentation, record-keeping, transparency to users, human oversight, accuracy, robustness and cybersecurity. The Regulation imposes specific obligations on (i) providers of high-risk AI systems (relating to quality management, conformity assessment, corrective actions, cooperation with authorities, etc.), (ii) importers, (iii) distributors and (iv) users of high-risk AI systems and even other third parties under certain conditions. The Regulation clearly imposes more obligations on providers than on other operators.
The Commission will set up an EU-wide database for the registration of stand-alone high-risk AI systems before such systems are placed on the market or put into service. Providers of high-risk AI systems will need to establish and document a post-market monitoring system and will be subject to reporting obligations in the event of serious incidents or malfunctioning.
Certain other AI systems that are not deemed high risk will be subject only to transparency obligations; examples include chatbots and systems that can generate so-called deep fakes.
Measures in support of innovation: what’s in it for start-ups?
To foster innovation, enhance legal certainty and remove barriers for SMEs and start-ups, the Regulation enables Member States’ competent authorities and the European Data Protection Supervisor to establish AI regulatory sandboxes. Such sandboxes are intended to provide a controlled experimentation environment for the development, testing and validation of innovative AI systems, under strict regulatory oversight, before their placement on the market or putting into service. Sandboxes are already known in the fintech sector and can help companies manage their regulatory risk during the development or testing phase.
Regulatory sandboxes will, however, not affect the supervisory and corrective powers of the competent authorities, whose guidelines must be followed at all times in good faith in order to mitigate any significant risks to safety and fundamental rights that may arise during experimentation in the sandbox. Participants also remain liable under applicable laws for any harm inflicted on third parties as a result of sandbox experimentation.
Furthermore, Member States will have to take specific measures for SMEs and start-ups, such as (i) providing priority access to AI regulatory sandboxes, (ii) organising specific and tailored awareness-raising activities about the application of the Regulation, (iii) establishing a dedicated communication channel to provide guidance and answers about implementation of the Regulation, and (iv) reducing the fees for conformity assessment in proportion to the company’s size and market share.
Finally, Member States will have to take into account the interests of SMEs and start-ups and their economic viability when laying down rules on penalties, including administrative fines, for violations of the Regulation. The national authorities will moreover have to take into account the size and market share of the operator when determining fines and penalties.
Relationship with the GDPR and other EU acts
The Regulation must be consistent with existing EU legislation applicable to sectors where AI systems are already used or likely to be used in the near future. For example, it does not affect the application of the provisions on the liability of intermediary service providers laid down in the e-Commerce Directive and the recent proposal for the Digital Services Act. Those provisions exempt mere conduit, caching and hosting activities from liability under certain conditions (such as not altering the content and removing or disabling access to illegal content as soon as the provider becomes aware of it).
The Regulation should not prejudice or derogate from the GDPR. It is intended to complement the GDPR where it provides for restrictions on certain uses of remote biometric identification systems or where it regulates the design, development and use of certain high-risk AI systems. In addition, the Regulation enables, under specific conditions, the re-use of personal data (lawfully collected for other purposes) for the development and testing of certain AI systems in a regulatory sandbox, provided the AI system concerned is developed to safeguard a substantial public interest listed in the Regulation.
Finally, the Regulation is also linked with other legislative initiatives under the European Strategy for Data. The Data Governance Act, for example, creates a framework for the re-use, sharing and pooling of (protected) data which are essential for the development of high-quality data-driven AI models.
Implementation and sanctions for non-compliance
Member States must designate one or more competent authorities for the implementation and application of the Regulation and one national supervisory authority. Where EU institutions, agencies and bodies fall under the scope of the Regulation, the European Data Protection Supervisor will act as the competent supervisory authority. With respect to AI systems provided or used by regulated credit institutions, the financial supervisory authorities may, for instance, be designated as the competent authority in order to ensure the coherent enforcement of obligations under the various pieces of applicable legislation. Unlike the Data Governance Act, however, the Regulation does not organise or regulate cooperation with other relevant sector authorities, such as those in charge of cybersecurity or personal data protection (except in the context of AI regulatory sandboxes).
The Regulation also establishes a European Artificial Intelligence Board, which mirrors to a large extent the European Data Protection Board in terms of its tasks and operation. The European Artificial Intelligence Board will be composed of the national supervisory authorities and the European Data Protection Supervisor and will collect and share expertise among Member States, contribute to uniform administrative practices in Member States, and issue opinions and recommendations in relation to the Regulation.
Violations of the Regulation may be subject to a variety of sanctions, including administrative fines of up to 30 million euros or 6% of a company’s total worldwide annual turnover.
The proposal for the AI Regulation is likely to give rise to widespread discussion, as it is the first-ever statutory framework for AI worldwide. Throughout the legislative process, stakeholders will have the opportunity to express their positions, so amendments are very likely. Like the GDPR, the final version of the AI Regulation is expected to have a significant impact on AI regulatory approaches elsewhere in the world.
- Vincent Wellens, Partner | +352 26 12 29 34
- Carmen Schellekens, Counsel | +352 26 12 29 74 06
- Sigrid Heirbrant, Associate | +352 26 12 29 74 50
Compliments of NautaDutilh – a member of the EACCNY.