Member News

Eversheds Sutherland | EU AI Act – Considerations for global employers

New and upcoming obligations

Why should I read this?

The first obligations under the EU AI Act are now in force.

Any business that places on the market, puts into service or otherwise uses artificial intelligence (‘AI’) systems within the EU will have obligations under the AI Act. The AI Act seeks to create a cohesive legal framework to safeguard against the adverse impacts of AI systems, while at the same time fostering innovation.

In this briefing, we examine the most recently implemented obligations from an employment perspective, the timeline of what is to come and the steps that employers should take now.

What do I need to know?

The AI Act, which came into force on 1 August 2024, applies to providers and deployers of AI systems, as well as to affected persons.

Most employers using AI systems in their operations will be classified as ‘deployers’. They will fall within the scope of the Act if they are established or located within the EU, or if they are outside the EU but the output of the AI system is used in the EU. This includes employers who use AI systems for hiring, assigning tasks, or monitoring employees, as well as those providing AI systems for employees to use in their daily work.

For deployers, the obligations apply to a business’s own staff as well as to “other persons” dealing with the operation and use of AI systems on their behalf. This means that the requirements extend beyond the immediate workforce and can also apply to third parties such as contractors, service providers and other outsourced arrangements.

The obligations under the AI Act are being implemented in tranches. The initial obligations are now in force, with subsequent phases to follow and with guidelines and Codes of Practice expected along the way. Notably, the Act does not require transposition into national law, so the requirements apply immediately on the relevant effective dates.

The first obligations that are now in force focus on prohibiting certain AI practices and “AI literacy”.

To safeguard individuals’ health, safety, and fundamental rights, the AI Act follows a risk-based approach and introduces different risk categories for AI systems. Those AI systems considered to be the highest risk are banned, with different safeguards applying to those systems considered to be at lower levels of risk.

Many AI systems in employment will be considered “high-risk”, including those used for recruitment, job ad placement, application filtering, and candidate evaluation. These systems are allowed if they do not fall into the banned category (see below) and meet certain substantive and procedural requirements, including informing affected workers and employee representatives about their use in advance.

With limited exceptions, the ban covers the placing on the market, the putting into service, and the use of:

  • AI systems that deploy subliminal, manipulative, or deceptive techniques that materially distort the behavior of a person or group of persons by appreciably impairing their ability to make an informed decision;
  • AI systems that exploit the vulnerabilities of a person or group of persons (including those relating to their age, disability, or socio-economic status) with the aim of materially distorting their behavior;
  • AI systems that use social scoring techniques to evaluate or classify a person or group of persons over a period of time based on their social behavior or known, inferred, or predicted personal or personality characteristics;
  • AI systems that use profiling techniques or the assessment of personality traits and characteristics to predict the risk of criminal behavior of individuals;
  • AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage;
  • AI systems that are used to infer the emotions of persons in the workplace or in education settings;
  • biometric categorization systems that are used to categorize individuals based on biometric data to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, or sex life or sexual orientation; and
  • biometric real-time identification systems that are used in publicly accessible spaces for law enforcement purposes.

Significant penalties apply in the event of breach of the prohibited AI practices provisions: up to EUR 35 million or up to 7% of total worldwide annual turnover, whichever is higher. These penalty provisions come into force on 2 August 2025, effectively giving companies a grace period until that date to cease the use or development of such systems.
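By way of illustration (using hypothetical figures), for a business with total worldwide annual turnover of EUR 1 billion, 7% equates to EUR 70 million; as this exceeds EUR 35 million, the maximum penalty would be EUR 70 million. For a business with turnover of EUR 100 million, 7% is only EUR 7 million, so the EUR 35 million figure would represent the maximum instead.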

What does AI literacy mean?

While the prohibition removes certain AI systems from the market altogether, the AI literacy requirement ensures that permitted AI systems are used responsibly and in an informed manner. AI literacy involves having the skills, knowledge, and understanding necessary to use AI systems effectively and responsibly. This includes understanding the basic principles and functions of AI systems, being able to operate and interact with them effectively, and recognizing both their risks and opportunities.

Organizations that develop or use AI systems must ensure their staff and others involved are sufficiently AI literate, with the measures necessary to achieve that varying depending on the specific context in which the AI systems are used.

The AI literacy requirement and the obligation to inform employee representatives prior to the implementation of high-risk AI systems are closely connected and often overlap. The latter obligation is frequently reinforced by national rules that require employers to inform and consult employee representatives, and in some cases obtain their agreement, before introducing new or updated workplace technology. As part of this process, adequate information must be provided to explain the actual and potential functionality of the AI system being introduced or updated, which typically involves training and information sessions that also serve to enhance AI literacy.

What are the obligations towards employee representatives?

The AI Act makes clear that “Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system. This information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on information of workers and their representatives.”

The involvement of employee representatives in the implementation of new and updated technology is not a new requirement across the EU, with many Member States already requiring such involvement in various forms, particularly where that technology has the ability to monitor the behavior or performance of workers (see our Representation in the workplace: Technology briefing for more information). However, the AI Act now introduces a baseline requirement across the EU specifically in relation to AI systems.

On 2 August 2026, the majority of the EU AI Act’s requirements will come into effect, including provisions related to high-risk AI systems used in an employment context. Prior to this date, guidelines are expected to be issued to assist with the practical application of the Act, including on the involvement of employee representatives.

What should I do next?

Companies operating in the EU should ensure that they:

  • establish a working party to oversee compliance with the requirements of the AI Act and any applicable wider requirements in operating locations;
  • map out the AI systems in use, including the purpose of each, its functionality and any associated risks, in order to identify any prohibited or high-risk systems;
  • bear in mind the extra-territorial reach of the AI Act, including in the approach to procurement of AI technologies for workplace use;
  • assess the current level of AI literacy within the organization, design measures to ensure, to the “best extent”, a “sufficient level” of AI literacy, and integrate AI literacy into broader governance frameworks;
  • ensure that an AI policy with clear guidelines on the use of AI within the company is in place, and consider updates to existing policies to reflect the use of AI;
  • do not overlook obligations to inform employee representatives, both under the AI Act and under any additional national law and practice requirements on the introduction of new or updated AI systems (regardless of any legal obligation to do so, keeping the workforce informed about the use of AI can support good employee relations and foster workforce trust and confidence in AI);
  • keep an eye on developments coming down the track.

The next major date as the provisions of the AI Act continue to be implemented is 2 August 2025, when the requirements for notified bodies responsible for assessing the conformity of AI systems come into force, as well as the rules for providers of general-purpose AI (GPAI) models. In addition, by this date Member States must establish rules for penalties and fines for non-compliance with the AI Act (in line with the Act’s minimum requirements) and designate their national competent authorities, together with the associated reporting obligations.

However, the key date for employers will be 2 August 2026, which is the date when the majority of the AI Act’s requirements will come into effect relating to AI systems used in an employment context.

Further reading/resources

For further information on the requirements of the EU AI Act, watch our Illuminating the EU AI Act series; for more information on AI regulation around the world, see our briefing Global AI at work – Regulating responsible AI use.

See also our AI Literacy Unlocked eLearning product, which is designed to support measures to ensure the AI literacy of workforces.


Compliments of Eversheds Sutherland – a member of the EACCNY