European Commission’s new AI strategies
The European Commission (“Commission”) has announced two complementary strategies this month to accelerate AI adoption across EU industry and science. The “Apply AI Strategy” is the EU’s overarching sectoral AI strategy, focused on deploying AI in key sectors such as healthcare, energy, mobility, manufacturing, and public services.
In parallel, the Commission’s “AI in Science Strategy” aims to position Europe at the forefront of AI-driven research and scientific innovation by supporting the development and use of AI by the European scientific community. At its core is the Resource for AI Science in Europe (RAISE), a virtual European institute that aims to pool expertise and coordinate AI resources, which will be launched in November 2025.
Collectively, these measures are designed to enhance competitiveness and promote trustworthy AI, supporting the EU’s “AI Continent Action Plan” covered in April’s edition. The proposed Data Union Strategy, a forthcoming initiative aimed at ensuring the availability of high-quality, large-scale datasets for training AI models, is expected to join these measures later this month.
Consultation on digital simplification announced by EU
On 16 September 2025, the Commission launched a short public consultation on its omnibus proposal for digital simplification, which aims to simplify digital regulations for businesses, with a particular focus on the recently enacted EU AI Act. The consultation comes amidst increasing pressure from business and lobby groups to pause the implementation of the EU AI Act, a step the Commission is reportedly considering. The digital simplification initiative seeks to clarify and optimise the application of the EU AI Act, which entered into force on 1 August 2024 and will be fully applicable from 2 August 2026, subject to certain exceptions, including rules for high-risk AI systems that benefit from an extended transition period until August 2027.
The consultation, which closed on 14 October 2025, is part of the Commission’s broader Omnibus IV Simplification package.
EU AI Act Serious Incident Guidance Consultation launched
The Commission also launched a public consultation on 26 September 2025 on draft guidance and a reporting template for serious AI incidents under the EU AI Act. This initiative is designed to help providers of “high-risk AI systems” (which may include general-purpose AI models as we covered in August’s edition) comply with upcoming mandatory reporting requirements under Article 73 of the EU AI Act, which will take effect from 2 August 2026. The Commission states that Article 73 is intended to support early risk detection, improve accountability, enable prompt intervention, and build public trust in AI technologies.
Key aspects of the draft guidance include:
• Definitions in the EU AI Act: The guidance explains key terms related to serious AI incidents and outlines the associated reporting responsibilities.
• Illustrative scenarios: Practical examples are included to demonstrate when and how incidents should be reported, such as cases involving misclassifications, notable drops in accuracy, interruption of AI systems, or unexpected AI behaviour.
• Reporting requirements and timelines: The guidance details the specific obligations and deadlines for various stakeholders, including providers and deployers of high-risk AI systems, providers of general-purpose AI models with systemic risk, market surveillance authorities, national competent authorities, the Commission, and the AI Board.
• Interplay with existing laws: The guidance clarifies how these AI-specific requirements align with other legislative frameworks and reporting requirements, such as the Critical Entities Resilience Directive, the NIS2 Directive, and the Digital Operational Resilience Act.
• International alignment: The guidance aims to harmonise reporting practices with international reporting regimes, including the AI Incidents Monitor and Common Reporting Framework of the Organisation for Economic Co-operation and Development.
The consultation is open for stakeholders to review the draft guidance and reporting template and to provide feedback by 7 November 2025.
New Californian AI safety law announced
On 29 September 2025, the Governor of California, Gavin Newsom, signed the Transparency in Frontier Artificial Intelligence Act (“TFAIA”). The law will come into effect on 1 January 2026.
In the absence of comprehensive federal legislation that specifically governs AI, a growing number of individual states, including California and Illinois, have started implementing their own AI rules and regulations focused on issues such as algorithmic transparency, biometric data, and consumer protection.
Newsom has suggested that the transparency-led TFAIA will act as a blueprint for other US states when it comes to developing AI legislation.
The TFAIA will apply to AI developers that create and train frontier models (e.g. LLM developers), with an additional set of rules for “large frontier developers”, defined as those with annual gross revenue of more than $500 million. The TFAIA does not explicitly restrict its scope to California-based developers. It mandates disclosure and documentation obligations for large frontier developers, requiring them to record their safety measures and report safety incidents to the California Office of Emergency Services.
Ireland becomes pioneer in EU AI Act rollout
On 16 September 2025, Ireland’s Department of Enterprise, Tourism and Employment reached a significant milestone in implementing the EU AI Act by establishing a single central coordinating authority and designating a further seven national competent authorities to enforce the EU AI Act.
Article 70 of the EU AI Act mandates that every EU Member State must appoint at least one notifying authority and one market surveillance authority to oversee AI regulation. These authorities are required to operate independently, impartially, and without bias and must be equipped with sufficient technical, financial, and human resources, along with the necessary infrastructure, to effectively carry out their duties under the EU AI Act.
Ireland has implemented a distributed regulatory framework, with 15 regulatory bodies designated as competent authorities making up the National AI Implementation Committee, supported by a central authority that coordinates certain centralised functions. The 15 competent authorities, which met for the first time on 16 September 2025, will each oversee the application of the EU AI Act within their respective areas of responsibility.
Ireland will also establish a new body, the National AI Office, by 2 August 2026 to ensure consistent and effective implementation of the EU AI Act. This body will have four critical functions:
• coordinate activities of the competent authorities to ensure consistent implementation of the EU AI Act;
• serve as the EU AI Act’s single point of contact;
• facilitate centralised access to technical expertise by the other competent authorities; and
• drive AI innovation and adoption through the hosting of a regulatory sandbox.
Meanwhile, an interim single point of contact has been established within the Department of Enterprise, Tourism and Employment to coordinate activities among Irish regulators and act as a liaison with the public, the Commission, and other key stakeholders. Out of the 27 Member States, Ireland is one of only seven that have established a single point of contact to date.
Italy announces new AI framework
On 10 October 2025, Italy’s national AI framework (Bill S. 1146-B) (the “Italian AI Law”) entered into force, having been signed into law last month. Intended to complement the EU AI Act, the Italian AI Law makes Italy the first EU country to implement its own national AI legislation. The law, which aims to ensure “human‑centric, transparent, and safe AI use”, introduces sector‑specific rules for areas deemed high risk, establishes safeguards for minors, and sets out governance and enforcement mechanisms. This includes a new provision for mandatory AI age verification, ensuring children under 14 are only able to access AI with parental consent.
The new law also clarifies copyright protections, in particular by asserting that works created ‘with genuine human intellectual effort using AI assistance’ are eligible for protection.
The Italian AI Law is aligned with the EU AI Act’s position on text and data mining (“TDM”), but introduces targeted amendments to confirm that the TDM exceptions under EU law cover the ‘development and training’ of generative AI models: subject to lawful access in accordance with copyright law and the owner’s opt‑out rights, the reproduction and extraction of lawfully accessible works for TDM purposes, via the use of AI models, is permitted. The law also amends Italian copyright law by attaching criminal liability to unlawful TDM, elevating what was previously a solely civil liability.
Finally, the Italian AI Law imposes tougher penalties, including prison sentences of up to five years, on those who unlawfully distribute harmful AI-generated content (including deepfakes), with increased penalties where AI is used to commit crimes such as fraud and money laundering.
Compliments of Stephenson Harwood – a member of the EACCNY