
Stephenson Harwood | Neural Network - February 2026

In this edition of the Neural Network, we look at key AI developments from January and February.

In regulatory and government updates, Ireland has announced an AI Bill; South Korea’s extra-territorial AI law has taken effect; and the ICO raises the possibility of data protection law derogations to facilitate AI development.

In AI enforcement and litigation news, the European Commission joins Ofcom and the ICO in announcing further investigations into Grok; and a Chinese court has ruled that a service provider was not responsible for AI hallucinations.

More details on each of these developments are set out below.

REGULATORY AND GOVERNMENT UPDATES

IRELAND ANNOUNCES ARTIFICIAL INTELLIGENCE BILL

On 4 February 2026, Ireland published its General Scheme of the Regulation of Artificial Intelligence Bill 2026 (the “Bill”), marking a significant step towards its national implementation of the European Union Artificial Intelligence Act (“EU AI Act”). Whilst the EU AI Act has direct legal effect across all EU member states, several of its provisions still require national implementing measures, including those that provide for the supervision and enforcement of the obligations set out in the EU AI Act.

Below, we highlight three noteworthy aspects of the Bill:

  1. Distributed enforcement model
    The Bill sets out a distributed enforcement model, rather than the centralised structure adopted by some other Member States (as we reported in October’s edition here). The Bill empowers existing sectoral authorities (the “Authorities”) to supervise AI activities within their respective sectors, with a designated central authority providing coordination and a number of centralised functions. For example, the Central Bank of Ireland will supervise the use of AI systems in the financial services sector, while the Data Protection Commission will supervise AI systems that process personal data.

    Oversight and coordination of the Authorities will be managed by a newly established body, Oifig Intleachta Shaorga na hÉireann (“OISE”, or the AI Office of Ireland). The OISE will act as the national Single Point of Contact with the European Commission (the “Commission”) and facilitate the enforcement and implementation of the EU AI Act. The OISE, led by a Chief Executive Officer, will be operational by 1 August 2026.

  2. Administrative fines
     The Bill provides procedural safeguards for the administration of fines in line with the EU AI Act. Under the EU AI Act, fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher, can be imposed for non-compliance with prohibited AI practices, with lower thresholds for breaches of other provisions. The Bill empowers independent adjudicators to impose such fines after investigation and due process. Importantly, all sanctions require High Court confirmation before taking effect, and appeals may be made to the High Court within 28 days of receiving an adjudicator’s decision. These safeguards are intended to ensure fairness, transparency, and judicial oversight.
  3. Digital Omnibus Package
     The Bill notes that the publication of the Digital Omnibus Package (“Digital Omnibus”) (as we reported on here) will continue to inform Ireland’s implementation of the EU AI Act. If revisions need to be made to the Bill as a result of changes introduced by the Digital Omnibus, the Bill confirms that further proposals will be submitted to the Irish government for consideration.

NEXT STEPS

The Bill is now under review by the Committee on Enterprise, Tourism and Employment (the “Committee”). The Committee will produce a report outlining its recommendations before the Bill is presented to the Houses of the Oireachtas.

SOUTH KOREA’S EXTRA-TERRITORIAL AI LAW HAS TAKEN EFFECT

South Korea has enacted a comprehensive law regulating artificial intelligence. The AI Basic Act, which took effect on 22 January 2026, sets out high-level transparency requirements, including a requirement for companies to label AI-generated content, and mandates human oversight for “high-impact AI” systems used in sectors such as healthcare, hiring, and financial services. The AI Basic Act applies to businesses that develop and provide AI, as well as to businesses that provide products or services incorporating AI, with certain narrow exceptions relating to national security. The law has extra-territorial effect, applying to AI systems outside South Korea if they affect users or markets within the country.

Government officials maintain that the legislation is intended primarily to promote innovation rather than restrict it; however, the new framework has faced widespread criticism. Civil society groups and the country’s human rights commission have criticised the legislation, raising concerns about the lack of a clear definition of high-impact AI and the limited provisions protecting people from AI-related harms.

Penalties for companies that violate the new rules include fines of up to 30 million won (approximately £15,000); however, the government has promised a grace period of at least one year before penalties are imposed. This contrasts with fines under the EU AI Act, outlined above, which can reach up to €35 million or 7% of total worldwide annual turnover.

Businesses that may fall within the scope of the AI Basic Act should exercise particular caution in cross-border transactions to ensure compliance, especially given the emergence of new AI legislation and increasingly divergent approaches to AI regulation around the world.

THE ICO RAISES POSSIBILITY OF DATA PROTECTION LAW DEROGATIONS TO FACILITATE AI DEVELOPMENT

On 30 January 2026, the Information Commissioner’s Office (“ICO”) published a letter to the UK government providing a one-year progress update on the economic growth commitments it made in January 2025.

The letter outlined the ICO’s achievements and plans relating to economic growth. Of particular interest were the comments on its regulatory sandbox regime, which could offer AI developers a valuable opportunity to test emerging technologies in the UK. The ICO raises the possibility that businesses in its sandbox programme could receive time-limited derogations from specific data protection and regulatory requirements, enabling real-world testing that is not currently permissible under existing data protection rules.

This initiative, along with the ICO’s forthcoming statutory code of practice for developing and using AI products, aims to position the UK as an attractive environment for AI developers seeking to test ideas before they enter the European market. However, the relaxation of data protection rules, albeit under strict governance controls and supervised by the ICO, would not be without controversy and could increase the tension between AI innovation and maintaining high data protection standards.

ENFORCEMENT AND CIVIL LITIGATION

EUROPEAN COMMISSION JOINS OFCOM AND THE ICO IN ANNOUNCING FURTHER INVESTIGATIONS INTO GROK 

On 26 January 2026, the Commission announced a formal investigation into social media platform X under the Digital Services Act 2022, following global outcry over its AI chatbot, Grok, being used to generate sexually explicit images. The Commission’s investigation follows Ofcom’s ongoing investigation into X’s duties under the Online Safety Act 2023 (as we reported in our January edition here) and the ICO’s investigation under UK data protection law, announced earlier this month.

X responded to initial pressure over Grok’s functionality by limiting the ability to create and edit images to paid subscribers, and has since introduced measures to prevent all users (including paid subscribers) from editing images of real people to depict them in revealing clothing.

The Commission’s investigation will assess whether X properly assessed and mitigated risks associated with deploying Grok’s functionalities in the EU, particularly concerning the sharing of illegal content such as sexualised deepfakes of women and children.

The Commission has also extended a separate investigation into X’s recommendation algorithm, which began in 2023, and separately fined X €120 million in December 2025 for transparency violations (as we reported on in our January Data and Cyber update here).

CHINESE COURT RULES PROVIDER IS NOT RESPONSIBLE FOR AI HALLUCINATIONS

A recent ruling by the Hangzhou Internet Court in China has set an early precedent for AI liability: a generative AI service provider is not automatically responsible for AI hallucinations unless the user can prove both that the provider was at fault in the content-generation process and that the error caused actual harm.

This is the first time a generative AI service provider’s liability for AI hallucinations has been considered in China. The case involved an AI system that fabricated a non-existent university campus and promised compensation to the user if the content was incorrect. The claimant, having relied on these AI-generated statements, sued the service provider of the generative AI system (the “Defendant”) for damages for being misled by the AI-generated content, claiming that the content constituted a binding “compensation promise”.

The court ultimately dismissed the claim, finding that no actual harm was suffered, and that the AI-generated statements did not constitute legally binding declarations of intent attributable to the Defendant. The user of the AI system ultimately remained responsible for verifying the output.

The court also held that the Defendant had fulfilled its reasonable duty of care in providing its services, as it used industry-standard measures to enhance the accuracy of its AI-generated content whilst also giving users adequate warning about the AI’s limitations. Under Chinese law, generative AI service providers are not required to ensure the accuracy of every output, although they must take effective measures to improve the accuracy and reliability of generated content and ensure that users recognise the functional limitations of their AI services.

The judgment highlights the importance of clear user guidance and risk management as courts around the world increasingly address AI-related disputes.


Authors:
Katie Hewson, Partner, STEPHENSON HARWOOD
Sarah O’Brien, Managing Associate, STEPHENSON HARWOOD
Nic McMaster, Managing Associate, STEPHENSON HARWOOD
Bobbie Bickerton, Managing Associate, STEPHENSON HARWOOD
Alison Llewellyn, Senior Knowledge Lawyer, STEPHENSON HARWOOD


Compliments of Stephenson Harwood – a member of the EACCNY