Member News

EACCNY “Digitalization” Series | AI Regulation – Latest Legal Developments in Europe and the US

With the help of our members, this thought-leadership series explores the acceleration of “digitalization” on both sides of the Atlantic and across various industries. Today, we present Ana Razmazma, Counsel at FENWICK & WEST in Silicon Valley, CA, along with Katie Hewson, Partner, and Daniel Jones, Associate, both at STEPHENSON HARWOOD in London, UK. They will address: “AI Regulation – Latest Legal Developments in Europe and the US”.

Navigating the Landscape of AI Regulation on Both Sides of the Pond: Ensuring Compliance and Reaping Benefits

Artificial intelligence (AI) has developed at such a rate that it now stands as a transformative power, with the potential to reshape industries and redefine decision-making. In simple terms, AI refers to computer systems performing tasks that typically require human-like intelligence, from processing complex data to learning patterns and making predictions. However, the spread of ever-more powerful AI has sparked concerns around the ethical and societal impacts it may have, not to mention its implications on existing rights and norms. This has prompted international governments and regulators to implement or propose laws aimed at underpinning ethical and safe AI practices. In this article, we explore the current regulatory frameworks in these areas in Europe and the US, spotlight areas under regulatory scrutiny, and offer insights for businesses navigating compliance while reaping AI’s advantages.

1. Unravelling the Regulatory Maze

EU and UK: Current and Planned AI Regulation

a) EU

The proposed EU Artificial Intelligence Act (“EU AI Act”) is a pioneering step in shaping the regulation of AI internationally. The EU AI Act is structured around a risk-based categorisation comprising three levels: unacceptable risk, high risk, and non-high risk. Each category entails different regulatory obligations, including, for example, transparency, accountability, registration requirements, impact assessments, and human oversight. By adopting this approach, the EU AI Act aims to strike a balance between technological advancement and safeguarding fundamental rights. As it will have extra-territorial effect and is one of the first international regulatory frameworks targeted specifically at AI, it is expected to have a significant impact on AI governance. The details of its provisions are still being negotiated between the EU institutions.

Similarly, the EU Digital Services Act (“EU DSA”) operates in a comparable vein, aiming to tackle the challenges presented by the digital era. Specifically, the EU DSA’s focus lies on ensuring transparency, accountability, fairness and user protection across online platforms, which is why it will have an impact on AI regulation. To address illegal content within the digital realm, the EU DSA imposes a set of responsibilities on digital service providers, encompassing content moderation, data sharing, and algorithmic transparency. This affects digital service providers when they deploy AI on their platforms, as we’ve already seen with TikTok preparing to offer EU users an alternative that does not include its personalised algorithm for content curation.

On the data protection front, the European Union’s General Data Protection Regulation (“EU GDPR”) lays down extensive rules for the processing of personal data, placing a strong emphasis on principles such as fairness, transparency, and accountability. This is highly relevant to AI, as AI often involves the processing of personal data.

b) UK

The UK’s AI White Paper published in March 2023 (“AI White Paper”) set out the UK Government’s intentions for regulating the use of AI and ensuring responsible deployment. The AI White Paper proposes a different approach to AI regulation compared with the EU AI Act, setting out five broad cross-sectoral principles that will be enforced by the relevant regulators in each sector. More development will follow as the UK puts its plans into action.

In relation to the regulation of data protection in an AI context, the UK General Data Protection Regulation (“UK GDPR”) and the Data Protection Act 2018 (“UK DPA”) will apply where AI systems make use of personal data. The UK has been working on its Data Protection and Digital Information Bill, which will potentially modify the UK GDPR and UK DPA in a manner that will have a significant impact on the use of personal data in AI systems. For example, it loosens the restrictions on automated decision-making and may narrow what should be considered to be personal data.

US: Early Days of AI Regulation

The United States lacks a comprehensive federal regulation on AI. Instead, there is a patchwork of existing privacy regulations at both the federal and state levels that can be expanded to encompass the regulation of AI. Some of these federal regulations are specific to certain sectors or based on the type of personal data being processed – for example, HIPAA covers medical data and COPPA covers children’s information, while the GLBA applies to the financial sector. As a result, these regulations will be applicable to AI if it is used with any of these regulated types of personal data or within any of these regulated sectors. At the state level, recent consumer privacy laws that have taken effect since 2023 in Colorado, Connecticut, and Virginia do not explicitly mention AI, but they do regulate the use of profiling by businesses that leads to automated decision-making and give residents the right to opt out of such usage. For instance, a business that sells goods and services to consumers in Virginia and uses AI to make automated decisions regarding the approval or refusal of credit applications may be required to comply with a consumer’s request to opt out of the use of their personal data for that purpose.

While US lawmakers have yet to pass any federal regulation on AI, other branches of the US federal government have taken notice and demonstrated a willingness to intervene in order to safeguard consumers against the potential negative impacts of AI. In October 2022, the White House unveiled a Blueprint for an AI Bill of Rights[1], comprising five principles that companies are encouraged to follow when designing or implementing AI technologies in order to protect the rights of Americans. These principles include ensuring the safety of automated systems, preventing algorithmic discrimination, preserving data privacy, providing transparency about the use of AI, and offering meaningful human alternatives. That said, it should be emphasized that the AI Bill of Rights lacks legal enforceability, leaving its actual impact in question. As part of ongoing efforts to address concerns about biased use of AI technology, President Biden issued an Executive Order[2] in February 2023 directing federal agencies to identify and eradicate bias in the design and use of new technologies, including AI, while also ensuring public protection against algorithmic discrimination.

Pondering a Unified Global Regulatory Authority

As AI transcends geographical borders and advances at an unprecedented pace, the necessity for global regulatory coordination becomes increasingly apparent. The introduction of a global regulatory authority has the potential to harmonise standards, enforce ethical AI practices across diverse jurisdictions, and engender a climate of trust and uniformity in the adoption of AI technologies. This endeavour looks at striking a balance between AI’s benefits and potential risks through a global lens.

2. Regulatory Spotlights: A Closer Look

EU: The Garante’s Decision and the Holistic European Approach

Within the EU, regulatory scrutiny often covers principles relating to the processing of personal data, the lawfulness of data processing, conditions applicable to child consent, transparency information, and the concept of data protection by design and default. Notably, the Italian data protection authority, the Garante, took a decisive step in March 2023 by temporarily banning OpenAI’s ChatGPT in Italy, highlighting the paramount importance of upholding data protection principles to ensure the ethical and legal implementation of AI technologies. This decision marks a notable step in the EU’s stance on AI regulation and has led other regulators to keep a close eye on developments at OpenAI, including the Commission Nationale pour la Protection des Données in Luxembourg.

US: The FTC Leads the Charge

The Federal Trade Commission (the ‘FTC’) in the US has been vocal about its willingness to be the agency regulating AI. FTC Chair Lina Khan stated in a recent op-ed about AI technologies in the New York Times[3] that: “Although these tools are novel, they are not exempt from existing rules, and the FTC will vigorously enforce the laws we are charged with administering.” The FTC’s authority to bring enforcement actions against businesses is based on Section 5(a) of the FTC Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.” What would constitute an unfair or deceptive practice as applied to AI? In a 2021 blog post[4], the FTC gave the example of the sale or use of racially biased algorithms. Penalties for violating consumer privacy rights can be severe – in the last few years, the FTC has brought enforcement actions requiring the destruction of algorithms developed using deceptively obtained data. In Everalbum, the FTC ordered the company to delete an algorithm it had trained on users’ photos, which were scanned by default without users being given the opportunity to opt out. In Weight Watchers International, the company was ordered to delete any algorithm trained on personal data collected from children under the age of 13 without parental consent.

3. How Businesses Using AI Systems Can Mitigate Compliance Risks

As the use of AI systems often involves processing substantial amounts of personal data, businesses deploying these systems should be mindful of potential compliance risks.

There are several key considerations in navigating these challenges:

Mapping and Risk Management

Processes should be established to map the use of AI across the organisation, as well as an assessment process to complete before deploying any new tools. These processes could have regard to the impact assessments that are likely to be required for certain AI systems under the EU AI Act and should consider issues such as the impact of the AI system on fundamental rights. Organisations should also set their standard positions on contractual liability and security when they procure an AI system from a third party.

Balancing Utility and Privacy

A holistic approach is essential for businesses to manage the dual aspects of AI systems: input (data collection and model training) and output (especially content generation or decision-making). Since the outcomes of AI models hinge on the quality of the input data provided, it is important to balance the need for accurate output data with adherence to data protection laws. This will involve considerations of principles such as data minimisation (collecting only necessary personal data) and data accuracy (ensuring input data is precise and up-to-date). This equilibrium between maximising data utility and preserving privacy is of paramount importance.

Recognising the Hazards of Bias and Discrimination

Businesses engaging with AI must cultivate or maintain a heightened awareness of potential biases inherent in AI systems. Practices should be in place to ensure that any outputs are tested and reviewed by a human in order to screen out discriminatory influences and bias that may be embedded into the data on which the AI tools are trained. Businesses should also avoid exaggerated claims that their algorithms are “100% free of bias” to avoid making deceptive claims to consumers.

Developing Internal AI Use Policies and Robust Governance Structures

To demonstrate accountability both internally and externally, businesses should consider establishing appropriate internal policies and governance frameworks aligned with regulatory guidelines and legislation. This is a fundamental step in embedding responsible AI practices within businesses and adhering to legal requirements.

Safeguarding AI Training on Personal and Confidential Data

Minimising privacy risks associated with AI training entails limiting the use of personal and confidential data and integrating robust safeguards throughout the data processing pipeline. Businesses should provide regular training sessions to employees using AI systems to ensure fair and legal processing of personal and confidential data and to prevent data breaches.

Facilitating Responses to Data Subject Requests

Businesses can cultivate transparency and foster trust with consumers by instituting mechanisms that enable prompt and comprehensive responses to data subject requests. Given the continuous evolution and sophistication of AI systems, businesses should look into putting in place a system to efficiently search and capture personal data used in an AI system when a data subject access request comes through. Where a data subject requests that their personal data be deleted, an additional layer of complexity arises with regard to the AI system. For example, where an algorithm has been trained on the data that is the subject of the deletion request, it (i) may not always be possible to remove the relevant data; and (ii) where it is possible, removal can significantly impact the functionality of important algorithms.

4. Unveiling the Advantages of AI Adoption

Challenges aside, using AI systems can yield numerous benefits.

Unveiling New Avenues for Revenue Generation

The integration and use of AI technologies pave the way for new revenue streams by catalysing the development of innovative products and services that meet consumer demand, diversifying businesses’ offerings in the process.

Gaining Competitive Leverage through Enhanced Efficiency

AI-powered systems give businesses a distinct competitive edge, facilitating streamlined operations, efficient resource utilisation, and enhanced customer experiences. Able to process high volumes of complex data far more rapidly than humans can, AI systems have been shown to increase efficiency when used in the right way.

Fostering Informed and Inclusive Decision-Making

AI equips businesses with data-driven insights, enabling informed and efficient choices. Moreover, by considering a diverse range of inputs, AI has the potential to foster more inclusive and comprehensive decision-making. Because of the sheer volume of information involved, its outputs can surface aspects that human review might otherwise miss.

Conclusion

The AI era brings forth a number of opportunities along with a host of challenges, prominently centred on data privacy as well as ethical considerations. As governments enact regulations to address these concerns, we will see businesses presented with the need to align their AI strategies with the evolving legal compliance landscape. By prioritising transparency, accountability, lawfulness, and fairness, businesses can harness the full potential of AI while enhancing consumer trust and working towards a responsible AI-driven future.

[1] https://www.whitehouse.gov/ostp/ai-bill-of-rights/

[2] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/02/16/executive-order-on-further-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/

[3] https://www.nytimes.com/2023/05/03/opinion/ai-lina-khan-ftc-technology.html

[4] Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI, https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai

AUTHORS:
> Ana Razmazma, Counsel, FENWICK & WEST | arazmazma@fenwick.com
> Katie Hewson, Partner, STEPHENSON HARWOOD | katie.hewson@shlegal.com
> Daniel Jones, Associate, STEPHENSON HARWOOD | daniel.jones@shlegal.com

Stay tuned for more on this series! We hope you enjoy these Thought-Leadership pieces written by our members: FENWICK & WEST and STEPHENSON HARWOOD.