On September 29, 2025, California Governor Gavin Newsom signed into law Senate Bill 53, the Transparency in Frontier AI Act (TFAIA), a first-of-its-kind U.S. law that will require large AI developers to publicly disclose how they plan to mitigate potentially “catastrophic risks” posed by advanced frontier AI models. The law builds on recommendations from the June 2025 report of the Joint California AI Policy Working Group and is a pared-back successor to last year’s Senate Bill 1047, which was vetoed amid industry opposition. Most provisions of SB 53 take effect on January 1, 2026.
Who Does the Law Apply to?
Most of SB 53’s requirements will apply to “large frontier developers”: developers that train frontier models using computing power exceeding 10²⁶ FLOPs and that, together with their affiliates, had annual gross revenues exceeding $500 million in the preceding calendar year. Other requirements, such as the obligation to publish transparency reports and the whistleblower protections, will also apply to “frontier developers,” meaning developers that train models using computing power exceeding 10²⁶ FLOPs, without regard to annual revenue.
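For readers who prefer a concrete illustration, the sketch below encodes the two definitional tests described above. It is a minimal sketch only, not legal advice: the function names and numeric comparisons are our own simplification of the statutory language, which contains additional definitions and qualifications.

```python
# Illustrative sketch of SB 53's two definitional thresholds (not legal advice).
# Thresholds reflect the summary above; helper names are hypothetical.

COMPUTE_THRESHOLD_FLOPS = 1e26        # training-compute trigger for "frontier developer"
REVENUE_THRESHOLD_USD = 500_000_000   # prior-year gross revenue trigger (with affiliates)

def is_frontier_developer(training_compute_flops: float) -> bool:
    """A developer that trains a frontier model using more than 10^26 FLOPs."""
    return training_compute_flops > COMPUTE_THRESHOLD_FLOPS

def is_large_frontier_developer(training_compute_flops: float,
                                prior_year_gross_revenue_usd: float) -> bool:
    """A frontier developer whose annual gross revenue (together with its
    affiliates) exceeded $500 million in the preceding calendar year."""
    return (is_frontier_developer(training_compute_flops)
            and prior_year_gross_revenue_usd > REVENUE_THRESHOLD_USD)

# Example: a developer that trained a model with 3e26 FLOPs and had
# $2 billion in gross revenue last year would meet both tests.
print(is_large_frontier_developer(3e26, 2_000_000_000))  # True
```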
What Does the Law Require?
AI Safety Framework: Large frontier developers are required to develop, implement, comply with, and clearly and conspicuously publish a comprehensive AI framework on their websites. The AI framework must describe how the developer approaches the following with respect to its frontier models:
- Integrating national and international standards and “industry-consensus best practices”;
- Defining and evaluating risk assessment thresholds “to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk”;
- Applying mitigations based on the results of the above risk assessments, which must be reviewed as part of a decision to deploy a model or use it extensively internally;
- Using third parties to assess the potential for catastrophic risks and the effectiveness of mitigations;
- Updating the AI framework, including any criteria that would trigger updates and required disclosures;
- Implementing cybersecurity measures to secure unreleased model weights from unauthorized modification or transfer, identifying and responding to “critical safety incidents,” and instituting internal governance practices to ensure implementation of these processes; and
- Assessing and managing catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms.
Transparency Reports: Before, or concurrently with, deploying new or substantially modified frontier models, frontier developers must publish transparency reports on their websites that include information such as the model release date, supported languages and output modalities, intended uses, and any applicable restrictions on the model’s use. Large frontier developers are required to include additional information in their transparency reports, including assessments of catastrophic risks associated with the model, the results of those assessments, the extent to which third-party evaluators were involved, and other steps taken to comply with the AI framework.
Regular Risk Assessment Reporting: Large frontier developers must submit summaries of assessments of catastrophic risks resulting from internal use of their frontier models to the Office of Emergency Services (OES) every three months or pursuant to another “reasonable schedule” specified by the developer and communicated in writing to the OES.
No Materially False or Misleading Statements: Frontier developers are expressly prohibited from making materially false or misleading statements regarding the catastrophic risks of their frontier models or their management of catastrophic risks. Large frontier developers are prohibited from making materially false or misleading statements about their implementation of, or compliance with, their AI framework. Good faith statements that were reasonable under the circumstances are exempt.
Reporting Critical Safety Incidents: The law will also require the OES to create a mechanism that frontier developers and the public can use to report “critical safety incidents.” Frontier developers must report any critical safety incident to the OES within 15 days of discovering the incident, or to any appropriate agency within 24 hours if there is an imminent risk of death or serious physical harm. Frontier developers are encouraged, but not required, to report critical safety incidents pertaining to foundation models that are not “frontier models.” The law authorizes the state’s Attorney General and the OES to share reports of critical safety incidents and employee reports with legislative and government entities, but they must “strongly consider” any risks related to trade secrets, public safety, cybersecurity, or national security when transmitting reports. In addition, starting January 1, 2027, the OES must submit annual reports to the state legislature and the governor containing anonymized and aggregated information on critical safety incidents.
How Is the Law Enforced?
Potential Safe Harbor for Reporting Critical Safety Incidents: Notably, the OES may adopt regulations designating one or more federal laws, regulations, or guidance documents that impose critical safety incident reporting standards equivalent to or stricter than SB 53’s. Frontier developers may align their compliance with those federal standards, provided they notify the OES of their intent to do so. A frontier developer that adheres to the designated federal standards will be considered compliant until it revokes its notice of intent or until the OES determines that the federal standards no longer meet SB 53’s required criteria.
Penalties: Large frontier developers that fail to comply with their AI framework, make prohibited deceptive statements, or fail to comply with the law’s publication and reporting requirements may face civil penalties of up to one million dollars per violation, depending on the severity of the offense.
Are There Whistleblower Protections?
SB 53 establishes specific whistleblower protections for employees reporting catastrophic risks associated with frontier models. Frontier developers are prohibited from retaliating against such employees and must inform them of their rights and responsibilities under SB 53. Large frontier developers must also establish anonymous reporting processes and provide employees who make disclosures with monthly updates on the status of the resulting investigations.
How Does This Law Interact with Federal Policy on AI Safety?
In July 2025, the White House released its comprehensive AI Action Plan (Plan), which we have previously analyzed. Among its many recommendations, the Plan specified that federal funding should not be directed toward states with “burdensome AI regulations,” but should also not interfere with states’ rights to pass “prudent laws that are not unduly restrictive to innovation.”
Gov. Newsom acknowledged in his signing message that “meaningful oversight of AI safety, particularly as it relates to matters of national security, involves joint work with the federal government.” Gov. Newsom further stated that “[s]hould the federal government or Congress adopt national AI standards that maintain or exceed the protections in this bill, subsequent action will be necessary to provide alignment between policy frameworks…To the degree that additional clarification is required, I encourage the Legislature to monitor actions at the federal level and, if and when federal standards are adopted, ensure alignment with those standards—all while maintaining the high bar established by SB 53.”
Wilson Sonsini routinely helps companies navigate complex issues pertaining to AI and Machine Learning. For more information or advice concerning AI development or practices, please contact Maneesha Mithal, Kelly Singleton, Doo Lee, or any member of the firm’s Data, Privacy, and Cybersecurity practice.