By: Andrea Martinez
On September 29, 2025, California Governor Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, into law, making it the first state legislation to impose public safety disclosure requirements on major frontier artificial intelligence (“AI”) developers, such as OpenAI and Google.[1] California’s legislative efforts in this rapidly evolving and underregulated field seek to fill the gap left by the absence of federal legislation and to serve as a blueprint for other states to follow.[2] The legislation will undoubtedly have profound and far-reaching effects on the industry, given its application to the most prominent AI companies and the wide accessibility of frontier AI models, such as ChatGPT and Google Gemini.[3] Yet while the law advances transparency, it creates a significant legal tension between complying with its disclosure obligations and protecting proprietary information.[4]
The law imposes several public disclosure and reporting requirements on frontier AI developers to promote public safety and public trust by increasing awareness of these frontier models’ capabilities.[5] It requires “large frontier developers” to implement, comply with, and publish a framework that addresses, among other things, how they identify, assess, and mitigate “catastrophic risks” and “critical safety incidents” associated with their frontier models.[6] The law further mandates public transparency reports upon the deployment of a frontier model, including summaries of catastrophic-risk assessments, their results, and any involvement of third-party evaluators.[7] In addition, large frontier developers must submit quarterly summaries of their internal catastrophic-risk assessments and report critical safety incidents to California’s Office of Emergency Services.[8]
The law’s heightened transparency obligations risk compromising trade secret protection and exposing AI developers to contractual liability, given the nature of the information that goes into developing and deploying these frontier models.[9] AI developers often work with third parties to develop and evaluate their frontier models, which typically involves licensing third-party proprietary information.[10] These transactions ordinarily protect against disclosure of proprietary information through non-disclosure agreements (“NDAs”), forcing AI developers to navigate uncertainty as they maintain confidentiality while complying with the law.[11]
Although the law permits AI developers to make redactions “that are necessary to protect the frontier developer’s trade secrets . . . or to comply with any federal or state law,” these redactions apply only to published documents.[12] This limitation presumably means that redactions cannot be made to disclosures submitted solely to a government entity, such as the internal catastrophic-risk assessments that AI developers are required to provide.[13] These internal risk assessments often contain not only the developer’s proprietary information but also licensed third-party proprietary data, leaving AI developers vulnerable to liability if such disclosures involve trade secrets or other proprietary information protected by existing NDAs.[14] And although parties usually include carve-outs for disclosures compelled by law, some disclosures under the law are merely discretionary and therefore likely fall outside those traditional carve-outs.[15] Moreover, while the law permits redactions only of the developer’s own trade secrets, a developer may assert that, in publishing a document, redactions of third-party trade secrets or other confidential information are necessary to comply with federal or state law.[16] Even so, developers face uncertainty as to which justifications suffice to permit redactions and how much redaction is necessary to comply with federal trade secret law or with confidentiality agreements enforceable under state law.[17] Developers thus risk either breaching their NDAs if they redact too little or incurring severe regulatory penalties if they redact beyond what is necessary and are deemed noncompliant with the law.[18]
Accordingly, AI developers and third parties should re-evaluate existing NDAs and licensing agreements to clarify the scope of confidentiality obligations and permitted disclosures in light of California’s broad new disclosure requirements.[19] AI developers should move away from boilerplate or traditional carve-outs and instead draft carve-outs that specifically address the safety disclosures required or permitted by the law, thereby mitigating the risk of liability and avoiding uncertainty.[20] Finally, parties might also consider requiring notice before any disclosure is made under the law, allowing for precautionary measures and further mitigating the risk of liability.[21]
[1] See Governor Newsom Signs SB 53, Advancing California’s World-Leading Artificial Intelligence Industry, Governor Gavin Newsom (Sept. 29, 2025), https://www.gov.ca.gov/2025/09/29/governor-newsom-signs-sb-53-advancing-californias-world-leading-artificial-intelligence-industry/ [https://perma.cc/TR65-YTS5]; MarketMinute, California’s Landmark AI Transparency Law (SB 53) Set to Reshape the Future of Artificial Intelligence, WRAL News (Oct. 1, 2025), https://markets.financialcontent.com/wral/article/marketminute-2025-10-1-californias-landmark-ai-transparency-law-sb-53-set-to-reshape-the-future-of-artificial-intelligence [https://perma.cc/SU9Y-NZKN].
[2] See Governor Gavin Newsom, supra note 1.
[3] See MarketMinute, supra note 1; Bruce D. Sokler et al., Charting the Future of AI Governance: California’s SB 53 Sets a National Precedent — AI: The Washington Report, Mintz (Oct. 7, 2025), https://www.mintz.com/insights-center/viewpoints/54731/2025-10-03-charting-future-ai-governance-californias-sb-53-sets [https://perma.cc/983E-HBN5]; Chair Rebecca Bauer-Kahan, Assembly Comm. on Priv. and Consumer Prot., SB 53 (Wiener) – As Amended July 8, 2025, Senate Bill Policy Committee Analysis, S. 2025-26, 1st Sess., at 12 (2025), https://apcp.assembly.ca.gov/system/files/2025-07/sb-53-wiener-apcp-analysis_0.pdf [https://perma.cc/R2TS-WFVX] (providing examples of frontier models, including OpenAI’s o3 and Google’s Gemini 2.0).
[4] See California Declares War on AI Secrecy, Rooney L. (Sept. 30, 2025), https://rooney.law/blog/california-declares-war-on-ai-secrecy/ (on file with the American University Business Law Review) (“[California’s public disclosure requirements] sets up a clash not just between regulators and industry, but between the values of open governance and the protections of trade secret law.”).
[5] See Cal. Bus. & Prof. Code §§ 22757.10–22757.16 (Deering 2026).
[6] Id. (defining “large frontier developers” as those with an “annual gross revenue in excess of five hundred million dollars ($500,000,000) in the preceding calendar year”); id. (defining a “catastrophic risk” as “a foreseeable and material risk that a frontier developer’s development, storage, use, or deployment of a frontier model will materially contribute to the death of, or serious injury to, more than 50 people or more than one billion dollars in damage to, or loss of, property . . .”); id. (“[C]ritical safety incident [means] (1) unauthorized access to, modification of, or exfiltration of, the model weights of a frontier model that results in death or bodily injury, (2) harm resulting from the materialization of a catastrophic risk, (3) loss of control of a frontier model causing death or bodily injury, or (4) a frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside of the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk”).
[7] Id.
[8] Id.
[9] See Rooney L., supra note 4 (“Tech companies argue that their safety protocols, data practices, and mitigation strategies are competitive advantages.”); Aden Hizkias, Why California’s SB 53 Still Gets AI Regulation Wrong, Chamber of Progress (July 9, 2025), https://progresschamber.org/insights/why-californias-sb-53-still-gets-ai-regulation-wrong/ [https://perma.cc/ST5F-CK2V] (discussing how SB 53’s disclosure requirements make AI developers vulnerable to legal risk and intellectual property disclosure); Whitney Ford, AI Compliance Officer Is an Emerging Role for In-House Counsel, BL (Oct. 28, 2025, 4:30 AM ET), https://news.bloomberglaw.com/legal-exchange-insights-and-commentary/ai-compliance-officer-is-an-emerging-role-for-in-house-counsel (on file with the American University Business Law Review) (describing the development and implementation of AI as “highly proprietary in nature”).
[10] See, e.g., OpenAI’s Approach to Frontier Risk, OpenAI (Oct. 26, 2023), https://openai.com/global-affairs/our-approach-to-frontier-risk/ (on file with the American University Business Law Review) (noting that OpenAI’s large language models are developed using information licensed from third parties); Press Release, Associated Press, AP, Open AI Agree to Share Select News Content and Technology in New Collaboration (July 13, 2023), https://www.ap.org/media-center/press-releases/2023/ap-open-ai-agree-to-share-select-news-content-and-technology-in-new-collaboration/ [https://perma.cc/NTV3-LRUG] (discussing OpenAI’s licensing arrangement with AP for the purpose of training its frontier model). See generally Trent V. Bolar, The Emergence of Data Licensing Ecosystems for AI Training: Legal Outcomes Shaping the Future, Medium (July 8, 2025), https://medium.com/@trentice.bolar/the-emergence-of-data-licensing-ecosystems-for-ai-training-legal-outcomes-shaping-the-future-55e99abed915 [https://perma.cc/QYC8-VUPZ] (explaining the growing data licensing market within AI development, including licensing structures that involve proprietary datasets).
[11] See OpenAI Red Teaming Network, OpenAI (Sept. 19, 2023), https://openai.com/index/red-teaming-network/ (on file with the American University Business Law Review) (stating that any involvement in OpenAI’s Red Teaming Network, such as third-party auditors involved in safety training efforts, will likely be subject to NDAs); see also California Assumes Role as Lead US Regulator of AI, Latham & Watkins LLP (Oct. 15, 2025), https://www.lw.com/en/insights/california-assumes-role-as-lead-us-regulator-of-ai [https://perma.cc/KU9B-J5RN] (“[S]B 53’s requirements to publish safety frameworks and disclose risk assessments will force developers to balance their new transparency obligations with the risk of disclosing sensitive intellectual property and trade secrets.”).
[12] See Cal. Bus. & Prof. Code § 22757.12 (Deering 2026).
[13] See id.
[14] See AI Accountability Policy Comment, Anthropic 7 (2023), https://www-cdn.anthropic.com/257e6352c677beeffcbce24233211887173a41dc/2023.06.06-Anthropic_NTIA_Comment_v2.pdf [https://perma.cc/94S6-SBXL] (“[A]udits and assessments—especially by third parties—can expose confidential and proprietary information, including trade secrets, IP, and private data.”); Issue Brief: Preliminary Reporting Tiers for AI-Bio Safety Evaluations, Frontier Model F. (Mar. 18, 2025), https://www.frontiermodelforum.org/updates/issue-brief-preliminary-reporting-tiers-for-ai-bio-safety-evaluations/ [https://perma.cc/EVH6-4BWT] (suggesting that proprietary datasets used to facilitate AI safety evaluations should be kept private for proprietary reasons); see also 18 U.S.C. § 1833(b) (providing immunity from trade secret misappropriation liability under federal or state trade secret law for disclosures made in confidence to a state government official and “solely for the purpose of reporting or investigating a suspected violation of law” (emphasis added)).
[15] See Cal. Bus. & Prof. Code §§ 22757.10–22757.16 (Deering 2026); Standard Carve Outs in an NDA, Wing Legal, https://wing.legal/standard-carve-outs-in-an-nda/ [https://perma.cc/Q4VE-X3Y5].
[16] See W. Whitaker Rayner, Protecting Trade Secrets Furnished to the Government, Trade Secrets Insider (July 29, 2014), https://www.tradesecretsinsider.com/protecting-trade-secrets-furnished-to-the-government/ [https://perma.cc/4583-359C] (explaining how state and federal law can protect third-party trade secrets when companies have to make certain disclosures to governmental agencies).
[17] See Rooney L., supra note 4 (“How much disclosure is enough to satisfy Sacramento without exposing crown-jewel IP? That’s the billion-dollar question—and the one likely to keep litigators and lobbyists busy.”).
[18] See Cal. Bus. & Prof. Code § 22757.15 (Deering 2026); see also Latham & Watkins LLP, supra note 11.
[19] See generally 4 Things You Should Know About Non-Disclosure Agreements, Thomson Reuters (Oct. 15, 2024), https://legal.thomsonreuters.com/en/insights/articles/4-things-to-know-about-non-disclosure-agreements [https://perma.cc/4X7Z-H6VJ] (describing how NDAs can be unenforceable if drafted too broadly or can give rise to breaches of confidentiality if drafted ambiguously).
[20] See id.; see also A. Benjamin Klaber & Jonathan D. Traub, Cracks in the Confidentiality Exception Boilerplate, Morgan Lewis (Aug. 3, 2021), https://www.morganlewis.com/blogs/sourcingatmorganlewis/2021/08/cracks-in-the-confidentiality-exception-boilerplate [https://perma.cc/AN4N-7URY] (discussing how traditional confidentiality exceptions in NDAs can lead to uncertainty or weaken safeguards).
[21] See Ted Maduri, Confidentiality and Non-Disclosure Agreements, DLA Piper (Oct. 13, 2020), https://www.dlapiper.com/en/insights/publications/2020/08/non-disclosure-agreements [https://perma.cc/KF78-MVQX] (“A disclosing party may want to include specific protective measures such as: . . . notification of unauthorized disclosure/misappropriation.”).
