A landmark law for AI transparency

On 29 September 2025, California became the first U.S. state to enact specific transparency requirements for AI developers when Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (SB53) into law.

Key Provisions

SB53 applies to large AI developers with annual revenues over US$500 million and systems trained with over 10²⁶ FLOPs of computing power — currently covering only major players like OpenAI and xAI. The Act requires these companies to:

  • Disclose safety documentation, including testing for dangerous capabilities.
  • Report and publicly disclose serious AI-related incidents.
  • Protect whistleblowers who raise safety concerns.

Repeated noncompliance carries civil penalties of up to US$10 million.

The Act defines “dangerous capabilities” narrowly: the ability to help create weapons, conduct cyberattacks, commit crimes, or evade human control. Companies must also assess and mitigate “catastrophic risks,” defined as events that could kill or seriously injure more than 50 people or cause more than US$1 billion in damage.

Background and Political Context

SB53 follows the controversial SB1047 (2024), which would have made AI developers liable for harms caused by their systems. Newsom vetoed that bill as overly broad but later commissioned an expert report recommending transparency requirements and whistleblower protections, recommendations that directly shaped SB53. Senator Scott Wiener, who authored both bills, described SB1047 as a liability law and SB53 as a transparency law establishing “commonsense guardrails.”

Industry and Federal Reactions

Reactions are mixed: Anthropic supports the bill but would prefer a federal standard, while OpenAI and Meta cautiously endorse its goals yet warn against a patchwork of state laws. Critics, including venture firm Andreessen Horowitz, argue that such state laws could unconstitutionally burden interstate commerce. Nonetheless, similar proposals are emerging in New York and Michigan, increasing pressure for a unified federal AI safety framework.

Broader Implications

California, home to 32 of the world’s top 50 AI firms, wields global influence in tech policy. By mandating transparency around high-risk AI capabilities, SB53 sets a global precedent for balancing innovation with accountability. Newsom has urged other governments to adopt California’s model, framing SB53 as proof that responsible AI regulation can coexist with innovation — and as a signal that “frontier AI” oversight is both necessary and achievable.

Source: United States Studies Centre
