
Anthropic backs California bill that would mandate AI transparency measures

The bill, if passed, would set some of the first broad legal requirements for AI companies in the United States.
The Anthropic website on a laptop in New York on Aug. 15, 2023. Gabby Jones / Bloomberg via Getty Images file

Artificial intelligence developer Anthropic on Monday became the first major tech company to endorse a California bill that would regulate the most advanced AI models.

Proposed by state Sen. Scott Wiener, SB 53 would, if passed, create some of the first broad legal requirements for large developers of AI models in the United States.

Among other conditions, the bill would require large AI companies offering services in California to create, publicly share and adhere to safety-focused guidelines and procedures stipulating how each company attempts to mitigate risks from AI. The bill would also strengthen whistleblower protections, creating clearer pathways for employees to flag concerns about severe or potentially catastrophic risks that might otherwise go unreported.

“With SB 53, developers can compete while ensuring they remain transparent about AI capabilities that pose risks to public safety,” Anthropic said in a statement.

The bill would largely codify existing voluntary commitments made by the world’s largest AI companies, emphasizing transparency and attention to risks from advanced AI systems. For example, Anthropic, OpenAI, Google, Meta and other companies have already committed to assessing how their products could be used for nefarious purposes and to laying out mitigations to prevent those threats. Recent research has shown that AI models can help users execute cyberattacks and lower barriers to acquiring biological weapons.

SB 53 would make many of those commitments mandatory, requiring companies to post their approaches to AI risk on their websites and to share summaries of “catastrophic risk” assessments directly with a state-level office.

The new California bill would apply only to AI companies building cutting-edge models that demand massive computing power. Within that subset of AI companies, the strictest requirements in the bill would apply only to those with annual revenues exceeding $500 million.

SB 53 would also establish an emergency reporting system through which an AI developer or members of the public could report critical safety incidents related to a model.

“Anthropic is a leader on AI safety, and we’re really grateful for the company’s support,” Wiener told NBC News.

The bill appears likely to pass, having received overwhelming support in both the Assembly and the Senate in recent voting rounds. The Legislature must cast its final vote on the bill by Friday night.

“Frontier AI companies have made many voluntary commitments for safety, often without following through. This legislation takes a small but important first step toward making AI safer by making many of these voluntary commitments mandatory,” Dan Hendrycks, executive director of the Center for AI Safety, told NBC News. “While we need much more rigorous regulation to manage AI risks, SB 53 — and Anthropic’s public support for it — are an encouraging development.”

However, industry trade groups like the Consumer Technology Association (CTA) and the Chamber of Progress are highly critical of the bill. The CTA said last week on X, “California SB 53 and similar bills will weaken California and U.S. leadership in AI by driving investment and jobs to states or countries with less burdensome and conflicting frameworks.”

SB 53 is an updated, somewhat narrower version of a similar bill Wiener proposed last year. That bill, SB 1047, attracted widespread scrutiny from AI developers, including OpenAI and, initially, Anthropic, as well as industry trade groups like the Chamber of Progress and prominent Silicon Valley investment firms like Andreessen Horowitz. Critics attacked SB 1047’s scope and its language about potential penalties in cases where AI models caused “critical harm.”

Unlike SB 53, SB 1047 would have required developers to undergo annual third-party audits of their adherence to the law and barred developers from releasing models that carried an “unreasonable risk” of individuals using the model to cause critical harms.

SB 1047 was passed by the Legislature but vetoed by Gov. Gavin Newsom, who said it would throttle AI development and “slow the pace of innovation.” Several commentators and bill proponents argued that critics had misrepresented the bill’s contents and that industry lobbying played a key role in the bill’s veto.

After the veto, Newsom formed a working group charged with providing recommendations for a revised version of SB 1047. Led by AI experts, the working group delivered its recommendations in the California Report on Frontier AI Policy in June.

Originally introduced in January, SB 53 incorporates many of the working group’s recommendations, emphasizing transparency and the verification of commitments from leading AI labs.

“We modeled the bill on that report,” Sen. Wiener said. “Whereas SB 1047 was more of a liability-focused bill, SB 53 is more focused on transparency.”

Helen Toner, interim director of the Center for Security and Emerging Technology at Georgetown University, highlighted the growing consensus on the need for more insight into frontier AI companies’ practices. “SB 53 is primarily a transparency bill, and that’s no coincidence,” Toner said. “The need for more transparency from frontier AI developers is one of the AI policy ideas with the most consensus behind it.”

Anthropic agreed. “We’ve long advocated for thoughtful AI regulation and our support for this bill comes after careful consideration of the lessons learned from California’s previous attempt at AI regulation,” it said in its statement.

Any AI regulation passed in California would most likely have a significant impact on AI development nationally and around the world, as California is home to dozens of the world’s leading AI companies.

“California is really at the beating heart of AI innovation, and we should also be at the heart of a creative AI safety approach,” Wiener said.

The role of state legislation is a key issue in AI policy debates, as industry actors, including Anthropic competitor OpenAI, argue that a comprehensive, uniform approach to AI at the federal level is required, not a patchwork of state laws.

The recently enacted federal spending package known as the “Big Beautiful Bill” nearly included a provision barring states from enforcing AI-related laws for 10 years, but the measure was stripped in a late-night reversal.

OpenAI’s director of global affairs, Chris Lehane, responded to Anthropic’s announcement by reaffirming OpenAI’s preference for federal regulation. “America leads best with clear, nationwide rules, not a patchwork of state or local regulations,” he wrote early Monday afternoon on LinkedIn.

Anthropic acknowledged the tension in its statement Monday but said SB 53 is a step in the right direction given federal inaction. “While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington,” it wrote.

Wiener said: “Ideally we would have comprehensive, strong pro-safety, pro-innovation federal law in this space. But that has not happened, so California has a responsibility to act. I would prefer federal regulation, too, but I’m not holding my breath for that.”