Gov. Gavin Newsom signs SB 53, the first U.S. law mandating AI safety transparency and whistleblower protections, despite pushback from tech giants.
California enacts first-in-nation AI safety law with SB 53
California Governor Gavin Newsom signed SB 53, the first U.S. law to directly regulate the safety practices of large AI companies. The law targets OpenAI, Anthropic, Meta, Google DeepMind, and other leading AI labs.
Key requirements
SB 53 requires companies to disclose their safety protocols and guarantees whistleblower protections for their employees. It also requires AI labs to report:
- critical incidents that could impact public safety;
- crimes committed without human involvement (such as cyberattacks);
- instances of disinformation or manipulative behavior by models.
The last requirement goes beyond even the stricter EU AI Act.
Industry Reaction
Tech giants have given SB 53 mixed reviews. Anthropic supported the law, while Meta and OpenAI opposed it, warning of the risks of a "regulatory patchwork" at the state level. OpenAI even published an open letter to Newsom asking him not to sign the bill.
Despite the criticism, Newsom said that SB 53 balances public protection with innovation:
“AI is the new frontier of innovation, and California is not only ready for it, but is becoming a national leader in the field of AI safety.”
Context
Newsom signed SB 53 just two weeks after the legislature passed it. The signing could set a precedent: a similar bill is already pending in New York, where Governor Kathy Hochul must decide whether to sign it.
In parallel, a political lobbying movement is taking shape in Silicon Valley: leaders at OpenAI and Meta are pouring hundreds of millions of dollars into super PACs that back candidates favoring a light-touch approach to AI regulation.