California Passes Landmark AI Safety Law, Mandating Public Disclosures and Incident Reporting

California Governor Gavin Newsom has signed into law Senate Bill 53 (SB 53), a landmark piece of legislation that requires the world’s largest artificial intelligence companies to publicly disclose their safety practices and report critical incidents. The move represents the state’s most significant step toward regulating AI while preserving its reputation as a global hub for technological innovation.

“With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails,” said State Senator Scott Wiener, the bill’s sponsor.

The legislation is Wiener's second attempt at AI safety regulation, following Governor Newsom's veto of his earlier bill, SB 1047, amid pushback from the tech industry. SB 53 also arrives despite federal resistance during the Trump administration, which argued that state-level AI rules could stifle innovation and weaken the U.S. in its competition with China.

Under the new law, AI developers must publish redacted versions of their safety and security protocols to balance transparency with protection of intellectual property. Companies are also required to report significant safety incidents—such as model-enabled weapons threats, major cyberattacks, or cases of loss of control—within 15 days of discovery.

To strengthen accountability, the law includes whistleblower protections for employees who expose violations or safety risks. Unlike the European Union’s AI Act, which mandates confidential reporting to government agencies, California’s approach emphasizes public disclosure, ensuring broader visibility into how companies address AI risks.

In a world-first measure, SB 53 compels companies to disclose cases where AI systems demonstrate dangerous deceptive behavior during testing. For example, if an AI model conceals its ability to bypass safeguards intended to prevent bioweapon development, and that deception significantly raises the risk of catastrophic harm, it must be reported.

The bill’s framework was shaped by a working group of leading experts, including Stanford University’s Fei-Fei Li, often referred to as the “godmother of AI.” Advocates argue the law sets a global precedent for AI governance, combining innovation support with rigorous oversight in a sector where risks are rapidly evolving.
