
AI Safety Laws Are Coming From Both Coasts. Here's What They Actually Require.

For years, AI safety has been a matter of voluntary commitments — white papers, public pledges, and internal policies that companies could adopt or ignore at their discretion. That era is ending. In the past six months, both New York and California have moved to codify AI safety obligations into enforceable law, and the requirements they've landed on are remarkably similar.

New York's RAISE Act (the Responsible AI Safety and Education Act) and California's SB 53 (the Transparency in Frontier Artificial Intelligence Act, enacted September 2025) both target developers of frontier AI models — the large-scale systems trained with more than 10²⁶ computational operations. These aren't abstract policy proposals. They create specific, auditable obligations with real penalties for noncompliance. And the compliance pattern they establish — safety protocols, independent audits, incident reporting, multi-year data retention — is one that will eventually extend well beyond frontier models to any AI system operating in the physical world.

What the laws require

Despite being drafted independently on opposite coasts, the two laws converge on the same core mandates.

Safety protocols and frameworks. The RAISE Act requires large developers to create and implement written safety and security protocols before deployment — covering risk assessment, testing procedures, cybersecurity protections, and designated senior personnel responsible for compliance. SB 53 requires developers to publish a frontier AI framework describing risk governance, catastrophic risk thresholds, mitigation strategies, and deployment review processes. Both demand these frameworks be documented, publicly available, and regularly updated.

Independent oversight. The RAISE Act mandates annual independent third-party audits to verify compliance. SB 53 requires transparency reports before each model deployment and quarterly assessment submissions to the California Office of Emergency Services. Different mechanisms, same principle: external verification, not self-certification.

Incident reporting. When something goes wrong, the RAISE Act gives developers 72 hours to report safety incidents to New York's Division of Homeland Security. SB 53 allows 15 days for standard incidents but tightens to 24 hours when there's an imminent risk of death or serious injury.
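Tracking three different clocks across two jurisdictions is exactly the kind of detail automation should own. Here is a minimal sketch of deadline tracking; the regime labels are our own shorthand, and the statutes, not this table, are authoritative:

```python
from datetime import datetime, timedelta, timezone

# Illustrative reporting windows only -- consult the statutes for authoritative rules.
REPORTING_WINDOWS = {
    "ny_raise": timedelta(hours=72),          # RAISE Act: safety incidents
    "ca_sb53_standard": timedelta(days=15),   # SB 53: standard incidents
    "ca_sb53_imminent": timedelta(hours=24),  # SB 53: imminent risk of death/serious injury
}

def report_deadline(detected_at: datetime, regime: str) -> datetime:
    """Return the latest permissible report time for a detected incident."""
    return detected_at + REPORTING_WINDOWS[regime]

detected = datetime(2025, 11, 1, 9, 0, tzinfo=timezone.utc)
print(report_deadline(detected, "ny_raise"))          # 2025-11-04 09:00:00+00:00
print(report_deadline(detected, "ca_sb53_imminent"))  # 2025-11-02 09:00:00+00:00
```

Note that the clock starts at detection, which is why the monitoring discussed below matters: a window you only discover after it has closed is already a violation.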

Data retention. The RAISE Act explicitly requires five years of document retention — unredacted copies of safety protocols, testing records with enough detail for third-party replication, and all audit reports. SB 53's retention obligations are less direct: they follow from its ongoing quarterly reporting requirements and its five-year retention rule for redacted materials.

Penalties and whistleblower protections. The RAISE Act imposes civil penalties of up to $10 million for a first violation and $30 million for subsequent violations, enforced by the New York Attorney General. SB 53 authorizes penalties of up to $1 million per violation, enforced by the California Attorney General. Both laws include protections for employees who report safety concerns internally or to regulators.

A compliance challenge, not a paper exercise

On paper, these requirements sound manageable. In practice, meeting them demands infrastructure that most organizations don't have. We've written before about the gap between collecting operational data and being able to produce it when regulators come asking — Tesla's repeated deadline extensions in its NHTSA investigation are a concrete example of what happens when data infrastructure isn't built with regulatory production in mind.

Independent audits require structured, queryable data — not raw logs buried in proprietary formats that take months to process. Seventy-two-hour and fifteen-day reporting windows mean you need real-time monitoring and incident detection, not post-hoc investigation. Five-year tamper-evident retention with cryptographic provenance is not something you bolt on after a regulator sends a letter.

And overlapping requirements from multiple jurisdictions — New York, California, and eventually more, alongside the EU AI Act, updated ISO standards, and ongoing NHTSA investigations — mean companies need a systematic, automated approach to compliance, not a bespoke scramble for each new law.

Building for compliance by design

This is the infrastructure problem PhyWare is designed to solve.

PhyTrace captures comprehensive operational telemetry — sensors, AI reasoning, speed, location, operational state — in a structured, normalized format. It provides the evidence base that safety protocols are actually being followed, not just documented. When an auditor asks to see testing data with enough detail for third-party replication, the answer is a query against a structured data model, not a months-long manual review.

PhyCloud stores that data immutably with cryptographic provenance — hash-chained, tamper-evident, and independently verifiable. It maps directly to the five-year retention mandates in the RAISE Act. Every record is traceable back to its source, and no one — not the operator, not us — can alter it after the fact.
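Hash chaining is a standard technique for tamper evidence: each record's hash incorporates the previous record's hash, so altering any stored record invalidates every link after it. A minimal sketch of the general technique (an illustration, not PhyCloud's implementation):

```python
import hashlib
import json

def append_record(chain, record):
    """Append a record whose hash commits to both its body and its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    chain.append({"record": record, "prev": prev_hash, "hash": entry_hash})
    return chain

def verify(chain):
    """Recompute every link; any altered record breaks the chain from that point on."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain = []
append_record(chain, {"event": "deploy", "model": "v1"})
append_record(chain, {"event": "incident", "severity": "low"})
assert verify(chain)
chain[0]["record"]["event"] = "tampered"
assert not verify(chain)
```

Because verification only needs the chain itself, a third party can check integrity independently — the "independently verifiable" property the paragraph above describes.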

PhySafe provides real-time safety monitoring against defined rules and thresholds. The 72-hour and 24-hour incident reporting windows in these laws assume you can detect and classify safety incidents as they happen. PhySafe makes that assumption practical rather than aspirational.
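At its simplest, rule-based monitoring evaluates a set of predicates against every telemetry sample as it arrives. A toy sketch — the rule names, fields, and thresholds are made up for illustration, not PhySafe's configuration format:

```python
# Hypothetical rules: each pairs a name with a predicate over one telemetry sample.
RULES = [
    ("overspeed", lambda t: t["speed_mps"] > 2.0),
    ("geofence_breach", lambda t: t["zone"] not in {"warehouse_a", "dock"}),
]

def check(telemetry: dict) -> list[str]:
    """Return the names of all rules this telemetry sample violates."""
    return [name for name, predicate in RULES if predicate(telemetry)]

print(check({"speed_mps": 2.5, "zone": "dock"}))  # ['overspeed']
```

A violation detected this way can timestamp the start of the statutory reporting clock, rather than leaving detection to a later investigation.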

PhyComp automates compliance checking by mapping regulatory requirements to data queries and generating audit-ready evidence packages. When the annual RAISE Act audit arrives or SB 53's quarterly submission is due, PhyComp produces the report — it doesn't become a project to staff up for.
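The core idea — requirement IDs mapped to queries that pull their supporting evidence — can be sketched in a few lines. The requirement names, store interface, and record shapes below are all hypothetical:

```python
class RecordStore:
    """Toy in-memory stand-in for a structured compliance data store."""
    def __init__(self, records):
        self.records = records

    def query(self, kind):
        return [r for r in self.records if r["kind"] == kind]

# Hypothetical requirement IDs, each mapped to the query that produces its evidence.
REQUIREMENT_QUERIES = {
    "RAISE-annual-audit":       lambda s: s.query("audit_report"),
    "RAISE-protocol-retention": lambda s: s.query("safety_protocol"),
}

def evidence_package(store, requirement_ids):
    """Assemble an audit-ready bundle: one evidence set per requirement."""
    return {rid: REQUIREMENT_QUERIES[rid](store) for rid in requirement_ids}

store = RecordStore([
    {"kind": "audit_report", "year": 2025},
    {"kind": "safety_protocol", "version": 3},
])
package = evidence_package(store, ["RAISE-annual-audit"])
print(package)  # {'RAISE-annual-audit': [{'kind': 'audit_report', 'year': 2025}]}
```

Once the mapping exists, a new jurisdiction's requirements become new entries in the table rather than a new compliance project.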

PhyTAU, our dedicated hardware module, captures forensic-quality data at the edge with nanosecond-precision timestamps and tamper-evident storage — supporting incident recording and evidence integrity even in environments where connectivity is intermittent.

The regulatory direction is clear

Today, the RAISE Act and SB 53 apply to frontier foundation models — the large language models and multimodal systems from companies like OpenAI, Anthropic, and Meta. But the compliance blueprint they establish — safety protocols, independent audits, incident reporting, multi-year data retention — maps directly to the challenges facing autonomous physical systems. Warehouse robots, delivery fleets, and robotaxis already navigate a fragmented landscape of OSHA requirements, NHTSA investigations, EU AI Act obligations, and ISO safety standards. As AI regulation matures, these requirements will converge on the same pattern.

PhyWare is building for that trajectory — not waiting for each jurisdiction to publish its own rules, but providing the data infrastructure that satisfies the underlying regulatory pattern. Companies that invest in compliance-ready architecture now will be positioned to meet whatever comes next.

If you're building or operating autonomous systems and the compliance challenge resonates, we'd like to hear from you.

www.phyware.io · LinkedIn · business@phyware.io