California Just Told AI Companies: Play by Our Rules or Lose Our Business

Governor Newsom signed a first-of-its-kind executive order requiring AI companies to prove safety and privacy guardrails before winning state contracts, directly challenging Trump's hands-off approach.

By Shaw Beckett · 4 min read
[Image: California state capitol building with digital AI circuit patterns overlaid on a sunset sky]

While Washington is busy stripping guardrails off the AI industry, California just bolted new ones on.

Governor Gavin Newsom signed Executive Order N-5-26 on Monday, making California the first state to impose binding AI safety and privacy requirements on companies seeking government contracts. The order gives state agencies 120 days to develop new procurement standards that will force AI vendors to certify that their systems include protections against illegal content generation, harmful bias, and civil rights violations. If you want California's business, you now have to prove your AI won't cause harm first.

The timing is not subtle. President Trump revoked the Biden administration's AI executive order on his first day in office and has spent the past year pushing a "light-touch" framework that prioritizes innovation speed over safety requirements. Newsom's order is a direct counterpunch, and it carries real economic weight: California's state government spent over $400 billion in the 2024-2025 fiscal year, making it one of the largest purchasers of technology services in the country.

[Image: Governor Newsom at a podium signing an executive order document with tech advisors nearby]
Newsom framed the order as protecting innovation and rights simultaneously.

What the Order Actually Requires

The executive order targets AI companies at the contract level, not through legislation. That distinction matters because it sidesteps the political fight that doomed California's previous attempt at comprehensive AI regulation, SB 1047, which Newsom vetoed in September 2024 over concerns it would drive AI companies out of the state.

This time, the mechanism is simpler: if you want to sell AI products or services to any California state agency, you have to meet the state's standards. Those standards will be developed over the next four months by the Department of General Services and the Department of Technology, but the executive order lays out the floor. Companies must demonstrate safeguards against the generation of illegal content, including child sexual abuse material and deepfakes. They must show their systems have been tested for harmful bias across protected categories like race, gender, and disability. And they must prove their AI tools won't violate users' civil rights.

The order also requires state agencies to watermark any AI-generated images or videos they produce, creating a transparency standard that no federal agency currently follows.

"California's always been the birthplace of innovation," Newsom said Monday. "But we also understand the flip side. While others in Washington are designing policy in the shadow of misuse, we're focused on doing this the right way."

The Federal Vacuum California Is Filling

To understand why this order matters, you need to understand what happened at the federal level. In January 2025, Trump revoked Executive Order 14110, Biden's sweeping AI safety framework that required companies developing the most powerful models to share safety test results with the government. Trump replaced it with a directive focused on "removing barriers to American AI innovation," which critics argue amounts to letting the industry police itself.

Since then, federal AI policy has been shaped largely by corporate lobbying. The AI data center boom has accelerated without environmental review, companies have cut workforces citing AI efficiency with no federal retraining mandate, and a major supply chain attack on AI infrastructure exposed security gaps that no federal standard addresses.

California is stepping into that vacuum with purchasing power rather than legislation, a strategy that could prove more durable. Companies that want access to a market representing roughly 15% of the U.S. economy will need to meet these standards regardless of what Washington does or doesn't require.

[Image: Split image showing Silicon Valley tech campus on one side and the U.S. Capitol building on the other]
California and Washington are heading in opposite directions on AI oversight.

Why Procurement Power Beats Legislation

Newsom learned something from the SB 1047 fight in 2024. That bill, authored by State Senator Scott Wiener, would have required AI companies to implement safety testing before releasing powerful models. It had broad public support but drew fierce opposition from OpenAI, Google, and Meta, all of which have significant operations in California. Newsom vetoed it, arguing the bill was too broad and could push companies to other states.

An executive order targeting procurement avoids those pitfalls. Companies aren't being told how to build their AI. They're being told what standards they must meet if they want to do business with the state. It's the same lever that the federal government has used for decades to impose workplace safety, environmental, and cybersecurity standards on contractors.

Jennifer Granick, surveillance and cybersecurity counsel at the American Civil Liberties Union, noted that procurement requirements have historically been one of the most effective tools for setting industry standards. When a buyer as large as California sets conditions, suppliers tend to build those conditions into their baseline products rather than maintaining separate versions.

The order also includes a notable provision on federal supply chain designations. If the federal government labels an AI company as a supply chain risk, California will conduct its own independent assessment rather than automatically following the federal determination. This means a company that falls out of favor with the Trump administration for political reasons could still do business with California if the state finds no actual security risk.

The 120-Day Clock

The executive order starts a four-month sprint. By late July 2026, the Department of General Services and the Department of Technology must deliver specific recommendations for new AI-related vendor certifications. These will likely include requirements for companies to attest to responsible AI governance practices, demonstrate public safety protections, and submit to some form of auditing or testing verification.

The agencies must also develop frameworks for expanding AI use within state government itself, with the goal of improving public services while maintaining the safety standards the order establishes. This dual mandate, pushing agencies to adopt AI while requiring that adoption to meet strict standards, reflects the balancing act Newsom has tried to maintain throughout the AI policy debate.

State procurement officers will need training on evaluating AI vendor claims, a non-trivial challenge given that many "AI safety" certifications in the market today are self-reported and lack independent verification. The order doesn't specify what form that verification will take, but the options range from third-party auditing to standardized testing frameworks similar to what the EU's AI Act requires of "high-risk" systems. How California solves this verification problem could become a model for other states watching from the sidelines. Colorado, Illinois, and Connecticut have all passed narrower AI transparency laws in the past two years, and at least a dozen other state legislatures have AI procurement bills in committee.

[Image: A countdown timer displaying 120 days overlaid on a California government office building]
State agencies have until late July to develop binding AI procurement standards.

What This Really Means

California's executive order is not going to stop the AI industry from building whatever it wants. It's not even trying to. What it does is establish a market-based incentive structure: if you want access to one of the largest government procurement markets in the world, your AI has to meet safety and civil rights standards that the federal government has explicitly chosen not to require.

The real test will be whether this creates a de facto national standard. California has done this before. The state's vehicle emission standards, initially dismissed by the auto industry as economically ruinous, eventually became the baseline that most manufacturers built to because maintaining separate product lines for California and everyone else was more expensive than just meeting the higher standard.

If the same dynamic plays out with AI, Newsom's procurement order could end up shaping the industry more than any legislation Congress might eventually pass. That's a significant "if," but it's the bet California is making. And with a $400 billion annual budget backing it up, it's a bet the AI industry can't afford to ignore.
