OpenAI disputes watchdog allegation it violated California’s new AI law with GPT-5.3-Codex release

By Press Room · 11 February 2026 · 5 min read

OpenAI may have violated California’s new AI safety law with the release of its latest coding model, according to allegations from an AI watchdog group.

A violation could expose the company to millions of dollars in fines, and the case could become a precedent-setting first test of the new law’s provisions.

An OpenAI spokesperson disputed the watchdog’s position, telling Fortune the company was “confident in our compliance with frontier safety laws, including SB 53.”

The controversy centers on GPT-5.3-Codex, OpenAI’s newest coding model, which was released last week. The model is part of an effort by OpenAI to reclaim its lead in AI-powered coding and, according to benchmark data OpenAI released, shows markedly higher performance on coding tasks than earlier model versions from both OpenAI and competitors like Anthropic. However, the model has also raised unprecedented cybersecurity concerns.

CEO Sam Altman said the model was the first to hit the “high” risk category for cybersecurity on the company’s Preparedness Framework, an internal risk classification system OpenAI uses for model releases. This means OpenAI is essentially classifying the model as capable enough at coding to potentially facilitate significant cyber harm, especially if automated or used at scale.

The Midas Project, an AI watchdog group, claims OpenAI failed to stick to its own safety commitments—which are now legally binding under California law—when it launched the new high-risk model.

California’s SB 53, which went into effect in January, requires major AI companies to publish and stick to their own safety frameworks, detailing how they’ll prevent catastrophic risks—defined as incidents causing more than 50 deaths or $1 billion in property damage—from their models. It also prohibits these companies from making misleading statements about compliance.

OpenAI’s safety framework requires that models posing high cybersecurity risk receive special safeguards designed to prevent the AI from going rogue: acting deceptively, sabotaging safety research, or hiding its true capabilities. However, the Midas Project said that despite GPT-5.3-Codex triggering the “high risk” cybersecurity threshold, OpenAI did not appear to have implemented those misalignment safeguards before deployment.

OpenAI says the Midas Project’s interpretation of the Preparedness Framework is wrong, though it concedes the framework’s wording is “ambiguous” and says it sought to clarify the intent in the safety report it released with GPT-5.3-Codex. In that report, OpenAI said the extra safeguards are required only when high cyber risk occurs “in conjunction with” long-range autonomy—the ability to operate independently over extended periods. Since the company believes GPT-5.3-Codex lacks this autonomy, it says the safeguards weren’t required.

“GPT-5.3-Codex completed our full testing and governance process, as detailed in the publicly released system card, and did not demonstrate long-range autonomy capabilities based on proxy evaluations and confirmed by internal expert judgments, including from our Safety Advisory Group,” the spokesperson said. The company has also said, however, that it lacks a definitive way to assess a model’s long-range autonomy and so relies on tests that it believes can act as proxies for this metric while it works to develop better evaluation methods.

However, some safety researchers have disputed OpenAI’s interpretation. Nathan Calvin, vice president of state affairs and general counsel at Encode, said in a post on X: “Rather than admit they didn’t follow their plan or update it before the release, it looks like OpenAI is saying that the criteria was ambiguous. From reading the relevant docs … it doesn’t look ambiguous to me.”

The Midas Project also claims that OpenAI cannot definitively prove the model lacks the autonomy required for the extra measures, as the company’s previous, less advanced model already topped global benchmarks for autonomous task completion. The group argues that even if the rules were unclear, OpenAI should have clarified them before releasing the model.

Tyler Johnston, founder of the Midas Project, called the potential violation “especially embarrassing given how low the floor SB 53 sets is: basically just adopt a voluntary safety plan of your choice and communicate honestly about it, changing it as needed, but not violating or lying about it.”

If an investigation is opened and the allegations prove accurate, SB 53 allows for substantial penalties, potentially running into millions of dollars depending on the severity and duration of noncompliance. A representative for the California Attorney General’s Office told Fortune the department was “committed to enforcing the laws of our state, including those enacted to increase transparency and safety in the emerging AI space.” However, they said the department could not comment on, or even confirm or deny, potential or ongoing investigations.

Updated, Feb. 10: This story has been updated to move OpenAI’s statement that it believes it is in compliance with the California AI law higher in the story. The headline has also been changed to make clear that OpenAI is disputing the allegations from the watchdog group. In addition, the story has been updated to clarify that OpenAI’s statement in the GPT-5.3-Codex safety report was meant to clarify what the company says was ambiguous language in its Preparedness Framework.
