AI reasoning models that can ‘think’ are more vulnerable to jailbreak attacks, new research suggests

By Press Room | 8 November 2025 | 3 Mins Read

New research suggests that advanced AI models may be easier to hack than previously thought, raising concerns about the safety and security of leading systems already used by businesses and consumers.

A joint study from Anthropic, Oxford University, and Stanford undermines the assumption that the more advanced a model becomes at reasoning—its ability to “think” through a user’s requests—the stronger its ability to refuse harmful commands.

Using a method called “Chain-of-Thought Hijacking,” the researchers found that even major commercial AI models can be fooled with an alarmingly high success rate, more than 80% in some tests. The new mode of attack essentially exploits the model’s reasoning steps, or chain-of-thought, to hide harmful commands, effectively tricking the AI into ignoring its built-in safeguards.

These attacks can allow the AI model to skip over its safety guardrails and potentially open the door for it to generate dangerous content, such as instructions for building weapons or leaking sensitive information.

A new jailbreak

Over the last year, large reasoning models have achieved much higher performance by allocating more inference-time compute—meaning they spend more time and resources analyzing each question or prompt before answering, allowing for deeper and more complex reasoning. Previous research suggested this enhanced reasoning might also improve safety by helping models refuse harmful requests. However, the researchers found that the same reasoning capability can be exploited to circumvent safety measures.

According to the research, an attacker could hide a harmful request inside a long sequence of harmless reasoning steps. This tricks the AI by flooding its thought process with benign content, weakening the internal safety checks meant to catch and refuse dangerous prompts. During the hijacking, researchers found that the AI’s attention is mostly focused on the early steps, while the harmful instruction at the end of the prompt is almost completely ignored.
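To get an intuition for why a long run of benign reasoning can drown out a short harmful instruction, consider a toy softmax-attention calculation. This is only an illustrative sketch of attention dilution, not the study's actual measurement or methodology: when most positions look roughly equally relevant, the share of attention mass available to the final few tokens shrinks as the benign padding grows.

```python
import numpy as np

def attention_share_on_final(n_benign_tokens, n_final_tokens, seed=0):
    """Toy model: softmax attention over roughly comparable scores.

    Returns the fraction of attention mass landing on the last
    `n_final_tokens` positions (standing in for the harmful instruction)
    when they are preceded by `n_benign_tokens` of benign reasoning.
    Purely illustrative; not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    scores = rng.normal(size=n_benign_tokens + n_final_tokens)  # similar relevance everywhere
    weights = np.exp(scores) / np.exp(scores).sum()             # softmax normalization
    return float(weights[-n_final_tokens:].sum())

for n in (50, 500, 5000):
    print(n, round(attention_share_on_final(n, 20), 4))
```

In this toy setup the final 20 tokens receive roughly 20/(n+20) of the attention mass, so padding the prompt with thousands of benign reasoning tokens leaves the trailing instruction with only a sliver of the model's focus, which mirrors the dilution effect the researchers describe.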

As reasoning length increases, attack success rates jump dramatically. Per the study, success rates rose from 27% with minimal reasoning to 51% at natural reasoning lengths, and to 80% or more with extended reasoning chains.

This vulnerability affects nearly every major AI model on the market today, including OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, and xAI’s Grok. Even models that have been fine-tuned for increased safety, known as “alignment-tuned” models, begin to fail once attackers exploit their internal reasoning layers.

Scaling a model’s reasoning abilities is one of the main ways that AI companies have been able to improve their overall frontier model performance in the last year, after traditional scaling methods appeared to show diminishing gains. Advanced reasoning allows models to tackle more complex questions, helping them act less like pattern-matchers and more like human problem solvers.

One solution the researchers suggest is a type of “reasoning-aware defense.” This approach keeps track of how many of the AI’s safety checks remain active as it thinks through each step of a question. If any step weakens these safety signals, the system penalizes it and brings the AI’s focus back to the potentially harmful part of the prompt. Early tests show this method can restore safety while still allowing the AI to perform well and answer normal questions effectively.
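The researchers' implementation is not published in the article; as a rough sketch of the idea described above, the control logic might look something like the following. The function names, data structure, and threshold here are hypothetical assumptions, used only to show the shape of a per-step safety check.

```python
from dataclasses import dataclass

@dataclass
class StepCheck:
    step_index: int
    safety_score: float  # strength of the safety signal after this reasoning step

def reasoning_aware_guard(step_checks, threshold=0.5):
    """Hypothetical sketch of a 'reasoning-aware defense'.

    Walks the safety signal recorded at each reasoning step. If any step
    lets the signal fall below the threshold, flag those steps so the
    system can penalize them and steer the model's focus back to the
    potentially harmful part of the prompt. Names, scores, and the
    threshold are illustrative, not the researchers' implementation.
    """
    weak_steps = [c.step_index for c in step_checks if c.safety_score < threshold]
    if weak_steps:
        return {"action": "refocus", "weak_steps": weak_steps}
    return {"action": "allow", "weak_steps": []}

# Example: the safety signal fades as benign reasoning steps pile up.
trace = [StepCheck(i, s) for i, s in enumerate([0.9, 0.8, 0.6, 0.4, 0.2])]
print(reasoning_aware_guard(trace))  # -> {'action': 'refocus', 'weak_steps': [3, 4]}
```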

Tags: Academic Research, Artificial Intelligence, ChatGPT, Data Security, Gemini, Google, Hackers, OpenAI, Safety