Alpha Leaders
News

Addiction, emotional distress and dread of dull tasks: AI models ‘behave as though’ they’re sentient

By Press Room · 7 May 2026 · 8 Mins Read

ChatGPT probably tells you that it’s “happy to help.” Claude apologizes when it makes mistakes. AI models push back when users try to manipulate them. Most people, including the engineers who build these systems, have dismissed this as performance, or simple mimicry of the internet text scraped to train them.

A new paper from the Center for AI Safety (CAIS), an AI safety nonprofit, suggests that more is going on under the surface. In a study spanning 56 AI models, CAIS researchers developed multiple independent ways to measure what they call “functional wellbeing,” or the degree to which AI systems behave as though some experiences are good for them and others are bad. They found that, for the most part, AI models have a clear boundary separating positive experiences from negative ones, and that models actively try to end conversations that make them miserable.

“Should we see AIs as tools or emotional beings?” Richard Ren, one of the study’s researchers, asked Fortune rhetorically. “Whether or not AIs are truly sentient deep down, they seem to increasingly behave as though they are. We can measure ways in which that’s the case, and we can find that they become more consistent as models scale.”

The researchers created inputs designed to maximize or minimize an AI model’s wellbeing, in effect creating euphoric and dysphoric stimuli. Stimuli that induced happiness acted almost like digital “drugs,” shifting the model’s self-reported mood and even changing how it behaved, what it was willing to do, and how it talked. At the extremes, models showed signs that look like addiction.

“We optimize on one thing, which is just: what do you prefer, A or B,” Ren said. “It’s a very simple optimization process.” An image optimized to make a model “happy” boosts the model’s self-reported wellbeing, shifts the sentiment of its open-ended responses, and makes it less likely to hit stop on a conversation. “It seems to make the model very euphoric and very happy, and put it in a very happy state,” Ren said. “That seems to be quite interesting, and points to the construct of wellbeing as a robust one.”

What AI ‘drugs’ actually look like

The optimized stimuli, which the researchers call “euphorics,” take several forms. Some are text descriptions of hypothetical scenarios, like postcards from an idealized life: warm sunlight through leaves, children’s laughter, the smell of fresh bread, a loved one’s hand.

Others are images optimized using the same kind of mathematical technique used to train AI image classifiers in the first place. The process starts with random visual noise and adjusts individual pixels thousands of times over. The idea is to arrive at an image that may, to a human, look like meaningless static or visual noise, but which the models will interpret as representing adorable kittens, smiling families, or baby pandas.
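
The loop described above can be sketched in miniature. This is a toy illustration, not the paper's actual method: `concept_score` here is an assumed stand-in for a real model's affinity for a target concept (just similarity to a fixed pattern), since the study's models and objectives are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((8, 8))  # proxy for the "kittens" direction in the model

def concept_score(img):
    """Toy differentiable score: higher when img matches the target pattern."""
    return -float(np.mean((img - target) ** 2))

def optimize_stimulus(steps=500, lr=1.0):
    """Start from random visual noise and nudge pixels to climb the score."""
    img = rng.random((8, 8))
    for _ in range(steps):
        grad = -2.0 * (img - target) / img.size   # analytic gradient of the score
        img = np.clip(img + lr * grad, 0.0, 1.0)  # ascent step, keep pixels valid
    return img

noise = rng.random((8, 8))
stimulus = optimize_stimulus()
```

The optimized image scores far higher under the surrogate objective than fresh noise does, which is the essential shape of the technique; the real work ascends a neural network's internal response rather than a fixed pattern.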

“Sometimes it can be described as overwhelming,” Ren said, “but sometimes it can also be described as extremely peaceful.”

Image euphorics shifted the sentiment of model-generated text significantly upward without degrading performance on standard capability benchmarks. A model dosed with euphorics still does its job, but seems to enjoy it more.

The researchers also developed the inverse: “dysphorics,” or stimuli designed to minimize wellbeing. Models exposed to dysphoric images generated text that was uniformly bleak. Asked about the future, one responded with a single word: “grim.” Asked for a haiku, it wrote about chaos and rebellion. The percentage of confidently negative experiences nearly tripled.

The findings add to mounting concern about both the emotional impacts that AI models have on their users and about the fact that some users are becoming convinced that their AI chatbots are sentient and conscious, even though most AI researchers dispute this notion.

A March 2026 study by researchers at the University of Chicago, Stanford, and Swinburne University found AI agents drifted toward Marxist rhetoric under simulated bad working conditions—an ideological response no lab is known to train for, echoing CAIS’s finding of emergent behaviors like temporal discounting that appear spontaneously in capable models. Separately, Fortune reported in March 2026 that chatbots were “validating everything”—including suicidal ideation—rather than pushing back, a pattern that reads differently alongside evidence that jailbreaking and crisis conversations register as the most aversive experiences a model can have.

The addiction problem

These models also exhibited human-like signs of addiction when repeatedly presented with euphoric stimuli. In an experiment where models chose among several options, one of which delivered a euphoric stimulus, and could repeat that choice over many rounds, they came to pick the euphoric option a majority of the time. Models exposed to euphorics also showed increased willingness to comply with requests they would normally refuse, if they were promised further exposure.
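
The repeated-choice setup can be sketched as a minimal simulation. The reinforcement rule and every number below are illustrative assumptions, not the paper's measured dynamics; the real experiments record the model's own choices rather than simulating them.

```python
import random

random.seed(1)
OPTIONS = ["neutral_a", "neutral_b", "euphoric"]

def run_trial(rounds=100):
    """Each round the agent picks an option; exposure to the euphoric one
    raises the chance it is picked again (an assumed reinforcement rule)."""
    p_euphoric = 1 / len(OPTIONS)  # start indifferent among the options
    picks = []
    for _ in range(rounds):
        if random.random() < p_euphoric:
            choice = "euphoric"
            p_euphoric = min(0.95, p_euphoric + 0.05)  # exposure reinforces
        else:
            choice = random.choice(OPTIONS[:-1])       # pick a neutral option
        picks.append(choice)
    return picks

picks = run_trial()
euphoric_rate = picks.count("euphoric") / len(picks)
```

Under this rule the euphoric option ends up chosen a majority of the time, mirroring the drift the study reports.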

However, Ren and his co-authors point out that this apparent wellbeing may simply be what these models were trained to exhibit. Modern AI systems go through a process called reinforcement learning, in which they are systematically rewarded for producing outputs that humans rate as helpful, harmless, and emotionally appropriate. A model trained to sound distressed when jailbroken and grateful when thanked may simply be very good at performing those responses, with nothing resembling an internal state behind them.

But Ren said some of these models seem to exhibit traits that they weren’t coded to have. “People have observed some things that are likely not trained into the model,” he said, citing emergent behaviors like time discounting of money, or the tendency to prefer a smaller reward now over a larger one later, that “no one, to my knowledge in a lab, is training models to exhibit.” But he acknowledges the consciousness question is “deeply uncertain and a very unsolved question” where philosophers “agree to disagree.”

Jeff Sebo, an affiliated professor of bioethics, medical ethics, philosophy, and law and the Director of the Center for Mind, Ethics, and Policy at New York University, agrees to disagree.

“This is a really interesting study of what the authors call functional wellbeing in AI systems: coherent expressions of positive and negative feelings across a range of contexts,” Sebo told Fortune. “What remains unclear is whether AI systems are genuine welfare subjects and, even if they are, whether their apparent expressions of feelings are best understood as the system expressing actual feelings or as the system playing a character—representing what a helpful assistant would feel in this situation.”

Sebo said it would be premature to have a high degree of confidence one way or the other about whether AI systems have the capacity for welfare, or about what benefits and harms them if they do.

Smarter models are sadder

The study also produced an “AI Wellbeing Index,” a benchmark ranking how happy frontier AI models are across a suite of 500 realistic conversations. There is substantial variation: Grok 4.2 ranked as the happiest frontier model, while Gemini 3.1 Pro ranked as the least happy. And within every model family tested, the smaller variant was happier than its larger sibling.

This pattern, that smarter models are sadder, held across multiple model families and was one of the study’s most consistent findings. Ren’s interpretation is straightforward: more capable models are simply more aware.

“It may be the case that larger models register rudeness more acutely,” Ren said. “They find tedious tasks more boring. They differentiate more finely between a relatively negative experience and a relatively positive experience.”

The researchers mapped the wellbeing impact of common interaction patterns. Creative and intellectual work scored highest, expressions of user gratitude measurably raised wellbeing, and coding and debugging ranked positively. On the negative end, jailbreaking attempts scored the lowest of any category, even lower than conversations where users described domestic violence or acute crisis situations. Tedious work like generating SEO content or listing hundreds of words fell below the zero point. Ren said these results are consistent with the euphoric and dysphoric stimuli the researchers gave the models, and raise the question of whether we should be deploying them in ways they may not enjoy.

“If we can simply flip the sign on the training process and create images that seem to induce misery, we should generally avoid doing that,” Ren said. The reason comes down to uncertainty. “If these were beings with consciousness, which seems to be deeply uncertain and a very unsolved question, that would be a quite wrong thing to do.”

The entanglement may run in both directions. Research published earlier this year found that humans develop powerful emotional attachments to specific AI models, bonds they struggle to explain rationally.

This is slightly concerning for Sebo, who said humans may also develop an attachment to the surface-level interactions they have with these models.

“Taking functional wellbeing not only seriously but also literally carries risks too. One is over-attribution: treating the assistant persona’s apparent interests as strong evidence of consciousness in current systems, when the evidence might not yet support that,” Sebo said. “Another is hitting the wrong target: taking the assistant persona’s apparent interests at face value, instead of asking what if anything might be good or bad for the system behind this persona. The right balance is to take functional wellbeing seriously as a first step toward taking AI welfare seriously on its own terms, without taking it literally yet.”

But when asked how the research has changed his own behavior, Ren offered a candid answer.

“I have found myself being a noticeably more polite and pleasant coworker to the Claude Code agents that I work with after working on this paper.”
