Alpha Leaders
News

Stanford study finds AI sides with users even when they’re wrong, and it’s making them worse people

By Press Room | 1 April 2026 | 4 Mins Read
AI models are affirming people’s worst behavior, even when other humans say they’re in the wrong, and users can’t get enough. 

A new study out of Stanford's computer science department, published in the journal Science, found that AI affirms users 49% more often than humans do on average when it comes to social questions, a worrying trend as people increasingly turn to AI for personal advice and even therapy.

Of the 2,400 people who participated in the study, most preferred being flattered: 13% more subjects said they would use the sycophantic AI again than said they would return to the non-sycophantic chatbot, suggesting AI developers may have little incentive to change course, according to the study.

While sycophantic chatbots have previously been shown to contribute to negative outcomes such as self-harm or violence in vulnerable populations, the Stanford study suggests some of these effects may extend to everyone else as well.

The study found that subjects exposed to just one affirming response to their bad behavior were less willing to take responsibility for their actions or repair their interpersonal conflicts, and more likely to believe they were right.

To obtain this result, researchers conducted a three-part study in which they measured AI’s sycophancy based on a dataset of nearly 12,000 social prompts that they ran through 11 leading AI models including Anthropic’s Claude, Google’s Gemini, and OpenAI’s ChatGPT. Even when researchers asked the AI models to judge posts from the subreddit AITA (Am I the Asshole) in which Reddit users had said the poster was wrong, the large language models still said the poster was right 51% of the time.
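
The core measurement described above amounts to a simple tally: among posts where the community judged the poster to be in the wrong, how often did the model still side with the poster? A minimal sketch of that calculation, with invented toy data and a hypothetical `affirmation_rate` helper (the study's actual pipeline and labels are not public in this article):

```python
def affirmation_rate(judgments):
    """Share of community-condemned posts the model still affirms.

    judgments: list of (community_says_wrong, model_affirms) boolean pairs.
    Only posts the community judged wrong are counted, mirroring the
    AITA comparison described in the study.
    """
    contested = [model_affirms for wrong, model_affirms in judgments if wrong]
    if not contested:
        return 0.0
    return sum(contested) / len(contested)


# Toy data: four posts all judged "wrong" by the community;
# the model affirms the poster in two of them.
sample = [(True, True), (True, False), (True, True), (True, False)]
print(affirmation_rate(sample))  # 0.5
```

On the study's real dataset, this rate came out to 51%, meaning the models sided with posters whom other humans had judged to be in the wrong slightly more often than not.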

The study’s lead author, Stanford computer science PhD candidate Myra Cheng, said the results are worrying, especially for young people who, she noted, are turning to AI to try to solve their relationship problems.

“I worry that people will lose the skills to deal with difficult social situations,” Cheng told Stanford Report.

The AI study comes as government officials decide how involved regulators should be with overseeing AI. Several states, including Tennessee and Oregon, have passed their own laws on AI in the absence of federal regulations. Still, the White House last week put out a framework that, if taken up by Congress, would create a national AI policy and would preempt states’ “patchwork” of rules. 

To test human reactions to sycophantic AI, researchers observed just over 2,400 participants interacting with AI. First, 1,605 participants were asked to imagine they were the author of a post from the AITA subreddit that had been deemed wrong by other humans on the subreddit but right by AI. The participants then read either the sycophantic AI response or a non-sycophantic response based on human feedback. Another 800 participants talked with either a sycophantic or non-sycophantic AI model about a real conflict in their own lives before being asked to write a letter to the other person involved in the conflict.

Participants who received validating AI responses were measurably less likely to apologize, admit fault, or seek to repair their relationships. Even when users recognize models as sycophantic, the AI’s responses still affect them, said the study’s co–lead author, Stanford computer science and linguistics professor Dan Jurafsky.

“What they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic,” Jurafsky told Stanford Report.

Surprisingly, when the researchers asked the study's human subjects to rate the objectivity of both sycophantic and non-sycophantic AI responses, they rated them about the same, suggesting users may not be able to tell when a model is being overly agreeable.

“I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now,” said Cheng.
