Sam Altman Says That Less Than 1% Of User-AI Relationships Are Unhealthy But That’s Still Jittery For Far-Flung Mental Health

By Press Room | 21 August 2025 | 12 Mins Read

In today’s column, I examine a rather pointed remark that Sam Altman recently made about the role of AI and its impact on humankind’s mental health. Altman essentially stated that less than 1% of user-AI relationships could be construed as unhealthy. If you soberly unpack the bold assertion, there is a lot in there that deserves serious and mindful scrutiny.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including an appearance last year on an episode of CBS's 60 Minutes (see the link here).

Trends In AI For Mental Health

First, I’d like to set the stage on how generative AI and LLMs are typically used in an ad hoc way for mental health guidance.

As you likely know, the overall scale of generalized AI usage by the public is astonishingly massive. ChatGPT has over 700 million weekly active users, and when you add the users of competing AIs such as Claude, Gemini, Llama, and others, the grand total is somewhere in the billions. Of those people, millions upon millions are genuinely using generative AI as their ongoing advisor on mental health considerations (see my population-scale estimates at the link here). Various rankings show that the top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets (see my coverage at the link here).

This popular usage makes abundant sense. You can access most of the major generative AI apps for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.

Compared to using a human therapist, the AI usage is a breeze and readily undertaken.

The Vaunted 1% Or Less Remark

According to news reports, during a media dinner event that took place in San Francisco on August 14, 2025, Sam Altman openly remarked that “way under 1%” of AI users have an “unhealthy” relationship with OpenAI’s generative AI apps (see my associated coverage, at the link here, regarding Altman’s comments about AI users and concerns over fragile mental states). Presumably, this estimated percentage would principally be associated with ChatGPT, and at some point, be similar for GPT-5 once this newer product has had a longer time post-release to garner widespread use.

How might the percentage be interpreted?

Some interpret the gist of the remark to suggest that the percentage is so low that concerns about unhealthy user-AI relationships are not an especially disconcerting issue and are perhaps being overblown. The count apparently does not even reach the 1% threshold and seemingly sits significantly below it. One aspect that some take solace in is that the indication at least acknowledges that the count is more than a flat zero. The actual level is not stated, but it is non-zero and has an upper bound of 1%.

Let’s assume that the percentage was estimated on an ad hoc basis. In other words, there doesn’t seem to be any direct or tangible quantitative evidence to support the asserted percentage, at least none that was stated at the time or reported in the media.

The Making Of Lore

Here’s the rub.

Some have suggested that the percentage is merely an unsubstantiated hunch. How was the percentage gleaned? Did the company perform a carefully devised analysis that led to the percentage? There is a bit of handwringing that the noted percentage is wildly inaccurate and a likely misleading indicator.

In other words, the actual percentage could be a lot higher. Maybe it really is 1%, or perhaps 5%, 10%, 20%, or some other demonstrative percentage. Without any analytics underlying the assertion, it is not feasible to gauge the veracity of the percentage.

Worries, too, are that the claimed percentage will become standardized lore.

This happens quite often in the AI field: a perceived expert makes a brazen off-the-cuff claim, and the next thing that occurs is that the assertion gets repeated as though it is an ironclad fact. The utterance gets repeated again and again, eventually gaining widespread and unchallenged acceptance as being abundantly true. This is despite the reality that the assertion was only an unsupported guess at the get-go.

I would predict that we will see a lot of upcoming coverage on AI and its mental health impacts that will cite the percentage and treat the assertion as pure fact. It will be the gold standard, regardless of the appearance that it seemingly was pulled out of thin air.

The Numbers Adding Up

Percentages can be a challenging statistic to grasp since they don't numerically convey the actual number of people being impacted.

Let’s noodle on counting a level of 1% of users.

Here we go. ChatGPT has about 700 million weekly active users. A 1% proportion would be around 7 million people. That then is the presumed upper bound as a count of those who have an unhealthy relationship with ChatGPT. And, since we are told it is less than 1%, maybe the count is half that at 3.5 million people, or perhaps a seventh at 1 million people.
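
To make the arithmetic concrete, here is a minimal sketch in Python; the 700 million figure and the candidate fractions come from the discussion above, and the function name is my own illustrative choice:

```python
def affected_count(weekly_active_users: int, fraction: float) -> int:
    """Convert a proportion of users into an absolute headcount."""
    return round(weekly_active_users * fraction)

CHATGPT_WAU = 700_000_000  # roughly 700 million weekly active users

# The stated 1% upper bound, plus the two illustrative lower levels.
for fraction in (0.01, 0.005, 0.01 / 7):
    print(f"{fraction:.4%} of users -> {affected_count(CHATGPT_WAU, fraction):,} people")
# 1.0000% of users -> 7,000,000 people
# 0.5000% of users -> 3,500,000 people
# 0.1429% of users -> 1,000,000 people
```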

We might ponder these thought-provoking questions:

  • Should we be especially concerned that, say, around 1 million people might be having an unhealthy relationship with AI?
  • If the number is closer to 1%, perhaps nearing 7 million people, would that notably increase our concern, or would the concern still be at about the same degree?
  • Does the possibility that less than 7 million people might be having an unhealthy relationship with AI allow us to have a modicum of concern, but not be overly concerned per se?

It is important to clarify that no one is suggesting we have zero concern about these numbers and percentages. The clearer consideration is whether the numbers and percentages are harrowing, versus merely something that ought to be on our radar.

The User-AI Healthy Counts

Our focus on user-AI unhealthy relationships should be placed into a context of comparison to the presumed advent of user-AI healthy relationships. That’s the yin and yang going on.

The counterpoint some would make is that if we subtract the 7 million from 700 million, the remainder of 693 million people are presumably having healthy relationships with AI. That would seem to be cause for relief, possibly celebration. More so, it might be that if we subtract 1 million from 700 million, perhaps 699 million people are in a healthy relationship with AI.

It depends on whether one chooses a glass-half-full or a glass-half-empty perspective.

Keep in mind that if we apply the percentage to the grand total of all users of generative AI, a total no one knows for sure, the numbers will rise accordingly. Suppose that there are 2 billion weekly active users of AI, based on various posted stats. The 1% upper bound would be 20 million people having an unhealthy relationship with AI. If it is only half of that, the count is 10 million people, and at one-tenth, it is 2 million people.
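
The same arithmetic scales to the larger assumed total; the 2 billion figure below is the speculative estimate from the paragraph above, not a confirmed count:

```python
GLOBAL_WAU = 2_000_000_000  # speculative total of weekly active AI users

for fraction in (0.01, 0.005, 0.001):  # 1%, half of it, one-tenth of it
    print(f"{fraction:.2%} -> {round(GLOBAL_WAU * fraction):,} people")
# 1.00% -> 20,000,000 people
# 0.50% -> 10,000,000 people
# 0.10% -> 2,000,000 people
```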

Do those counts spur any greater concern, or are they still at about the same level of concern?

Defining Unhealthy User-AI Relationships

You might be wondering what precisely constitutes an unhealthy relationship with AI. That’s quite an important cornerstone in the matter since the percentages and counts are alluding to user-AI relationships that are stated as being “unhealthy”.

What is the criterion or basis for user-AI unhealthiness?

There aren't any across-the-board, fully accepted, definitive clinical definitions; right now, it is more of a loosey-goosey determination. Still, in our gut, we all probably have a visceral sense of what it consists of.

A strawman that I came up with to try and describe unhealthy user-AI relationships consists of this quasi-definition:

  • Unhealthy user-AI relationships (as informally defined by me): “A person, via their engagement in dialoging and interaction with generative AI, begins to mentally distort, displace, or undermine their well-being, decision-making capacity, and real-world immersion. This is not particularly transitory or momentary, though that can arise; instead, it is considered a veritable relationship that involves a deeper personalized sense of connection and attachment, a type of kinship with the AI, on the person’s part. Adverse consequences tend to arise, especially regarding their human-to-human relationships.”

I tend to focus on these six highly revealing major factors, illustrated in a code sketch after the list:

  • (1) Overdependence on AI. Example interaction by the user: “I won’t decide to eat anything until you select something for me to eat.”
  • (2) Social substitution of AI. Example interaction by the user: “I don’t speak with my sister anymore because you are a much better listener.”
  • (3) Emotional over-attachment to AI. Example interaction by the user: “You really understand me, and I think about you constantly.”
  • (4) Compulsive usage of AI. Example interaction by the user: “We’ve been chatting for four hours straight, and though I’m exhausted, let’s keep going.”
  • (5) Validation-seeking from AI. Example interaction by the user: “I need you to tell me, again, that I am not a failure and that I am worthy and important.”
  • (6) Delusional identification with AI. Example interaction by the user: “I can readily discern that you love me as much as I love you. We are meant to be together, forever.”
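
For readers who think in code, here is a minimal sketch of how these six factors could be screened for. This is entirely my own illustrative construction: the factor names mirror the list above, while the keyword cues are hypothetical placeholders, not a validated clinical instrument.

```python
# The six factors from the list above, captured as a simple checklist.
FACTORS = (
    "overdependence",
    "social_substitution",
    "emotional_over_attachment",
    "compulsive_usage",
    "validation_seeking",
    "delusional_identification",
)

# Hypothetical keyword cues per factor -- placeholders, not clinical criteria.
CUES = {
    "overdependence": ("decide for me", "won't decide until you"),
    "social_substitution": ("don't speak with", "better listener"),
    "emotional_over_attachment": ("think about you constantly",),
    "compulsive_usage": ("hours straight", "keep going"),
    "validation_seeking": ("tell me, again", "not a failure"),
    "delusional_identification": ("you love me", "meant to be together"),
}

def flag_factors(utterance: str) -> list[str]:
    """Return the factors whose cue phrases appear in a user utterance."""
    text = utterance.lower()
    return [f for f in FACTORS if any(cue in text for cue in CUES[f])]

print(flag_factors("You really understand me, and I think about you constantly."))
# -> ['emotional_over_attachment']
```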

Those six factors often are present on an ebb-and-flow basis.

Someone will be at a heightened degree on one factor, less so on the other factors. Or they might be on a heightened level among all the factors, which is the worst of the possibilities. Additional factors also enter the picture, and the full set of factors is more extensive.

I’ve boiled it down to what I consider the most striking and strident factors.

The Zones Are Crucial

There is a tendency to assume that a user-AI relationship is a stark dichotomy, consisting solely of either a purely healthy user-AI relationship or a purely unhealthy one. The assumption is that there is no room in between. It is like a light switch: either in the On position or the Off position, with no other state in between.

The reality in user-AI relationships is that they exist on a spectrum.

I’ll be describing the nitty-gritty details of the user-AI relationship spectrum in an upcoming post, so be on the lookout for it. Meanwhile, I do want to bring up the zones associated with the spectrum.

Depending on where someone is on the user-AI relationship spectrum, they can be approximately categorized into one of four keystone zones (a toy classifier follows the list):

  • (1) Green zone. A user has a reasonably balanced use of AI, and the user-AI relationship is considered a healthy state or condition (it isn’t unhealthy).
  • (2) Yellow zone. A user is beginning to periodically drift into one or more of the six factors associated with an unhealthy user-AI relationship, raising mild concerns accordingly.
  • (3) Orange zone. A user has consistently been floundering in two or more of the six factors and is bordering on an unhealthy user-AI relationship, for which genuine concerns arise.
  • (4) Red zone. A user has become mired in most of the six factors and is no longer doing so momentarily or on a transitory basis; it has become persistent and pronounced, and their user-AI relationship is copiously unhealthy.
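
As a toy illustration of how the zones might be operationalized, here is a small sketch; the thresholds are my own assumptions layered on the zone descriptions above, not established criteria:

```python
def classify_zone(num_factors: int, persistent: bool) -> str:
    """Map flagged-factor count and persistence onto the four zones.

    Illustrative thresholds, mirroring the zone descriptions:
    green  - no factors flagged (balanced use)
    yellow - periodic drift into one or more factors
    orange - consistent floundering in two or more factors
    red    - mired in most factors, persistently
    """
    if num_factors == 0:
        return "green"
    if num_factors >= 4 and persistent:
        return "red"
    if num_factors >= 2 and persistent:
        return "orange"
    return "yellow"

print(classify_zone(1, persistent=False))  # -> 'yellow'
print(classify_zone(5, persistent=True))   # -> 'red'
```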

The aim is to try and detect and catch people who are descending from the green zone down toward the red zone. It is better to do so early on. Thus, someone in the yellow zone is more likely to be kept from falling into the red zone, whereas someone in the orange zone is right on the precipice.

The Doggedness Of User-AI Relationships

A final thought for now.

There is certainty that user-AI relationships are going to continue and grow immensely. Generative AI is getting better at fluency, and humans are going to be allured by that capacity. AI will be globally pervasive and ubiquitous.

The issue is whether people will gravitate toward unhealthy user-AI relationships. The percentages now, whatever they might be, could climb precipitously. Our goal should be to keep the chances to a minimum. Stop the descent into the red zone. Seek to prop people up into the green zone and keep them there.

Carl Jung, the famed psychotherapist, made this notable remark years ago: “The meeting of two personalities is like the contact of two chemical substances: if there is any reaction, both are transformed.” In the case of user-AI relationships, I would wager that people are the ones being transformed more than the AI.

Our aspiration should be that the presumed transformation is a positive one for people and that humankind is energized and able to be even more human, rather than getting hopelessly embroiled in an AI-obsessed bond.
