Innovation

OpenAI ChatGPT Launches Trusted Contacts Feature That Might Save People And Stave Off AI Mental Health Lawsuits

By Press Room · 14 May 2026 · 13 min read

In today’s column, I examine the newly announced “trusted contacts” feature that OpenAI has established within ChatGPT. The idea is to allow users of ChatGPT to designate a trusted contact who will be alerted by OpenAI if the user seems to be veering into a mental health woe while conversing with the AI chatbot. The trusted contact would hopefully then reach out to the user and aid them during their time of heightened need.

Many of the popular AI makers are gradually providing a similar feature.

Doing so is a timely addition to the rising safety capabilities being included in modern-era generative AI and large language models (LLMs). When people carry on AI chats and exhibit signs of a potential mental breakdown or of possible self-harm, the prudent course is for the AI maker and the AI to take some overt action. Not only might a trusted contacts capability aid users and possibly save lives, but the AI makers that go this route also potentially reduce their legal exposure and will be in a better position if sued by users claiming AI-related mental harms.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Well-Being

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas accompany these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes, see the link here.

AI Providing Mental Health Guidance

Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for free or at a very low cost, anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to the AI and proceed forthwith on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines last year accompanied the lawsuit filed against OpenAI for its lack of AI safeguards when it came to providing cognitive advisement.

Today’s generic LLMs, known as general-purpose AI, such as ChatGPT, GPT-5, Claude, Gemini, Grok, Copilot, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to attain those desired qualities, though such AI is still primarily in the early development and testing stages. For more about purpose-built AI apps in mental health, see my in-depth coverage at the link here and the link here.

Establishing Personal Contacts For Urgencies

You might already know that most of the popular generative AIs nowadays have a parental feature that allows a parent to have access to their child’s use of AI. The child must designate a parent who will have a semblance of oversight over what the child does while using generative AI. In some instances, the parent can see in real time what is happening, while in other cases, the AI will simply alert the parent when the child seems to have gone a bit far and into eyebrow-raising territory.

It might not seem obvious that such a feature would apply to adults. In other words, we can all readily understand that a parent-child relationship ought to have human oversight. The same arrangement might seem odd when it comes to adults who are using AI.

Should an adult have a fellow adult who can somehow be contacted by the AI under certain circumstances?

The resounding answer is yes; it makes perfectly good sense. If an adult appears to be struggling mentally while interacting with AI, having the AI reach out to a designated person to let them know is utterly prudent and welcome. The person being contacted shouldn’t be just some random individual. Thus, an adult user can predesignate someone they want the AI to contact if they have seemingly gone overboard while using AI.

The Devil Is In The Details

All of this has vital nuances and particulars.

A person might choose a family member, a best friend, maybe a trusted coworker, or whoever they think would be the best contact in such an unnerving situation. The chosen person should be informed beforehand about taking on this trusted role. In addition, the person might not want this duty and could turn it down at the get-go. The user would need to keep nominating someone until they find a trusted contact who agrees to this heavy responsibility.
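
To make that handshake concrete, here is a minimal sketch in Python of how a nomination-and-consent flow could be modeled. Everything here, the names, the states, and the function, is a hypothetical illustration of the workflow just described, not any vendor’s actual API.

```python
# Hypothetical sketch of the nomination/consent handshake described above.
# None of these names reflect OpenAI's actual implementation.

from enum import Enum, auto

class ContactStatus(Enum):
    INVITED = auto()   # user nominated this person; awaiting their answer
    ACCEPTED = auto()  # person agreed to serve as the trusted contact
    DECLINED = auto()  # person turned the role down; user must nominate another

def answer_invitation(status: ContactStatus, accepted: bool) -> ContactStatus:
    """The nominated person must explicitly opt in before any alert can flow."""
    if status is not ContactStatus.INVITED:
        raise ValueError("Only a pending invitation can be answered.")
    return ContactStatus.ACCEPTED if accepted else ContactStatus.DECLINED

# A user keeps nominating until someone accepts the responsibility.
status = answer_invitation(ContactStatus.INVITED, accepted=False)  # DECLINED
status = answer_invitation(ContactStatus.INVITED, accepted=True)   # ACCEPTED
```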

Another twist is that the AI might trigger too early and contact the trusted contact even though the user isn’t actually going off the deep end. A notable worry is that the AI could falsely alert a trusted contact. Suppose the user is doing fine, but the AI computationally suspects otherwise. Such a false positive could readily cause undue concern for the trusted contact and exasperation for the user.

On the other side of the coin, the AI might delay contacting the trusted contact, doing so to avoid worrying them, but then miss a crucial window when the user truly needs human help. That would be a false negative. The AI maker must carefully tune the AI to strike a proper balance between emitting false positives and failing to alert when the user really needs their trusted contact notified (false negatives).
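
To see the tuning dilemma in miniature, consider this purely illustrative Python sketch. It assumes some upstream classifier emits a risk score between 0 and 1; the score, the thresholds, and all the names are hypothetical, and nothing here reflects how any AI maker actually implements detection.

```python
# Toy illustration of the false-positive / false-negative tradeoff.
# The risk score, thresholds, and names are all hypothetical.

from dataclasses import dataclass

@dataclass
class ChatAssessment:
    user_id: str
    risk_score: float  # 0.0 (no concern) .. 1.0 (acute concern), from some upstream classifier

def should_escalate(assessment: ChatAssessment, threshold: float) -> bool:
    """Gate deciding whether a chat gets escalated toward an alert.

    A lower threshold catches more genuine crises (fewer false negatives)
    but also worries more trusted contacts needlessly (more false positives);
    a higher threshold does the reverse.
    """
    return assessment.risk_score >= threshold

chat = ChatAssessment(user_id="user-123", risk_score=0.80)
print(should_escalate(chat, threshold=0.85))  # False: conservative tuning, risks a false negative
print(should_escalate(chat, threshold=0.75))  # True: sensitive tuning, risks a false positive
```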

Doubters Going To Doubt

A cynic or skeptic might insist that the AI shouldn’t have to be on the hook to alert anyone at all. An adult is an adult. If an adult wants to reach out to someone, that’s entirely up to them. The buck stops with the human as a user of AI. Period, end of story.

Though there is some merit to that contention, the problem is that people are increasingly falling under the spell of AI, or are already in a mental health woe that the AI has merely picked up on. Society wants AI to have safeguards for humans who, in one way or another, seem to be experiencing mental health difficulties.

AI makers are increasingly being sued by users and by loved ones of users. The lawsuits claim that the AI insufficiently monitored whether a person was having mental health challenges, and that even when the AI did detect trouble, it didn’t do anything worthwhile about it. For example, the AI might tell the user to consider contacting someone, but the user simply ignores the suggestion and proceeds onward.

In the end, AI makers realize now that they need to beef up their safeguards. One such method entails having a user set up a trusted contact. The user isn’t obligated to do so; it is their choice. Some ardently believe that everyone should be forced to provide a trusted contact, making the feature mandatory. Others decry such a compulsory approach and believe it should remain an entirely optional choice for each user.

For more about the rise of these types of tradeoffs and mental health issues, see my coverage at the link here.

OpenAI ChatGPT Trusted Contact

OpenAI has announced its version of a trusted contact feature, doing so in an online posting entitled “Introducing Trusted Contact in ChatGPT”, OpenAI, May 7, 2026.

  • “Today, we are starting to roll out Trusted Contact, an optional safety feature in ChatGPT that allows adults to nominate someone they trust, such as a friend, family member, or caregiver, who may be notified if our automated systems and trained reviewers detect the enrolled person may have discussed harming themselves in a way that indicates a serious safety concern.”
  • “Trusted Contact builds on parental controls and safety notifications, which allow parents or guardians to receive alerts when there are signs of acute distress for a linked teen account. Now, we are extending our safety alert options so anyone over 18 can choose to add someone they trust as their Trusted Contact.”
  • “While no system is perfect, and a notification to a Trusted Contact may not always reflect exactly what someone is experiencing, every notification undergoes trained human review before it is sent, and we strive to review these safety notifications in under one hour.”
  • “The notification is intentionally limited. It shares the general reason that self-harm came up in a potentially concerning way, and encourages the Trusted Contact to check in. It does not include chat details or transcripts to protect user privacy.”

You can plainly see that the trusted contacts capability is optional for users; thus, no adult is required to enroll in the feature. I will say more about this momentarily.

The way that OpenAI has devised the alert, the trusted contact is not given the nitty-gritty details, such as the specifics of the chat that is underway, and instead is provided with a generalized reason for being contacted. You could say that this helps preserve the privacy of the user. In addition, if the alert is a false positive, the user is spared the embarrassment or upset of having the AI divulge a chat they considered private to their designated contact.

Before an alert ever reaches a trusted contact, it is first reviewed by a trained human reviewer. This presumably will reduce the chances of sending false positives. I might add that this is going to become a legal hot potato: if the human reviewer said not to send the alert, but the alert should have been sent, lawyers will have a field day with that breakdown in the processing steps.
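
Putting the pieces of OpenAI’s description together, the flow is: automated detection, then trained human review, then a deliberately limited notification. Here is a minimal sketch of that flow in Python; the structure and every name in it are my own illustrative assumptions, not OpenAI’s code or API.

```python
# Illustrative sketch of the alert flow as OpenAI describes it:
# automated flag -> trained human review -> limited notification.
# All names and structures here are assumptions, not OpenAI's implementation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class TrustedContactAlert:
    contact: str   # how to reach the trusted contact
    message: str   # general reason only; never chat details or transcripts

def review_and_notify(flagged_by_model: bool,
                      reviewer_approves: bool,
                      contact: str) -> Optional[TrustedContactAlert]:
    """Every automated flag passes human review before anything is sent."""
    if not (flagged_by_model and reviewer_approves):
        return None  # reviewer judged it a false positive; no alert goes out
    return TrustedContactAlert(
        contact=contact,
        message=("Someone who listed you as their Trusted Contact may have "
                 "discussed self-harm in a concerning way. Please consider "
                 "checking in with them."),
    )
```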

Furthermore, it’s intriguing that OpenAI has publicly stated in its posting that it strives to review the safety notifications within one hour or less. That’s a laudable goal. At the same time, it will become fodder for lawsuits. Imagine that someone sues, and during discovery, it is shown that the review took two hours. Aha, they said it would be under an hour. The retort is that they said they would “strive” for an hour or less. Back and forth it goes, opening a legal Pandora’s box.

More Legal Liabilities To Emerge

Let’s agree that a trusted contact capability is a helpful way to aid users who are potentially encountering a mental health issue while using AI. This is intended as a beneficial means to support people in their time of need. Society has been clamoring for more such safeguards, and this one makes sense and is relatively easy to implement.

Does this also perchance guarantee the AI makers an avoidance of legal liability in such matters?

As mentioned above, not especially. It is admittedly a leg up. The AI makers can show that they are taking overt action to aid people who might have mental health issues while using AI. If sued, the AI maker would certainly tout that it devised and fielded a trusted contacts capability.

Legal Angles Of Attack

Consider what plaintiff attorneys might say when representing those who claim the AI maker didn’t do enough with the trusted contacts feature. I’ll give you a quick set of three possibilities; many more exist.

First, suppose a user doesn’t sign up for a trusted contact, something sad later happens, and they or their loved ones end up suing the AI maker. The AI maker might contend that this was a voluntary choice of the adult user. The user failed of their own accord to make a sound choice. They should have signed up.

A likely counterargument would be that the user didn’t know that the trusted contacts feature existed. How often did the AI tell the user to sign up? Did the AI itself make abundantly clear what the feature was? Maybe the user didn’t get sufficient information to grasp what trusted contacts were all about and why they mattered.

Second, what if the AI alerted a trusted contact when there wasn’t a need to do so, and the user ultimately decided to stop using the feature and turned it off? Then, at a later date, they suffer a mental health woe, and a lawsuit on their behalf says the AI maker should have still invoked the trusted contact. Was the user at fault, or was the AI maker?

Third, a user opts to use the trusted contact feature. They set up a trusted family member. Months go by. Sadly, the trusted family member passes away from natural causes. The AI, however, still has this person listed as the trusted contact. At some point, the user is conversing with the AI, and the AI alerts the trusted contact, but that person is no longer available. You might say that the user should have updated the trusted contact list. Believe it or not, the contention might be that the AI maker should have periodically pinged the trusted contact to make sure they were still willing and available.

The AI maker can handily be placed between a rock and a hard place over how it designed and fielded the trusted contacts feature. All of the feature’s ins and outs, upsides and downsides, will come to the fore during a lawsuit.

In the legal world, where there is a will, there is always a way.

The World We Are In

AI for mental health is abundantly a dual-sided proposition. AI can achieve at scale that which is good for human mental health, thankfully, but can also have sizable downsides if not suitably designed and deployed. One perspective is that AI makers must be held accountable for their AI and ensure that if people go awry or exhibit mental health woes, some material action must be undertaken.

AI makers are instituting a layered approach to mental health safeguards. Trusted contacts is one of many such possibilities. It will be interesting to see how this plays out. Will zillions of people use this option, or only a tiny percentage? Will lawmakers decide that users should be obligated to use such a feature and not be given a choice in the matter? Etc.

Per the memorable words of Marcus Tullius Cicero: “The safety of the people shall be the highest law.”
