End-Of-Life: The One Decision AI Cannot Predict

By Press Room · 10 February 2024 · 5 min read

We often talk about personalized medicine; we hardly ever talk about personalized death.

End-of-life decisions are among the most intricate and feared decisions, for both patients and healthcare practitioners. Although multiple sources indicate that people would rather die at home, in developed countries they often die in hospitals, and frequently in acute care settings. A variety of reasons have been suggested to account for this gap, among them the under-utilization of hospice facilities, partly due to delayed referrals. Healthcare professionals do not always initiate conversations about end-of-life care, perhaps out of concern about causing distress, interfering with patients’ autonomy, or because they lack the education and skills to discuss these matters.

We associate multiple fears with dying. In my practice as a physician who has worked in palliative care for years, I have encountered three main fears: fear of pain, fear of separation and fear of the unknown. Yet living wills, or advance directives, which could be seen as a way of taking some control of the process, are generally uncommon or insufficiently detailed, leaving family members with an incredibly difficult choice.

Beyond the considerable toll these decisions take on them, research has demonstrated that next-of-kin or surrogate decision makers can be inaccurate in predicting the dying patient’s preferences, possibly because the decisions affect them personally and engage their own belief systems and their roles as children or parents (the importance of the latter demonstrated in a study from Ann Arbor).

Could we spare family members or treating physicians these decisions by outsourcing them to computerized systems? And if we can, should we?

AI For End-Of-Life Decisions

Discussions about a “patient preference predictor” are not new; however, they have recently been gaining traction in the medical community (see, for example, two excellent 2023 research papers from Switzerland and Germany), as rapidly evolving AI capabilities shift the debate from the hypothetical bioethical sphere into the concrete one. Nonetheless, this work is still under development, and end-of-life AI algorithms have not been adopted clinically.

Last year, researchers from Munich and Cambridge published a proof-of-concept study showcasing a machine-learning model that advises on a range of medical moral dilemmas: the Medical ETHics ADvisor, or METHAD. The authors stated that they chose a specific moral construct, or set of principles, on which to train the algorithm. This is important to understand, and while it is admirable and necessary that they stated it clearly in their paper, it does not solve a basic problem with end-of-life “decision support systems”: which set of values should such algorithms be based on?

When training an algorithm, data scientists usually need a “ground truth” to base it on – often an objective, unequivocal metric. Consider an algorithm that diagnoses skin cancer from an image of a lesion: the “correct” answer is either benign or malignant – in other words, a defined variable we can train the algorithm on. With end-of-life decisions, however, such as do-not-attempt-resuscitation (as pointedly exemplified in the New England Journal of Medicine), what is the objective truth against which we train or measure the performance of the algorithm?
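
To make the contrast concrete, here is a minimal sketch of supervised training when a ground truth exists: a toy “benign vs malignant” classifier fitted with scikit-learn. This is my illustration, not code from the studies mentioned above; the features and labels are synthetic and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "lesion" features, e.g. diameter and a border-irregularity score.
X = rng.normal(size=(500, 2))
# A ground truth exists: each lesion is definitively benign (0) or malignant (1).
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Because the labels are unambiguous, accuracy is a meaningful benchmark.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
# For an end-of-life "preference predictor" there is no equally objective
# label column to fit or score against.
```

The whole exercise hinges on the label column; for end-of-life preferences, no such column exists.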

A possible answer would be to exclude moral judgement of any kind and simply attempt to predict the patient’s own wishes – a personalized algorithm. Easier said than done. Predictive algorithms need data to base their predictions on, and in medicine, AI models are often trained on large, comprehensive datasets with relevant fields of information. The problem is that we don’t know what is relevant. Presumably, apart from one’s medical record, paramedical data – such as demographics, socioeconomic status, religious affiliation or spiritual practice – could all be essential to predicting a patient’s end-of-life preferences. However, such detailed datasets are virtually non-existent. Nonetheless, recent developments in large language models (such as ChatGPT) are allowing us to examine data we were previously unable to process.
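
As an illustration of how much context such a model would need, here is a hypothetical record schema. None of these fields come from an existing dataset; the names and the label are assumptions made for the sake of the example.

```python
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientRecord:
    # Conventional medical-record fields
    age: int
    primary_diagnosis: str
    comorbidities: list[str]
    prior_icu_admissions: int
    # Paramedical fields that plausibly shape end-of-life preferences
    lives_alone: bool
    socioeconomic_decile: Optional[int]
    religious_affiliation: Optional[str]
    spiritual_practice: Optional[str]
    has_dependent_children: Optional[bool]
    # The label we would need but rarely have: the patient's own
    # documented end-of-life preference.
    documented_preference: Optional[str]  # e.g. "DNAR", "full code", or None
```

In practice, the paramedical fields are rarely recorded and the label is usually missing, which is precisely the gap described above.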

If retrospective data is not good enough, could we instead train end-of-life algorithms on hypothetical data? Imagine questioning thousands of people about imaginary scenarios: could we trust that their answers represent their true wishes? It can reasonably be argued that none of us can predict how we would react in a real-life situation, rendering this solution unreliable.

Other challenges exist as well. If we do decide to trust an end-of-life algorithm, what is the minimal threshold of accuracy we would accept? Whatever the benchmark, we will have to present it openly to patients and physicians. It is difficult to imagine facing a family at such a trying moment and saying, “Your loved one is in critical condition, and a decision has to be made. An algorithm predicts that your mother/son/wife would have chosen to…, but bear in mind, the algorithm is only right 87% of the time.” Does this really help, or does it create more difficulty, especially if the recommendation goes against the family’s wishes, or is delivered to people who are not tech savvy and will struggle to grasp the concept of algorithmic bias or inaccuracy?

This is even more pronounced when we consider the “black box”, or non-explainable, character of many machine learning algorithms, which leaves us unable to question the model and what it bases its recommendation on. Explainability, though discussed in the wider context of AI, is particularly relevant to ethical questions, where reasoning can help us come to terms with a decision.
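
A small sketch of that explainability gap, again illustrative rather than drawn from any cited system: a linear model exposes the weights behind its prediction and can be interrogated, whereas an ensemble “black box” trained on the same synthetic data offers only an output probability. The feature names are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
feature_names = ["age", "icu_admissions", "lives_alone"]  # illustrative only
X = rng.normal(size=(300, 3))
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=300) > 0).astype(int)

# The linear model's coefficients give a direct, human-questionable account
# of what drives each recommendation.
linear = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, linear.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")

# The ensemble returns only a probability; its reasoning is spread across
# many trees and cannot be read off directly.
black_box = GradientBoostingClassifier().fit(X, y)
print("black-box prediction:", black_box.predict_proba(X[:1])[0])
```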

Few of us are ever ready to make an end-of-life decision, though death is the one event each of us can be certain of. The more we own up to our decisions now, the less dependent we will be on AI to fill in the gap. Claiming our personal choice means we will never need a personalized algorithm.
