We often talk about personalized medicine; we hardly ever talk about personalized death.

End-of-life decisions are among the most intricate and dreaded choices faced by both patients and healthcare practitioners. Although multiple sources indicate that people would rather die at home, in developed countries they often die in hospitals, and many times in acute care settings. A variety of reasons have been suggested to account for this gap, among them the under-utilization of hospice facilities, partially due to delayed referrals. Healthcare professionals do not always initiate conversations about the end of life, perhaps concerned about causing distress, infringing on patients’ autonomy, or lacking the education and skills to discuss these matters.

We associate multiple fears with dying. In my years of practice as a palliative care physician, I have encountered three main fears: fear of pain, fear of separation, and fear of the unknown. Yet living wills, or advance directives, which could be seen as a way of taking some control of the process, are generally uncommon or insufficiently detailed, leaving family members with an incredibly difficult choice.

Apart from the considerable toll these decisions take on them, research has demonstrated that next-of-kin or surrogate decision makers can be inaccurate in predicting the dying patient’s preferences, possibly because these decisions affect them personally and engage their own belief systems and their roles as children or parents (the importance of the latter demonstrated in a study from Ann Arbor).

Can we possibly spare family members or treating physicians these decisions by outsourcing them to computerized systems? And if we can, should we?

AI For End-Of-Life Decisions

Discussions about a “patient preference predictor” are not new; however, they have recently been gaining traction in the medical community (see, for example, these two excellent 2023 research papers from Switzerland and Germany), as rapidly evolving AI capabilities shift the debate from the hypothetical bioethical sphere into the concrete one. Nonetheless, the field is still under development, and end-of-life AI algorithms have not been adopted clinically.

Last year, researchers from Munich and Cambridge published a proof-of-concept study showcasing a machine-learning model that advises on a range of medical moral dilemmas: the Medical ETHics ADvisor, or METHAD. The authors stated that they chose a specific moral construct, or set of principles, on which they trained the algorithm. This is important to understand, and though it was admirable and necessary of them to state it clearly in their paper, it does not solve a basic problem with end-of-life “decision support systems”: which set of values should such algorithms be based on?

When training an algorithm, data scientists usually need a “ground truth” to base it on, often an objective, unequivocal metric. Consider an algorithm that diagnoses skin cancer from an image of a lesion; the “correct” answer is either benign or malignant – in other words, a defined variable we can train the algorithm on. However, with end-of-life decisions, such as do-not-attempt-resuscitation (as pointedly exemplified in the New England Journal of Medicine), what is the objective truth against which we train the algorithm or measure its performance?
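To make that contrast concrete, here is a minimal, purely illustrative sketch (in Python, using synthetic data and scikit-learn) of what training against a ground truth looks like. The benign/malignant labels stand in for biopsy-confirmed diagnoses; nothing here reflects a real clinical model.

```python
# Illustrative sketch only: supervised learning with an unambiguous ground truth.
# Labels (0 = benign, 1 = malignant) stand in for biopsy-confirmed diagnoses;
# the features and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic "lesion" features (think size, asymmetry, border irregularity).
X = rng.normal(size=(1000, 3))
# Ground-truth labels derived from a known rule -- the kind of objective,
# binary answer that end-of-life decisions simply do not have.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print(f"Accuracy against ground truth: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

The entire exercise hinges on that label column existing; for a do-not-attempt-resuscitation decision, there is no equivalent column to point to.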

A possible answer would be to exclude moral judgement of any kind and simply attempt to predict the patient’s own wishes: a personalized algorithm. Easier said than done. Predictive algorithms need data to base their predictions on, and in medicine, AI models are often trained on large, comprehensive datasets with relevant fields of information. The problem is that we don’t know what is relevant. Presumably, apart from one’s medical record, paramedical data such as demographics, socioeconomic status, religious affiliation or spiritual practice could all be essential to predicting a patient’s end-of-life preferences. However, such detailed datasets are virtually non-existent. Nonetheless, recent developments in large language models (such as ChatGPT) are allowing us to examine data we were previously unable to process.
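As a thought experiment only, the sketch below shows the kind of record a personalized preference predictor would need. Every field name is an assumption made for illustration, not a real schema, and the crucial training label, a documented end-of-life preference, is precisely what is almost never available.

```python
# Hypothetical sketch: a record a "patient preference predictor" might need.
# No dataset like this exists at scale; all field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PatientRecord:
    age: int
    primary_diagnosis: str
    comorbidities: List[str] = field(default_factory=list)
    socioeconomic_status: Optional[str] = None    # rarely captured in clinical records
    religious_affiliation: Optional[str] = None   # rarely captured, possibly essential
    spiritual_practice: Optional[str] = None
    free_text_notes: str = ""                     # unstructured text an LLM might parse
    # The training label we would need -- and almost never have:
    end_of_life_preference: Optional[str] = None  # e.g., "DNAR", "full code"

example = PatientRecord(
    age=82,
    primary_diagnosis="metastatic lung cancer",
    comorbidities=["heart failure"],
    free_text_notes="Expressed a wish to 'be at home with family'.",
    end_of_life_preference=None,  # the missing ground truth
)
```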

If retrospective data is not good enough, could we train end-of-life algorithms on hypothetical scenarios? Imagine questioning thousands of people about imaginary situations. Could we trust that their answers represent their true wishes? It can reasonably be argued that none of us can predict how we would react in a real-life situation, rendering this solution unreliable.

Other challenges exist as well. If we do decide to trust an end-of-life algorithm, what would be the minimal threshold of accuracy we would accept? Whatever the benchmark, we will have to present it openly to patients and physicians. It is difficult to imagine facing a family at such a trying moment and saying, “Your loved one is in critical condition, and a decision has to be made. An algorithm predicts that your mother/son/wife would have chosen to…, but bear in mind, the algorithm is only right 87% of the time.” Does this really help, or does it create more difficulty, especially if the recommendation goes against the family’s wishes, or is delivered to people who are not tech savvy and will struggle to grasp the concepts of algorithmic bias and inaccuracy?

This is even more pronounced when we consider the “black box,” or non-explainable, character of many machine-learning algorithms, which leaves us unable to question the model about what it bases its recommendation on. Explainability, though discussed in the wider context of AI, is particularly relevant to ethical questions, where understanding the reasoning can help us make peace with the outcome.
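For readers who want the contrast made tangible, here is a simple, interpretable model on synthetic data with invented feature names: its coefficients are an account of its recommendation that we can question or contest, which is exactly what many black-box models do not offer.

```python
# Illustrative sketch of explainability: a linear model exposes the weights
# behind its output. Feature names and data are invented for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age", "frailty_score", "prior_icu_admissions"]

X = rng.normal(size=(500, 3))
y = ((X @ np.array([1.0, 2.0, 0.5]) + rng.normal(scale=0.5, size=500)) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The coefficients are a reasoning we can interrogate; a deep "black box"
# model offers no such direct account of what drives its recommendation.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: weight {weight:+.2f}")
```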

Few of us are ever ready to make an end-of-life decision, even though death is the one certain and predictable event in every life. The more we own our decisions now, the less dependent we will be on AI to fill in the gap. Claiming our personal choice means we will never need a personalized algorithm.
