Hospitals, clinical practices, and healthcare systems across the U.S. are struggling. Their workforces are strained and shorthanded. Their operating costs are rising. And demand for their services often exceeds capacity, limiting care access.
Enter artificial intelligence. In the nearly two years since ChatGPT’s launch thrust AI into the spotlight, investors, tech companies, and healthcare organizations have invested massively in AI, issued countless press releases, and launched innumerable pilots, at times painting breathless visions of AI saving healthcare.
However, AI’s net impact on healthcare has been limited so far. Are we expecting too much too soon?
Expectations Versus Reality
Across the broader (non-healthcare) economy, a growing chorus is making AI's bear case as the gap between expectations and reality widens. While many companies now use AI to generate emails, images, and marketing materials, no "killer application" has yet emerged to justify AI's high costs.
Compared to other industries, AI may have an even tougher time reshaping healthcare, where the stakes are high, organizations are complex, and regulations are uncertain.
For one, there are technical challenges. Predictive algorithms often fail to generalize across settings. For example, hospitals that implemented a sepsis algorithm "out of the box" (without training it on local data) experienced many false alarms and undetected sepsis cases. Furthermore, generative AI remains too unreliable to apply to high-value tasks, such as performing triage, making diagnoses, and recommending treatments. The challenge is that "a generative AI system like GPT-4 is both smarter than anyone you've met and dumber than anyone you've met," explained Microsoft Research President Peter Lee. "We both assume too much and too little about its potential in health care."
Additionally, many physicians, nurses, and healthcare consumers are skeptical of AI, concerned it will jeopardize privacy, exacerbate biases, and tarnish doctor-patient relationships. Their experiences with electronic health records, which have failed to meet expectations and contributed to burnout, have made the claim that AI will necessarily improve healthcare ring hollow.
Finally, implementing AI in the real world is complex: it involves many stakeholders, requires significant resources, and is fraught with potential pitfalls. Yet, unlike prior digital initiatives, such as adopting electronic health records (spurred by over $34 billion in meaningful-use payments) or temporarily pivoting to virtual care (necessitated by the COVID-19 pandemic), provider organizations have no major incentives to adopt AI products, which raise their costs and force them to change their workflows, usually without directly increasing reimbursement.
Navigating A Prolonged Transition
None of this is to say that AI is or will be useless in healthcare. Some organizations already use AI solutions for meaningful benefits, such as preventing rehospitalizations and easing doctors’ documentation burden. As AI technology advances, it is poised to improve various aspects of clinical care, operations, and research.
Still, we must dial back our expectations. History tells us that it will take many years—not months—to build useful AI products, integrate them into workflows, and eventually unlock new, better ways of providing care.
During this transitional period, healthcare provider organizations should take the following actions to maximize AI’s current and future net benefits.
1. Safely Experiment and Evaluate
They must follow the foundational principles of evidence-based medicine, recognizing that, however exciting the technology is, healthcare is first about people, not products. Organizations like the Coalition for Healthcare AI are developing standards for implementing healthcare AI models and establishing assurance labs to evaluate them. Healthcare providers should pilot solutions for meaningful problems and establish the governance and evaluation necessary to ensure they use effective AI tools safely and fairly.
2. Improve Systems of Care
Healthcare is a complex adaptive system in which multiple, dynamically interacting components determine performance. Organizations implementing AI should follow a holistic systems approach, looking beyond technology to include people, systems, and design.
For one, before rushing to automate any healthcare task, they must first ask whether the task is worth doing in the first place. As Peter Drucker taught, “There is nothing quite so useless as doing with great efficiency something that should not be done at all.”
Second, because constraints—points where demand exceeds capacity—determine the pace at which an entire process can work, they must identify and relieve downstream bottlenecks before applying AI to make various processes more efficient. For example, automating patient scheduling will have little impact if doctors’ schedules are already filled. And early identification of patients with sepsis will be useless if nurses and clinicians cannot act on the information.
3. Accept Incremental Gains
Organizations must resist audacious claims and stay grounded in reality. AI will not magically fix all that ails healthcare. And because large language models cannot reason or truly understand, many healthcare problems may require new hybrid approaches that blend machine learning with more traditional symbolic AI.
Still, they can harness today's AI for modest benefits (e.g., offloading some drudgery and tailoring patient education and engagement content) while setting themselves up for future success. Importantly, they must not overlook non-AI opportunities to do better. And, most of all, they should reflect on who they are, what they do, and how they can do it better, with or without AI.