AI won’t save healthcare alone

Posted by Artur O.
AI is a textbook example of the hype cycle and effective marketing, according to Jessica Morley

But it might help us rethink it.

Jessica Morley, Associate Research Scientist at Yale’s Digital Ethics Center, doesn’t buy the hype around AI in healthcare. She explains why a digital intervention alone cannot repair a physical healthcare system, why expecting AI to “fix” medicine is deeply misguided, and what it truly takes to build responsible, human-centered technologies.

Many hope that AI will heal or rescue the broken healthcare system. Are they wrong or right?

I think it’s somewhere in between. AI isn’t going to solve every problem in global healthcare systems, especially not with how we’re using it right now. There’s a tendency to see AI as a way to do everything we already do, just faster or cheaper. Predict quicker, diagnose faster, etc. But that breaks down in the real world because human systems are messy, and algorithms prefer order. They don’t naturally coexist.

I do think AI can help, especially with reducing administrative burdens and advancing research. It could help us understand health better and make better use of underutilized health data. But again, it’s not going to be a cure-all, because you can’t solve physical, social problems with a digital tool alone.

For example, if AI tells someone they’re at risk for type 2 diabetes and recommends more fruits and veggies or more steps per day, but they live in an environment where that’s not possible, then the AI output is effectively useless. We need to fix the physical and social constraints first, or AI recommendations just won’t be actionable.

AI is neither new nor magical. But why do so many still believe it’s a breakthrough technology? Is healthcare so broken that we’re desperate for a fix?

AI is a textbook example of the hype cycle and effective marketing. Around 2015 or 2016, when companies like DeepMind became involved, it was still a niche area. But over time, the media, policymakers, and funders started to buy into the excitement. That created a feedback loop: everyone wanted to do something with AI, so suddenly everything using data was labeled AI.

And yes, there’s a deep desire for a silver bullet, a golden solution. If you look at medical history, this has happened before. Antibiotics, for example, created the same kind of excitement. The difference now is that the speed of global information exchange makes the hype more visible. But people tend to have short-term memories – we forget these cycles repeat.

You claim that individual metrics might predict only up to 40% of a person’s health, leading to a kind of politics of avoidance. So, how can AI be reoriented to support population health and address social determinants, rather than just fine-tuning care for the already well-served?

That 40% figure varies, but the point is that so much of health is influenced by factors outside a person’s control, including their environment, food, water, and social networks, as well as misinformation online. Since the 1970s, we’ve known that individual behaviors have limited influence. Even genetics, once seen as the answer to everything, hasn’t been as predictive as hoped.

To reorient AI, we need to lean into the science. AI can help us understand where and why certain health outcomes occur, not just who is at risk, but also what social or environmental conditions are contributing to that risk. Why are diabetes rates rising in one specific community? Instead of pushing ads for Ozempic, we could use AI to inform urban planning, improve access to nutritious food, or provide healthy cooking lessons in schools.

That’s where AI could be transformative – not by blaming individuals but by helping us act on population-level insights.

If AI treats a person’s data, not the person, as you argue, how do we prevent care from becoming detached from the human experience? And hasn’t that already happened, with doctors reading electronic records instead of talking to patients?

To some extent, it’s already true that doctors rely heavily on data. But a key difference is that human doctors can – and often do – still engage directly with patients. They might notice subtle signs, such as a patient’s skin tone, their energy, or the way a child behaves. Parents might say, “They’re just not themselves,” and a doctor can interpret that.

Algorithms don’t understand “not quite right.” They don’t access the broader human context, whether a patient is depressed because they lost their job, or stressed from life events, because that data isn’t usually recorded or connected.

So, how do we prevent further detachment? First, we need to think more carefully about the data we collect and the models we build. Not all clinical problems are best solved by old-fashioned rule-based AI.

Second, we must avoid outsourcing everything to algorithms. Especially for people most at risk of exclusion, there must always be human alternatives. I’ve heard of a GP practice using an AI appointment triage system. Patients were leaving in droves because they didn’t feel heard. That’s not the direction we want. We should design systems where humans and AI work in parallel, not in competition.

Digital transformation is shifting healthcare from evidence-based to algorithm-based care. How can we ensure accountability in AI-driven care when no single actor is responsible for mistakes?

Let me break this down. Evidence-based medicine includes two components: high-quality evidence from trials and reviews, and the clinician’s contextual application of that evidence to individual patients. The second part is often forgotten, and it’s completely missing in AI systems.

So when I say we’re shifting to algorithm-based medicine, I mean we’ve lost both the rigor of evidence and the human judgment. To address accountability, three things are needed:

  1. Auditability. We need full transparency into where AI is used, what data it’s trained on, and how decisions are made. Currently, that’s lacking; even identifying where AI is in use is hard. (A minimal sketch of what such an audit record could capture follows this list.)
  2. Liability. Our legal frameworks need updating. Doctors are still held responsible, but if they don’t fully understand the algorithm or fall victim to automation bias, it’s unclear where the blame lies. We need to ensure clinicians can challenge algorithms and are supported in doing so.
  3. Regulation. Software-as-a-medical-device regulation isn’t ready for these tools, especially not generative AI. We need to agree on what constitutes “gold standard evidence” for AI and how we measure safety and efficacy when tools are dynamic and non-deterministic.
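
To make the auditability point concrete, here is a minimal sketch of what a hypothetical audit record for an AI-assisted clinical decision might capture. The `AuditRecord` dataclass, its field names, and the example values are illustrative assumptions, not an existing standard or a scheme Morley proposes.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AuditRecord:
    """Illustrative record of one AI-assisted decision (hypothetical schema)."""
    model_name: str          # which tool produced the output
    model_version: str       # exact version, so behaviour can be reproduced
    training_data_ref: str   # pointer to documentation of the training data
    input_summary: dict      # what the model was given (de-identified)
    output: str              # what the model recommended
    clinician_action: str    # accepted / overridden / escalated
    rationale: str           # why the clinician agreed or disagreed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: logging a triage suggestion that the clinician overrode
record = AuditRecord(
    model_name="triage-assistant",         # hypothetical tool name
    model_version="2.3.1",
    training_data_ref="datasheet-2024-01",
    input_summary={"age_band": "60-69", "presenting_complaint": "chest pain"},
    output="routine appointment",
    clinician_action="overridden",
    rationale="Red-flag symptoms reported by phone; urgent slot booked.",
)
print(json.dumps(asdict(record), indent=2))
```

A record like this would let an investigator trace which model version was involved in a mistake, what it saw, and whether a human intervened, which is the precondition for assigning liability sensibly.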

AI runs on data. Meanwhile, “those most in need of healthcare often have the least accurate data.” How can we responsibly use AI in underrepresented populations?

We need to rethink the data infrastructure. Instead of always demanding more data, we should define a minimum viable dataset: what is the baseline needed to train models responsibly?
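
As an illustration only, a “minimum viable dataset” could be expressed as a small set of required fields plus a completeness threshold that must be met before a model is trained. The field names and the 90% threshold in this sketch are assumptions made for the example, not a proposed standard.

```python
import pandas as pd

# Hypothetical baseline: fields a training dataset must contain,
# and the minimum share of non-missing values per field.
REQUIRED_FIELDS = ["age", "sex", "ethnicity", "diagnosis_code", "outcome"]
MIN_COMPLETENESS = 0.90  # assumed threshold, for illustration

def meets_minimum_viable_dataset(df: pd.DataFrame) -> bool:
    """Return True only if every required field exists and is sufficiently complete."""
    for col in REQUIRED_FIELDS:
        if col not in df.columns:
            print(f"Missing field: {col}")
            return False
        completeness = df[col].notna().mean()
        if completeness < MIN_COMPLETENESS:
            print(f"{col}: only {completeness:.0%} complete")
            return False
    return True
```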

Then we need to understand why certain data is missing. For instance, in the UK, ethnicity is often not recorded, not because the field doesn’t exist, but due to distrust. That’s a social issue, not a technical one.

We also need to build models that don’t require excessive data from commercial sources, such as wearables or credit cards. These tools risk becoming accurate only for the wealthy and widening the gap in care.

Finally, we must accept that bias will always exist because the world is biased. So we need safeguards. That could mean not designing services solely based on AI predictions, but checking population data, engaging communities, and planning for those left out.
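
One concrete safeguard along these lines is to check a model’s performance separately for each population group before acting on its predictions, rather than trusting a single headline accuracy figure. The sketch below is a generic illustration of that check, not a method from the interview; the group labels, toy data, and 80% floor are assumptions.

```python
import pandas as pd

def accuracy_by_group(results: pd.DataFrame, group_col: str = "ethnicity") -> pd.Series:
    """Per-group accuracy: one overall figure can hide poor performance
    for exactly the groups with the least (or least accurate) data."""
    return (
        results.assign(correct=results["prediction"] == results["actual"])
               .groupby(group_col)["correct"]
               .mean()
    )

# Toy example
results = pd.DataFrame({
    "ethnicity":  ["A", "A", "B", "B", "B", "C"],
    "prediction": [1, 0, 1, 1, 0, 1],
    "actual":     [1, 0, 1, 1, 1, 0],
})
per_group = accuracy_by_group(results)
print(per_group)                       # e.g. A: 1.00, B: 0.67, C: 0.00
flagged = per_group[per_group < 0.80]  # assumed 80% floor, for illustration
print("Groups needing review:", list(flagged.index))
```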

Everything you’ve said about ethical AI suggests it must be regulated strictly. But should we regulate AI now, or wait and see how it evolves?

That’s the Collingridge dilemma: regulate early with little information, or regulate late when it’s too entrenched to change. But I think it’s a false dichotomy.

Regulation and innovation aren’t enemies. Regulation can evolve with technology. For example, we can expand the definition of software as a medical device to include more tools and then design flexible evidence standards, such as real-world monitoring, rather than relying on rigid trials.

Regulatory sandboxes, like those in the EU AI Act, are another great idea. They let us experiment safely without putting the public at risk.

Most importantly, we need auditability. If we know where AI is used, we can go back and investigate mistakes. The idea that regulation always stifles innovation is a myth, often pushed by those with vested interests.

If you had a choice: a diagnostic algorithm with 95% accuracy or a human doctor with 90%, which would you choose?

I’ll sound like a classic social scientist here: it depends. In settings with no access to a doctor, using a 95% accurate algorithm is better than nothing. We have a duty to test and use anything that could improve or save lives.

But we should also shift the conversation. Instead of algorithm versus doctor, let’s talk about how they can work together. Collaboration is more important than replacement. Sometimes algorithms outperform doctors, but often, those comparisons are cherry-picked. A junior doctor may be outperformed by an algorithm, but a seasoned consultant might not be. They’re apples and oranges. We should aim for a better fruit salad by combining their strengths.

Another dilemma: Is it better to ask ChatGPT for a diagnosis than to use Google?

Not really. They both rely on similar information, but ChatGPT is more convincing. That’s dangerous: if it’s wrong, it’s persuasively wrong. It’s also designed to please. If you hint at a desired answer, it will often give it.

Google is annoying now with its AI summaries, but at least it presents multiple perspectives. Neither is ideal. I totally understand the temptation – and I do it too – but I wouldn’t default to ChatGPT. It’s a slippery slope.

Listening to you, it’s clear that AI in medicine is something you’re passionate about. Where does your fascination with AI ethics come from?

I’ve always loved understanding complex things. My undergrad degree was in geography because I couldn’t choose just one subject. Ethics feels the same. It touches everything from society and law to tech and policy. It demands a holistic view.

There’s also something fascinating about merging the physical, grounded world of medicine with the abstract nature of algorithms. The questions are endless and that’s what keeps me interested.


Can I ask you a favor?

Donate to aboutDigitalHealth.com and help sustain this not-for-profit knowledge portal. Every contribution makes a difference. Thank you!
