Making AI Work For Health

Gina Neff is the author of the books "Self-Tracking" and "Human-Centered Data Science".

Will AI see us now? Interview with Professor Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy at the University of Cambridge.

A recent University of California San Diego study found that ChatGPT outperforms physicians in giving empathetic answers to patient questions. What does this mean for the future of healthcare?

People need trusted, evidence-based and accurate information to make good decisions about their health. The opportunity with new chat-based tools is that they can provide new ways to get that information to people. The challenge is that with these new ways of connecting people to information, there are more opportunities for errors, manipulation, and mistakes, and these are costly—and potentially deadly—in healthcare.

Imagine an AI-based chatbot that has access to the most recent studies and can compare data of millions of patients to create the best possible personalized treatment plan. Would you choose it over a human doctor?

The models emerging for healthcare are ‘decision support systems’ where good quality information is synthesized to help people make good decisions. The challenge is that the underlying studies that these systems rely on still overrepresent certain populations—those living in the Global North, with access to university research hospitals and who fit the requirements for inclusion in the study. We know that medical studies have excluded or underrepresented women and marginalized people for too long. This means that the best synthesis still leaves many people out of the equation.

What is needed for people to trust AI?

Trust is such a tricky area.

On the one hand, when AI technologies work for the benefit of people and societies, they can become invisible infrastructure in our daily lives. We don’t worry whether or not we ‘trust’ Netflix to give us recommendations for things to watch because the benefits of AI-recommender systems for entertainment far outweigh the negatives. It’s when AI systems are used to make judgments about values that things become harder.

In no other industry does trust play such a significant role as in the health sector. Do large language models (LLMs) have any use here at all?

LLMs are rapidly becoming part of the infrastructure of search. That means that they will play an enormous role in how people search for and get health information, and I think that should concern everyone in healthcare.

What practical applications do you see for LLMs like ChatGPT in healthcare? And which ones are just buzz?

There are many approaches to AI, and in the buzz around large language models, we are losing sight of the enormous strides happening in other types of deep learning models that can help clinicians manage larger amounts of data and diagnostic images. Currently, large language models still require one-to-one oversight by trained medical professionals to weed out bad or wacky information that they often produce. Generative AI isn’t fit for the clinic today, but other types of AI are being used in diagnostic imaging services, bench research, and in monitoring.

Do you think big tech companies offering more powerful AI algorithms will play an increasingly important role in healthcare, which has so far been the domain of traditional healthcare providers?

Big tech is already looking to make plays in healthcare because they are positioning healthcare problems as information problems. But as most people on the front lines of care know, information is only part of the equation. Our research on ‘data work’ in healthcare shows how there is still so much in the domain of traditional healthcare providers in gathering data, getting data ready for analysis, working with patients to help understand their data, and translating the results from data analysis from bench medicine to bedside medicine. We will see if tech companies can take their hands-off method and apply it successfully to a hands-on field.

The digitization of healthcare—due to the relatively conservative model of medical service delivery and entry barriers—is lagging behind other industries. Will the same happen to AI adoption?

I’m not sure that this is true. The healthcare business model in many countries does not necessarily prioritize new investments in administrative efficiency, so the incentives to digitize haven’t necessarily been there for individual clinics and clinicians. The US, for example, created federal funding and regulatory incentives for digitization. This shows that the path forward is one that sees government, industry and patients as partners in creating more efficient approaches to health information.

Are you more pessimistic or optimistic about the application of AI in medicine?

We are in the early stages of seeing what AI will contribute to fair, just, and equitable digital societies. I’m optimistic that we will see the enormous creativity of people who work to make better and healthier futures and that they will use the best tools—AI or otherwise—to help them achieve that for all of us.


Can I ask you a favour?

Please donate to aboutDigitalHealth.com (€1+) and support independent journalism. It only takes a minute. Thank you!
