
In her new book Dr. Bot: Why Doctors Can Fail Us and How AI Could Save Lives, Charlotte Blease, Ph.D., examines how artificial intelligence could transform healthcare by tackling the psychological and systemic flaws that hinder medical practice. In this interview, we explore why medicine often struggles to confront its own limitations and how AI may be better positioned than human doctors to save lives.
What is Dr. Bot really about?
The book asks: what are the problems in healthcare – especially the psychological ones – that AI might help solve? It’s built on the recognition that doctors are not gods; they’re human beings, with human limitations.
I look at the psychology of the medical appointment. There are challenges that cannot be fixed by better funding or hiring more staff. Yes, less burnout and more resources would help, but errors and miscommunication will always occur due to human limitations. My argument is that technology can be used as a workaround.
That doesn’t mean this is a love letter to technology. I’m critical of the tools we currently have. However, technology is only going to continue to improve, and we need honest conversations now about how to use it responsibly.

Writing about such a fast-moving field must have been difficult. How did you keep the book relevant?
That was tough. But I framed the book around the trajectory of a medical appointment, from getting access, to disclosing symptoms, to diagnosis. This structure helps it stay relevant because those stages don’t change, even if the technology does.
Half the book focuses on the problems. Patients often don’t disclose everything; some even die of embarrassment. Doctors interrupt. There’s a hierarchy in the room that inhibits honesty. But studies going back to the 1960s show patients pour their hearts out to machines. Even in email, people tend to disclose more information than they do face-to-face.
At every stage, I ask how technology might reduce these frictions. Of course, AI brings its own problems, but it may reduce noise in medicine, such as the impact of a doctor’s age, gender, or experience on outcomes.
Some of the figures cited in Dr. Bot:
- Half of doctors in the UK and US report burnout or depression.
- The AI healthcare market is projected to reach $148 billion by 2029.
- Victims of domestic abuse are 37% more likely to disclose to a computer than to a doctor; AI can predict abuse up to 30 months earlier.
- Male bodies appear three times more often than female bodies in medical textbooks.
- Around 10% of women have endometriosis, but diagnosis takes an average of 8.6 years in the US.
- One in ten people worldwide (500 million) live with rare diseases; in Europe, diagnosis for many rare illnesses takes about 10 years.
- It takes 17 years for new clinical research to reach patients.
- A biomedical article is published every 39 seconds; reading just 2% of these would take a doctor 22.5 hours a day.
We’ve been aware of healthcare shortages, errors, and inefficiencies for years. Why is the adoption of technology so slow?
That’s the million-dollar question. Medicine has a poor track record of confronting its human limitations. On an individual level, doctors tend to think errors are other people’s problems. Institutionally, medicine has resisted innovations for centuries – whether it was antiseptics, anesthesia, penicillin, clinical trials, or handwashing. Telemedicine also belongs on that list.
Doctors are trained in specific ways, and it’s psychologically hard to change practice on the job, especially under stress. That’s where AI has an advantage; it doesn’t suffer from human biases or fatigue.
When you say “Dr. Bot,” are you referring to traditional AI, machine learning, or generative AI?
All of the above. The book looks at the spectrum: traditional expert systems, predictive analytics, machine learning, generative AI, and even telemedicine. I explore what each can do, why some have failed, and the barriers to adoption. One major challenge is what’s called the “last mile problem”: getting doctors and algorithms to work well together.
Interestingly, laypeople are more willing to trust algorithms than doctors themselves are. Studies show that when doctors use generative AI, they sometimes reduce its accuracy because they over-rely on their own judgment. That raises questions about expertise, accountability, and training.
Many doctors dislike electronic health records. Aren’t they wary that AI will replace them?
Absolutely. But survey research I’ve done over nearly a decade shows that doctors assume AI will take over the tasks they dislike most, like documentation. In the U.K., surveys show that one in four GPs already uses generative AI tools for clinical tasks, often for paperwork. Another survey found that one in seven doctors is using ambient AI to assist during visits.
These tools don’t eliminate errors, but they reduce burnout and improve perceived quality. The irony is that doctors believe AI will only handle mundane tasks, yet it will eventually challenge their expertise in diagnosis, raising existential questions about their professional identity.
Eric Topol once predicted digital health would democratize medicine, but it hasn’t yet. Are we entering that new era?
I think so, cautiously. Generative AI could benefit patients in much the same way the internet has. At first, doctors feared “Doctor Google” would create chaos. But research shows patients are often savvy about digital tools.
Yes, there are digital divides, language biases in training data, and ethical issues around privacy and inequality. However, remember that over half the world now has mobile internet access. For people with little or no access to doctors, even imperfect AI tools can be transformative.
Some say ChatGPT is already being used as a substitute for doctors. Dangerous shortcut – or proof that AI medicine is here?
It’s already here. Patients are using it, and denial is dangerous. For rare illnesses, surveys indicate that patients and their families are twice as likely to use AI tools. That makes sense – diagnoses can take decades to confirm.
Take endometriosis. It takes an average of nine years to be diagnosed in the U.S., yet studies show that ChatGPT can provide accurate information instantly. Why should patients wait nearly a decade to get answers that doctors often withhold?
Of course, AI isn’t perfect. But neither are doctors; studies show they practice evidence-based medicine only about half the time. We mustn’t hold machines to a higher standard than humans.
Why did you decide to write this book?
Initially, I wanted to explore the psychology of medical appointments. But I realized it would be incomplete without asking what solutions might exist. Technology emerged as a potential workaround.
It’s also personal. I lost my partner to cancer, and my father to dementia six months later. My twin sister and my brother have muscular dystrophy, diagnosed late. I’ve seen medicine’s flaws firsthand.
I work within medical faculties, but I’m not bound to medicine as a profession or to big tech. My perspective is simple: if medicine exists to care for patients, who or what can do that better?
Finally, what’s your hope for AI in healthcare over the next five years?
I don’t think it will “fix” healthcare. Adoption will be messy. However, like the internet, these tools will, on balance, benefit patients, particularly by improving access to information and alleviating some of the burdens on doctors.
One hopeful example is ambient AI, which already reduces burnout in documentation. But again, it must work for all patients, including those whose first language isn’t English.
I think patients will continue to be savvy about using AI. And if we handle it responsibly, these tools could make healthcare just a little more humane.
Aren’t you afraid that the book will be controversial? The title Dr. Bot alone suggests that AI can be a doctor.
I’m not doctor-bashing. Doctors face enormous burdens and impossible knowledge demands. My argument is that we expect too much of them.
That said, conversations about AI tend to polarize. People go utopian or dystopian. Neither is useful. We need serious, middle-ground discussions about where AI genuinely helps, and where it introduces risks, especially around privacy.
Charlotte Blease is a health researcher examining how AI can reshape patient care, making healthcare more effective, ethical, and responsive. An Associate Professor at Uppsala University and Researcher at Harvard Medical School, she has published over 150 peer-reviewed articles and books. She divides her time between Uppsala, Belfast, and London.
