
If AI proves better than the status quo, we should use it, says Robert Wachter, M.D. In his new book A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future, he argues that AI will move quickly into clinical decision making, where choices about tests, treatments, and procedures drive about 80 percent of healthcare costs and where the consequences are hardest to ignore.
What is A Giant Leap about?
It’s about this remarkable experiment we’re running, bringing a new technology that I think will ultimately be smarter than any human into healthcare, a field people care about deeply. You could argue it is the most important thing humans do for each other. But it is also a field, at least in the United States and, I think, everywhere, that accounts for a huge share of our lives and spending.
So to me, it is the greatest experiment in the history of healthcare, trying to understand whether this technology makes care better, safer, more expensive, or less expensive. Everybody was talking about it, so I spent a couple of years trying to understand it myself and what it will mean to be a doctor, a nurse, a patient, a family member, or a policymaker in this era of technology that is very different from anything we have seen before.
Do you think we have already grasped how big this leap is? After decades of digitization that often created more problems than solutions, how different is this moment?
In my book, I quote Hemingway’s line from The Sun Also Rises. A character is asked how he went bankrupt, and he says, “Two ways: gradually, and then suddenly.” I think we have been in the gradual phase. Digitizing healthcare was absolutely essential. I would take some issue with the idea that it was all negative. I practiced before computers and after, and it is better now. I used to spend much of my day running around, going to the X-ray department, and trying to read handwriting on paper charts. Two people could not look at a record at the same time. Now I send prescriptions electronically. Those are real improvements.
But digitization also brought unintended consequences and was not the cure for healthcare. Many of us believed electronic records would automatically make care better, safer, and cheaper. That did not happen. Still, it was foundational. We needed the data in digital form. What we lacked were tools to make sense of it and to change how care is delivered.
This feels like that moment. In the United States, we now have tools far more capable than anything we had even three years ago, combined with a system under real strain. It is extraordinarily expensive, often inaccessible, and, in many places, not functioning the way people need it to. If you came to my hospital’s emergency department today, you would see patients on gurneys in the hallways because there are no rooms available.
That combination, powerful new tools and a system in need of change, layered on top of already digitized data, is what makes me optimistic. Healthcare has not been an early adopter of technology, but the uptake of AI is happening quickly because it solves real problems. People try it and say, “This actually helps me care for patients better.” I see that in my own work.

Healthcare faces burnout, bureaucracy, and rising costs. Is AI a structural solution or a sophisticated patch?
It is both, and we will have to see how it evolves. Take ambient scribing, which did not exist three years ago. Now every doctor in my hospital has access to it. It allows us to talk to patients without staring at a screen or a keyboard, while the conversation is turned into a properly structured note. If we turned it off, many doctors would be deeply frustrated. Patients like it too. It brings the focus back to the interaction.
But there are complications. A study in the United States showed that costs went up, in part because these systems help produce documentation that supports higher billing. That reflects a flaw in how we pay for care. We mostly pay for services, not outcomes. In systems where payment is tied to keeping patients healthy, AI would be directed toward that goal instead.
So AI does not fix the underlying economics of healthcare. But it does reduce friction. It can improve the doctor-patient relationship, quickly summarize complex records, and allow patients to take more responsibility for their own care when it is safe to do so. There are concerns about bias, but bias already exists in the system. AI could help reduce it.
Still, it can also make problems worse if we are not careful. In the United States, we have not been very good at lowering healthcare costs. I think AI will make care better, safer, and more convenient. Whether it makes it cheaper is still an open question.
With so many new tools emerging, which uses of AI will truly change the physician’s role, and which are overhyped?
The most important one is clinical decision support, how AI helps with diagnosis and treatment. That is already happening, though in a somewhat informal way. Many physicians use tools like OpenEvidence, a large language model designed for clinicians. It is quite good, and people are beginning to trust it.
This becomes transformative when it is embedded in the electronic record. The system already knows the patient’s history, so it can generate diagnostic suggestions within the workflow. If I agree with one possibility but want to consider another, it can propose the right tests and even draft treatment plans. It becomes a copilot.
That is where things become consequential. If the system is wrong, the stakes are high. For now, the doctor remains the final decision maker. But there is a risk that as trust grows, clinicians will become less vigilant. We are early in that process.
So far, the tools have been used for tasks like documentation and billing, which you might call low-hanging fruit. That was the right place to start. But the real impact will come when AI shapes decisions. Physicians’ salaries account for about 10 percent of costs directly, but their decisions drive roughly 80 percent. That is where the system changes.
Is AI already becoming the first point of contact for patients?
Yes, in many ways it already is. There are enormous numbers of health-related searches every day, and increasingly, patients are using AI tools before they see a doctor. Even when they do see one, they often arrive with questions or thoughts based on an AI search.
That can be uncomfortable for clinicians, but it is part of a broader pattern. Technology tends to democratize access to information. I think that is largely positive. As these tools improve, I would rather patients use them than rely on traditional search.
But they are not yet fully reliable. Patients do not always know which information is relevant, which can lead to misleading results. Future systems will need to guide users more, asking questions and refining input as a doctor would. They will likely be integrated into patient portals, with access to medical records already in place.
A key challenge will be triage, knowing when the tool can provide guidance and when it must direct someone to seek care. That is essential for trust, and we are still early in getting it right.
If AI can match or surpass physicians in diagnosis and even empathy, what does that mean for the identity of a doctor?
We are all grappling with that, and it is not unique to medicine. Journalists, lawyers, accountants: people in every field are facing the same question. In healthcare, I do not think we will fully hand over decision-making any time soon. These tools depend on having the right information, and knowing what matters still requires expertise, particularly in a high-stakes field like medicine.
At the same time, these tools do make us better. I can access subspecialty knowledge instantly, something that was not possible before. But that raises a deeper question about what we as doctors uniquely contribute. Even empathy, which many thought would remain distinctly human, can be simulated quite effectively.
There is also a generational shift. Younger patients may be more comfortable relying on AI if it is convenient, safe, and affordable. That could change expectations and potentially reduce demand for traditional care.
For now, AI is a copilot. But it does challenge our sense of expertise. It is not easy to accept that something in your pocket may know more than you do. Still, if it helps us deliver better care, I think we will adapt.
You argue that AI does not need to be perfect, only better. What does that mean in practice?
We tend to hold technology to a very high standard, which is understandable. If AI makes a mistake, it can scale that mistake quickly. But expecting perfection is unrealistic and counterproductive.
There will be errors. At some point, if it has not happened already, AI will contribute to harm. But there are already hundreds of thousands of deaths each year from medical errors. We do not shut down healthcare because of that.
The question is whether AI is better than the current system. Better might mean performing at the level of a physician but with greater convenience and lower cost. That is already visible in areas like mental health chatbots, which provide access to care that many people would otherwise not have.
There have also been troubling failures, particularly in interactions with young people. Those need to be studied, improved, and regulated. But abandoning the technology would ignore the alternatives. As I often say, do not compare it to perfection; compare it to what exists today.
Who is shaping how AI will change medicine?
Right now, largely the companies. In the United States, policymakers are taking a relatively hands-off approach, partly out of concern about global competition for AI primacy.
I’m not that worried about the absence of regulation when it comes to the adoption of AI by hospitals and healthcare systems. Healthcare organizations are cautious. We will not adopt tools unless we believe they are safe. In the US, we worry a lot about liability. But in the consumer space, where patients interact directly with AI, there is less oversight, and I do think regulation is needed. At the moment, though, companies are moving at breakneck speed and setting much of the agenda.
What surprised you most while writing the book?
How optimistic I became. Ten years ago, when I wrote The Digital Doctor, I was frustrated by how poorly the transition to digital records had gone. I came away from that experience fairly pessimistic.
This time, the more people I spoke with, the more I came to see this technology as fundamentally different, arriving at a moment when healthcare urgently needs improvement.
That does not mean there are no risks. If these tools ultimately replace large parts of cognitive work, the implications go far beyond medicine. But I did not expect to end up as optimistic as I did.