Artificial intelligence can spot some abnormalities with the same precision as doctors, or better. It can analyze millions of clinical cases to select the best possible treatment. But even when AI outperforms clinicians, patients still prefer human carers. Why do we behave so irrationally?
There’s more to it than the result
Artificial intelligence has entered healthcare, raising many challenging questions. Will we be diagnosed and treated by AI? What happens when patients seeking the highest standards of treatment start choosing diagnoses made by AI and surgeries performed by robots instead of people?
No worries. Even if AI makes fewer mistakes than doctors, it doesn’t mean patients will prefer machine-driven care. The decisions we make are complex and go beyond calculations of risk and outcomes.
In the book “The Elephant in the Brain: Hidden Motives in Everyday Life,” the authors, Kevin Simler and Robin Hanson, argue that we are not rational beings. An example? Imagine standing in front of Leonardo da Vinci’s original “Mona Lisa” alongside a perfect, or perhaps even better, replica. Which one would you appreciate more, the original or the copy? For the vast majority, it would be the original, perceived as having the higher value, even though it has lost color intensity and started to crack over time.
The reason is simple: when we evaluate something, we are guided not only by its outward quality but also by a hidden layer, the artist’s talent, the time invested, and the story behind the work. This is why Leonardo da Vinci’s original is priceless, while perfect copies are cheap.
I cite the example of art because medicine is also often called the “art of medicine”: a set of medical skills, personality traits, and mental and physical abilities applied in treatment. A doctor’s qualifications rest on around ten years of intensive medical studies, residency, and internship. It is a job that demands not only knowledge but also interpersonal and communication skills. And even if AI can analyze data faster and more accurately, it lacks other attributes patients value highly, including empathy and physical presence.
Mass-scale care for all. VIP treatment for the lucky few
Alongside the result, people value the effort needed to create a product or service. After a period of delight with mass-produced goods, often more attractive in price and thus more accessible, we are seeing a return to individualism. If we can afford it, we choose more valuable products, hand-made in small craft workshops rather than large factories. A lamp found at an antique market means much more to us than a lamp with the same functionality bought at Ikea. The same applies to health.
Digital health has one significant advantage: accessibility. Thanks to Ikea, many people can afford stylish furnishings for their homes. Similarly, a diagnosis made by an AI system will be much cheaper, because the same system can make thousands of other diagnoses simultaneously. AI does not need a doctor’s office, a costly medical education, technical equipment, or medical secretaries, just a server somewhere in the world.
In the long term, this will have serious consequences. As digital health systems mature, we can expect a two-speed health system: affordable digital services provided at mass scale, and doctors’ services that are expensive and hard to access. As a result, an appointment with a human doctor will gain value; only a human can meet individual needs, hold a fluent conversation, and spend time with a patient, providing not only medical care but also psychological support. Even the best AI can’t do that. However… you had better read this text to the end.
Errors made by humans and machines are not equal
According to WHO data, about 1.35 million people die every year in road accidents. Nevertheless, far more attention is paid to fatal accidents caused by autonomous cars. Tesla estimates 1 fatality per 320 million miles driven, about a quarter of the rate for conventional, human-driven cars.
Sure, there are fewer autonomous vehicles on the roads than cars with human drivers. But accidents involving cars with AI behind the steering wheel receive far more publicity, even though on any given day drunk drivers kill many more people: about 28 in the United States alone. That’s one person every 52 minutes.
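The arithmetic behind these figures is easy to verify. A minimal sketch in Python, using only the numbers quoted above (28 deaths per day, 1 fatality per 320 million miles, the "four times" ratio):

```python
# Sanity-check of the road-safety figures quoted above.

# Roughly 28 people per day are killed by drunk drivers in the US.
deaths_per_day = 28
minutes_per_day = 24 * 60  # 1440

# Average interval between fatalities.
interval_minutes = minutes_per_day / deaths_per_day
print(round(interval_minutes))  # 51, i.e. one death roughly every 52 minutes

# Tesla's estimate: 1 fatality per 320 million miles driven.
# A rate about four times better implies conventional cars at
# roughly 1 fatality per 80 million miles.
tesla_miles_per_fatality = 320_000_000
conventional_miles_per_fatality = tesla_miles_per_fatality / 4
print(int(conventional_miles_per_fatality))  # 80000000
```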
As humans, we are willing to forgive human errors because we understand them, make them ourselves, and accept the weaknesses of our species. We have gotten used to this reality: car accidents no longer shock us as they did when the first automobiles appeared in the late 19th century.
Fatigue, sleepless nights, even one beer too many: all can lead to a fatal accident, confirming that humans are not perfect. Nobody is. Such incidents, even though strongly criticized, are explained away as bad luck or misfortune. It “happens.” If an autonomous car hits a cyclist, even one who may have run a red light, we tend to sympathize with the cyclist, with the human rather than the vehicle. We assume that since AI is perfect, or at least better than humans, it has no right to make a mistake.
This unwritten human right to err also holds in medicine. Minor errors made by a doctor are passed over in silence and forgiven, but only up to a point: mistakes that endanger health or life are not tolerated. By the way, did you know that medical errors have been reported as the third-leading cause of death in the US?
I suspect that, as with autonomous vehicles, we will watch far more attentively as AI systems make diagnoses and treatment decisions. People will meticulously review every case of deterioration or death after robotic surgery, with much more scrutiny than when humans are involved. We may even feel a sense of satisfaction that AI isn’t as perfect as we’ve been told after all. Is it a kind of schadenfreude towards AI?
Survival instinct and human solidarity
People have always organized their lives within communities. Life within a group increases the probability of survival. Historically, this principle relates to fighting against threats from nature, raising children together, or conflicts with other tribes.
Although this cooperation for the common good has lost much of its importance in modern societies, our behavior patterns haven’t changed much. Belonging to a community remains just as important, whether it is defined by profession, place of residence, religion, interests, or political views.
Although AI, a product of human creativity and intelligence, neither thinks nor exists as a self-contained being, it is often perceived that way. Scientists talk about “self-learning systems,” the press reports on “AI taking over jobs,” and Elon Musk and Stephen Hawking have warned that “AI is a threat to humanity.” Many people see it as a competitor or even an enemy: something we are not familiar with, cannot understand, and do not trust. This thinking pattern persists even though scientists emphasize that AI will help us solve many problems by creating new solutions for clean energy, friendly transport, and safe medicine. Robots won’t replace you; they will create new jobs.
This negative image of AI is not the rule. Today, every one of us benefits from AI in everyday life, and many of these systems operate invisibly in the background. Only when AI takes on a persona, in the form of physical robots and medical chatbots, will the divide between people and machines become visible. And even the friendliest-looking robots with human faces won’t help. Somewhere in our subconscious, we may still carry scenes from science-fiction movies: robots revolting, cyborgs eliminating people, deceitful AI systems.
Feelings: the game is not over yet
Not everything has been said yet. AI will not only perform mathematical calculations better. It will learn how to communicate with humans and tune into our most hidden expectations, fears, and hopes. The human ear cannot perceive the subtle changes in voice that indicate different emotions. The human eye is too imperfect to correctly interpret micro facial expressions and body language. But algorithms will not miss any of that data. AI systems will know us better than we know ourselves. Robots and chatbots with synthetic emotions and feelings will recognize our needs better than our best friend. AI doctors will always be empathetic and will not be limited by a 15-minute appointment slot.
And something else—people are getting lonelier. According to a recent report, about 33 percent of adults experience feelings of loneliness worldwide. Rapid social shifts, technologization, and a growing population will only deepen our feelings of loneliness.
An empathetic robot is a candidate for an ideal friend—intelligent, individualized, less cumbersome to maintain than a dog or cat. Wait… A robot can be better than your lovely pet? You probably won’t believe it until it brings you a cup of tea when you have a cold or laughs with you while you watch TV shows. This is already happening: Smartphones and computers are the prototypes of mini-robots for communication, entertainment and work. We find it hard to leave them even for a minute.
Benefits only, no responsibilities (other than plugging the robot in).
AI thinks logically and makes rational decisions, free of noise such as fatigue, memory gaps, and stress. But AI can also generate emotions, because emotions can be programmed. Mathematics can describe the real world quite precisely, and AI handles math better than humans do.