The Ethics of AI in Healthcare

Father Paolo Benanti is an expert in ethics, digital ethics, and technology. He is a Franciscan friar and Professor of Moral Theology, Bioethics, and Neuroethics at the Pontifical Gregorian University in Rome. I discuss with Father Benanti the controversial aspects of AI in healthcare and how the digital transformation changes us – human beings.

Father Benanti, two years ago, there was a morally ambiguous case in the USA – a doctor used a virtual presence system to tell a patient he would die. With the broad adoption of telemedicine and medical workforce shortages, this practice may become an everyday reality. What’s wrong with such an approach?

From the beginning of human history, we have understood medicine as more than a scientific discipline. There was a time when the priest and the doctor were the same person. We have always chosen someone special from the community to hold the position of a doctor. It is not enough that a doctor is human; it must be a particular person, committed to staying with other community members in the special moments of their lives, when they find themselves sick, fragile, and sometimes dying.

Through IT and AI, machines can take over many actions traditionally performed by a human being. The question we face now is: can machines perform all tasks? Some particular activities inside our communities are not for everyone; they require a special lifelong commitment – for example, those of doctors, judges, and police officers. All of them take an oath to do their best in the tasks assigned to them.

Let’s go back to the case you mentioned. What is awkward in this situation? The existential encounter – in the sense of dealing with death – between two human beings was replaced by a technological medium. The moment in which the doctor should accompany the patient, explaining, “There is nothing I can do for you, but that doesn’t mean I’m abandoning you. I will be with you until the last moment. This is my commitment, even if I can’t save you,” was neglected. Suddenly, this accompanying role of the doctor is taken over by a machine. The last part of life was reduced to providing information, as in a phone call. Is that still the way we would like to be treated as humans at the end of life?

Technology provokes questions about the role of humans in societies, what has to be done by humans, and what can be performed by a machine.

That’s not all. By analyzing medical data from patients with similar diseases, artificial intelligence systems can already predict with ever-increasing accuracy when a patient is going to die. Is it okay, in your opinion, for machines to calculate when humans will die?

This question has a multi-layered answer. Could some decisions be AI-driven? For example, if I want to borrow money, I go to the bank and talk to a human employee who has to decide whether I am trustworthy. An algorithm can also do that, and it’s fair. But when it comes to key issues like fundamental rights – and the right to live is one of them – I deserve a human decision, not an algorithmic one.

In a complex medical environment, this kind of prediction of survival rates determines the therapy and the patient’s journey. It is not a forecast for the sake of forecasting; it can influence the next steps, including what kind of medication or support the patient gets. Where my own life is concerned, I deserve a decision made by a human.

Let me shift the scenario. We know that a human driver can sometimes fail. Accidents caused by drivers happen, and many external circumstances can also lead to an accident. But if you ask whether a machine may kill someone due to an error, a hundred percent of people would say “no.” We accept our limitations as people, but we do not accept machines’ limitations that could kill people, because the robot is “something,” not “someone.” Going back to relations in medicine: are we ready to accept this kind of failure? What happens if the machine predicts three months of life, but the patient lives ten years, or just one day? So the structure of the doctor-patient relationship should rely on the human relation, only co-assisted by machines.

Many developers of AI-based systems ask themselves how to build their solutions to make them “based on human values.” What should be ethical guidelines for AI in healthcare?

We have to consider that “ethics” has several meanings. Someone can have in mind “normative ethics,” which says “don’t do this or that.” If someone holds this narrow interpretation of ethics, it is probably challenging to find a norm or law that is 100% acceptable worldwide. But ethics is much more than ethical standards. It is like saying that justice consists only of legal regulations. No, justice is more than rules.

We know from Aristotle that ethics is a matter of virtue. If we asked people whether they prefer a just or an unjust AI-driven healthcare system, I believe all of them would answer “a just one.” From this perspective, if we have a broad understanding of ethics in the sense of virtues, we can see it as a guardian that outlines a path for an AI-based medical system. And that sheds a different light on the question you asked. We don’t need to find a universally acknowledged norm or law; we need to find a series of guidelines and borders that machines cannot cross in the decision-making process in medicine. That is much easier and more acceptable in the process of developing new solutions.

You called the increasing application of artificial intelligence “the dawn of a new era.” What is the positive and negative transformative power of this new technology?

I can’t exhaustively cover every scenario, so let me focus on the most important ones. First, we have to decide whether we want to develop a new species of machina sapiens and put it in a Darwinian competitive position versus homo sapiens. If we create a new species, we will have to fight. And that is negative in the sense that we will have an obsessive-compulsive competitor – an algorithm that is narrow in its goals and very determined to achieve them. Nevertheless, I don’t believe this will be the most successful way of developing AI. A positive new era will come if we create AI as tools augmenting human abilities. In this approach, we do not build a replacement for the human being. Humans remain at the center, and all the technological innovations become universal and scalable, democratizing medicine and improving its quality.

Let me give an example. We can imagine AI tools that allow doctors to augment their ability to read and interpret medical images. The best practitioners can work without algorithms. But there are many places in the world, especially rural areas, where the quality of medicine is low due to a lack of infrastructure or shortages of healthcare professionals. With AI-based tools, we can democratize the best-quality analysis of medical images around the world. Even a general practitioner in a rural area can gain access to a precision of decision-making previously available only to specialized clinics. That is a really positive thing.

One of the most challenging tasks for an oncologist is to fit a treatment perfectly to the patient in line with recently published studies. Considering how many studies are published every year, an AI system could support the doctor in decision-making and in choosing the best possible treatment scenario and medicines. Here again, AI is not replacing human doctors; it is only augmenting their capabilities.

In both cases that I mentioned, there is one primary condition: the human remains at the center. If we design the system as a competitor to humans’ capabilities, we will probably have the worst possible medicine in the future.

There is a lot of noise around so-called human-centered healthcare. What does patient-centricity in the era of new technologies mean for you?

We should look to philosophy to answer that question. Socrates used to say, “Know yourself.” He meant that you are much more than the body – you also have an inner world of feelings, desires, fears, and sensations. If “know yourself” becomes “know your data,” we have another layer – a layer made of data. Human-centered healthcare means that the machine is not only an optimizer with a specific function; it must also be designed to understand the emotions and feelings of the patient in order to respect them. To illustrate this, we can again use the example of the self-driving car. We can imagine that someday such cars won’t have accidents. But if we put a fearful person inside and the car drives too fast, who will be terrified? The vehicle should also tailor its driving so as not to distress the passenger. Or the opposite – we can have brave passengers bored by the way the car is driving. Going back to healthcare: technology also has to align with patients’ expectations. That is respectful digital health technology.

Let’s imagine an AI robot for surgical stitching. If it is too fast, a patient could be scared, thinking, “I’m not a piece of textile.” If it is too slow, some patients could feel that it is clumsy. So we see that it is essential to perceive a human being as much more than the body. We are also emotional beings, and technology must respect that.

Some companion robots or virtual assistants already have “artificial empathy” or “synthetic empathy,” so they can detect and respond to human emotions in an empathic way. This could play a significant role for robotic doctors and nurses. How would you judge it?

We have many ways of looking at this kind of innovation. From a cost perspective, it will probably someday be cheaper to give patients such assistants. But we can also develop AI to help patients feel better during their illness. In that case, it is justified because it strengthens the doctor-patient relationship and relieves the doctor of an excessive burden, maybe even preventing burnout. Such bots could also remind the patient to take a medicine, improve adherence, inform the doctor about detected abnormalities, and recognize signs of stress in the patient’s voice to help adjust decisions. If AI-based systems are used as an ecosystem to make care better, that’s okay. But if we do it because medicine is a business and there is a need to optimize costs, it’s not okay.

In your latest book, Father Benanti, “Digital Age,” you raise the question regarding “new humans in the digital age.” So who are we in the digital age?

That’s a big question. The so-called Turing Test set up a wall. We say the test is passed if I cannot answer the question “Who are you?” That raises a great question on the other side of the wall: “Who am I?” If it no longer matters whether I am a biological intelligence or an artificial intelligence, what does it mean to be human? The question about identity arose with this “behind the wall” test – a methodology that no longer cares whether you are someone or something.

If I make a perfect copy of a ten-pound coin, its value is ten pounds; no one can tell the difference. But if I could make an excellent copy of you, would this copy be precisely the same as you? Another example: if I place a duck and a hair dryer in front of you, they are similar in shape, but if I turn off the hair dryer, I can turn it on again. If I turn off the duck, I can’t turn it on again. Something is not someone. Being alive is not to function but to exist.

And now, in the digital age, the question arises: what does it mean to exist and not merely to function? So you see that even if we try to forget philosophy, it will come back to us anyway.

If AI is changing the world, our culture, and us, shouldn’t we also start changing ourselves and adjusting to the new opportunities that AI offers, reshaping some of our human values?

That has always happened. Seventy thousand years ago, we as a human species moved around the world. And as humans, we did something different from other species. When a mammoth moved from Africa to the north of Europe, it had to wait until it had long hair to survive in the new environment. But 70,000 years ago, we didn’t wait to grow long hair like hipsters before going to northern Europe – we dressed in mammoth skins. We used inventions to adapt ourselves, change ourselves, and change the reality we lived in. So the answer is: why not also today? I would ask another relevant question: how do we guarantee that this change is positive, considering today’s high speed of change? The old precautionary principle “measure twice, cut once” has to be applied carefully.

Technology gives us a lot of power. In the middle of the 20th century, we nearly extinguished ourselves with a stupid technology – the atomic bomb. And this is not a matter of how intelligent the technology or we ourselves are: if this power is not guided by ethical principles, it can destroy us. Now the game is much faster than before.

On Quora, one of the discussion forums, there is a long thread under the question “Could technology be the new religion?” Some suggest that while religions try to give us answers about everything, technology allows us to discover the world and the universe and to explore our human bodies. We know more, and with technology we can improve and upgrade ourselves. So is technology a new religion?

It could be. One of the first philosophers, Heraclitus, wrote: “The god that is in Delphi is not speaking, is not shouting. It is giving meaning to things.” The god that is in Google is not speaking, is not shouting. It gives sense to data. The oracle that was in Delphi is a new god on my mobile phone. The gods of the pantheon – the goddess of love, the god of commerce, and others – have been replaced by the gods in the smartphone: Tinder as the god of love, Amazon as the god of commerce, and so on. They are the new oracles to which I sacrifice my data in exchange for some benefits. Is that religious behavior? I think so.

Every one of us believes in something. I can ask you: do you believe that reality is made of atoms? You have never seen them, so you can only believe it, meaning you trust a reliable source that told you, “Yes, there are atoms.” As humans, we are believers. Today we have a data-driven faith.

What’s the danger of it?

Data-driven belief is founded on correlations. We can go back to philosophy and ask: what does truth mean? What is truth in an age full of correlations? If there is a correlation between umbrellas and rain, is the umbrella causing the rainfall, or the opposite? If I ask Google Maps for the fastest way to a destination, Google Maps simply correlates, without knowing the cause of the traffic or why I want to go there. And it works. In exchange for causation, I get correlation. But this new epistemological model is not yet well studied, and we can make huge mistakes. That has happened in science, too. For example, in the past we believed that the heat released when cutting metal was not energy but a heat substance contained in the metal. It was a huge mistake. We can make the same kind of mistake with correlations.

There is one more aspect. Some say that technology “democratizes” certain spheres of our lives. But while everyone is included in this process, not everyone has the right or the possibility to judge it, or to adapt it to their cultural environment and use it according to individual or local principles. And this is a new challenge, because it is the same for everyone. Just look, for example, at fake news. The social impact of technology is enormous.

How can we ensure ethical AI in the future? In February 2020, the Pontifical Academy for Life, Microsoft, IBM, FAO, and the Italian Government were the first to sign the “Rome Call for AI Ethics,” a document developed to support an ethical approach to artificial intelligence and to promote a sense of responsibility among organizations, governments, and institutions, with the aim of creating a future in which digital innovation and technological progress serve human genius and creativity rather than gradually replace them.
