
“What concerns me most is that humanity could let fear stop us from harnessing the potential that AI can offer patients, families, healthcare workers, and local economies,” says Ricardo Baptista Leite, MD, CEO of HealthAI—The Global Agency for Responsible AI in Health.
We discuss why artificial intelligence in healthcare requires a global perspective.
Artur Olesch: Let me ask you directly: what are you afraid of regarding AI in healthcare?
Ricardo Baptista Leite: Artificial intelligence has the potential to save lives and improve the quality of life and well-being of all citizens. However, to achieve these ambitious outcomes, we need to fix the lack of governance by building a global ecosystem that includes all stakeholders: from governments and regulators to multilateral international organizations, from academia to patient organizations, and from the private sector and technologists to citizens at large.
At HealthAI, we want to ensure that all these stakeholders have a voice so that we can co-create equitable global governance of AI for health. Our first step toward this ambitious goal is our newly launched HealthAI Community of Practice, and we invite everyone interested in being part of the process to apply on our website.
Do current directives in healthcare, such as those for medical devices or software as medical devices, need to be upgraded to include AI?
Some countries have taken important steps toward assessing AI technologies for health by evaluating software as a medical device. However, there is still a long road ahead towards implementing mechanisms capable of ensuring the full extent of oversight needed at a global scale. The challenges of identifying and fixing dataset biases, guaranteeing data privacy and security, or implementing post-market surveillance mechanisms will demand national and regional regulatory capacity and leadership.
HealthAI, as an implementing partner to countries, will provide technical assistance to governments and regulators to ensure they are capable of taking on that leadership role and of implementing locally adapted international standards that promote the responsible use of AI in their health systems. By doing so, national and regional regulatory bodies that become part of the HealthAI Global Regulatory Network will be poised to ensure equitable access to safe, quality, and practical AI tools.
No innovation brings only benefits, without harmful side effects. So how do you define “responsible AI in healthcare,” given that the potential dangers cannot be reduced to zero?
We follow the definition derived from the World Health Organization: responsible AI in health refers to the use of AI technologies in healthcare that are ethical, inclusive, rights-respecting, and sustainable.
So what, then, is irresponsible AI in healthcare?
In 2018, the World Health Organization (WHO) and the International Telecommunication Union (ITU) established the Focus Group on Artificial Intelligence for Health. A 2021 report authored by this group, titled Ethics and Governance of Artificial Intelligence for Health, provides a set of ethical principles to guide the development, regulation, and implementation of AI solutions in health. These six principles serve as a foundation for the responsible use of AI and include:
- Protecting Human Autonomy to ensure that individuals remain central in the collection, analysis, and use of data;
- Promoting Human Well-Being and Safety, echoing the broader medical principle of “do no harm;”
- Ensuring Transparency, Explainability, and Intelligibility, so that all stakeholders have the relevant information about AI solutions to make informed decisions;
- Fostering Responsibility and Accountability to ensure AI solutions deliver on their potential while meeting ethical requirements;
- Ensuring Inclusiveness and Equity, proactively contributing to the ultimate goal of health equity without further exacerbating the digital divide;
- Promoting Responsive and Sustainable AI, designed to meet the health needs of communities and individuals over time.
To delve deeper into these principles, you can refer to publications such as Ethics and Governance of Artificial Intelligence for Health, the International Development Research Centre’s AI for Global Health Initiative, the European Commission’s High-Level Expert Group on AI framework in the Ethics Guidelines for Trustworthy Artificial Intelligence, and a journal publication from Information Systems Frontiers.
To answer your question, irresponsible AI in healthcare can be defined as the use of AI in a health system in a way that does not respect, or actively rejects, the aforementioned principles.
Does society even need to be protected from the harmful effects of AI? Isn’t it better to focus on strengthening AI literacy instead?
Although literacy alone is insufficient to keep citizens safe, it is a critical element of the activities the global health community will need to foster to address the challenges and opportunities related to AI.
In our three-year organizational strategy at HealthAI, we call out the need for current healthcare regulations to evolve to provide more comprehensive and nuanced regulation for AI. This involves working with global and local experts to develop certification processes and ethical frameworks that address the specific challenges posed by AI technologies.
Working with governments, we aim to support the development of regional and national regulatory capacities and connect these different entities into a global regulatory network. Therefore, regulating AI and validating AI solutions for health are core elements of our mission. In that context, investing in the literacy of healthcare professionals and the general population about these technologies is an essential part of the work these regulators will participate in.
AI should actually be considered in the plural, because it spans everything from generative AI to certified algorithms used in radiology. Doesn’t the current debate about AI in the singular give the impression that AI is inherently dangerous?
Our collective responsibility is to highlight the enormous benefits and transformative capacity that Artificial Intelligence—in its different formats—can bring to patients and societies. It is equally our responsibility to ensure that all stakeholders are conscious of potential risks, if they exist, to mitigate these and thus promote trust in adopting such technologies.
Our focus is on ensuring that all AI solutions deployed in health systems comply with the principles of responsible AI. However we approach each type of AI—be it generative AI, static AI-based tools, or others—we will always work within the standards defined by international normative and standard-setting organizations, such as WHO.
Whatever the nature of the AI, we must ensure that any technology used in clinical or other health settings has been validated for safety, quality, and effectiveness. Moreover, we aim to ensure post-market surveillance of these technologies and to put in place a ‘global early warning system’ that can detect potential harm caused by an AI tool in health, so that risks are mitigated and further harm is avoided through the application of ‘safety brakes’ when warranted.
Perceptions of AI vary significantly across regions of the world. Meanwhile, AI knows no boundaries, so consistent regulation won’t be possible.
The international standards to be developed and published by normative agencies such as the World Health Organization will be a critical building block for regulators to find some common ground or a ‘minimum bar’ of understanding and thus lead to some level of global harmonization. At HealthAI, we aim to ensure that regulatory capacity exists in each country precisely so that each country can validate AI tools within its sovereign independence.
We’re working to connect regulatory teams worldwide into one global network to allow the sharing of knowledge to help accelerate the validation process safely and tackle any challenges that may arise, for example, by enabling the sharing of early warning systems to highlight issues or unintended effects. At its core, our work is about helping to ensure safe, quality, equitable, and effective AI tools for all citizens when it comes to health, and it’s also about supporting investment and adoption of technology with the potential to improve health outcomes.
Many European companies complain that Europe is once again trying to regulate first and innovate later, and that this will cost it the global AI race, which will determine the power of future economies. Do you agree?
The EU AI Act is an important step toward regional regulation. It needs to be aligned with global efforts and with further investments toward maximizing the potential of AI in every economic sector, including health. At HealthAI, we are about bringing global knowledge and local expertise together, and we believe that the European Commission, along with the EU’s Member States, has much to contribute to and to benefit from by actively being part of this global effort. Together, we can ensure that the needed regulatory capacity exists in each country so that each country, in its own sovereign independence, can validate AI tools in accordance with the principles of responsible AI.
If Europe aims to position itself as a leader in the field, it will need to do more than promote standards and regulatory capacity. Europe must equally invest in R&D and promote economic incentives that lead towards homegrown development of innovations capable of competing with other more advanced regions in the world.
We have failed to regulate social media, and bot farms can determine election results today. Why should it be any different with AI?
Yes, there have been failures in the past, and we need to learn the lessons from those processes and get the regulations right this time. That’s why we are moving fast and have positioned HealthAI ahead of the curve, to ensure that all stakeholders are involved in creating the global regulatory ecosystem from the start—even before the international regulatory standards are published.
This is an opportunity for all of us to co-create the future by bringing together governments, regulators, technologists, scientists, and civil society movements, including patient organizations. We’ve designed a model in which we will end up with a global regulatory network for responsible AI in health, ensuring that each country has the tools necessary to validate AI technologies following international standards. We’re not the ones developing those standards—that’s the remit of WHO and other normative organizations, and we have been partnering with WHO since our foundation.
ChatGPT reached 100 million users in two months. At this rate of technology development, do regulatory attempts stand any chance?
Unquestionably, the exponential speed at which AI capacity is evolving will demand solutions that draw from our global collective intelligence, and we must address such challenges with humility as they present themselves to us.
As a former four-term Member of Parliament, I know how quickly technological advancements can outpace legislation. Based on that experience, I firmly believe that we only stand a chance with AI if we create an ecosystem founded on legislation that grants regulators the authority and flexibility to adapt as necessary, along with the resources needed to keep pace.
Regulators will play an essential role in presenting guidance on addressing these challenges, which is another reason HealthAI is so engaged in ensuring that every country has its own regulatory capacity for AI and health.
For our agency, there is a clear path forward: regulations can and must evolve to keep pace with technological advancements. Our three-year strategy involves creating flexible, collaborative frameworks that ensure technology develops in safe, fair, and beneficial ways for everyone. By working together across borders and sectors, and by focusing on principles rather than prescriptive rules, we can build a regulatory environment that supports innovation while protecting public interests.
However, I think this issue requires a more in-depth response, given the ChatGPT example cited in the question. It seems clear to those in the field that certain technologies will not need to go through a formal regulatory process, and that’s why investment in digital literacy for healthcare professionals and patients is so essential.
Over the last 20 years, practitioners have used internet search engines daily in hospitals and clinics worldwide. Now we see those same professionals using LLMs such as ChatGPT. In the face of such a reality, the pragmatic approach is to accept that this is the case, ensure that proper codes of conduct are put in place, and offer training to these professionals on how to benefit from these technologies while also understanding their many faults and potential risks for patients and systems.
As the CEO of HealthAI, you can be seen as someone who wants to impose restrictions on artificial intelligence. Is that fair?
On the contrary, we know AI has the potential to save lives, and we want humanity to reap the benefits of what the technology has to offer.
To that end, we need an ecosystem that aligns the incentives of all stakeholders, including the private sector. That’s why we believe the private sector needs to be part of these efforts from the start, co-creating, along with civil society movements, governments, and other international organizations, a future where artificial intelligence is trusted, used responsibly, equitably accessible, and adopted by citizens within and among countries. Ultimately, our mission at HealthAI is to accelerate investment in, and the development, scalability, and adoption of, AI-driven technologies so they can be put to the benefit of every citizen across the planet. This will only be possible if we join efforts among all stakeholders to establish a global regulatory community.
It’s 2030. AI is widely used in healthcare. What problems are we likely to face if regulatory initiatives fail?
A world without safeguards and safety brakes puts citizens at risk of harm. It will inevitably widen the digital divide, leaving low- and middle-income countries further behind. Most importantly, if we fail to ensure that the principles of responsible AI are upheld across society, the people living in the most vulnerable situations—be it young women and children or other minority groups—will be the main victims of AI’s potentially malicious impact. Moreover, in the absence of any regulatory oversight, even in the richer countries of the world, we will inevitably see growing social and economic inequities. Artificial intelligence has the potential to be the greatest equalizer or the greatest divider of our time. Ultimately, our collective future as humans depends on the decisions we make as a global community today.
Our vision at HealthAI is one of a world where artificial intelligence produces equitable and inclusive improvements in health and well-being for all individuals and communities. Our approach is about helping to ensure safe, quality, equitable, and effective AI tools for all when it comes to health. We are laser-focused on our mission so patients and families worldwide can have healthier futures. That’s the future we are working for. It’s time for all of us to join efforts to ensure we get there together.
