
Seventy-five percent of all machine-learning-enabled medical devices cleared by the US Food and Drug Administration (FDA) are used in medical imaging. Embedded in MRI, X-ray, ultrasound, and PET scanners, algorithms help detect abnormalities hidden among millions of pixels, protect patients from excessive radiation exposure, and automate radiology reporting.
How do doctors use algorithms in medical imaging? Why do many of them remain inaccessible to patients in Europe? To find out, I visited the Philips Customer Experience Center and Catharina Hospital in Eindhoven, the Netherlands.
Author: Artur Olesch | 15-minute read

How many times have you seen the image of a metallic robot in a doctor’s white coat? It is what you typically get when asking ChatGPT or Midjourney to visualize AI in medicine, or when searching online for such graphics. “AI in medicine is anything but that,” explains Dr Eva Deckers, Head of the AI Center of Excellence at Catharina Hospital, showing a photo of a doctor reviewing medical data and images floating in the air, just like Tom Cruise did in “Minority Report.”
Read an interview with Eva Deckers, “Hospital AI adoption must be guided by process design.”
So began my two-day journey through AI solutions in medical imaging, which I can summarize in one sentence: good AI is invisible AI. Surrounded by the newest MRI and PET scanners, I expected to be impressed by the latest AI advances; instead, I came to understand that it is people who give AI its meaning.

Counting down the waiting days
Modern diagnostics is buckling under its own success. Imaging devices are increasingly powerful, providing the detailed, high-resolution images of human organs that doctors have long sought. But with better image quality comes a large volume of data that must be viewed and interpreted. Human eyes cannot keep up with the rising resolution of medical images, now measured in micrometers, and the human brain lacks the cognitive capacity to connect the dots across the universe of health data.
The second challenge is that too many people still lack access to diagnostic services or wait months to undergo an MRI scan. This delays diagnosis and worsens outcomes. Patients face days and nights of uncertainty, fear, and big questions about their own and their families’ futures.
In Europe alone, patients may wait up to 100 days to see a specialist, far longer than the global average, according to the Philips Future Health Index. Globally, healthcare systems are projected to face a shortage of 10 million professionals by the end of this decade.
Healthcare workloads are rising, and healthcare professionals know exactly what’s wrong: bureaucracy and administrative tasks, necessary as they may be, are not why they chose to become doctors. It is unsurprising, then, that four out of five clinicians already say they hope AI can help close the widening gap between the demand for care and the capacity to deliver it.
Three-axis approach to protect against overload
There is no single AI system that will solve these problems overnight. Instead, we need a disciplined framework for AI-driven tools focused on usefulness, safety, and trust. All three pillars are embedded in Philips’ AI strategy, which centers on three principles: automation, augmentation, and agility.
Automation is about removing routine work. In cardiac ultrasound, for example, a scan might take twenty minutes, followed by another long stretch of manual measurements. AI can now perform those measurements automatically, reducing post-scan work to a single click. The clinician still interprets the results.
Augmentation means enhancing expertise. In image-guided cardiac procedures, Philips uses AI to fuse X-ray and ultrasound images into a single navigational view. The system tracks instruments within a beating heart in three dimensions, offering physicians a level of spatial awareness previously impossible.
Agility is about speed. AI systems can continuously monitor data and raise alerts when changes occur. In remote cardiac monitoring, algorithms can detect potentially life-threatening arrhythmias and notify care teams immediately, rather than days later when data are reviewed manually.
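To make the “agility” idea concrete, here is a minimal, hypothetical sketch of such an alerting loop in Python. It is not Philips’ implementation, and the thresholds are illustrative rather than clinical: a stream of heart-rate windows is checked against simple rules, and only flagged windows trigger a notification to the care team.

import statistics

TACHYCARDIA_BPM = 120   # illustrative thresholds only, not clinical guidance
BRADYCARDIA_BPM = 40

def check_window(bpm_samples):
    """Return an alert label for a window of heart-rate samples, or None."""
    mean_bpm = statistics.mean(bpm_samples)
    if mean_bpm >= TACHYCARDIA_BPM:
        return "possible tachycardia"
    if mean_bpm <= BRADYCARDIA_BPM:
        return "possible bradycardia"
    if statistics.pstdev(bpm_samples) > 25:   # high beat-to-beat variability
        return "irregular rhythm suspected"
    return None

def monitor(stream, notify):
    """Evaluate incoming windows continuously and notify the care team on alerts."""
    for window in stream:                     # each window is a list of bpm samples
        alert = check_window(window)
        if alert:
            notify(f"Remote monitoring alert: {alert}")

# Example: monitor(iter([[72, 75, 71], [152, 160, 155]]), print)

In a real deployment the detection model would be far more sophisticated, but the principle is the same: the algorithm watches constantly so that clinicians only hear about the cases that need them.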
Clara Sattler de Sousa e Brito, Head of the European Region at Philips, says the company’s 130-year history in healthcare engineering has reached a turning point. Philips now invests about 10 percent of its revenue in research and innovation, with roughly half of that directed toward artificial intelligence.

“You can’t think about innovation without AI these days,” says Patrick Mans, Head of Data Science & AI Engineering at Philips. AI has moved from an experimental add-on to the foundation of modern health technology. The integration is so complete that AI is no longer viewed as a separate category of technology, but as an inherent component of how healthcare solutions are designed, developed, and delivered. There is a phrase for this shift: “the end of moonshots.”
“We are moving away from ‘magic’ toward tangible, real-world applications that are already operational in hundreds of hospitals,” explains Mans.

There is a sort of magic in algorithms
My journey continues through several spaces where the company’s latest diagnostic devices are showcased, from X-ray and ultrasound to MRI and remote monitoring. At each stop, engineers and scientists explain how the algorithms work and what benefits they offer.
In magnetic resonance imaging, Philips uses AI in two complementary ways. One system accelerates image acquisition, allowing scans to be completed up to three times faster. Another automatically analyzes brain structures, comparing them with large reference datasets to identify subtle changes associated with conditions such as Alzheimer’s disease. Why does that matter?
“While new Alzheimer’s drugs can now slow the progression of the disease, they require patients to be scanned three to four times a year to monitor for side effects like microbleeds. Without AI to automate these readings, the global shortage of radiologists would lead to a massive backlog, effectively blocking access to life-changing treatments,” explains Nadine Reijnders, Clinical Product Management Leader MRI.
These microbleeds are often difficult to spot with the naked eye. Still, AI-enhanced tools such as SmartReading can automatically measure brain shrinkage and highlight early warning signs, with 85% better image quality.

Faster scans mean less time in the machine for patients and higher throughput for hospitals. At the same time, automated analysis enables earlier detection of disease patterns that might be missed by the human eye, particularly in early stages.
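The comparison with reference datasets mentioned above can be pictured as a normative check. The Python sketch below is purely illustrative, with hypothetical numbers and a strong simplification of what quantitative tools such as SmartReading actually do: a measured regional brain volume is converted into a z-score against an age-matched reference mean and standard deviation, and flagged when it falls well below the expected range.

def volume_z_score(measured_ml, reference_mean_ml, reference_sd_ml):
    """How many standard deviations the measured volume lies from the reference mean."""
    return (measured_ml - reference_mean_ml) / reference_sd_ml

def flag_atrophy(measured_ml, reference_mean_ml, reference_sd_ml, threshold=-2.0):
    """Flag volumes more than |threshold| standard deviations below the reference."""
    z = volume_z_score(measured_ml, reference_mean_ml, reference_sd_ml)
    return z <= threshold, z

# Hypothetical example: a hippocampal volume of 2.9 ml against an age-matched
# reference of 3.6 ml with a standard deviation of 0.3 ml gives z ≈ -2.3 and is flagged.
flagged, z = flag_atrophy(2.9, 3.6, 0.3)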
In ultrasound, AI-driven measurement tools automatically identify anatomical landmarks and select the correct frames for analysis. This reduces variability among operators and cuts measurement time by approximately half. The tool, AutoMeasure, is also referred to as a “magic button.” Previously, after a 20-minute scan, a sonographer would have to spend significant time manually clicking on pixels and frames to calculate chamber size and blood flow.
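The time saving comes from automating exactly those steps. As a hypothetical illustration rather than the AutoMeasure algorithm itself: once the AI has picked the end-diastolic and end-systolic frames and estimated the chamber volumes, the headline number is simple arithmetic.

def ejection_fraction(edv_ml, esv_ml):
    """Left-ventricular ejection fraction from end-diastolic and end-systolic volumes."""
    if edv_ml <= 0:
        raise ValueError("end-diastolic volume must be positive")
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# With an end-diastolic volume of 120 ml and an end-systolic volume of 50 ml,
# the ejection fraction is about 58%.
ef = ejection_fraction(120, 50)

The hard part, finding the landmarks and frames reliably in noisy ultrasound data, is what the AI takes over; the final calculation has always been the easy bit.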

Even in X-ray imaging, a field that might seem technologically mature, AI can still make a difference. Using a simple three-dimensional camera, Philips’ Smart Collimation Thorax (SCT) system can automatically position detectors and optimize collimation for chest imaging. What once depended on individual technique has become more consistent, faster, and safer, thereby reducing unnecessary radiation exposure.
“Human operators achieve an ‘optimal’ chest X-ray in only 20% of cases, whereas the AI-enabled system achieves this benchmark 80% of the time.”

The longest stop is where I experience how AI begins to support clinical decision-making during the procedure itself. It’s DeviceGuide, Philips’ AI-powered system for one of cardiology’s most complex tasks: repairing a leaking mitral valve without open-heart surgery. For the first time today, I have to ask many questions to make sense of the interconnected imaging devices, tools, and screens occupying the space of a large living room.
“Traditionally, interventional cardiologists repairing a leaking mitral valve must rely on multiple flat, two-dimensional X-ray and ultrasound screens, mentally reconstructing a three-dimensional anatomy while navigating a moving target,” explains an operator. DeviceGuide automatically recognizes and tracks the implant in real time, transforming complex imaging streams into a single, shared three-dimensional navigational view. It continuously displays the device’s location, direction, and orientation within the heart.
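Conceptually, fusing the two modalities means expressing everything in a single coordinate frame. The Python sketch below is a generic illustration of that idea, not DeviceGuide’s implementation: a device tip located in the ultrasound probe’s coordinate system is mapped into the X-ray (table) coordinate system with a rigid transform, so both views can refer to the same 3D point.

import numpy as np

def rigid_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def to_xray_frame(point_us, T_us_to_xray):
    """Map a 3D point from the ultrasound frame into the X-ray frame."""
    p = np.append(point_us, 1.0)              # homogeneous coordinates
    return (T_us_to_xray @ p)[:3]

# Hypothetical calibration: probe rotated 90 degrees about the z-axis, shifted 100 mm.
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t = np.array([100.0, 0.0, 0.0])
tip_in_xray = to_xray_frame(np.array([5.0, 2.0, 30.0]), rigid_transform(R, t))

In the real system this registration must be estimated and updated continuously as the heart beats and the probe moves, which is precisely where the AI earns its keep.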

Using X-ray, ultrasound, MRI, or CT imaging, doctors can guide miniature instruments through openings as small as a pinhole. For patients, the procedure is less invasive, which means faster recovery and no long scars.
Infrastructure nurtures AI. Invisible, and therefore ignored
I board the bus and head to Catharina Hospital to see how sophisticated tech is used in clinical settings. Doctors are pragmatic. Every new tool must bring added value to the procedure or care. An AI tool that requires additional logins or forces clinicians to leave their primary EHR will not be used, regardless of its accuracy. Instead, AI must disappear into existing workflows. The hospital has even developed the “silent ICU” concept, using AI to reduce alarm fatigue by ensuring that only the most critical alerts reach the healthcare professional closest to the patient.
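The “silent ICU” can be pictured as a filtering and routing layer on top of the raw alarm stream. The Python sketch below is a hypothetical illustration of that logic, not the hospital’s actual system: alarms below a severity threshold are logged for later review, and only critical ones are pushed to the clinician assigned to, and therefore closest to, that patient.

from dataclasses import dataclass

@dataclass
class Alarm:
    patient_id: str
    message: str
    severity: int            # e.g. 1 = advisory ... 5 = life-threatening

CRITICAL_SEVERITY = 4        # illustrative threshold

def route_alarms(alarms, nearest_clinician, notify, log):
    """Push only critical alarms to the clinician closest to the patient; log the rest."""
    for alarm in alarms:
        if alarm.severity >= CRITICAL_SEVERITY:
            clinician = nearest_clinician(alarm.patient_id)
            notify(clinician, f"[severity {alarm.severity}] {alarm.patient_id}: {alarm.message}")
        else:
            log(alarm)       # kept for audit and trend analysis, but no bedside noise

# Example wiring (names are placeholders):
# route_alarms(alarm_stream, lambda pid: assignments[pid], notify=send_to_pager, log=audit.append)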
Before artificial intelligence can do its work in a hospital, the hospital must first build a solid foundation: clean, well-structured, and representative data. This is precisely what makes AI particularly challenging to deploy. Hospitals must integrate imaging systems, electronic records, and monitoring devices into a coherent whole, and building such AI-ready infrastructure can be costly and take years. It is a task that demands high-level expertise and data engineers that not every hospital can afford.
Catharina Hospital has become a reference point for others, but its success did not come overnight. It reflects years of consistent strategy and future-oriented leadership, with the hospital’s AI Center of Excellence serving as a key driver. As a result, healthcare professionals there now see AI as something natural; they even develop their own algorithms.
One of the hospital’s most successful in-house developments is an AI algorithm for oncology. The tool automates the “delineation” step in breast cancer radiation planning: drawing precise contours around the tumor and the surrounding organs at risk. A manual task that traditionally took about 40 minutes now takes roughly three to twenty minutes with AI, enabling more efficient treatment planning.

Another example is the FACE-Q AI project. Developed by plastic surgeon Dr Maarten Hoogbergen, it uses AI to assist patients undergoing reconstructive surgery after skin cancer. The algorithm, trained on a database of prior patient photographs and clinical outcomes, generates visual simulations of how a patient’s face might look after various reconstruction procedures. This supports shared decision-making by helping patients weigh their personal preferences for cosmetic results against recovery time.

The list is much longer: an award-winning predictive model that identifies elderly patients at high risk of falling; a multimodal ICU data platform that integrates monitoring, ventilation, and brain support data to predict patient deterioration, enabling staff to intervene before a crisis occurs. The hospital’s inventiveness is evident throughout. Virtual reality is used to train medical students, while a pediatric coaching app helps reduce children’s distress before MRI scans. The people here seem to know exactly how to extract the greatest value from technological progress. And it feels good.
Europe. It’s not a continent for med tech
However, Catharina Hospital is the exception rather than the rule. Engaged teams and an organizational culture of innovation drive its progress, yet regulatory and reimbursement hurdles continually hold it back.
Europe is also where new technologies face their steepest barriers and where health systems reward volume over value.
Under the current Medical Device Regulation (MDR), bringing a new device to market can take four times longer than before and cost several times more. For AI-driven products, this complexity is compounded by overlapping regulatory frameworks governing data protection and algorithmic accountability.
The result is alarming: European companies such as Philips often launch innovations elsewhere first, typically in the USA, where approval can take months rather than years. Patients wait longer for technologies developed in their own region. Philips argues that part of the problem lies in how value in healthcare is assessed. Procurement systems often focus on upfront costs rather than long-term system benefits. Time saved, errors avoided, and staff burnout reduced rarely appear on balance sheets.
Without new methods for measuring the economic and societal impact of health technologies, methods that align with AI’s capabilities, innovation won’t scale up; simply put, it won’t benefit patients and healthcare professionals as it could. Philips wants to change this by leading data-driven efforts to quantify how technology improves efficiency across society, from reducing hospital stays to enabling people to return to work sooner. The company also works with university hospitals, research institutions, and policymakers to streamline regulation and build shared data resources, including large imaging databases such as the Cancer Image Europe initiative.
“The aim is to remove friction from the system so innovation can move more quickly from the lab to the bedside,” highlights Sattler de Sousa e Brito.
AI raises the bar of medical expertise
AI remains unevenly distributed. For healthcare systems, it still feels like a bonus: “nice” but not “necessary.” If nothing changes, AI-driven innovations will remain confined to flagship university medical centers and will rarely reach rural facilities, where they are needed most.
Before leaving Eindhoven, I meet Shez Partovi, Chief Innovation and Strategy Officer at Philips, to get the big picture on AI. He is convinced that, as surveys confirm, clinicians want automation to remove tasks that do not require their training. They seek augmentation that provides sharper insight and greater confidence in decision-making. And they want agility: systems that reduce reaction time across entire care pathways, not just isolated steps.

Yet Partovi is explicit about what healthcare does not want: more errors. Human supervision, he insists, remains essential. AI must operate within clear guardrails, with clinicians always in control, in the loop.
Partovi points to a future that is only beginning to take shape, and AI agents – systems that perform defined tasks autonomously under supervision – seem to be the next stage of this evolution. In other industries, AI agents already operate around the clock, reporting their progress to human teams each morning. Healthcare, he suggests, will follow. Early uses are likely to be administrative rather than clinical, handling tasks that today go undone because there are not enough people to do them. Over time, this could reshape what a care team looks like, with humans and AI systems working side by side, each focused on what they do best.
Over the past few hours, I’ve heard at least ten times: “AI won’t replace clinicians, but it will change how they work.” It’s become a standard disclaimer that anyone developing AI in healthcare feels obliged to say. Almost as if creating AI were something to be ashamed of, a kind of sin.
I understand its roots: the wish to ensure that the human touch in medicine does not disappear. However, in some regions there is already nothing to replace, because patients have no chance of seeing a doctor at all. According to the WHO, 4.6 billion people lack access to essential health services. So every time you hear the phrase “AI will not replace doctors,” you can count yourself among the lucky ones.
I agree with Partovi, who concludes that AI will make deep medical expertise even more critical. The better clinicians understand their domain, the more effectively they can work with intelligent systems. AI, in this view, raises the bar – a positive message that should dominate the AI debate.


Multimedia:
Listen to an 18-minute AI-generated podcast created from the notes and interviews I collected during my visit to Catharina Hospital and the Philips Customer Experience Center.
Watch a short video about AI in medical imaging and doctors’ daily work. It’s also AI-generated, but it’s a good wrap-up.
Disclaimer: This article is independent and non-sponsored, based on a press visit to Philips in November 2025. All images courtesy of Philips. AI-generated video and audio were created with NotebookLM.