Noise In Medicine

Daniel Kahneman is the world’s most influential living psychologist

“The treatment a patient gets might be a product of a lottery,” says Prof. Daniel Kahneman. I interviewed the Nobel Prize winner and co-author of “Noise: A Flaw in Human Judgment” to explore the nature of noise in decision-making.

How big is the problem of “noise” in healthcare?

There are many examples in the book that convey the pervasiveness of noise: variability in interpreting angiograms, in diagnosing endometriosis or tuberculosis, and disagreement among pathologists analyzing skin lesions for the presence of melanoma. Radiologists often assess the same image differently. One study found that two psychiatrists who independently diagnosed 426 patients agreed in only 50% of cases. The existence of noise is not surprising, but its magnitude is enormous.

Many medical errors, which often remain under-researched, stem from errors in judgment.

I think that healthcare professionals are pretty aware of the noise in medicine. It’s obvious that when two doctors give you different diagnoses, at least one of them is wrong.  In medicine, variability in judgments is very problematic—it is unfair for two patients with the same disease to be treated differently. A healthcare system where professional judgments are inconsistent can lose credibility and trust.

In our book, we give the example of a patient diagnosed with high blood pressure. He was prescribed three different medications, but none of them worked. When the patient moved to another city and was seen by a new primary care doctor, he was diagnosed with white coat syndrome: his blood pressure was normal at home but high at the doctor’s office simply because he was stressed there.

Many diagnoses are routine, and some do not involve judgment at all. For example, if a patient has a fasting blood sugar level of 126 milligrams per deciliter or higher, or an HbA1c of at least 6.5%, the diagnosis is diabetes. But when it comes to more complex decisions that involve judgment, noise can appear.
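
To make the contrast concrete, here is a minimal sketch in Python of what such a purely mechanical rule looks like, using the thresholds mentioned above; the function name is mine, and a real diagnosis would of course also involve confirmatory testing and clinical context.

```python
def meets_diabetes_criteria(fasting_glucose_mg_dl: float, hba1c_percent: float) -> bool:
    """Purely mechanical rule: no judgment involved, hence no noise.

    Thresholds are the ones cited in the interview (fasting glucose
    >= 126 mg/dL or HbA1c >= 6.5%); this sketch is illustrative, not
    a clinical tool.
    """
    return fasting_glucose_mg_dl >= 126.0 or hba1c_percent >= 6.5


# Any two clinicians applying this rule to the same lab values will agree.
print(meets_diabetes_criteria(131.0, 6.1))  # True  (glucose criterion met)
print(meets_diabetes_criteria(110.0, 5.9))  # False (neither criterion met)
```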

There have already been many efforts to make medicine more consistent and evidence-based by implementing procedures, diagnostic guidelines, and rules. The shift from judgment to calculation can reduce the level of noise. However, some diagnoses cannot be reduced entirely to computation, so there will always be an element of noise.

The truth is that healthcare has managed to address noise much better than, for example, the justice system has.

Is there a method to spot the level and type of noise in healthcare settings?

The visible marker of noise in healthcare is the amount of disagreement, which can be generated either by level noise or by pattern noise. To detect the amount of noise in a system, you have to run an experiment in which several physicians make multiple, independent judgments of the same cases. This is a noise audit, which aims to measure the variability of judgments in, for example, a radiology department. If radiologists’ assessments of the same images vary, there is noise.

The noise audit can then be used for a more detailed analysis to determine where the noise comes from.

There are different types of noise that a noise audit can help detect. Occasion noise, however, is more challenging to measure; it results from transient factors such as the weather or the judge’s mood. Our book describes a study demonstrating that doctors are more likely to order cancer screenings early in the morning than late in the afternoon. Similar variations also apply to hand-washing in hospitals. In these cases, stress and fatigue can play a significant role. So, as you can see, occasion noise is hard to spot; however, it can be reduced.
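
To make the idea of a noise audit concrete, here is a minimal sketch in Python with entirely hypothetical numbers. It assumes several radiologists rate the same cases on a common severity scale and splits the overall variability into level noise (stable differences between judges’ average ratings) and pattern noise (the residual case-by-judge variability, which includes occasion noise), following the decomposition described in the book.

```python
import numpy as np

# Hypothetical noise audit: 4 radiologists each rate the same 6 cases
# on a 0-100 severity scale (rows = judges, columns = cases).
ratings = np.array([
    [62, 35, 78, 50, 41, 66],
    [70, 44, 85, 58, 52, 72],
    [55, 30, 72, 45, 38, 60],
    [66, 52, 80, 63, 47, 75],
], dtype=float)

case_variances = ratings.var(axis=0)           # disagreement on each case
system_noise = np.sqrt(case_variances.mean())  # overall noise, in rating units

judge_means = ratings.mean(axis=1)             # each judge's average rating
level_noise = judge_means.std()                # stable judge-to-judge differences

# Pattern noise is what remains after removing each judge's overall level:
# system_noise**2 == level_noise**2 + pattern_noise**2 (population variances).
pattern_noise = np.sqrt(system_noise**2 - level_noise**2)

print(f"system noise:  {system_noise:.1f}")
print(f"level noise:   {level_noise:.1f}")
print(f"pattern noise: {pattern_noise:.1f}")
```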

How can that be done?

In the book, we suggest so-called decision hygiene, a preventive measure against an unidentified enemy. One application of decision hygiene is replacing judgment with rules or algorithms. Don’t get me wrong: I’m not suggesting replacing doctors with algorithms. Algorithms won’t take over medicine. Individuality and human values in healthcare are essential. It’s about improving judgment.

The fact is that AI is increasingly able to deal with complex problems. In some domains, such as dermatology, retinal disease diagnostics, and radiology, the accuracy of AI algorithms is already similar to or higher than that of doctors. The rate of AI development is enormous, and we can expect these systems to become superior in the future. As a result, the medical profession will rely on algorithms more and more.

Whenever there is a tradition of individual autonomy in a profession, that makes efforts to reduce noise more difficult.

So what would you advise doctors who are already aware of noise and want to minimize its impact on their judgment?

The simplest guidance would be: if you are tired, you will make poorer decisions. I’m aware, of course, that doctors have demanding schedules and want to help patients whenever help is needed.

Continuous learning and building skills are critical. When decisions get complex, aggregating multiple expert judgments and holding frequent discussions of particular cases can help. I mean a discussion with a particular structure, in which every doctor first proposes a diagnosis independently. From that point on, you can start measuring the level of noise, and there is room for dialogue about the best procedure for reaching the diagnosis. It’s a matter of organizing the conversation in a way that illuminates the problem.
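
One reason independent judgments help, even before any discussion starts, is purely statistical: averaging n independent, equally noisy judgments reduces the noise by a factor of roughly the square root of n. A small simulation with made-up numbers (not data from the book) illustrates the effect.

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 50.0   # the "correct" assessment of a case
noise_sd = 10.0     # noise in a single expert's judgment
n_experts = 4
n_cases = 100_000

# Each case is judged independently by n_experts equally noisy experts.
judgments = true_value + rng.normal(0.0, noise_sd, size=(n_cases, n_experts))

single_sd = judgments[:, 0].std()           # noise of one expert alone
averaged_sd = judgments.mean(axis=1).std()  # noise of the averaged judgment

print(f"single judgment SD: {single_sd:.2f}")   # close to 10
print(f"average-of-4 SD:    {averaged_sd:.2f}")  # close to 10 / sqrt(4) = 5
```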

Medicine already has a variety of simple tools in place; I mean guidelines. For example, the Apgar score allows physicians and midwives to evaluate the health of newborns right after birth. It provides clear indicators, simplifying each cue-level judgment. The Apgar score is an example of structuring a judgment by breaking it down into a set of smaller tasks. This way, physicians are less likely to ignore information that does not fit a preexisting diagnosis.
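
As an illustration of that kind of structuring, here is a minimal sketch of how the Apgar score aggregates five separately judged cues (appearance, pulse, grimace, activity, respiration), each scored 0 to 2, into a total of 0 to 10. The function name is mine and the snippet is illustrative, not a clinical tool.

```python
def apgar_score(appearance: int, pulse: int, grimace: int,
                activity: int, respiration: int) -> int:
    """Sum five cue-level judgments, each scored 0, 1, or 2.

    Breaking the overall assessment into small, separately judged
    components is what structures the judgment and limits noise.
    """
    components = (appearance, pulse, grimace, activity, respiration)
    if any(c not in (0, 1, 2) for c in components):
        raise ValueError("each Apgar component must be 0, 1, or 2")
    return sum(components)


print(apgar_score(2, 2, 1, 2, 1))  # 8, a reassuring overall assessment
```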

There are many more scores that can contribute to reducing noise. And there is strong evidence that they work: medical guidelines reduce variability, and that is why they are beneficial.

Medicine faces one more problem: objective ignorance. Doctors sometimes tend to rely more on their intuition and experience than on objective facts. Is there a way to use intuition more wisely?

Medical diagnosis is one example of expert intuition that can be trusted. There is good evidence that physicians with expertise in particular diseases are able to make diagnoses very quickly and accurately just by following their intuition. We have specified a list of conditions that are required for intuitive decision-making to be trustworthy, among them experience and the complexity of the problem. If those critical criteria are not met, people should be careful about their intuitions.

One of the biggest dangers is that if physicians have expertise in one domain, they may feel that they also have expertise in another area when they actually don’t. Such confidence can be very deceptive.

It won’t be easy since it’s not about changing procedures, but mostly about transforming the work culture.

There will be resistance every time there are attempts to reduce noise. In the justice system, despite the high level of noise, there is a systematic refusal to address it. However, in most business organizations, once the problem is detected, managers try to do something about it in order to improve quality and results.

As you can see, wherever there is a tradition of individual autonomy in a profession, efforts to reduce noise become more difficult.

My guess is that physicians will be relatively willing to accept noise-reducing initiatives, because they already use tools like scores and guidelines to improve the accuracy of their work. Nonetheless, I expect some challenges regarding doctors’ acceptance of algorithms. The truth is that if we want to make better decisions, we should take noise reduction seriously.

In the bestseller “Thinking, Fast and Slow,” you describe two modes of thought: “System 1,” which is fast, instinctive, and emotional, and the slower, more logical “System 2.” Following that logic, noise would seem more likely to arise in the intuition-based System 1. Is there a specific rule, or does noise relate to both systems of thinking?

There is no general rule. It depends on the level of expertise and knowledge, and on the quality of the feedback. I would highlight one factor in particular: when people do not receive good feedback, they can become noisy because they grow more confident than they should be.

What is the key conclusion of “Noise: A Flaw in Human Judgment”?

Well, it is the same conclusion that led us to write this book: there is a noise problem wherever there is judgment. The first step in dealing with noise is to make people aware of it. The second step is to measure the noise in an organization. And the third is to try to do something about it.

We discussed judgment as a particular case of measurement, and the relevant theory is the theory of measurement, in which noise and bias play equivalent roles. In many situations, noise is a more important source of error than bias. Yet bias is widely recognized, while noise is often not viewed as a major problem. In the book, we try to correct this asymmetry.
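
The equivalence can be stated in one line: for a set of judgments of the same quantity, the mean squared error equals the squared bias (the average error) plus the squared noise (the standard deviation of the errors). A tiny numerical check, with made-up numbers, is below.

```python
import numpy as np

true_value = 100.0
judgments = np.array([104.0, 96.0, 110.0, 99.0, 93.0, 107.0])  # hypothetical

errors = judgments - true_value
mse = np.mean(errors**2)  # mean squared error
bias = errors.mean()      # average error
noise = errors.std()      # standard deviation of errors (population SD)

# Error equation: MSE = bias**2 + noise**2, so bias and noise
# contribute to overall error in exactly the same way.
print(f"MSE:              {mse:.2f}")
print(f"bias^2 + noise^2: {bias**2 + noise**2:.2f}")
```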

