Innovations in healthcare need to be discussed as never before, and the footnotes and small print in the user manuals of health apps need to be read carefully by all of us. Instead of simplifying or demonizing digital healthcare, we need more research and deeper debates. Conclusions and questions after this year's re:publica conference on internet and society (Berlin, 6-8 May 2019).
Capitalism: Is there a place for equity and solidarity in (digital) healthcare?
“A key challenge is to ensure that all people enjoy the benefits of digital technologies for health. We must make sure that innovation and technology helps to reduce the inequities in our world, instead of becoming another reason people are left behind. Countries must be guided by evidence to establish sustainable harmonized digital systems, not seduced by every new gadget,” writes Dr Tedros Adhanom Ghebreyesus, Director-General of the World Health Organization, in the foreword to the latest report “WHO Guideline: recommendations on digital interventions for health system strengthening”. The WHO calls for #HealthForAll. But equal access to healthcare in the era of digitalization is under threat. Healthcare is too complex for its problems to be solved in a simple way.
Digital health is not a silver bullet – although it creates new opportunities for distributing health services, new challenges arise. Digital natives profit from brand-new smartwatches that monitor heart health or blood glucose; in modern clinics designed in Silicon Valley, patients have access to the newest telemedicine innovations for early disease detection. Some say that sooner or later digital health will also be adopted in public health systems and become available to everyone. But the digital revolution is exponential – those with better digital literacy, education and financial status will always have access to better care supported by the newest innovations. Wearables will be followed by DNA editing, injectable sensors and robots that assist patients.
Let’s be realistic: it is estimated that hundreds of millions of people globally need glasses but don’t have them (they can’t afford them); modern, personalized prosthetics are affordable for only a small percentage of disabled people; the first dental implants were created in 1965, but even today many public systems don’t reimburse them – not everybody can pay a few thousand euros to improve their quality of life. We can expect the same in digitalization. Inequalities are present in democracy, and tend to grow where the invisible hand of the market sets the rules. Even in social health systems supported by digital health innovations, solidarity is a pure myth. So how can we tackle it?
Dr Algorithm: The patient in the world of calculated health risks and rationalized behavior
We don’t have to agree, but it’s a matter of time: Artificial Intelligence already outperforms physicians in screening X-rays for certain diseases and spotting abnormalities in the human body; soon it will be better at diagnosing patients and prescribing personalized treatment plans. While AI can analyze millions of data sets in a second, a human doctor would need weeks to complete the same task. Computing power doubles approximately every 1.3 years. The data generated on every patient rises exponentially. Algorithms suggest what to buy on Amazon or which information might be worth reading on Facebook; in medicine, they calculate possible complications during surgery or a patient’s health risk. Data gathered in electronic health records reflects our behavior, habits and lifestyle. Some insurance companies reward patients for physical activity. Omnipresent sensors and smartphone apps monitor vital signs and performance.
AI sees patients through data sets, not through personal narration. A glass of wine or a piece of cake – small daily pleasures – is converted into calories that lower our health score. We are still far away from China’s social credit system, in which every person has a personal score evaluating how good or bad a citizen he or she is. But rising healthcare costs bring pressure to tighten expenditures and introduce savings. Efficient, personalized prevention at the cost of privacy is unavoidable. In 2016, Stephen Hawking said that the creation of AI might be not the best but the worst thing for humankind.
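To make the "health score" idea concrete, here is a minimal sketch of how an algorithm might reduce lifestyle data to a single number. The scoring rules and weights are entirely hypothetical – no real insurer's formula is implied:

```python
# Toy illustration (hypothetical rules): how an algorithm might convert
# behavioral data into a single "health score". Weights are invented.
def health_score(daily_steps, wine_glasses_per_week, cake_slices_per_week):
    score = 100.0
    score += min(daily_steps / 1000, 10)      # reward activity, capped
    score -= 2.0 * wine_glasses_per_week      # small daily pleasures...
    score -= 1.5 * cake_slices_per_week       # ...become penalties
    return max(0.0, min(100.0, score))        # clamp to 0-100

# An active person who still enjoys wine and cake loses points:
print(health_score(8000, 3, 2))  # → 99.0
```

The point of the sketch is not the arithmetic but the reduction: emotions, context and personal narration vanish, and only the countable remains.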
Techno-religion and the cyber world: Better and longer life in quantified-self societies
For many, digital health is more than digital devices – it’s an ideology, a post-modern philosophy. On the cover of The Economist, Steve Jobs was once presented as a messianic magician under the title “The Book of Jobs: Hope, hype and Apple’s iPad”. Technology is supposed to improve our lives and save the world. Silicon Valley startups play the role of new messiahs, entrepreneurs with superpowers. Every Apple conference and product presentation is like a spiritual experience in which ecstasy mixes with desires and expectations. The Apple Watch Series 4, announced in September 2018 with an FDA-cleared ECG, was supposed to “change the history of medicine”. Theranos founder Elizabeth Holmes promised a breakthrough in medicine. Many believed her blindly and invested millions of dollars to support her vision (the product didn’t even exist). Fascination with new technologies converts into technology populism, in which rational thinking is supplanted by trust and admiration. The quantified self – the phenomenon of tracking one’s vital signs – exemplifies the new health tech culture: patients now have the power to control their own health and are no longer dependent on physicians. In a perfect digital world based on binary scales, emotions, personal needs, fears and expectations play no role. The impact of digital technology on mental health is still underexplored, but the negative side effects are already visible in highly technologized societies such as Japan. Isolation, loneliness and constant competitiveness powered by measurement have consequences.
Medical fake news, hoaxes and bias in media, Google searches, filter bubbles: Access to knowledge under threat
Let’s face it: the Internet is full of bullshit. Democracy on the World Wide Web means – whether we like it or not – the freedom to express opinions. Many of them are disseminated anonymously or under a falsified identity. Different motivations stand behind hoaxes on the internet and social media: making money, dividing and disrupting, changing minds, feeling part of a group. Anti-vaccine campaigns are based on conspiracy theories, pseudoscience or manipulated evidence. On social media they find a huge audience of anti-vaxxers who believe that the only beneficiary of vaccination is the pharma industry. According to the WHO, vaccine hesitancy is one of the ten biggest threats to global health. Measles cases have increased by 30% globally. Medical fake news causes not just disinformation but can lead to bad decisions, harm to health or even death. Recently Facebook declared it would remove anti-vaxx groups from ads and recommendations, making it harder for users to find such pages.
When people look for a diagnosis on the Internet, Google’s algorithms show the most popular results, not the evidence-based ones. A lack of digital (health) literacy sometimes causes patients to trust “Dr Google” or anonymous opinions on social media more than a doctor. Facebook’s algorithms lock users in filter bubbles, so that step by step they form opinions based on what is displayed on their social media feed. Many point to digital education as a panacea for these challenges. But fake news is improving quickly, making it ever harder to distinguish true from false. A video of a well-known scientist talking about the harmfulness of vaccination? Deep learning makes it possible. A so-called “deepfake” is a realistic synthetic video created using computer-generated imagery powered by artificial intelligence.
Tl;dr: Digital literacy, transparency and information overload
Too long; didn’t read (tl;dr) – the leading theme of re:publica 2019 – relates well to health literacy in the digital era. How can science be made as interesting as pseudoscience built on emotions and personal stories? How can faith in valuable knowledge be restored amid the information noise? While medical universities and healthcare organizations produce thousands of pages of scientific papers, the knowledge remains in silos of the experts who understand it. We face information overload: the average attention span of a human being is 8 seconds. For a goldfish it’s 9 seconds… The terms and conditions of digital health apps are too long, and most users never read them.
People consent to third parties using their personal health data – and to algorithms processing it – often without their knowledge. Some say there is no danger as long as the data are anonymous. Experts counter that anonymized data can easily be “deanonymized” using, for example, location tags. There are over 318,000 mobile health apps. The market remains out of control – anybody can create a health prevention or behavior change app with no obligation to validate it. “If it’s available in the Apple or Google store, it doesn’t mean it’s reliable.” How can we ensure the quality of digital health innovations? How can we make algorithms transparent, so that people are told why a symptom checker gave them a particular diagnosis without having to understand machine learning at all?
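The "deanonymization" experts warn about is often a simple linkage attack: joining anonymized records to a public data source on a quasi-identifier such as a location tag. A toy sketch with invented data and field names (no real dataset is implied):

```python
# Toy linkage attack (hypothetical data): "anonymized" health records
# re-identified by joining on a shared location tag.
anonymized_records = [
    {"location": "52.5200,13.4050", "diagnosis": "diabetes"},
    {"location": "48.1351,11.5820", "diagnosis": "hypertension"},
]

# e.g. public check-ins scraped from a social network
public_checkins = [
    {"name": "Alice", "location": "52.5200,13.4050"},
    {"name": "Bob", "location": "48.1351,11.5820"},
]

# Join the two sources on the location quasi-identifier:
reidentified = {
    c["name"]: r["diagnosis"]
    for r in anonymized_records
    for c in public_checkins
    if c["location"] == r["location"]
}
print(reidentified)  # → {'Alice': 'diabetes', 'Bob': 'hypertension'}
```

Stripping names from a dataset is not enough: any attribute that is also visible elsewhere can serve as the join key.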
Digital ethics: Wrong questions about killing machines
The debate about ethics in digitalization has been dominated for years by the scenario of an autonomous car that – in an emergency – has to make a rapid decision between killing one group of people or another (for example, two children versus five adults). This is completely the wrong question to ask in the public discussion about robotics, Artificial Intelligence and algorithms. In healthcare, the typical ethical consideration concerns mistakes made by surgical robots. Who is liable? The manufacturer, the assisting doctor, or perhaps the patient who signed a consent form and was aware of the potential risks? Such scenarios are real, of course, but too marginal to absorb all the attention.
We have to ask first what kind of healthcare we expect in the future, where to set the boundaries between privacy and prevention, and how to guarantee patients the freedom to decide about their own data (including the right to be forgotten in electronic health record repositories). Can a machine make independent decisions about a patient’s health? What if the doctor’s opinion doesn’t match the algorithm’s decision? What do we expect from doctors in the future? How do we avoid the social unrest that follows from (digital) health inequities? Is it okay to leave an elderly person alone in the care of a robot? How do we eliminate discrimination or abuse through big data analyses? And finally, which values – social, human, economic – should be fundamental, regardless of technological innovation?
In the coming years, the wave of new technologies is going to transform healthcare and the work of medical professionals like never before in the history of humankind. We should worry more about changes in the patient-doctor relationship and the automation of care than about robots getting out of our control. The real ethical challenges have nothing to do with science-fiction problems.
Doctor or app: Whom do I trust, and how will AI change medicine?
“AI won’t ever replace doctors”. Really? Anyone who declares this today has a very idealistic vision of technologies in healthcare (or wants to stay politically correct). We cannot know, but we can foresee with high probability that Artificial Intelligence will surpass doctors in some competences. Beyond that, it’s hard to forecast whether digitalization will make healthcare cheaper and more effective, improve prevention, reduce medical errors, strengthen the quality of care or help us make better treatment decisions. Maybe we should fear the opposite scenario. When facing new challenges – ageing populations, the rising burden of non-communicable diseases and galloping costs – we tend to stay optimistic, sometimes even naive. The power of technologies in healthcare lies in augmenting doctors’ competences, abilities, qualifications, intelligence, capacities and performance. Nonetheless, healthcare professionals need to be ready to adapt to digitally driven changes. It’s impossible to control patients’ expectations – some will prefer an AI to make a diagnosis; for others, a doctor will remain irreplaceable. Let’s stop this meaningless argument, which blocks us from taking the next step forward. In healthcare systems based on democratic principles, patients are free to choose between an app and a doctor. In the end, what matters most is the quality of care and the efficiency of treatment, not the way the goal is achieved.