
How Google, YouTube, TikTok, and ChatGPT educate and misinform about health.
Every minute, Google processes 70,000 health-related queries. Every day, millions watch TikTok shorts filled with questionable health advice. Whether seeking support online or simply scrolling, patients often find themselves trapped in a labyrinth of misinformation—one that can be dangerous, even deadly.
Influencers playing physicians
In 2023, news spread about a teenager who died after taking 14 over-the-counter pills containing diphenhydramine. This tragic event wasn’t an accident but rather the result of a TikTok trend called the ‘Benadryl Challenge.’ The ‘game’ involved consuming large doses of the drug to induce hallucinations, with participants recording their reactions.
Such challenges are not rare on TikTok. Some may be amusing yet harmless, like placing potatoes in socks overnight to remove toxins from the body. However, others pose serious health risks, such as practicing intense yoga in 40-degree Celsius heat (#HotYoga), taping one’s mouth shut to stop snoring (#mouthtaping) or drinking borax—a popular household cleaner—to boost energy levels (#boraxchallenge).
With 1.9 billion users, TikTok is the primary social media platform for Generation Z, with 43% of its users aged 18-24. On average, young people spend 58 minutes daily consuming its content. A study by PlushCare found that 83.7% of mental health advice on TikTok is misleading, and 14.2% is potentially dangerous. Young people often blindly trust influencers with millions of followers, sometimes even diagnosing themselves with conditions they don’t actually have. Why? Because social identity—the desire to belong to a group—can be stronger than rational analysis of the group’s actions.
A prime example of “cyberchondria” is ADHD, which has dominated TikTok hashtags with billions of views. The disorder gained sudden and problematic popularity when celebrities began claiming they had ADHD, boosting their appeal and reinforcing a hyperactive, high-energy persona. Their followers quickly jumped on the trend, leading to a widespread self-diagnosis phenomenon. One study found that 92% of videos tagged #adhdtest shared inaccurate or misleading information, while another suggests that 44% of health-related videos posted by non-medical influencers contain misinformation.
ChatGPT: Lifesaver and liar
According to new Eurostat data, 60% of Europeans aged 16-30 use ChatGPT. In the study “User Intentions to Use ChatGPT for Self-Diagnosis and Health-Related Purposes,” 78% of users admitted they would consult ChatGPT for a diagnosis if needed.
ChatGPT impresses with its conversational abilities. For the first time, artificial intelligence (AI) can communicate like a human, write essays, or even complete homework. The latest model, GPT-4o, easily passes medical exams, scoring 94% on the UK medical licensing test and 90% on the US equivalent.
In 2023, a story made headlines about a 4-year-old boy whose chronic pain confounded 17 doctors over three years. In a last-ditch effort, his mother entered his test results into ChatGPT, which correctly identified a rare disease: tethered cord syndrome.
Of course, this is just one side of the story. ChatGPT frequently hallucinates and can spread misinformation—often persuasively. It confidently presents fabricated facts with seemingly logical reasoning, making it easy to believe. Available statistics indicate that people most commonly ask ChatGPT about diseases, symptoms, treatments, and medical terminology. Alarmingly, an Australian study found that 61% of users ask AI about health problems that require a doctor’s visit. While no specific data exists yet on delayed treatments or incorrect AI-driven diagnoses, such cases are emerging and will likely become more common as GenAI grows in popularity. Currently, only about 5% of the world’s population uses ChatGPT.
Good old Dr. Google is getting better, but it’s still not a medical encyclopedia
Approximately 7% of all Google searches relate to health. According to statistics, users primarily seek basic information. In 2024, the most searched health-related questions included:
- Is bronchitis contagious?
- Is pneumonia contagious?
- What is lupus?
- How much water should I drink daily?
- Is strep throat contagious?
- How long does the flu last?
- What causes high blood pressure?
People search for nutrition tips, diet advice, sleep recommendations, and home remedies for the flu. It’s fantastic that such information is so easily accessible online. Google plays a crucial role in health education, disease prevention, and patient empowerment. Some even call it the “democratization of health.” But imagine a world without Google, where people go to a doctor for every single question—the healthcare system would collapse within days. Yet, Google is also the largest misinformation machine and a key factor in eroding patient trust in doctors.
A study of 2,000 Americans found that two-thirds of people who Google their symptoms receive an incorrect diagnosis of a severe illness. For instance, searching “headache” often brings up results about brain cancer, while “fatigue” links to thyroid disease and anemia.
Google was never intended to be a tool for diagnosing diseases, but over time, it has become just that. Alphabet Inc., the company behind the world’s most popular search engine, recognizes the issue and has taken steps to address it. Health-related searches now increasingly direct users to trusted medical institutions rather than unreliable sources. The tipping point came during the COVID-19 pandemic when Google supported public health efforts by redirecting searches about the disease to trusted healthcare organizations’ websites and official, verified sources.
Still, Google is not a medical encyclopedia. Its search algorithms consider factors beyond the quality of content, such as citation frequency, page layout and speed, mobile friendliness, user experience, etc. A study from Edith Cowan University confirmed this: only 36% of top Google search results for symptoms provided correct diagnoses.
Big tech is entering health with return-on-investment priorities
Social media, Google, and AI chatbots have both pros and cons. Without them, patients might visit doctors for every minor issue. Self-diagnosis on the internet has also saved lives, prompting alarmed patients to consult professionals early, before their condition worsened. Videos on TikTok also promote a healthy lifestyle and motivate people to eat healthier and exercise more.
Yet, these tools often do more harm than good. Social media is rife with misinformation, fostering anti-vaccine and anti-science movements while influencers promote harmful content. It’s not new: TikTok, Instagram, Facebook, and other platforms are full of tips on how to get Ozempic without a prescription, promoting beauty ideals that can lead to unhealthy behaviors. In this virtual world, it’s hype, not facts and evidence, that decides what’s liked and followed.
AI chatbots like ChatGPT can sound convincing, leading people to trust them without question when asking about symptoms. Meanwhile, popular YouTube videos spread false health claims to get attention and generate income.
Cleaning up this mess often falls on healthcare professionals. When patients come to a doctor with printouts from Google or ChatGPT, doctors must debunk myths, calm anxieties, and clarify facts. Unfortunately, this situation isn’t likely to change anytime soon—after all, Google, YouTube, TikTok, Facebook, and ChatGPT are profit-driven tech giants, not bound by the Hippocratic Oath.

Can I ask you a favor?
Donate to AboutDigitalHealth.com and help sustain this not-for-profit knowledge portal. Every contribution makes a difference. Thank you!