ChatGPT Doesn’t Care

ChatGPT creates new things by mindlessly following billions of patterns. But it does it damn well (credits: Artur Olesch)

Is generative AI a brilliant assistant or a creativity killer? A copy of ourselves that proves the original is copyable? Just a machine that is good at calculations? A talking parrot repeating words? The answer lies in mathematics and behavioral science.

I’m writing this article and wondering if AI could do it better: smarter and funnier. And even if it can’t yet, I’ve already lost: AI will swallow this text like a hungry monster to learn my style, copy it, and use it against me, without any ethical dilemmas or doubts about copyright.

Healthcare professionals are confused, too. According to recent studies, ChatGPT is more empathetic and better mannered than human doctors. The machine never gets annoyed or stressed. It’s never in a hurry. It remembers every pixel and performs well, even on Mondays.

AI is like a classmate you didn’t like because they always had better grades: Med-PaLM—another Large Language Model (LLM) from Google—recently passed a medical exam by answering 83% of the questions correctly. Two years ago, we could still enjoy schadenfreude: it failed with a score of 33%. No surprise that Boston Children’s Hospital is already looking for an AI prompt engineer.

What’s more, AI has talents. Dall-E, an AI system that creates images from text descriptions, makes pictures far more stunning than that “not-so-perfect sketch you made and were proud of because you found it creative and charming in its own way.”

AI has perfectly mastered juggling 0s and 1s to look smart

Philosophers discuss the twilight of humanity as we know it, while some experts herald Armageddon in the labor market. Scientists welcome great applications of AI in research. Some futurists—in tech ecstasy—foresee the dawn of “tech care.”

And the rest of the people? They are amused by the photos of King Charles dancing in a pink outfit at a coronation after-party that never happened.

All the breaking headlines are boiling in an optimistic-pessimistic soup of facts, pseudo-truths, over-interpretations, and personal beliefs. While some say AI already has consciousness or even a soul, others say it’s just a machine that has perfected computation.

It feels like a roller coaster with uphill rides of euphoria and scary turns.

But what can generative AI really do?

The answer: It can data.

AI is a stunning masterpiece of computer engineering. It searches for patterns in a lake of billions of data drops to write an e-mail that doesn’t take a genius to compose, or to deliver your next aha moment. Be it a drug needed by millions, a worrying shadow on an X-ray that a tired doctor overlooked, artwork that critics will contemplate, or a brief summary of a 1000-page book you always wanted to read but never had time for.

AI brutally flattens the sense of humanness

Already 1.6 billion people worldwide use ChatGPT to search for information, boost creativity, and improve productivity. AI is what we make of it. However, AI is also what it makes of us.

ChatGPT is sometimes a counterfeiter and sometimes a creative artist. That means AI’s powers might be overestimated while its real impact is underestimated.

Gina Neff, the Executive Director of the Minderoo Centre for Technology & Democracy at the University of Cambridge, calls generative AI “the largest ever social experiment.” In her recent keynote, Making AI Work for Us*, she warned of the unsupervised development of AI driven by private companies, or bluntly speaking, by profits.


“If the operational costs of ChatGPT amount to €600,000 per day, who can afford to create systems like that?” she asks.

Professor Neff is, however, optimistic. AI will not shape the world—every innovation is “negotiated” by people who use it. Here, I expanded her arguments to a healthcare perspective:

  • The future will be human, not technocratic, as envisioned by some experts inspired by science fiction. Medicine is based on trust, and trust is an outcome of human-human interactions;
  • Humans are negotiating every innovation. Technologies are deployed only if we accept their value. If you wonder why some breakthrough technologies have it hard, it’s probably because they failed the “negotiating innovation” process. Think of virtual reality, which, according to some predictions, we should already be living in. People decide which innovations to adopt and which to reject;
  • The appropriation generative AI commits—reproducing people’s copyrighted work to create texts and pictures—is questionable. LLMs will increasingly end up in court;
  • In medicine, we are ready to forgive people, but not algorithms. 97% accuracy is acceptable for people but not for algorithms—someone will have to shoulder the responsibility for the 3% of mistakes;
  • We accept technologies that we trust. The bad news is that overall trust—in institutions and governments—is declining in Western societies. If data infrastructure, algorithms, and AI can’t win people over, the AI revolution will end before it starts;
  • Generative AI might be impressive, but it operates in an artificial ecosystem. It learns from data, not from real-world interactions. It has neither senses nor a body. Although an act of empathy can be programmed, it’s not empathy but an imitation of a feeling only humans have;
  • ChatGPT can have better manners than doctors when answering patients’ questions. However, patients get just a sequence of sentences generated by LLMs trained to predict the next word or phrase. Communication is rooted in a shared past, understanding, and reasoning, not in making sense of puzzles of data;
  • In other words, LLMs have managed to build their own map of language, but they can’t navigate the complexity of the real world;
  • ChatGPT is a static model. The answers we get are not aligned with the here and now, with culture, or with social transformations. Without context, knowledge alone is sometimes useless;
  • Even the exponential growth of AI has limits—we are running out of data to train AI models;
  • The sense of humanness built into algorithms is a flattened one. Algorithms break an image down into pixels in order to reconstruct it. Artists do not start with pixels. They begin with a concept and only then move to pixels.
Division of competences between humans and AI according to health status (credits: Artur Olesch)

Data is the limit

The green logo of ChatGPT—shaped like the Armenian eternity symbol ֍—is supposed to signify eternal life. But to some, it just looks like a braided roll cake: the visions of individuals are not always understood and accepted by the masses.

Generative AI will accelerate change in many areas of life at a speed we haven’t experienced yet. The human touch will be blended with algorithmic, synthetic empathy. But ChatGPT won’t see you soon. It doesn’t care; it just uses billions or trillions of parameters to string words together so that they follow a pattern. In healthcare, the process matters, not just the outcome.
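The “predict the next word” mechanic can be made concrete with a toy sketch. This is a deliberately crude stand-in, not how a real LLM works internally: actual models learn billions of parameters over tokens, while this example merely counts which word follows which in a tiny hypothetical corpus. The principle it illustrates, though, is the same one the article describes: the machine continues a pattern without understanding it.

```python
from collections import defaultdict, Counter

# Toy corpus (invented for illustration). A real model trains on
# billions of documents; the pattern-following idea is identical.
corpus = "the patient asked the doctor and the doctor answered the patient".split()

# Count, for every word, which words have followed it and how often.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def next_word(word):
    """Return the most frequently observed successor of `word`, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate a short continuation starting from "the" by always
# picking the most likely next word -- pattern, not meaning.
word, out = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))
```

The generated sentence is grammatical-looking because the statistics of the corpus make it so, not because anything was understood; scaled up by many orders of magnitude, that is the gap the following paragraphs point at.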

ChatGPT—if you let it—will optimize your life, reminding you what to do so you can enjoy life at its finest and longest. But it won’t hold your hand when death casts its first shadow on mind and body.

Questions you confront when you are sick, and doubts that arise from existential, primal fear, can’t be answered with perfectly crafted formulas: “Here are 5 possible options…” or “common approaches include…”.

* Event organized by the Alexander von Humboldt Institute for Internet and Society (HIIG) and the Federal Agency for Civic Education (bpb). The full keynote by Professor Gina Neff is available online.


Can I ask you a favour?

Please support independent journalism. Donate to aboutDigitalHealth.com (€1+). It only takes a minute. Thank you!
