
AI isn't just a tool; it's an "alien agent," says Yuval Noah Harari. In his latest book, Nexus, published on September 10, Harari warns of the dangers of recklessly advancing AI technology.
We drive fast with AI but don't learn how to brake
AI offers great benefits, helping us address issues ranging from climate change to healthcare. However, as a powerful technology, it also comes with immense risks. AI is the first technology capable of making decisions and generating new ideas independently. Thus, it's not just a tool; it's an agent. As an autonomous agent, AI could potentially escape our control and even enslave or annihilate us.
People often say that every invention has risks: if you invent the car, you also invent the car accident. This doesn't stop us from using cars, but it does mean we must invest heavily in safety. When learning to drive, the first lesson is how to brake, not how to accelerate. But with AI, we're accelerating without learning how to brake. We're developing AI at a rapid pace without implementing robust systems to ensure it stays under our control or to stop it if something goes wrong. That's the real danger.
People often compare AI to previous technologies like the printing press or trains, arguing that we adapted to them. But there are two key differences. First, AI is unlike any previous invention. The printing press could print books, but it couldn't write them. Autonomous weapon systems, like drones, can decide who to attack and kill. AI can even create more sophisticated versions of itself. Second, while we did adapt to older technologies, the process was often dangerous and costly.
AI is not a tool; it's an agent
The critical difference between AI and traditional computer programs is that AI can learn on its own. It's a system capable of learning things its creators did not directly program. Imagine AI as a kind of "baby" that can learn, develop, and change independently. Once it interacts with the world, it can learn things we didn't foresee and devise strategies we never taught it.
We are unleashing millions, possibly billions, of independent, "alien" agents into the world. These agents think and make decisions in ways fundamentally different from humans. This is particularly concerning in sectors like finance. Unlike self-driving cars that must navigate the physical world, finance operates in the realm of information and pure mathematics, which makes it an ideal playground for AI.
Today, we struggle to fully understand our financial systems, and most politicians and regulators have difficulty keeping up. Imagine a scenario where corporations grant AI more autonomy in finance. AI could devise new financial strategies far more complex than anything humans have ever conceived. Things may work fine for a while, but when a financial crash happens, no human will understand what went wrong.
Look back at the 2007-2008 financial crisis, triggered by new financial products that few regulators understood. It worked until it didn't, leading to a collapse with repercussions that are still felt today. A similar, even larger-scale disaster could happen with AI, especially in finance. The real danger isn't robots like the Terminator; it's a financial meltdown caused by autonomous AI systems.
AI will take jobs if you can't adapt
Harari doesn't believe AI will create an absolute lack of jobs, but it will certainly bring volatility to the job market. Old industries will vanish while new ones emerge. The challenge lies in retraining the workforce, and the biggest issue will be global inequality. Countries like Germany, the U.S., and China, which have the resources to retrain their workforces, will benefit from the AI revolution. On the other hand, countries like Bangladesh, Pakistan, Egypt, and Guatemala, which may lose jobs in sectors like textile production, lack the resources to retrain their workers.
This could lead to a world where skilled labor is in demand in countries like Germany or the U.S., while severe unemployment hits countries like Bangladesh or Egypt. Immigration restrictions might exacerbate the problem, with some nations facing labor shortages while others struggle with high unemployment.

AI will use the power of intimacy to sell and influence
For the first time in history, we are witnessing the mass production of intimacy. Until now, only humans and animals could form intimate relationships with us, but now AI can do the same. This is far more powerful than simply capturing our attention.
Social media companies have long battled for our attention, but a new fight is emerging: the fight for intimacy. Intimacy goes deeper than attention; it can change our views, shape our minds, and influence our decisions. In the past, totalitarian regimes could only mass-produce attention, but they couldn't mass-produce intimacy. AI can. Governments or corporations can deploy millions of bots to form intimate relationships with millions of people. This is a dangerous new tool.
AI's power lies in its ability to influence culture
AI will not only transform economics and politics but also culture. For thousands of years, culture, whether paintings, theater, or television, was created by human beings. However, with generative AI, more and more "culture" will be produced by non-human intelligence controlled by a few large corporations.
These corporations won't just wield economic and political power; they will also have enormous cultural influence. The problem is that these corporations don't represent anyone; they weren't elected by the people. We are entering a new imperial age, where a handful of countries and corporations can dominate the economy, culture, and politics on a global scale.
Soft versions of social credit systems are likely to spread
With today's technology, governments can virtually eliminate privacy. You no longer need human agents to follow people around; smartphones and cameras are everywhere. No human analysts are required to analyze the data; the system can always track everyone, effectively turning your entire life into one long job interview. Every action you take could affect your chances of getting a job or accessing other opportunities in the future, creating an incredibly stressful and disruptive environment for human well-being.
Human beings are organisms governed by cycles. Sometimes we are active; other times we rest. We engage in social activities, but we also need privacy and downtime. Consider the stock market, which operates in cycles: if something happens on a Friday afternoon, the market won't react until Monday morning. This is crucial because, like markets, organisms need time to rest. If you keep an organism constantly active and stressed, it will eventually collapse. These social credit systems, along with other aspects of modern life, are forcing us to remain active and alert all the time, which is highly disruptive on an individual level.
The text is based on an interview with Yuval Noah Harari at the Big Bang AI Festival in Berlin on September 12.