Dorothee Bär, Minister of State for Digitization, called for “rethinking political structures to speed up digitalization”. She claimed that “age is just a number and not a factor in being open-minded to the new technologies”.
Scientists around the world are delving deeper into even newer powers of machine learning and artificial intelligence, while society is experiencing fear fuelled by science-fiction movies, and in turn politicians are having to face new regulatory challenges. Will AI help us tackle climate change and cure cancer? Or is it a new authority that threatens our freedom? How can we use these increasing levels of computing power to confront the problems facing humanity?
“Artificial intelligence could spell the end of the human race,” Stephen Hawking once said. Nonetheless, at the 5th Digital Future Science Match 2019 (Berlin, 14 May), hopes for AI outnumbered apocalyptic predictions. Representatives of the sciences and experts from around the world presented the latest developments in AI.
Like neither China nor the USA
Europe is seeking its own road to digitalization. It must be one that is fully secure, transparent, fair and functional. Algorithms should not discriminate at work, in the public service sector or in daily life. The wish list is long. Yet instead of stimulating the development of AI solutions driven by innovative companies and academia, new barriers and regulations keep appearing. There is no perfect strategy that leads to sustainable, people-centred, fair technological progress – technology itself changes too quickly, and today’s visions may already be outdated tomorrow.
This is a time to act, to experience and experiment, as the global leaders, the USA and China, are doing. The big four tech companies – Google, Facebook, Amazon and Apple – and the startups based in Silicon Valley rule the world of global technologies, setting new standards for digitalization. The U.S. Food and Drug Administration is clearing increasing numbers of digital health solutions, allowing AI to detect cancer or irregular heart rhythms. China has big ambitions for AI and is already a leader in AI research. This Asian economic tiger profits from the enormous amounts of data generated by the world’s biggest population. Still, neither the USA nor China is a role model for Europe, which is looking for a new digital economic pattern: far from China’s authoritarian attitude to data, which violates human rights, but also more socially oriented than America’s industry-driven digitalization. So far, the Old Continent has stalled in endless discussions, slowly degrading into a receiver of AI-based solutions made by others.
“Technology is always political. Establishing it is not only a functional process but a social one as well,” said Christian Katzenbach (Alexander von Humboldt Institute for Internet and Society). In the public debate on AI, the question “in what kind of AI-dominated future do we want to live?” remains unanswered. From this perspective, the US “utility-driven” ethics of innovation can be a well-balanced compromise: “if something is good for society, it’s good in general.” After a period of fascination with new technologies, the time for self-reflection has come even in the USA. The latest example is San Francisco, the technological heart of the country, which has banned the use of facial recognition software by the police and other municipal agencies.
“We should see the regulations as a business opportunity, not just a barrier,” concluded Ulrich Kelber (Federal Commissioner for Data Protection and Freedom of Information). He also emphasized that although GDPR sets some limitations for developing digital solutions, it is also an enabler for secure and transparent data exchange in digital societies.
AI already decides which route you take to work, what news content you see on Facebook, and whether you get a loan or not.
AI mirrors human prejudices
The implementation of digital strategies takes many years, involves numerous parties and requires many aspects to be taken into account. Haste is a bad advisor here, as there is much to gain as well as to lose. Too much regulation leads nowhere, while too little can have catastrophic consequences. We still do not know in which direction digitalization is heading or how new technologies will disrupt the future; meanwhile, too much attention is being paid to the threats instead of the opportunities. Moral judgments on AI, often passed by the media, do not make constructive discussion any easier.
The truth is that AI already has a big impact on societies and all our lives. It decides which route you take to work, what news content you see on Facebook, and whether or not you get a loan. Machine learning personalizes adverts, tracks behaviour on social media and adjusts prices to the consumer’s location and shopping patterns. In this way, mostly without realizing it, many of us are being discriminated against by algorithms… or rather by the people who code IT systems. Let’s be honest: algorithms are not prejudiced; they do not think. Fed with data, they copy and multiply human mistakes and biases shaped by experience or culture. Technologies can help drive discrimination-free societies, but they must be given objective criteria to make independent and impartial decisions.
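How a model merely copies bias can be sketched in a few lines. Everything below is invented for illustration – the loan data, the neighbourhood groups and the decision rule are all hypothetical: a naive “model” that simply learns approval rates from past human decisions reproduces whatever discrimination those decisions contained.

```python
# Hypothetical historical loan decisions: applicants from neighbourhood "B"
# were routinely rejected by human loan officers.
historical = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def train(records):
    """Learn the historical approval rate for each group."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [approved for g, approved in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(rates, group, threshold=0.5):
    """Approve when the group's historical approval rate clears the threshold."""
    return rates[group] >= threshold

rates = train(historical)
print(predict(rates, "A"))  # True
print(predict(rates, "B"))  # False -- the past bias, faithfully replayed
```

The model invents nothing: it is the training data, not the algorithm, that carries the prejudice – which is why curating the data and the decision criteria matters more than the code itself.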
To guarantee the basis for ethical AI development, education from the early years is essential. We are living in a digital society, so children should learn about the latest technological advancements. Geography classes make flat-earth theories simply ridiculous; by analogy, coding classes could lead to a better understanding of algorithms and dispel the fear of machines killing people.
IT speeds up climate change
Energy production in the 18th and 19th centuries was based almost entirely on coal. Images from the first industrial revolutions show black smoke over trains, cars and factories. Smog was killing people in big cities. The invention of the steam engine triggered the dramatic pollution of air, water and soil that threatens humanity today. The fourth industrial revolution – with AI, 3D printing, biotechnologies, robotics, nanotechnologies, quantum computing and the Internet of Things – is seen as a “clean” game changer. Autonomous cars should reduce air pollution in cities, smart homes should save energy, and AI should make agriculture smarter.
By 2040, more than 50% of the world’s energy will be spent on IT.
It was only 78 years ago, in 1941, that Konrad Zuse built the first programmable computer; since then, computing power has increased by a factor of one quadrillion. Today there are more than 2 billion personal computers in the world, in addition to huge data centres, and they all need energy. “In Germany, the CO2 emissions generated by IT are greater than those from air traffic. More than 90% of software uses less than 10% of computer hardware. By 2040, more than 50% of the world’s energy will be spent on IT,” stated Petros Koumoutsakos (ETH Zurich), pointing out an overlooked aspect of computerization. The IT sector’s energy footprint is already estimated at approximately 7% of global electricity consumption, according to the Greenpeace report “Clicking Clean”. According to Climate Home News, billions of internet-connected devices could produce 3.5% of global emissions within ten years and 14% by 2040, while the communications industry could use 20% of all the world’s electricity by 2025. Outdated smartphones, computers and electronics are the fastest-growing waste problem globally. Fortunately, positive changes can be seen on the horizon: global tech leaders have already committed to using 100% renewable energy sources within the next few years.
“Generalized AI exists mainly in science-fiction movies, where computers start to become smarter than people. Mostly in a bad way. Specialized AI is about using algorithms and computers to figure out how to tackle increasingly complex tasks, and it is beginning to participate in many aspects of our lives, from medicine and travel to how energy is distributed,” said Tim Conrad (Freie Universität Berlin).
In the study “Deep Learning for Identifying Metastatic Breast Cancer”, the researchers showed that human accuracy in detecting metastatic breast cancer is about 96%, while for AI-based software it is only 92%. However, by combining the predictions of the deep learning system with those of human pathologists, the accuracy increased to 99.5%. There are many more examples of how AI can support physicians: a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Massachusetts General Hospital (MGH) has created a new deep-learning model that can predict from a mammogram whether a patient is likely to develop breast cancer within the next five years. The team’s model was significantly better at predicting risk than existing approaches: it accurately placed 31% of all cancer patients in its highest-risk category, compared to only 18% for traditional models. Deeper cooperation between doctors and algorithms could unleash a new power for medicine.
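The headroom from combining model and human can be seen with a back-of-envelope calculation. This is our own arithmetic on the accuracy figures quoted above, under the simplifying assumption that the two make independent mistakes (which the study itself does not claim): the chance that both err on the same slide is then the product of their individual error rates.

```python
# Error rates implied by the quoted accuracies: model 92%, pathologist 96%.
model_err, human_err = 0.08, 0.04

# If their mistakes were independent, only cases where BOTH are wrong
# would be unrecoverable by a perfect combination of the two opinions.
both_wrong = model_err * human_err          # 0.08 * 0.04 = 0.0032
print(f"best-case combined accuracy: {1 - both_wrong:.1%}")
```

That ceiling of roughly 99.7% sits just above the 99.5% the researchers actually achieved – a hint that the model and the pathologists do indeed tend to fail on different cases.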
At the Digital Future Science Match 2019, scientists presented their latest research on machine learning, algorithms and artificial intelligence, suggesting that the digital future might be a bright one. Computers and AI are already helping in many areas of life, and so far they have not plotted against humanity.