
Harari About AI

December 30, 2024 by Twan van de Kerkhof

It is always a joy to read Yuval Noah Harari, because he challenges me to look with new perspectives at important themes, as he did in his bestsellers Sapiens and Homo Deus. Harari has an impressive list of admirers, amongst others Bill Gates, Barack Obama and Mark Zuckerberg. And me.

In his new book the Israeli historian explains the dangers of artificial intelligence (AI), not so much by looking at the excitement of the day, but by taking his readers on a journey through millions of years about how humans have become the dominant species on earth by building large networks of cooperation, using information as the glue that holds these networks together. “The clearest pattern in the long-term history of humanity isn’t the constancy of conflict, but rather the increasing scale of cooperation.” The main argument of Harari’s book is that humankind gains enormous power by building large networks of cooperation, but the way these networks are built predisposes us to use that power unwisely. Our problem, then, he writes, is a network problem, more specifically, an information problem. Harari argues that the invention of AI is potentially more momentous than the invention of prior information technologies, such as the telegraph, the printing press or even writing, because AI is “the first technology that is capable of making decisions and generating ideas by itself”. This will lead to historical changes in the ways we live and work, including the shape of armies, religions, markets and nations.

In Harari’s view information is not about truth, but about connection. Its role in history hasn’t been to represent a pre-existing reality. “Rather, what information does is to create new realities by tying together disparate things”; its defining feature is connection rather than representation. Information sometimes represents reality, and sometimes doesn’t, but it always connects. This is its fundamental characteristic. “The history of information is a constant rise in connectivity, without a concomitant rise in truthfulness or wisdom.” Because the most important role of information is to weave new networks rather than represent pre-existing realities, the invention of new information technology is always a catalyst for major historical changes.

Intersubjective reality is the most important part of information, according to Harari. These are the powerful stories that are created, shared and repeated by people. Laws, gods, nations, corporations and currencies exist in the nexus between large numbers of minds; they aren’t objective realities, but intersubjective realities, conjured into existence through shared narratives. All relations between large-scale human groups are shaped by stories and myths that are constantly confirmed, challenged and revised.

These stories don’t have to be true. When it comes to uniting people, fiction enjoys two inherent advantages over the truth, Harari writes. First, fiction can be made as simple as we like, whereas the truth tends to be complicated. Second, the truth is often painful and disturbing, while fiction is highly malleable.

Another important insight in the book is that, to survive and flourish, every human information network needs to maintain a balance between truth and order. Truth is pursued by establishing strong self-correcting mechanisms; order can be kept by centralizing and controlling the flow of information. Information networks throughout history have often privileged order over truth.

Dictatorships and religions are examples of information networks that value order over truth. Dictatorial information networks are highly centralized: first, the centre enjoys unlimited authority; second, the centre is held to be infallible. Self-correcting mechanisms are missing. The infallible centres of religions are their holy books, in which errors don’t exist and therefore nothing is ever rectified.

Democracies, on the other hand, are information networks that appreciate truth. They have strong self-correcting mechanisms to keep pursuing that goal. Self-correcting mechanisms are vital for the pursuit of truth, but they are costly in terms of maintaining order. The most common method strongmen use to undermine democracy is to attack its self-correcting mechanisms one by one, often beginning with the courts and the media.

Populism, by contrast, posits that there is no objective truth at all and that everyone has ‘their own truth’, which they wield to vanquish rivals. According to this world view, power is the only reality: all social interactions are power struggles, because humans are only interested in power. The claim to be interested in something else – like truth or justice – is nothing more than a ploy to gain power.

For truth to win, it is necessary to establish curation institutions that have the power to tilt the balance in favour of the facts. Scholars and universities are such curators. Scientific institutions gained authority because they had strong self-correcting mechanisms that exposed and rectified the errors of the institutions themselves. The scientific project started by rejecting the fantasy of infallibility and proceeded to construct an information network that treats error as inescapable. As an information technology, the self-correcting mechanism is the polar opposite of the holy book. Institutions die without self-correcting mechanisms.

Harari juxtaposes the naive view and the populist view of information. The core tenet of the naive view is that information is essentially a good thing, and the more we have of it, the better. But that is not true. The internet did not end totalitarianism, and thousands of lies exposed by fact-checkers didn’t keep Donald Trump from being re-elected. Nobody disputes that humans today have a lot more information and power than in the Stone Age, but it is far from certain that we understand ourselves and our role in the universe much better. We are as susceptible as our ancient ancestors to fantasy and delusion.

AI will change the world, according to Harari, because AI, as I quoted him earlier, is the first technology that is capable of making decisions and generating ideas by itself. AI isn’t a tool – it’s an agent. For the first time in history power is shifting away from humans and toward something else. A completely new kind of information network is emerging, controlled by the decisions and goals of an alien intelligence. Information revolutions create new political structures, economic models and cultural norms. Since the current information revolution is more momentous than any previous one, it is likely to create unprecedented realities on an unprecedented scale.

Harari gives the example of Myanmar, where the persecution and killing of the Rohingya, a Muslim minority, got dramatically worse because of Facebook’s algorithms. These algorithms were making active and fateful decisions by themselves that spread the outrage against the Rohingya. Facebook and other social media platforms didn’t consciously set out to flood the world with fake news and outrage. But by telling their algorithms to maximise user engagement, this is exactly what they produced. Computers operate differently from humans and use methods that their human overlords didn’t anticipate. They can learn by themselves things that no human engineer programmed, and they can decide things that no human executive foresaw. Precisely because engagement – how many minutes people spent with the content and how many times they shared it – was defined as a catch-all metric, fake news and outrage spread like wildfire.

In previous networks, every chain had to pass through humans, and technology served only to connect the humans. In the new computer-based networks, computers themselves are members and there are computer-to-computer chains that don’t pass through any human. “If power depends on how many members cooperate with you, how well you understand law and finance and how capable you are of inventing new laws and new kinds of financial devices, then computers are poised to amass far more power than humans. (…) What would it mean for humans to live in a world where catchy melodies, scientific theories, technical tools, political manifestos and even religious myths are shaped by non-human alien intelligence that knows how to exploit with superhuman efficiency the weaknesses, biases and addictions of the human mind?” Humans might become an increasingly powerless minority.

Never summon powers you cannot control, Harari warns. The time to act is now. Decisions made about AI today will shape humanity’s future, just as decisions made in the fourth century AD about which books to include in the Bible turned out to have far-reaching consequences centuries later.

But the good news is that if we eschew complacency and despair, we are capable of creating balanced information networks that will keep their own power in check. “To create wiser networks, we must abandon both the naive and the populist views of information, put aside our fantasies of infallibility and commit ourselves to the hard and rather mundane work of building institutions with strong self-correcting mechanisms.”

Yuval Noah Harari, Nexus: A Brief History of Information Networks from the Stone Age to AI. Fern Press, 2024.