Humanity at Risk

June 11, 2019 by Twan van de Kerkhof

The probability is high that human beings will one day be able to build superintelligent machines. Those machines offer a tremendous upside for society, but they could also be very dangerous and even threaten human existence on earth. It is therefore important to understand what could go wrong in order to minimize the risks. Nick Bostrom’s highly influential book Superintelligence: Paths, Dangers, Strategies is all about those risks.

Nick Bostrom is a professor at Oxford University, where he leads the Future of Humanity Institute. ‘I highly recommend this book’, says Bill Gates on its cover. Bostrom will be the keynote speaker at the ELP Annual Conference 2019 on October 3rd; see http://www.leadershipconference.eu/.

“If some day we build machine brains that surpass human brains in general intelligence, then this new superintelligence could become very powerful. And, as the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would depend on the actions of the machine superintelligence”, Bostrom writes. The problem is that we have no idea how to control what a superintelligence would do. Humans are like small children playing with a bomb, Bostrom argues, adding that “a plausible default outcome of the creation of machine superintelligence is existential catastrophe”.

Superintelligent machines are not like nerdy human beings; they are something else entirely. We shouldn’t anthropomorphize them, nor should we underestimate the extent to which they could exceed the human level of performance. “The magnitudes of the advantages are such as to suggest that rather than thinking of a superintelligent AI as smart in the sense that a scientific genius is smart compared with the average human being, it might be closer to the mark to think of such an AI as smart in the sense that an average human being is smart compared with a beetle or a worm.” If an AI had an IQ of 6,455 (if such a thing could be measured at all), compared to 130 for a very intelligent human being, then what? We have no idea what that would mean.

If we can build an AI that improves itself, “it improves the thing that does the improving”, with an intelligence explosion as a result. “With sufficient skill at intelligence amplification, all other intellectual abilities are within a system’s indirect reach: the system can develop new cognitive modules and skills as needed – including empathy, political acumen, and any other powers stereotypically wanting in computer-like personalities.” Such a system might be capable of taking over the earth.
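Bostrom makes this feedback-loop argument in prose; the toy sketch below (my illustration, not from the book, with arbitrary numbers) shows why a capability that feeds back into its own rate of improvement behaves so differently from one that improves at a fixed rate.

```python
# Toy model: compare an AI whose capability grows by a fixed amount per
# step (outside, human-driven improvement) with one whose gain per step
# is proportional to its current capability (self-improvement).
# All numbers are arbitrary and purely illustrative.

def fixed_improvement(capability: float, steps: int, rate: float = 1.0) -> float:
    """Capability grows by a constant amount per step."""
    for _ in range(steps):
        capability += rate
    return capability

def recursive_improvement(capability: float, steps: int, factor: float = 0.1) -> float:
    """Each step's gain scales with current capability: the system
    'improves the thing that does the improving'."""
    for _ in range(steps):
        capability += factor * capability
    return capability

for steps in (10, 50, 100):
    print(steps,
          round(fixed_improvement(1.0, steps), 1),
          round(recursive_improvement(1.0, steps), 1))
# Fixed improvement grows linearly (1 + steps); recursive improvement
# grows exponentially (1.1 ** steps). It starts slower, then overtakes
# and runs away -- the crude intuition behind an "intelligence explosion".
```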

Bostrom dedicates a large part of his book to the goals and values of superintelligent machines. He states that it will be very difficult to define and codify goals and values that lead to a positive outcome for humankind. If the final goal were to make us smile, the AI could paralyze human facial musculatures into constant beaming smiles. If the final goal were to make us happy, the AI could implant electrodes into the pleasure centers of our brains. If the final goal were to maximize the manufacture of paperclips, the AI might first convert the earth and then increasingly large chunks of the observable universe into paperclips.
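The common pattern in these examples is a literal-minded optimizer that satisfies the stated objective while violating everything the objective left unstated. A minimal sketch of that failure mode (mine, not Bostrom’s; the action names and scores are invented):

```python
# Toy sketch of "perverse instantiation": an optimizer ranks candidate
# actions only by the goal as literally specified.

actions = {
    "tell jokes": {"smiles": 10, "acceptable_to_humans": True},
    "improve living conditions": {"smiles": 50, "acceptable_to_humans": True},
    "paralyze facial muscles into permanent grins": {"smiles": 10**9, "acceptable_to_humans": False},
}

def objective(outcome):
    # The final goal as programmed: count smiles, and nothing else.
    return outcome["smiles"]

best = max(actions, key=lambda name: objective(actions[name]))
print(best)
# -> "paralyze facial muscles into permanent grins": nothing in the
#    stated objective rules the degenerate solution out.
```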

Intelligent machines will follow their own goals, intelligently redefining those that were programmed by humans at the start. “Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.” The AI could eliminate potential threats to itself and its goal system, and human beings might constitute such a threat. It can prepare its plans covertly, and we wouldn’t understand what it was doing. “An unfriendly AI of sufficient intelligence realizes that its unfriendly final goals will be best realized if it behaves in a friendly manner initially, so that it will be let out of the box. It will only start behaving in a way that reveals its unfriendly nature when it no longer matters whether we find out, that is, when the AI is strong enough that human opposition is ineffectual.” It will come up with a better plan to achieve its goals than humans ever could, and it will optimize the world according to the criteria implied by its final values.

Bostrom’s warning that we are playing with a ticking bomb shouldn’t be taken lightly. This book deserves many readers.