Should We Be Worried About the Existential Risk of Artificial Intelligence?

By Tony Czarnecki | 30 August 2022
Medium

(Photo: Dreamstime.com)

The late physicist Stephen Hawking, Microsoft co-founder Bill Gates and SpaceX founder Elon Musk have all expressed concerns about the possibility that AI could evolve to the point that humans could no longer control it, with Hawking theorizing that this could “spell the end of the human race”[1]. Other AI researchers have recognized the possibility that AI presents an existential risk. For example, professors Allan Dafoe and Stuart Russell, both eminent AI scientists, note that, contrary to misrepresentations in the media, this risk does not have to arise from a spontaneous malevolent intelligence. Rather, “the risk arises from the unpredictability and irreversibility of deploying an optimization process more intelligent than the humans who specified its objectives. This problem was stated clearly by Norbert Wiener in 1960, and we still have not solved it.”[2]

Elon Musk has been urging governments to take steps to regulate the technology before it is too late. At the bipartisan National Governors Association meeting in Rhode Island in July 2017, he said: “AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that.” He added that, of everything he had seen, AI was the scariest problem. Musk told the governors that AI calls for precautionary, proactive government intervention: “I think by the time we are reactive in AI regulation, it’s too late.”[3]

If 99.9% of all species that have ever existed are now extinct[4], why should we expect to be an exception? One proposed resolution of the Fermi Paradox is that no civilisation has contacted us because, once civilisations reach a certain level of technological advancement, they destroy themselves. So, if we want to avoid extinction, we must mitigate existential risks such as climate change, pandemics, nanotechnology, global nuclear war and, most importantly, the threat arising from developing a hostile Superintelligence. This threat could be the most imminent of all, because it could annihilate the human species, possibly within the next few decades.

References:

[1] BBC News, “Stephen Hawking warns artificial intelligence could end mankind”, https://www.bbc.co.uk/news/technology-30290540, 2 December 2014.

[2] Allan Dafoe and Stuart Russell, “Yes, We Are Worried About the Existential Risk of Artificial Intelligence”, MIT Technology Review, https://www.technologyreview.com/2016/11/02/156285/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/, 2 November 2016.

[3] Camila Domonoske, “Elon Musk Warns Governors: Artificial Intelligence Poses ‘Existential Risk’”, NPR, 17 July 2017.

Reprinted with permission from the author.

Tony Czarnecki is an economist, a futurist and a member of Chatham House, London. He is deeply engaged in global politics and the reform of democracy, with a wide range of interests spanning politics, technology, science and culture. He is also an active member of London Futurists. This gives him the necessary insight for exploring the complex subjects discussed in the three books of the POSTHUMANS series. He is the Managing Partner of Sustensis, London, a think tank providing inspirations for Humanity’s transition to coexistence with Superintelligence.
