By Tony Czarnecki | 30 August 2022
The first problem we face when attempting to control AI is that we need to convince the public and, most importantly, world leaders that such an invisible threat is real. One may call a maturing Superintelligence ‘an invisible enemy,’ assuming it turns out to be hostile towards humans, much as the Covid-19 pandemic was described. Calling Covid an invisible enemy served governments as an excuse: the threat could not be seen coming, hence they were not responsible for the consequences. Governments seldom recognize that spending money now to minimize the risk of potential future disasters is an insurance policy. The implications of such short-termism for controlling AI development are profound. In the worst-case scenario, given the immense power of Superintelligence, a single error by such an agent could be enough to cause human extinction.
The second problem is that few AI experts are willing to say when Superintelligence is most likely to emerge. That allows politicians to dismiss any calls for serious steps towards controlling Superintelligence, saying it is hundreds of years away, so we need not worry about it now. The predictions of AI scientists and leading practitioners are generally vague, without a clear definition of what is meant by Superintelligence. Ray Kurzweil is perhaps an exception. Being one of the most reliable futurists, he says that a mature Superintelligence may emerge by 2045. At an AI conference in 1995, the participants estimated that it might emerge in two hundred years. But four averaged surveys of 995 AI professionals, published in February 2022, indicate that the most likely date for a mature Superintelligence is about 2060, just 15 years after Kurzweil’s prediction. In any case, if his predictions are correct, most people living today will be in contact with Superintelligence, which may be our last invention, as the British mathematician I. J. Good observed in 1966.
Perhaps even more important than the time by which Superintelligence emerges is the approximate time by which humans may lose control over AI operating as a global system. Here again, AI scientists and top AI practitioners prefer not to specify such a time, using instead more elusive terms like ‘in a few decades or so.’ However, without setting a highly probable time by which we may lose control over AI, world leaders will not feel obliged to discuss the existential risk that such a momentous event may pose to humans. Therefore, those who see that problem should be bold enough to spell out the most likely time and justify it. Ray Kurzweil is again an exception here, saying in June 2014: “My timeline is computers will be at a human level, such as you can have a human relationship with them, 15 years from now,” i.e., by 2029. Since then, he has stuck to that date.
For me, the loss of control over AI can be compared in some ways to the loss of control over the operation of the Internet. No country can switch off the Internet. Doing so would be theoretically possible, but it would mean civilisational collapse, and even then such a switch-off might still be incomplete. We may soon face a similar situation with a globally networked AI controlling billions of sensors and millions of robots. We can safely say that desktop computing power will have increased 1,000 times by 2030 (from 2014), reaching the intelligence level of an average human, if measured by the number of neurons, and vastly exceeding our memory and processing power (Ray Kurzweil’s reasoning). And that does not include potential progress in neuromorphic neurons, quantum computing and several other related areas, which would immensely increase the capabilities of such an intelligence.
Therefore, I have taken 2030 as the most likely date by which humans may lose effective control over AI, which I would call an Immature Superintelligence. This is AI’s tipping point, likely to arrive at about the same time as the one for climate change. Such an AI may have the intelligence of an ant, but immense destructive powers, which it may apply either erroneously or in a purposefully malicious way. There may be several such agents by the end of this decade, and they might even fight each other, especially if deployed by psychopathic dictators hoping to achieve AI supremacy and use it to conquer the world.
However, what matters is not so much who specifies a concrete date, but that such a date is widely publicised and supported by eminent AI scientists. There is a saying, ‘What is not measured is not done,’ illustrated by the fact that despite many attempts at fighting climate change, no real progress was made until very recently. It was always argued that any climate change impact was far away. Only when a firm target of a maximum 1.5C temperature rise was set, at the Paris conference in 2015 and at COP26 in Glasgow in 2021, did we start to see concrete global action. COP26 also specified a pivotal date, 2030, as a tipping point beyond which we may lose the battle to control climate change. Importantly, both indicators, 1.5C and 2030, are just best guesses, as any date for losing human control over AI would be. Notwithstanding that, global AI control is urgently needed and should be measured against some critical thresholds. Additionally, no advanced AI system should be released without being primed with the Universal Values of Humanity and its long-term goals. The warning signs that humans are losing control over AI might be when one, or all, of these events happen:
- The number of artificial neuromorphic neurons exceeds the number of neurons in a human brain
- Incidents in which an AI network of globally connected robots goes out of control lead to global chaos
- AI processing speed, measured in flops, exceeds the performance of a human brain
- The first simple cognitive AI Agent emerges
Realistically, it will be more difficult and dangerous for humans when these AI thresholds are surpassed than when the global temperature rises above 1.5C. These warning signs of humans losing control over AI may start the process of the human species’ evolution, or its extinction. For humans, harnessing AI may be like climbing a big mountain. Humans may perish during this endeavour. But if we are properly equipped, we will succeed in delivering a friendly AI and reach a world of unimaginable abundance and opportunities.
Reprinted with permission from the author.
Tony Czarnecki is an economist, a futurist and a member of Chatham House, London. He is deeply engaged in global politics and the reform of democracy, with a wide range of interests including politics, technology, science and culture. He is also an active member of London Futurists. This gives him the necessary insight for exploring the complex subjects discussed in the three books of the POSTHUMANS series. He is the Managing Partner of Sustensis, London, a think tank for inspirations for Humanity’s transition to coexistence with Superintelligence.