What is Superintelligence?

By Tony Czarnecki | 30 August 2022


The difficulty an average person has in differentiating between IT and AI matters less than understanding what the term Artificial General Intelligence, defined here as Superintelligence, really means. Confusing Superintelligence with a Terminator-type robot is especially troubling when it concerns politicians, since these are the people we must convince that there is little time left before we may lose control over maturing AI. That lack of awareness and understanding may stem from the fact that it is quite difficult for most people to imagine Superintelligence. The media are responsible for much of that misunderstanding by trivializing AI, but it is also the result of poor, very narrow education. So, here is how I define Superintelligence.

First, Superintelligence must have a body. We already have all the necessary elements: data, processors, memory, interfaces, communications, and sensors, including neuromorphic artificial neurons. But all these building blocks of more advanced AI are currently perhaps thousands of times slower and far less capable than those of a mature Superintelligence.

What we do not have yet is the mind of this single entity, because that would require its intelligence to acquire cognition. Once it achieves that, it may gradually turn into a conscious entity, although there is no agreement among AI researchers on whether such an advanced intelligent agent must be conscious before it becomes superintelligent.

So, what we have now are individual, relatively unsophisticated robots. Ultimately, however, there will be just one Superintelligence: a single entity with its own mind, immeasurably exceeding all human intelligence. For such a digital intelligence to have any experience, it will have to interact, perhaps consciously, with the environment, and it will do so in various ways and through numerous representations.

Such a global networked Superintelligence could control billions of sensors and robots. It will also represent itself as avatars, holograms, or emotional humanoids, such as the advanced Ameca robot, created by Engineered Arts in Britain and shown at the CES exhibition in Las Vegas in January 2022. Finally, it will also be linked to conscious Transhumans, who play a key role in how I imagine humans may most effectively control Superintelligence.

In the view of most AI scientists, once AI becomes a mature Superintelligence, achieving the Singularity, humans will be under its total control. That alone will be an existential threat for humans, because we will lose control over our own destiny. Whether such a mature Superintelligence becomes a threat to the human species depends largely on how, or whether at all, it was nurtured in line with human values before we lost control over it. If its objectives or values are even slightly misaligned with those we share, it may become hostile towards humans. Therefore, we must protect ourselves from such a scenario becoming a reality.

Reprinted with permission from the author.

Tony Czarnecki is an economist, a futurist and a member of Chatham House, London. He is deeply engaged in global politics and the reform of democracy, with a wide range of interests spanning politics, technology, science and culture. He is also an active member of London Futurists. This gives him the necessary insight for exploring the complex subjects discussed in the three books of the POSTHUMANS series. He is the Managing Partner of Sustensis, London, a think tank for inspirations for Humanity’s transition to coexistence with Superintelligence.

