Is Artificial General Intelligence (AGI) possible?

By Calum Chace | 30 April 2023

Most media coverage of AI is weak

The launch of the large language model known as GPT-4 has re-ignited the debate about where AI is going, and how fast. A paper by researchers at Microsoft (the major investor in OpenAI, the creator of GPT-4) claimed to detect in GPT-4 some sparks of AGI – artificial general intelligence, a system with all the cognitive abilities of an adult human. And the Future of Life Institute, an organisation that studies existential risks, published an open letter calling for a six-month pause in the development of advanced AI.

But with a few honourable exceptions, and despite the best efforts of many individual journalists, the coverage of AI in most media outlets remains pretty poor. The usual narrative is “Look at this shiny, scary new thing. But don’t worry, it will all turn out to be hype in the end.”

The Economist, an honourable exception

One of those honourable exceptions is The Economist, and Kenneth Cukier, its Deputy Executive Editor, joined the London Futurists Podcast to discuss the prospects for advanced AI. Until recently, Cukier was the host of the paper’s weekly tech podcast Babbage, and he is a co-author of the 2013 book “Big Data”, a New York Times best-seller that has been translated into over 20 languages.

It has been said that The Economist is great at predicting the past but bad at predicting the future. Recently, it has improved its coverage of the future a great deal in one respect – namely its coverage of AI. For the first few years after the 2012 Big Bang in AI, The Economist used to delight in sneering about AI hype, but now it is hard to think of any media outlet that understands today’s AI better or explains it better. Not sparing his blushes, Cukier played a significant role in that change.

One thing that The Economist still doesn’t do with regard to AI is to cast its eye more than five years or so into the future. It avoids discussing what AGI and superintelligence will mean, or genuinely exploring the Economic Singularity, the point at which machines can do pretty much everything that humans do for money. Cukier suggests that these developments are probably fifty years away, and that although this is within the probable lifespan of some of the paper’s younger staff members, newspapers have not generally been in the business of looking that far ahead.

Increasingly sceptical

He acknowledges that informed speculation could be useful to readers, as perceptions of the future have important secondary effects, such as determining choices about what to study, or what careers to aim for. But he is increasingly sceptical that machines will ever fully replace humans in the workplace, or that AGI is possible. In this respect, he seems to be heading in the opposite direction to most observers, and he seems to be at odds with the people who are working towards the goal of AGI, including Sam Altman and Demis Hassabis, who run the world’s two leading AI labs, OpenAI and DeepMind respectively. Both are confident that AGI is possible, and Altman thinks it may be created within a decade. The central estimate on the prediction market Metaculus for the arrival of a basic form of AGI is currently 2026, just three years away. (It was 2028 when we recorded the episode, which was before the release of GPT-4.)

Cukier thinks that the debate over advanced AI is challenging because people have different definitions of things like AGI, and some of the underlying concepts have turned out to be unhelpful. For instance, he suggests that machines first passed the Turing Test some years ago, but that the test turned out to measure deception and human frailty rather than machine capability. This is only true if you regard the test as something passed in a few minutes; the more interesting version would take at least 24 hours and involve a number of people well-versed in AI. The futurist Ray Kurzweil has bet the entrepreneur Mitch Kapor $20,000 that a machine will pass this version of the test by 2029. (Personally, I think the Turing Test identifies consciousness, not intelligence.)

AGI is a tricky concept – even a crazy one

The concept of “general” intelligence is tricky. Pretty much all humans have it, but the level varies enormously between us. So what level does an AI have to reach to be considered an AGI?

Cukier goes further. He thinks the idea of an AI which has all the cognitive abilities of an adult human is “crazy” and unattainable. He also thinks it would be undesirable even if it were possible, because humans do so many unwise things – falling in love, smoking cigarettes, getting confused about maths.

He also thinks there is a magical, spiritual dimension to human intelligence, which can never be replicated within a machine. This leads him to conclude that machines can never become conscious, whatever AI engineers may claim about consciousness (as well as intelligence) being substrate independent.

The ship of Theseus

A useful thought experiment to test this claim is to consider a future person who has been diagnosed with a fatal brain disease. They can be rescued by replacing their biological neurons, one at a time, with silicon equivalents. Obviously we don’t have the technology to do this today, but there seems to be no reason in principle why it couldn’t happen in the future. At what point in the changeover process would the person’s consciousness disappear? This is known as the Ship of Theseus question, after a famous ship in ancient Greece which underwent so many repairs over the years that not one original component remained. There was vigorous debate among Greek philosophers about whether it remained the same ship or not.

In the case of the ship, the question is academic: it doesn’t really matter whether the ship’s identity is preserved. In the case of the patient, it matters a great deal whether consciousness is preserved.

J.S. Bach and Carl Sagan

Until recently, it was thought that only humans could write music that inspires profound emotions. Today, machines can write music in the style of Bach which can affect listeners as profoundly as the original. The choice of Bach is not random: when the music for the Golden Record carried by the Voyager spacecraft was being selected, Carl Sagan recounted the biologist Lewis Thomas’s suggestion that we simply send the complete works of Bach – adding that this “would be boasting”. (In the end, three Bach pieces were included on the record.)

Cukier responds that machines can only imitate existing creators; they cannot create a new type of art in the way that human innovators have done repeatedly throughout history.

Understanding flight

One argument advanced by Cukier and others against the possibility of machines ever equalling the human mind is that we do not understand how the mind works. The problem with this argument is that understanding something is not a prerequisite for creating or re-creating it. The Wright brothers did not understand how birds fly, nor did they fully understand how their own flying machines worked – but work they did. Likewise, steam engines were built long before the laws of thermodynamics were formulated.

Cukier offers a final, dramatic thought on the subject of fully human-level AI: it would be the pinnacle of human hubris, and even idolatrous of us to seek to become godlike by creating AGI.

Reprinted with permission from the author.

Calum Chace is an English writer and speaker, focusing on the likely future impact of Artificial Intelligence on people and societies. He became a full-time writer in 2012, after a 30-year career in business. He is the author of Surviving AI, The Economic Singularity, and the philosophical science fiction novel Pandora’s Brain.


