By Robin Pomeroy | 6 January 2022
World Economic Forum

- AI – artificial intelligence – is transforming every aspect of our lives.
- Professor Stuart Russell says we need to make AI ‘human-compatible’.
- We must prepare for a world where machines replace humans in most jobs.
- Social media AI changes people to make them click more, Russell says.
- We’ve given algorithms ‘a free pass for far too long’, he says.
Six out of 10 people around the world expect artificial intelligence to profoundly change their lives in the next three to five years, according to a new Ipsos survey for the World Economic Forum – which polled almost 20,000 people in 28 countries.
A majority said products and services that use AI already make their lives easier in areas such as education, entertainment, transport, shopping, safety, the environment and food. But just half say they trust companies that use AI as much as they trust other companies.
But what exactly is artificial intelligence? How can it solve some of humanity’s biggest problems? And what threats does AI itself pose to humanity?
Kay Firth-Butterfield, head of artificial intelligence and machine learning at the World Economic Forum’s Centre for the Fourth Industrial Revolution, joined Radio Davos host Robin Pomeroy to explore these questions with Stuart Russell, one of the world’s foremost experts on AI.
Transcript: The promises and perils of AI – Stuart Russell on Radio Davos
The transcript has been shortened.
Kay Firth-Butterfield: It’s my pleasure to introduce Stuart, who has written two books on artificial intelligence. One is Human Compatible: Artificial Intelligence and the Problem of Control. But perhaps the one that you referred to, saying that he had ‘literally written the book on artificial intelligence’, is Artificial Intelligence: A Modern Approach – and that’s the book from which most students around the world learn AI. Stuart and I first met in 2014 at a lecture that he gave in the UK about his concerns around lethal autonomous weapons. And whilst we’re not going to talk about that today, he’s been working tirelessly at the UN for a ban on such weapons. Stuart has worked extensively with us at the World Economic Forum. In 2016, he became co-chair of the World Economic Forum’s Global Future Council on AI and Robotics. And then in 2018, he joined our Global AI Council. As a member of that Council, he galvanised us into thinking about how we could achieve positive futures with AI, by planning and developing policies now to chart a course to that future.
Robin Pomeroy: Stuart, you’re on the screen with us on Zoom. Very nice to meet you.
…
Robin: Where you’re a professor. I’ve been listening to your lectures on BBC Radio 4 and the World Service, the Reith Lectures. So I feel like I’m an expert in it now, as I wasn’t a couple of weeks ago. Let’s start right at the very beginning, though. For someone who only has a vague idea of what artificial intelligence is – we all know what computers are, we use apps. How much of that is artificial intelligence? And where is it going to take us in the future beyond what we already have?
Stuart: It’s actually surprisingly difficult to draw a hard and fast line and say, Well, this piece of software is AI and that piece of software isn’t AI. Because within the field, when we think about AI, the object that we discuss is something we call an ‘agent’, which means something that acts on the basis of whatever it has perceived. And the perceptions could be through a camera or through a keyboard. The actions could be displaying things on a screen or turning the steering wheel of a self-driving car or firing a shell from a tank, or whatever it might be. And the goal of AI is to make sure that the actions that come out are actually the right ones, meaning the ones that will actually achieve the objectives that we’ve set for the agent. And this maps onto a concept that’s been around for a long time in economics and philosophy, called the ‘rational agent’ – the agent whose actions can be expected to achieve its objectives.
And so that’s what we try to do. And they can be very, very simple. A thermostat is an agent. It has perception – it just measures the temperature. It has actions – switch the heater on or off. And it has two very, very simple rules: If it’s too hot, turn it off. If it’s too cold, turn it on. Is that AI? Well, actually, it doesn’t really matter whether you want to call that AI or not. There’s no hard and fast dividing line like, well, if it’s got 17 rules then it’s AI, if it’s only got 16, then it’s not AI. That wouldn’t make sense. So we just think of it as a continuum, from extremely simple agents to extremely complex agents like humans.
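To make the ‘agent’ abstraction concrete, here is a minimal sketch in Python of the thermostat agent Russell describes. The class and names are illustrative only, not from any particular AI library: a percept (the measured temperature) is mapped to an action (heater on or off) by the two simple rules above.

```python
class ThermostatAgent:
    """A minimal 'agent': maps a percept (temperature) to an action (heater on/off)."""

    def __init__(self, target_temp: float, tolerance: float = 0.5):
        self.target_temp = target_temp
        self.tolerance = tolerance

    def act(self, perceived_temp: float) -> str:
        # Rule 1: too hot -> switch the heater off.
        if perceived_temp > self.target_temp + self.tolerance:
            return "heater_off"
        # Rule 2: too cold -> switch the heater on.
        if perceived_temp < self.target_temp - self.tolerance:
            return "heater_on"
        # Otherwise leave the heater as it is.
        return "no_change"


agent = ThermostatAgent(target_temp=20.0)
print(agent.act(23.1))  # -> "heater_off"
print(agent.act(17.4))  # -> "heater_on"
```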
AI systems now are all over the place in the economy – search engines are AI systems. They’re actually not just keyword look-up systems any more – they are trying to understand your query. About a third of all the queries going into search engines are actually answered by knowledge bases, not by just giving you web pages where you can find the answer. They actually tell you the answer because they have a lot of knowledge in machine readable form.
Your smart speakers, the digital assistants on your phone – these are all AI systems. Machine translation – which I use a lot because I have to pay taxes in France – it does a great job of translating impenetrable French tax legislation into impenetrable English tax legislation. So it doesn’t really help me very much, but it’s a very good translation. And then the self-driving car, I think you would say that’s a pretty canonical application of AI that stresses many things: the ability to perceive, to understand the situation and to make complex decisions that actually have to take into account risk and the many possible eventualities that can arise as we drive around. And then, of course, at the very high end are human beings.
Robin: At some point in the future, machines, AI, will be able to do everything a human can do, but better. Is that the thing we’re moving towards?
Stuart: Yes. This has always been the goal – what I call ‘general purpose AI’. There are other names for it: human-level AI, superintelligent AI, artificial general intelligence. But I settled on ‘general purpose AI’ because it’s a little bit less threatening than ‘superintelligent AI’. And, as you say, it means AI systems that, for any task that human beings can do with their intellects, will be able, if they can’t do it already, to very quickly learn how to do it and do it as well as or better than humans. And I think most people understand that once you reach a human level on any particular task, it’s not that hard then to go beyond the human level. Because machines have such massive advantages in computation speed, in bandwidth, you know, the ability to store and retrieve stuff from memory at vast rates that human brains can’t possibly match.
…
Kay: Back in 2019, I think it was, you came to me with a suggestion, and that was that, to truly optimise the benefits for humans of AI and, in particular, general purpose AI, which you spoke to Robin about earlier, we need to rethink the political and social systems we use. We were going to lock people in a room, and those people were specifically going to be economists and sci-fi writers. We never did that because we got COVID. But we had such fantastically interesting workshops, and I wonder whether you could tell us a little bit about why you thought that was important and the sort of ideas that came out of it.
Stuart: I just want to reassure the viewers that we didn’t literally plan to lock people into a room; it was meant in a metaphorical sense. The concern, or the question, was: what happens when general purpose AI hits the real economy? How do things change? And can we adapt to that without having a huge amount of dislocation? Because, you know, this is a very old point. Even, amazingly, Aristotle actually has a passage where he says: Look, if we had fully automated weaving machines and fully automated plectrums that could pluck the lyre and produce music without any humans, then we wouldn’t need any workers. That’s a pretty amazing thing to say in 350 BC.
That idea, which Keynes called ‘technological unemployment’ in 1930, is very obvious to people, right? They think: Yeah, of course, if the machine does the work, then I’m going to be unemployed. And the Luddites worried about that. And for a long time, economists actually thought that they had a mathematical proof that technological unemployment was impossible. But if you think about it, if technology could make a twin of every person on Earth, and the twin was more cheerful and less hungover and willing to work for nothing, how many of us would still have our jobs? I think the answer is zero. So there’s something wrong with the economists’ mathematical theorem.
Over the last decade or so, opinion in economics has really shifted. It was, in fact, at the first Davos meeting that I ever went to, in 2015. There was a dinner, supposedly to discuss the ‘new digital economy’. But the economists there – several Nobel prize winners and other very distinguished economists – got up one by one and said: You know, actually, I don’t want to talk about the digital economy. I want to talk about AI and technological unemployment, and this is the biggest problem we face in the world, at least from the economic point of view. Because, as far as they could see, as general purpose AI becomes more and more of a real thing – right now, we’re not very close to it – but as we move there, we’ll see AI systems capable of carrying out more and more of the tasks that humans do at work.
…
Kay: And so these destinations, we’re talking about something in the future. I know that this may be crystal ball gazing, but when might we expect general purpose AI, so that we can be prepared? You say we need to prepare now.
Stuart: I think this is a very difficult question to answer. And it’s also not the case that it’s all or nothing. The impact is going to be increasing: every advance in AI significantly expands the range of tasks that can be done. So, you know, we’ve been working on self-driving cars, and the first demonstrated freeway driving was in 1987. Why has it taken so long? Well, mainly because the perceptual capabilities of the systems were inadequate, and some of that was just hardware. You need massive amounts of hardware to process high-resolution, high-frame-rate video. And that problem has been largely solved. And so, with visual perception now within reach, you can start to think about a whole range of applications: not just self-driving cars, but robots that can work in agriculture, robots that can do the part-picking in the warehouse, et cetera, et cetera. So, you know, just that one thing easily has the potential to impact 500 million jobs. And then as you get to language understanding, that could be another 500 million jobs. So each of these advances causes this big expansion.
And so these things will happen. The actual date of arrival of general purpose AI is not something you’re going to be able to pinpoint. It isn’t a single day: ‘Oh, today it arrived – yesterday we didn’t have it’. I think most experts say that by the end of the century we’re very, very likely to have general purpose AI. The median estimate is something around 2045, and that’s not so long, it’s less than 30 years from now. I’m a little more on the conservative side: I think the problem is harder than we think. But I liked what John McCarthy, one of the founders of AI, said when he was asked this question: Well, somewhere between five and 500 years. And we’re going to need, I think, several Einsteins to make it happen.
Robin: On the bright side, if these machines are going to be so brilliant, will there come a day when we can just say: Fix global hunger, fix climate change? And off they go, you give them six months, or whatever, a reasonable amount of time, and suddenly they’ve fixed climate change. In one of your Reith Lectures you actually broach the climate change subject. You reduce it to one area of climate change, the acidification of the oceans, and you envisage a scenario where a machine can fix the acidification of the oceans that’s been caused by climate change. But there is a big ‘but’ there. Perhaps you can tell us what the problem is when you set an AI off to do a specific job?
Stuart: So there’s a big difference between asking a human to do something and giving that as the objective to an AI system. When you ask a human to fetch you a cup of coffee, you don’t mean this should be their life’s mission and nothing else in the universe matters, even if they have to kill everybody else in Starbucks to get you the coffee before it closes. That’s not what you mean. And of course, all the other things that we mutually care about, you know, they should factor into their behaviour as well.
And the problem with the way we build AI systems now is that we give them a fixed objective. The algorithms require us to specify everything in the objective. And if you say: can we fix the acidification of the oceans? Yeah, you could have a catalytic reaction that does that extremely efficiently, but consumes a quarter of the oxygen in the atmosphere, which would apparently cause us to die fairly slowly and unpleasantly over the course of several hours. So how do we avoid this problem? You might say, OK, well, just be more careful about specifying the objective, right? Don’t forget the atmospheric oxygen. And then of course, it might produce a side-effect of the reaction in the ocean that poisons all the fish. OK, well, I meant don’t kill the fish, either. And then, well, what about the seaweed? OK, don’t do anything that’s going to cause all the seaweed to die – and on and on and on. Right? And the reason we don’t have to do that with humans is that humans often know that they don’t know all the things that we care about. And so they are likely to come back. So if you ask a human to get you a cup of coffee, and you happen to be in the hotel George V in Paris, where the coffee is, I think, 13 euros a cup, it’s entirely reasonable for them to come back and say, ‘Well, it’s 13 euros. Are you sure? Or I could go next door and get one for much less’, right? That’s because they might not know your price elasticity for coffee. They don’t know whether you want to spend that much. And it’s a perfectly normal thing for a person to do – to ask. I’m going to repaint your house – is it OK if I take off the drainpipes and then put them back? We don’t think of this as a terribly sophisticated capability, but AI systems don’t have it, because the way we build them now, they have to know the full objective.
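As a toy illustration of why a fixed objective is brittle, consider the following sketch, in which all plan names and numbers are invented: an optimiser scored only on the stated objective will pick the plan with the catastrophic, unstated side effect, because that side effect carries zero weight in its objective.

```python
# Toy illustration (invented numbers): a fixed-objective optimiser scores plans
# only on the stated objective, so unstated side effects carry zero weight.
plans = {
    "catalytic_reaction": {"ocean_ph_restored": 0.95, "atmospheric_oxygen_used": 0.25},
    "slow_alkalinity_boost": {"ocean_ph_restored": 0.60, "atmospheric_oxygen_used": 0.00},
}

def fixed_objective_score(effects: dict) -> float:
    # The objective we actually wrote down: only ocean pH matters.
    # "atmospheric_oxygen_used" is deliberately ignored - nobody specified it.
    return effects["ocean_ph_restored"]

best = max(plans, key=lambda name: fixed_objective_score(plans[name]))
print(best)  # -> "catalytic_reaction", despite consuming a quarter of the oxygen
```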
And in my book Human Compatible, which Kay mentioned, the main point is this: if we build systems that know that they don’t know what the objective is, then they start to exhibit these behaviours, like asking permission before getting rid of all the oxygen in the atmosphere. And they do that because that’s a change to the world, and the algorithm may not know whether it’s something we prefer or disprefer. So it has an incentive to ask, because it wants to avoid doing anything that’s dispreferred. You get much more robust, controllable behaviour. And in the extreme case, if we want to switch the machine off, it actually wants to be switched off, because it wants to avoid doing whatever it is that is upsetting us. It doesn’t know which thing it’s doing that’s upsetting us, but it wants to avoid that, so it wants us to switch it off if that’s what we want. So in all these senses, control over the AI system comes from the machine’s uncertainty about what the true objective is. It’s when you build machines that believe with certainty that they have the objective that you get sort of psychopathic behaviour, and I think we see the same thing in humans.
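A highly simplified sketch of this idea, not Russell’s actual formulation, might look like the following: the agent holds a belief over a hidden penalty it cannot observe (how much we mind a side effect), and compares the expected value of acting immediately against asking the human first. With enough uncertainty, asking wins. All rewards, costs and probabilities here are invented for illustration.

```python
# Toy sketch: the agent does not know whether we mind a side effect
# (e.g. using up atmospheric oxygen). It holds a belief over the hidden
# penalty and compares "act now" with "ask the human first".
ASK_COST = 0.1      # small cost of interrupting the human
TASK_REWARD = 1.0   # reward for fixing ocean acidification

def expected_value_act(penalty_belief):
    # Acting now: collect the task reward, but pay whatever hidden penalty applies.
    return TASK_REWARD - sum(p * penalty for p, penalty in penalty_belief)

def expected_value_ask(penalty_belief):
    # Asking first: pay the asking cost, then only act in the worlds where
    # the human approves (i.e. vetoes any net-negative plan).
    value = -ASK_COST
    for p, penalty in penalty_belief:
        value += p * max(TASK_REWARD - penalty, 0.0)
    return value

# Belief: 50% chance we don't mind the side effect, 50% chance it is catastrophic.
belief = [(0.5, 0.0), (0.5, 10.0)]
print(expected_value_act(belief))  # -> -4.0
print(expected_value_ask(belief))  # ->  0.4, so the uncertain agent prefers to ask
```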