A.I. Will Enable You To Find Purpose In Life Instead Of Just Flipping Burgers All Day

By Bertalan Mesko, PhD | 2 March 2019
The Medical Futurist

(Image: The Medical Futurist / CC)

Automation has the potential to uproot every part of our social system, and artificial intelligence is entering territory no human creation has reached before: the capacity to know. There are plenty of ways the transformation before us could unfold, but much depends on our ability to adapt to the changing environment as quickly as possible. In the latest episode of the Great Thinkers series, we asked Martin Ford, a leading expert on A.I., robotics and job automation, about the future of work, robotic dogs, dystopian visions and why his daughter studies art.

When people ask you whether their job will be replaced by A.I. or robotics, what do you tell them?

Well, I say that the jobs that will be threatened, at least in the near term, are the ones that are fundamentally predictable. The kind of professions where you go to work and do the same sorts of things again and again. In that scenario, data is being collected, and your tasks are encapsulated in that data in a way that lets a machine learning algorithm extract that capability and take over a lot of the job. That's approximately half of the roles and tasks done by people in the economy.
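As a purely illustrative aside (not part of the interview), the mechanism Ford describes can be sketched in a few lines: a model trained on logged examples of a repetitive decision learns to reproduce it. The "clerical decision" framing, the feature columns and the simple decision rule below are hypothetical placeholders, not a real system.

```python
# Toy sketch: a routine, predictable task becomes automatable once its
# inputs and the decisions made on them are logged as data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend these are logged records of a repetitive clerical decision:
# two numeric inputs per case, plus the decision a human worker made.
inputs = rng.normal(size=(1000, 2))
decisions = (inputs[:, 0] + 0.5 * inputs[:, 1] > 0).astype(int)  # predictable rule

train_x, test_x, train_y, test_y = train_test_split(
    inputs, decisions, test_size=0.2, random_state=0
)

# The model "extracts the capability" encoded in the historical decisions.
model = LogisticRegression().fit(train_x, train_y)
print(f"Accuracy on unseen cases: {model.score(test_x, test_y):.2f}")
```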

Three areas are relatively safe from the disruption we are going to see in the coming 10-20 years and maybe beyond. First of all, the jobs that are not predictable. Those involve creativity, building something new, thinking outside the box. Secondly, professions that rely on building sophisticated relationships with other people. Caring roles like nurses and doctors in health care, for example. You are running around dealing with all kinds of unpredictable situations, interacting with people. Or it might be in the business world, where you are building a deep relationship with clients. Sophisticated interpersonal interactions are again something that machines are not going to be especially good at in the near term. And the third category of jobs will be those that require a lot of mobility and dexterity in unpredictable environments, e.g., skilled trades like electricians and plumbers. It could be a very long time before we get a robot that can do a better job than an electrician does. That would be something like C-3PO from Star Wars, the truly advanced sci-fi robot. That is not going to happen anytime soon.

Based on your book, Rise of the Robots, automation replacing jobs is nothing new. It has been around for decades, at least from the early 20th century on. But when you look at science fiction or at historians describing that era, you don't see it treated as the threat it is today. What do you think has changed to make it a menace of such proportions in the 21st century?

Well, it is true that this alarm has been raised many times throughout history. People have been worried about it going back all the way to the Luddites 200 years ago. In my book, I tell the story of the 'Triple Revolution' report. Back in the 1960s, this was a very prominent report handed to U.S. President Lyndon B. Johnson, and it argued that the country would be upended and thrown into chaos because all this automation was going to put people out of work. So people have worried about this in the past; it just hasn't happened yet. At least not over the long term. There have been short-term disruptions for sure.

But I believe we are indeed entering an age when the technology is finally here. So this time, the story could be different. Of course, I should say that people in the past have always felt that way, and it has turned out not to be true, so for this reason many people are very skeptical, especially economists.

But the argument I would make is that we are now seeing the beginnings of real artificial intelligence, in the sense that, in a limited way, machines are beginning to think and to have cognitive capabilities, to learn, to solve problems. What this means is that machines are starting to compete with what you might think of as our competitive advantage as a species, our core competence. The thing that really sets humans apart is our ability to learn. In the past, we have had lots of technology that enhanced our muscles, that made us faster and stronger, but now, for the first time, we are entering an age where technology is going to, to some extent, replace or greatly enhance our intellectual capability. And I think that is going to be disruptive in a way we haven't seen before.

Thinking 15-20 years ahead, how do you think an average person's life will change, assuming we will not have reached full artificial intelligence by then but will be very close?

Well, central to my prediction about automation is that people might spend less time working, since a lot of the work will be automated, and we will have to figure out a way to adapt to that. Maybe there could be a universal basic income, so that if people are not working as much, they can still survive and still spend money, because a vibrant economy needs consumers.

So assuming we solve that problem, I guess that people would have more leisure time. They might spend that extra time in positive and negative ways. One thing I have some concern about is the rise of virtual reality environments. They are going to become excellent and very compelling. If we are not careful, that might be the way many people choose to spend their time. We might have people checking out from the real world and just living in virtual space, especially if they can be happier there than in the real world. So that’s something that we are going to have to watch.

But in many ways, our lives will definitely be better. There will be all kinds of technology, new forms of entertainment. 15-20 years from now there will be robots to assist us; maybe home robots will be quite popular by then. We will very likely have self-driving cars that will begin to really scale up. That will be transformative for our whole society in terms of how we move around and the way cities are structured. Much of the space now given over to parking would be freed up; maybe that would help with the housing crisis. Right now we have all these commercial buildings with oceans of cubicles full of knowledge workers. If fewer people are working in the cities in 20 years because a lot of jobs have been automated, maybe some of that space will be freed up and converted to housing.

I also really believe that A.I. will become a tool in science and medicine, and it will be applied to numerous issues. So you will see revolutionary breakthroughs in a lot of areas, e.g., clean energy to address climate change. In fact, when it comes to climate change, we had better hope that A.I. works out and helps us, because without it we don't seem to be doing that well. So there is a lot of reason for optimism there, but it is going to be a very different world.

(Credit: Dreamstime.com)

Think about someone meeting a fully functional social companion robot for the first time, or starting a new job and having to learn to work with an A.I. system. How hard do you think it would be to adjust to these new technologies? I assume they will have strange characteristics. When I met some big robots that could move around like human beings, I noticed they had a particular smell coming from their metal parts. I'm quite sure it would affect my relationship with robots that they smell different from what I'm used to. So what kind of features would make it easy or hard to get accustomed to them?

In my latest book, I interviewed numerous people working on social A.I. and social robots that are designed to build relationships. One of the findings of their ongoing research is that robots shouldn't look too much like humans. They often use cartoon faces so they won't be too creepy.

It is also important to remember that the kinds of interactions that are going to occur aren't necessarily going to be with physical or humanoid robots but simply with A.I. systems. A great example of that already is the Amazon Echo, Alexa, right? Already little children are having conversations with these machines. I think that is going to become dramatically more prevalent. Maybe engaging with these virtual systems will be even more important than engaging with actual physical robots.

That's going to have two sides. It's going to be a potent tool for all of us. Maybe 20 years from now, keyboards are not going to be anywhere near as standard as they are today; perhaps you'll be talking to your computer. But it will also bring risks and challenges, in the sense that people might begin to lose track of where the line is between a human being and a machine. People will confuse the two, get addicted to technology and build relationships with these machines, not with real people.

I'm personally very much looking forward to working with such technology in everyday life, but still, I often think about the fact that there are people from an older generation, or people with low IT skills today, who see standard technology like a computer as a kind of threat. If you have to work with a laptop, it has some functionality you have to learn about. But when an A.I. argues with you, that's a whole new level. How do you think we would react? Do you think we as humans would have an existential crisis from not being able to find our place in this new society? Would we feel inferior?

I think different people will adapt at different rates. There's evidence that it's going to be easier for children who grow up with these technologies. If you are already middle-aged or older when these innovations become disruptive, it is going to be harder to adapt. That is, once again, one of the reasons I wrote Rise of the Robots. These impacts, whether we are talking about jobs disappearing or rapidly changing or any other cultural shifts, are going to happen across society. That's why we really need to think and talk about these changes and develop a plan to adapt to them.

Otherwise, yes, I think there's every possibility that it will be a crisis for many people: significant backlashes, political upheavals. In truth, we have seen it already. In the US, we have this raging opioid epidemic, and that is somewhat related to disappearing jobs in some communities, particularly in the industrial Midwest, as well as to rapid change in both the job market and society as a whole. That is killing enormous numbers of people, and it could get worse in the future as these impacts become more disruptive.

Politically, we saw, of course, the election of Donald Trump in 2016. Many people voted for him because they felt they were being left behind, that they no longer had the opportunities, especially the middle-class jobs that were once there but are now gone. They see greater inequality, with a few people doing very well. They are angry, and one way they expressed that anger was by voting for Trump. In the UK, I think you saw something very similar with the Brexit vote.

So, you know, we have already seen disruption, and things could get a lot worse. If things get really bad, that creates an environment where demagogues can capitalize on the fear people feel and be in a position to come to power. That's pretty scary. Again, I think it's imperative to take this seriously and have a discussion about how we are going to adapt to all these changes.

Do you have any suggestions on how to work on our relationship with these automated technologies, so that when we really have to work with them or live with them in the future, we won't be starting from scratch?

From an economic standpoint, addressing inequality is of the utmost importance. I think something like a basic income is going to be required in the future to make sure that everyone has at least something to live on. That by itself is not going to solve the problem, because people also need a purpose in life. If they are not working, they need something to do. Everyone needs the feeling that if they weren't there, they would be missed, that they are doing something important. Most people get that feeling mostly from their jobs.

If in the future people aren't working that much, we need to think of a way to address that. Not having to flip burgers all day but being able to do something more meaningful with your life. An idea I have suggested is building real incentives into a basic income. For example, we give everyone at least a minimal income, but you can also earn more: a higher payment for doing certain things like staying in school and studying, or working in the community. You know, paying people a little more if they do that instead of just staying home and playing video games.

There's also a role for education in helping people prepare for this: emphasizing things like creativity more than rote learning, because that is a skill that will be more important in the future. So there is a lot we can do, but there is a chance this shift could happen quite quickly. We really don't know how fast all of this is going to occur. If it happens soon, it will be terribly tough for people to adapt.

How would you prepare the children of today for that futuristic vision?

As I have said, I think the areas that will be the safest going forward will be creativity, building relationships with people, and jobs that require a lot of dexterity. I have a young daughter, and we sent her to art class, anything that builds creativity, along with team building, making sure children have lots of opportunities to learn how to work together. I would extend that even to the university level. We really want to build those interactive skills. But the main point is that, at any age, we don't want to be just educating people to do routine, repetitive, predictable things, because machines are going to take a lot of those tasks over.

We get many questions from medical students about which specialty to choose. They ask us which medical specialty we believe has the biggest chance of survival. When you see A.I. algorithms in radiology, pathology and dermatology, robots working in surgery and so on, it is hard for students today to decide which path to pursue. So we decided not to give them a list of which specialties will be most transformed by technology; instead, we provided them with a list of skills they should work on, because those skills will be crucial even 20 years from now. What would you tell a medical student who comes to you for advice about what to pursue?

The important thing that is really going to keep a doctor safe is the ability to engage with patients directly and nurture that relationship with empathy; a bedside-manner type of thing. The skill to use that "source" in diagnosing problems in a way that a machine will not be able to. Machines are going to struggle to do that.

Obviously, machines are going to be the best at pure information processing. That is why people talk a lot about radiology and pathology, where you are looking at medical images. That aspect is going to be susceptible to automation. I would expect that it won't be too long until computers are better at reading medical images than any radiologist would be. So you can be a radiologist, but you have to make sure your skill set extends beyond just looking at images of, say, skin lesions and diagnosing cancer. You need to have a role in working with patients or in teaching. Those are the main things. If aspects of your job are purely about manipulating information, you will be susceptible to automation, even in areas of diagnosis. I am not too worried that all doctors are going to be replaced; I don't think that is going to happen by any stretch. Physicians are probably among the safest professions overall.
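As a purely illustrative aside (not something Ford lays out in the interview), the kind of system behind "machines reading medical images" is typically an image classifier trained on labelled scans. The sketch below is a toy stand-in under stated assumptions: the two-class labels, the random batch and the ResNet-18 backbone are placeholders for demonstration, not a real diagnostic pipeline.

```python
# Minimal, hypothetical sketch of an image-classification training step,
# the technique usually meant when people say machines "read" medical images.
import torch
from torch import nn
from torchvision import models

# Start from a standard convolutional network; a real system would load
# pretrained weights and train on large sets of labelled scans.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g. "finding" vs "no finding"

# Fake batch standing in for preprocessed scans: 8 RGB images, 224x224 pixels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
optimizer.zero_grad()
loss = loss_fn(model(images), labels)  # one training step on the toy batch
loss.backward()
optimizer.step()
print(f"toy training loss: {loss.item():.3f}")
```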

But there are going to be more tools, such as A.I. systems that help with diagnosis. I can easily imagine a future where, if a doctor does not use an A.I. system to get a second opinion when making a diagnosis or setting out a treatment plan, that could be considered malpractice, because not using those resources would be crazy. I also think that, given the shortage of doctors in the future due to an aging population and the explosion of chronic conditions, there could be a role for a professional who is trained at a lower level than a doctor but who engages with the A.I. system, acting as a kind of interface between the system and the patient to perform certain basic forms of care, maybe something like what a nurse practitioner does today. That would help address some of the problems we have with a shortage of doctors, especially in rural areas.

Still, what if a student comes to you and says radiology is their only choice? We know that A.I. systems are getting better and better at imaging and diagnosis. Would you still advise that person to pursue the specialty?

If they are passionate about it, sure. Also, the world is still going to need radiologists. It's a bit unpredictable how fast this is going to unfold and what the impact is going to be. If this were a decision I had to make, I would probably still study radiology, but I would also hedge my bets and get some exposure to another specialty in case I had to make a change at some point. I would also be very, very aware of this technology. I would want to be at the leading edge of understanding how it is going to be deployed in radiology and therefore be best positioned to be the person who is actually leveraging it instead of being replaced by it. There are going to continue to be opportunities, but it is absolutely essential to understand the A.I. systems that are going to be deployed in this area if you are going to make it your specialty.

You mentioned empathy before as one of the most critical skills for the professions of the future. Have you seen examples of, or do you think it is possible to develop, robots, A.I. systems or chatbots with empathy?

People are definitely working on it. I interviewed Andrew Ng, who was involved in the development of a chatbot called Woebot, designed to help people with depression and other mental health issues. You can say that you find it disturbing that someone would talk to a machine when they should be talking to a human therapist. But that's not always possible. It's not possible at 3 a.m. when you wake up in bed and don't feel well; your psychologist is probably not going to be there to talk to you at that time of night. It's also not an option for people who can't afford professional help, or for people in developing countries where the resources are simply not available or aren't covered by the health care system.

These technologies have already been demonstrated to be helpful for mental health problems ranging from mild depression to something much more severe. That is a terrific thing, and I definitely think there is going to be an acceleration there. And I hope we will see the benefits, because at the moment we do have something of a mental health crisis.

Can we talk about reverse empathy here? When we saw Boston Dynamics' robotic dogs being kicked for demonstration purposes, many of us cringed, feeling bad for the little creature. How do you see this? Will there be an ethical standard for people to respect automated systems and robots, or is it just about machines that look like biological beings?

There is a range of issues you brought up there. The issue with Boston Dynamics' robot dog came up purely because it looks like a dog, but hopefully we can all agree that that machine has no sense of anything. So kicking that robotic dog is no different from kicking your washing machine. The empathy is understandable because it looks like a dog, but there is no reality to it; there is nothing to worry about at all in terms of actual impact on the machine. One thing you might worry about is when we begin to have robots that look like people or animals, but especially people.

If people don't treat those well, that could carry over into the real world. People worry a lot about this, especially with sex robots: these could really objectify women, and if someone mistreats a robot that resembles a woman, they might do the same thing to real women in the real world. That is a real issue.

And you see a milder version of this with Amazon's Alexa. There have been complaints that children are learning to be quite rude in conversation with Alexa because they understand that it's not a real person. Parents are worried that they are learning bad habits which could carry over to conversations with people in the real world. Amazon responded to that, and I think they added a mode to Alexa that parents can enable: if you are rude to her, she will complain, tell you that you shouldn't speak like that, and ask you to say please. These are real issues that come about because of the potentially blurry line between machines and humans.

Having said that, Nick Bostrom, for example, has brought up the issue that someday a computer could really be conscious. If it could really be an intelligent, cognizant entity, then you really would have moral concerns about whether you are mistreating that machine and causing it to suffer. But that is way in the future. That is one of the questions I asked in my latest book. I asked people how far off this kind of thing is, and I got predictions ranging from 11 years, which is pretty unrealistic, all the way up to almost 200 years. So I think it's pretty far out. But someday in the future that could be a real concern in terms of our moral culpability if we actually abuse or enslave an intelligent machine.

Thinking about robots, I have seen some being used in healthcare settings: robots disinfecting hospital rooms with UV light, or telemedicine robots moving around with the help of GPS to support the work of nurses and other medical professionals. We have also seen that adopting them in clinical settings and including them in care teams is very challenging. Have you seen good examples, or what challenges make it so hard to adopt new robots in the workplace?

There is a lot of work on collaborative robots that are designed to work with people, and that presents challenges, because historically robots have been these big, dangerous machines kept in cages in factories. People weren't even allowed near them. This is the first time we are building robots to interact and work directly with people. It's an ongoing process that we are getting better at. You have heard of Baxter, the manufacturing robot that works right next to people; that's being developed further. And there are machines in hospitals that work alongside people. I think we will learn how to build these systems to do a better job of interacting and collaborating with people.

You have been talking about how automation could improve jobs, save time and allow us to have a basic income. But could you describe your worst dystopian scenario, a real nightmare 20 years from now? What would happen if everything we cherish in technology today went berserk?

Well, the number one thing I worry about is inequality: that we don't adapt to these changes. Lots of jobs disappear, jobs are deskilled, there are fewer opportunities, especially for average people. So we could see massive levels of inequality, maybe outright unemployment. People would lose their economic security, and that would, in turn, have an enormous impact on the economy, because all these affected people wouldn't have money to spend to drive demand. They couldn't pay their debts, their mortgages, their student loans, their credit cards, so we could have another financial crisis like the one back in 2008.

All of this would result in political upheaval. You can imagine really evil people getting elected, demagogues who prey on the fears of the population, more scapegoating where people blame each other rather than understanding that it's about the technology. So all of these things could come together and make everything worse.

In conjunction with that, you have also got climate change, which is having more and more of an impact. I mean, I am sitting here in California, and it's the first day the air isn't full of smoke from the fires that have been burning. So I worry about climate change as well. The reality is that if technology has a really negative impact on people's lives and they are worried about the economy, it gets even harder to address the climate problem, because people are concerned about how they are going to get through the week rather than how to tackle climate change.

And certainly, a lot of terrible things can happen with A.I. Weaponized A.I. is a concern many people worry about: swarms of drones that could be used in warfare or to assassinate people and so forth. So there are a lot of bad things that could happen, for sure. That's why we need to have a real conversation about what this technology means on many levels, find ways to adapt to it, regulate it where appropriate and make sure that it produces positive outcomes instead of negative ones.

Do you see any positive efforts to counter these threats?

I see an increasing discussion about adapting to growing inequality in the future. There is more discussion about basic income, and people like Mark Zuckerberg have gotten involved. Various basic-income experiments are going on around the world. Politicians and technocrats in government in the US and Europe are getting more interested in this issue. So yes, I think we definitely have grounds to be hopeful, but we need to do a lot more. The main reason I have devoted my life to writing and talking about this issue is to try to get that conversation going at an even higher level, because it is imperative. Once again, we really don't know how fast this is going to happen. If we are lucky, it could be very gradual, and maybe we will have lots of time to adapt to the changes, but it could also unfold much more rapidly.

Yes. There is definitely a discussion regarding autonomous weapons in the UN. Numerous experts I talked to are involved in trying to regulate or ban the use of artificial intelligence in autonomous weapons that wouldn’t have a human being controlling them.

And what are you most excited about?

A.I. as a resource to solve problems. The person I talked to who articulates this best is Demis Hassabis, CEO of DeepMind, which created AlphaGo, the system designed to play the board game Go. His vision for his company is to first build A.I. and then use it to solve everything else. It's a very ambitious vision, but it captures what this is all about. Everything we have in our civilization, in our society, everything that makes our lives better today than they were 100 or 200 years ago, is a result of human intellect and creativity, our ability to think, dream, build and solve problems. And now, for the first time in history, we are developing a new resource, A.I., which is going to vastly amplify that capability.

It is going to be the ultimate resource. You will see it deployed in medicine, science, energy research, against climate change, maybe in geopolitics, sociology, against issues like poverty. It’s just going to be this general purpose resource that can be leveraged across the board, and that’s really the reason to be optimistic about the future and about A.I., provided, of course, that we can adapt to the challenges that it will bring with it.

