Elon Musk, Artificial Intelligence and OpenAI

    By Tim Ventura | 22 November 2019
    Medium

    Elon Musk has been a vocal critic of artificial intelligence, calling it an “existential threat to humanity” back in 2014, and saying “AI will make jobs kind of pointless” at the World Artificial Intelligence Conference in Shanghai.

    He told Alibaba founder Jack Ma that humanity is merely a “biological boot loader for digital super intelligence”, and at SXSW 2018, Musk unequivocally stated, “Mark my words — A.I. is far more dangerous than nukes”, describing it as an even bigger threat than North Korea.

    Musk is heavily invested in AI research himself through his OpenAI and Neuralink ventures, and believes that the only safe road to AI involves planning, oversight & regulation. He recently summarized this, saying:

    “My recommendation for the longest time has been consistent. I think we ought to have a government committee that starts off with insight, gaining insight… Then, based on that insight, comes up with rules in consultation with industry that give the highest probability for a safe advent of AI.”

    Across dozens of media appearances, Musk’s message about AI has indeed been remarkably consistent: it’s dangerous, and it needs regulation, or else “AI could turn humans into an endangered species”.

    The OpenAI Initiative

    According to Vanity Fair, Elon’s concerns about artificial intelligence are the reason he cofounded OpenAI, “a billion-dollar nonprofit company, to work for safer artificial intelligence”.

    “Musk believes that it is better to try to get super-A.I. first and distribute the technology to the world than to allow the algorithms to be concealed and concentrated in the hands of tech or government elites — even when the tech elites happen to be his own friends, people such as Google founders Larry Page and Sergey Brin.”

    Of course, OpenAI is focused on general artificial intelligence, not specific applications of it. Musk is involved in those applied areas as well, with projects like the development of AI chips for Tesla cars, and as a co-founder of Neuralink, a San Francisco-based startup developing “implantable brain–computer interfaces”.

    Musk has discussed these concerns very publicly, including past conversations he’s had with Larry Page & Sergey Brin on the topic, stating:

    “I’ve had many conversations with Larry about A.I. and robotics — many, many,” Musk told me. “And some of them have gotten quite heated. You know, I think it’s not just Larry, but there are many futurists who feel a certain inevitability or fatalism about robots, where we’d have some sort of peripheral role. The phrase used is ‘We are the biological boot-loader for digital super-intelligence.’ ”

    For Musk, OpenAI is an initiative to create safer alternatives to machine superintelligence by developing the safety protocols now, rather than waiting until unconstrained AI arrives on its own.

    Musk Is Not Alone

    The same Vanity Fair article that discusses Elon Musk’s AI concerns includes a handy chart of the top critics & supporters of artificial intelligence.

    The late physicist Stephen Hawking held a very pessimistic view of AI, saying, “The development of full artificial intelligence could spell the end of the human race.”

    Bill Gates takes a more moderate approach, calling AI “both promising and dangerous” while still advocating open, managed research. Meanwhile, at the other end of the spectrum, well-known futurist Ray Kurzweil is very much an AI advocate, saying “AI Will Not Displace Humans, It’s Going to Enhance Us”.

    Other notable critics of Musk’s pessimistic vision include AI experts like Subbarao Kambhampati, Francois Chollet and David Ha, who don’t see “machine super-intelligence” as anywhere close to reality, let alone posing an existential threat to mankind. “I also have access to the very most cutting-edge AI and frankly I’m not impressed at all by it,” said David Ha.

    In their view, Musk is being irrationally alarmist, pointing at a single-celled amoeba in the mud and saying, “we need to be prepared for when that thing evolves.”

    What makes his concern credible or not isn’t whether that evolution happens, but whether it happens soon enough to matter.

    When Will AI Surpass Us?

    2050. That’s the consensus of experts in the Futurism article Separating Science Fact From Science Hype: How Far off Is the Singularity?

    The idea of the Technological Singularity traces back to John von Neumann, was developed through I.J. Good’s notion of an “intelligence explosion”, and was later popularized by Vernor Vinge. Like many others, I learned about the concept from Ray Kurzweil and “The Age of Spiritual Machines”.

    In Ray Kurzweil’s view, evolution is an accelerating process, and technological change will soon outstrip biology’s ability to keep up. He supported this argument by back-plotting major evolutionary and technological milestones on a timeline, which traces an exponential curve leading toward a predicted massive future event that Kurzweil called the “Technological Singularity”.
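
    To make the back-plotting idea concrete, here is a minimal sketch in Python. The milestone dates below are rough, illustrative approximations of my own, not Kurzweil’s actual dataset; the point is simply that the gap between successive milestones keeps shrinking, which is the signature of an accelerating, roughly exponential trend.

```python
# A minimal sketch of Kurzweil-style "back-plotting" of milestones.
# NOTE: the dates are rough, illustrative approximations chosen for this
# example; they are not Kurzweil's actual dataset.

milestones = [
    ("Multicellular life", -1_000_000_000),
    ("Mammals",              -200_000_000),
    ("Homo sapiens",             -300_000),
    ("Agriculture",               -10_000),
    ("Printing press",               1450),
    ("Computers",                    1945),
    ("World Wide Web",               1990),
]

# Measure the interval between consecutive milestones, and how sharply
# each interval shrinks relative to the one before it.
prev_gap = None
for (name_a, year_a), (name_b, year_b) in zip(milestones, milestones[1:]):
    gap = year_b - year_a
    if prev_gap is not None:
        print(f"{name_a} -> {name_b}: {gap:,} years (~{prev_gap / gap:.0f}x shorter)")
    else:
        print(f"{name_a} -> {name_b}: {gap:,} years")
    prev_gap = gap
```

    On a logarithmic timeline those shrinking gaps flatten into a near-straight line, which is the form Kurzweil’s well-known charts take.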

    Kurzweil suggested that the events after the singularity would be nearly impossible to predict: machine intelligence surpasses human intelligence, then rapidly bootstraps itself up to “godlike intelligence”. Musk’s concern is what happens to humanity if this nascent superintelligence hates us, a danger written about at length by Oxford philosopher Nick Bostrom.

    Musk’s claim that AI is an “existential threat” means that it quite literally threatens the very existence of mankind, although not all Singularitarians share this belief. Kurzweil himself offered a much different vision of the Singularity, one in which human beings merge with machines over time, eventually reaching a point where “natural” and “artificial” life and intelligence are indistinguishable from each other.

    In brief, the idea is that the Technological Singularity will begin with the development of an artificially intelligent computer able to “upgrade” itself. Such a machine would rapidly surpass human intelligence and quickly reach a level so far beyond our own that it would effectively become unstoppable.
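
    The dynamics behind that claim can be shown with a toy model. The sketch below is a deliberate oversimplification of my own, not a description of any real AI system: it simply assumes that each self-redesign improves capability by a fixed fraction of the current level, so the gains compound.

```python
# A toy model of "recursive self-improvement": purely illustrative, with
# arbitrary numbers; not a claim about how real AI systems behave.

HUMAN_LEVEL = 1.0     # arbitrary benchmark: "human-equivalent" capability
GAIN_PER_STEP = 0.5   # assumed fractional improvement per self-redesign

capability = 0.01     # start far below human level
crossed = False
for generation in range(1, 25):
    capability *= 1 + GAIN_PER_STEP   # each redesign compounds the last one
    note = ""
    if not crossed and capability >= HUMAN_LEVEL:
        note = "  <- surpasses human level"
        crossed = True
    print(f"gen {generation:2d}: capability {capability:8.2f}{note}")
```

    With these arbitrary numbers, the model crosses human level around generation 12 and sits more than a hundred times beyond it by generation 24. The takeaway is the shape of the curve: compounding improvement leaves very little time between “roughly human” and “far beyond human”.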

    Should We Be Worried?

    In my opinion, Musk has a valid, high-level concern about the creation of truly intelligent AI based on the notion of the coming “Technological Singularity”. It stands to reason that anyone willing to accept the premise that mankind can create machines capable of truly independent, human-level thought should also seriously consider what happens if the machines turn on us.

    The fear that our creations may one day try to destroy us isn’t a new theme: it’s present in Mary Shelley’s Frankenstein (1818), and goes back at least as far as the centuries-old Jewish legend of the Golem.

    In the 20th century, this fear was refocused on artificial intelligence, at first in stories such as Harlan Ellison’s “I Have No Mouth, and I Must Scream”, and later in cinema including Westworld (1973), WarGames (1983), The Terminator (1984), The Matrix (1999), and more recently the Battlestar Galactica TV series (2004–2009).

    Beyond cinema, we’re beginning to see AI products enter the commercial marketplace, putting AI back in the spotlight. Siri, Alexa, Google Assistant & automated chatbots aren’t truly intelligent, but they remind people of the advances happening in the world of AI, and give immediacy to Musk’s concerns about its dangers.

    If It’s Dangerous, Why Develop It?

    Should we be developing a technology that may ultimately make us extinct? It’s definitely a question worth asking. However, despite the possible dangers of AI, it’s important to remember its tremendous potential for good, and the weight that leading technologists place on it:

    “AI is one of the most important things humanity is working on. It is more profound than electricity or fire,” says Google CEO Sundar Pichai. “We have learned to harness fire for the benefits of humanity but we had to overcome its downsides too.”

    We live in a world that keeps getting more complex, and average people have a hard time keeping up. As complexity increases across systems & technology, AI is a powerful tool for overseeing those complex systems and keeping them optimized.

    Elon Musk’s own OpenAI is itself a testament to the promise of AI. Despite its co-founder’s concerns, the company recently picked up a $1 billion investment from Microsoft for the development of true, general artificial intelligence that can mimic the human brain.

    “The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity,” said OpenAI CEO Sam Altman.

    If these claims about the promise of AI are true, perhaps fire is the best analogy for it — an incredibly powerful tool, capable of being used to create or destroy. Even people with deep concerns about it promote its ethical development — with Bill Gates saying, “The power of artificial intelligence is so incredible, it will change society in some very deep ways.”

    Sundar Pichai adds to this, saying, “AI holds the potential for some of the biggest advances we are going to see. You know whenever I see the news of a young person dying of cancer, you realize AI is going to play a role in solving that in the future, so I think we owe it to make progress.”

    Conclusion

    Elon Musk isn’t vacillating on his concerns about AI: he thinks it’s dangerous and needs regulation.

    “I am not normally an advocate of regulation and oversight,” Musk has said. “I think one should generally err on the side of minimizing those things — but this is a case where you have a very serious danger to the public.”

    Despite the dangers, he’s invested in its development through two different ventures, and seems to agree with industry leaders like Gates & Pichai that the promise of AI outweighs the risks, provided those risks are planned for beforehand.

    Musk believes, “It is really all about laying the groundwork to make sure that if humanity collectively decides that creating digital super intelligence is the right move, then we should do so very very carefully — very very carefully. This is the most important thing that we could possibly do.”

    Reprinted with permission from the author.

    Tim Ventura is a futurist, marketing executive and sometime writer with 25+ years of industry experience and a passion for the future. Follow him at LinkedIn and Twitter.

