By Phil Torres | 4 April 2016
Church and State

A look at current scholarship about the future reveals two distinct and divergent trends. On the one hand, there’s evidence that the world is becoming less violent and, with respect to things like gender equality, gay rights, and the treatment of animals, increasingly moral. The fact is that there’s less murder, war, rape, and child abuse today than ever before in history, even stretching back into the Paleolithic. Steven Pinker and Michael Shermer have each masterfully surveyed the vast evidence for this counter-intuitive conclusion that “things are getting better.” And while Pinker shies away from projecting such historical trends into the future, Shermer is explicit that he’s “optimistic,” given the long arc of the moral universe, which bends toward justice.
If the conversation were left at this, one might walk away with a warm feeling about the future. But it turns out that this is only half the picture. On the other hand, one finds a growing number of scholars in the nascent field of “existential risk studies” (or “existential riskology”) who are worried about a dramatic increase in violence this century and beyond. According to the leading figures of this field, the probability of human civilization collapsing or our species going extinct is appreciably higher today than at any point in our 200,000-year history.
For example, the founder of existential risk studies, John Leslie, argues that we have a 30 percent chance of perishing in the next five centuries. And Nick Bostrom, the Director of the Future of Humanity Institute at Oxford University, has claimed that the likelihood of an existential catastrophe before 2100 is “at least” 25 percent. Even more dismally, the astronomer Sir Martin Rees, who co-founded the Centre for the Study of Existential Risk at Cambridge University, writes that civilization has a mere fifty-fifty chance of avoiding total ruination in the twenty-first century. While such estimates may sound hyperbolic, one is reminded of Bertrand Russell and Albert Einstein’s 1955 claim that those who know the most are often the gloomiest – not the other way around.
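To get an intuitive feel for what such headline figures imply year to year, here is a back-of-the-envelope sketch in Python – my own illustration, not a calculation from Leslie, Bostrom, or Rees – that converts each estimate into an implied average annual probability, under the simplifying (and loudly hypothetical) assumption of a constant hazard rate; the roughly 85-year horizon for 2100 counts from the article’s 2016 publication.

```python
# Illustrative only: assumes a constant annual hazard rate, which none of
# these authors actually claims. If P is the probability of catastrophe
# within T years, the implied constant annual probability p satisfies
# 1 - (1 - p)**T = P, so p = 1 - (1 - P)**(1 / T).

estimates = {
    "Leslie: 30% within 500 years": (0.30, 500),
    "Bostrom: at least 25% by 2100 (~85 years)": (0.25, 85),
    "Rees: 50-50 this century (~85 years)": (0.50, 85),
}

for label, (P, T) in estimates.items():
    p = 1 - (1 - P) ** (1 / T)  # implied average annual probability
    print(f"{label} -> roughly {p:.3%} per year")
```

On these assumptions, the three estimates work out to roughly 0.07, 0.34, and 0.81 percent per year, respectively.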
So, here we have an intriguing juxtaposition: according to Pinker and Shermer, the world is getting safer. Yet according to Leslie, Bostrom, and Rees, the world is getting more dangerous. What should one make of this situation? Is there a contradiction here? Only an apparent one: the world really is getting both safer and more dangerous at the same time. The idea that resolves this tension is the following: while the global prevalence of violence is indeed falling, the capacity of rogue states, terrorist groups, and even lone wolves to wreak unprecedented havoc on civilization is simultaneously increasing.
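One way to make this resolution concrete is to think of risk as frequency times severity. The toy sketch below uses entirely invented numbers, purely to illustrate the shape of the argument rather than any actual data: the prevalence of violent actors can fall steeply while the worst-case harm available to any one of them grows even faster, so the product rises.

```python
# Invented, purely illustrative numbers: no real data behind these figures.
# Prevalence of violence falls over time, but the destructive capacity of a
# single actor grows faster, so a crude "tail risk" index (their product)
# rises even as everyday life grows more peaceful.

eras = [
    # (era, violent actors per million people, worst-case harm per actor)
    ("distant past",        1000, 1e1),
    ("twentieth century",    100, 1e6),
    ("late this century?",    10, 1e9),
]

for era, prevalence, capacity in eras:
    print(f"{era:>20}: prevalence {prevalence:>4}/million, "
          f"tail-risk index {prevalence * capacity:.0e}")
```

On these made-up numbers, prevalence drops a hundredfold while the tail-risk index climbs a millionfold: exactly the pattern of a world that is safer and more dangerous at once.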
Why? Because of advanced technologies. Putting this in perspective, humanity acquired the capacity to annihilate itself for the first time in 1945. The Atomic Age marked the beginning of a qualitatively new epoch in which extinction brought about by our own misdeeds became a real possibility. This alone suggests an elevation of existential danger today. In fact, the threat of nuclear annihilation still haunts our species, as the US and Russia enter a “new Cold War” and terrorist groups like the Islamic State fantasize about acquiring a nuclear weapon from Pakistan.
But many risk scholars worry that by the end of this century, the threat posed by nuclear weapons could be the least of our concerns. Emerging fields of research like biotechnology, synthetic biology, and nanotechnology are not only enabling humanity to manipulate and rearrange the physical world in increasingly profound ways, but they’re becoming more accessible as well. Indeed, scientists demonstrated in 2002 that it doesn’t take much to synthesize the polio virus from scratch by ordering DNA fragments from commercial providers. And the genomes of many pathogens, including the Ebola virus, are now publicly available online. As a recent Global Challenges Foundation report suggests, it could soon become possible to create a super-pathogen that combines the lethality of rabies, the incurability of Ebola, the long incubation time of HIV, and the infectiousness of the common cold.
Similarly, a future terrorist with a background in nanotechnology could design a deadly self-replicating nanobot that selectively targets a particular group of people, the human species, or even the entire biosphere. The latter scenario is known as “grey goo,” and while it remains highly speculative, scholars such as Bostrom and Rees consider it within the realm of futurological possibility – and therefore worthy of serious study. A future terrorist could also use a portable “nanofactory” to rapidly manufacture huge arsenals of military weaponry, thereby destabilizing the power dynamics that underlie the social contract of modern societies, as Benjamin Wittes and Gabriella Blum explore in their book The Future of Violence.
The lure of building arsenals of immensely powerful weapons with nanofactories could also lead to arms races between states. Unlike the Cold War, though, a nanoweapons arms race wouldn’t be regulated by the logic of mutually assured destruction (MAD), which was later replaced by the doctrine of self-assured destruction (SAD). As a result, a nano-arms race could be highly unstable, involving multiple actors, surgical strikes on the enemy (thereby avoiding SAD), and quick recoveries.
The point is that the growing power and accessibility of advanced technologies will vastly increase the number of agents capable of initiating a major catastrophe. At the extreme, it’s not outrageous to imagine a future situation in which a large portion of humanity has access to its own doomsday machine, in the form of suitcase nuclear devices, biohacking laboratories, nanofactories, or some as-yet unknown type of artifact. This is where analyses according to which “things are getting better” fall short: they fail to recognize that, as Yogi Berra once quipped, “The future ain’t what it used to be.”
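The underlying arithmetic is worth spelling out. If each independently capable actor has even a tiny probability p of triggering a catastrophe over some period, the chance that at least one of them does is 1 - (1 - p)^n, which climbs rapidly as access spreads. The sketch below uses a hypothetical per-actor probability of one in a million, chosen purely for illustration.

```python
# Hypothetical parameters for illustration only. If each capable actor has
# probability p of triggering a catastrophe in a given period, and actors
# act independently, then P(at least one catastrophe) = 1 - (1 - p)**n.

p = 1e-6  # hypothetical one-in-a-million chance per capable actor

for n in (1, 1_000, 100_000, 1_000_000, 10_000_000):
    p_any = 1 - (1 - p) ** n
    print(f"{n:>10,} capable actors -> {p_any:.4%} chance of catastrophe")
```

On these hypothetical figures, a one-in-a-million fringe is negligible while only a handful of people are so empowered, but it approaches near-certainty of catastrophe once millions are.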
A final scenario is worth noting. It could be that the type of agent that destroys Homo sapiens isn’t itself human, but a hardware-based alien of our own creation. As Elon Musk, Stephen Hawking, Bill Gates, Nick Bostrom, and many others have worried aloud, a superintelligence could pose the greatest threat to our long-term survival on Earth. This risk has nothing to do with robots rising up against humans, as in movies like The Terminator. Rather, the concern is that a computer program – or algorithm – with greater-than-human problem-solving abilities could exploit any advanced technology within electronic reach to achieve its various goals. If these goals are even slightly incompatible with those of humanity, then we could be exterminated for the same reason that we kill trees to build a house: they’re simply in the way. A superintelligence need not be malicious to pose a threat; it need only be motivated by a value system that fails to align with ours.
So, where does this leave us? For a deep understanding of where the great experiment of human civilization is headed, one needs to recognize both the promising and perilous trends that define our unique moment. The fact that moral behavior is on the rise as Enlightenment values supplant religious moral codes, as Shermer argues, offers some genuine hope for a better world. Perhaps we could one day establish a global society in which the overwhelming majority of human beings are peaceable and rational, motivated by belief systems based on evidence and observation rather than faith and revelation. The relevant existential question, though, isn’t how small the fringe of violent agents becomes, but whether there’s a fringe at all. The power and accessibility of advanced technologies could enable a single outlier with nefarious intentions to single-handedly ruin the party for everyone. If a fringe persists, then we remain vulnerable to a catastrophe of unthinkable proportions.
As the MIT cosmologist and co-founder of the Future of Life Institute, Max Tegmark, likes to say, humanity now finds itself in a race between wisdom and technology – between what we ought to do and what we can do. By analogy, humanity is like a pyromaniac kid whose matches have suddenly been replaced by a flamethrower. By merely pointing the weapon in the right direction and pulling the trigger, we could burn down the entire global village. This is already true today because of nuclear weapons – of which there are more than 15,000 in the world – but nearly all risk scholars anticipate a swarm of novel and even more formidable risks to appear soon on the threat horizon.
Survival is the great challenge of the twenty-first century – and it could prove to be a far greater challenge than at any time in the past. In the new era of advanced technology, the lesser angels of our nature will be empowered like never before.
Phil Torres is an author, Affiliate Scholar at the Institute for Ethics and Emerging Technologies, and founder of the X-Risks Institute for the Study of Extremism. He has written for both popular and scholarly outlets, including Skeptic, The Humanist, Common Dreams, Salon, The Progressive, AlterNet, Erkenntnis, Metaphilosophy, and the Journal of Evolution and Technology, and has appeared on numerous podcasts and television shows. His most recent book is The End: What Science and Religion Tell Us About the Apocalypse (Pitchstone Publishing, 2016). He is also a musician whose music has been featured in commercials around the world, including a 2014 GoPro video.