By Bryan Johnson | 28 June 2017
Here is a recent thought experiment I’ve been doing with family and friends: Imagine you’re reading the history of humanity in 2500 and you could choose to be a character at any time in the past. You could build the pyramids; found an empire or nation; create the Mona Lisa; invent the steam engine; go to the moon or Mars; found a religion; be attached to the great scientific discoveries, technological breakthroughs and cultural revolutions of all humanity.
What would you choose and why?
The answers I hear surprise me. The majority of people choose to go back in time before today. As in, sometime before 2017, and not at any point between now and 2500. Why is this? Why do people choose from the history books? Why not wish to live the life of a newborn in 2499, letting you live as far into the future as possible?
I have a hunch: Humans are biased towards the familiar — it’s warm and fuzzy, helping us feel good about the world. This bias is also what makes us pretty terrible at predicting the future. The future is unknown and dealing with unknowns is stressful. It is why imagining the future can be so scary for people and why anxieties become runaway fears. It is why conservatism is such a common response, especially in an age where technological change introduces such extremely divergent futures to contemplate.
It is natural to accept that our memories can sometimes be biased towards the past, but how often do we consider whether our ability to predict the future is equally so? If we can pinpoint our familiarity bias, what about our future biases?
Recognizing any such biases is of great importance, individually and collectively. We are fundamentally a future-planning species — sadly, more for ourselves and less for future generations. But in attempting to create a pleasant future, we must first be able to trust that our imagination, perhaps our best navigational tool, is sound and unbiased.
“It is accurate prediction of the future, more so than accurate memory of the past per se, that conveys adaptive advantage.”
— Thomas Suddendorf, Science Magazine
A familiarity bias is, in some ways, a rational response to predicting — and soothing oneself — in an uncertain world. The comfort of the familiar is that both the upside and downside are bounded. (The opposite, in some ways, of good investment strategy, where the downside is bounded but the upside infinite.) We know from decades of psychology research that people will take worse odds if they are more familiar with the setup. Why?
One interpretation of this is that we are terribly bad at calculating odds. Another is more generous — familiarity is rational when calculating odds because the unfamiliar is, essentially, a variable of unknown depth and horror. You know what you are getting into with the past. If you choose to be a newborn in 2499, there is some chance you will be born into an apocalyptic wasteland. But if you choose 1840 you are, at worst, stuck in the 19th century lighting oil lamps.
To increase our odds of building a future we care to live in, in an increasingly complex world, we must proactively confront and overcome these biases.
I’ll give an example. I was recently in Saudi Arabia, staring out at the desert expanse with a few business associates. We were discussing the future of nation states, and a gentleman shared the 2030 goals of his country. His goals were many, but they rested on a fatal assumption: that the foundational assumptions (the familiar) that hold about the world today would still apply in 2030.
When I heard this, I immediately recognized his familiarity bias. In response, I proposed a thought experiment. I pointed to a sand dune, the tallest I could see. How would you, I asked him, build a robot to get to that sand dune?
The difficulty, of course, is that sand dunes behave more like a liquid than a solid over time — they constantly change. Simply providing a snapshot of the current topography and programming the robot to navigate that landscape would likely fail: the landscape will shift dramatically, and the robot may be unable to handle the newly configured terrain.
Even the old hockey adage of skating to where the puck will be, not where it is, falls short when we have no idea what the world will be like in 2030.
Instead, you must build a robot able to adapt and reorient to a continually shifting goal. Technological change is fast, and getting faster. There is emerging complexity in managing it all, individually and as a society. I’d argue that we’re beginning to bump up against, or are already past, our collective ability to manage this emerging complexity. Society is at great risk of becoming destabilized.
The sands are shifting.
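The robot thought experiment can be sketched as a toy simulation (everything here is invented for illustration: the one-dimensional world, the drift model, and the success threshold are assumptions, not anything from the text). A robot that plans once against a snapshot of the terrain ends up guarding an empty spot, while a robot that re-senses the dune's position at every step keeps tracking it.

```python
import random

def seek_dune(adaptive: bool, steps: int = 200, seed: int = 7) -> bool:
    """Toy 1-D world: a robot walks toward a dune that drifts each step.

    A static robot plans once against the initial snapshot of the terrain
    and never looks again; an adaptive robot re-senses the dune's position
    at every step. Returns True if the robot ends up near the dune.
    """
    rng = random.Random(seed)
    robot, dune = 0.0, 100.0
    target = dune  # the initial "snapshot" of the landscape
    for _ in range(steps):
        if adaptive:
            target = dune  # re-sense the shifting landscape
        robot += 1.0 if target > robot else -1.0  # step toward the believed target
        dune += rng.uniform(-0.2, 0.6)  # wind nudges the dune away, on average
    return abs(robot - dune) <= 2.0

# A fixed plan fails as the sands shift; continual reorientation keeps up.
print(seek_dune(adaptive=False))  # False: the robot guards where the dune *was*
print(seek_dune(adaptive=True))   # True: the robot tracks the moving dune
```

The design point is the single `target = dune` line: the only difference between failure and success is whether the plan is re-grounded in current reality at each step, which is the essay's argument in miniature.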
To me, future success depends not only on our ability to confront our future biases but on our willingness to actively embrace uncertainty. In our evolutionary history, the fear of uncertainty served us well. Is it now a liability and a risk?
“Consider the cattle, grazing as they pass you by. They do not know what is meant by yesterday or today, they leap about, eat, rest, digest, leap about again, and so from morn till night and from day to day, fettered to the moment and its pleasure or displeasure, and thus neither melancholy nor bored.”
— Friedrich Nietzsche
Building a successful future depends upon our ability to come up with new and unexpected ideas in governance, technology, social structures, and cultural values and norms, among dozens of other things. This is part of why, as an investor, I only invest in futures I cannot predict. I invest in tools because tools open up unknowns. They lead to the unfamiliar.
In “The Book of Imaginary Beings” the Argentine writer Jorge Luis Borges imagines many dozens of fantastic creatures. A Chinese plant “shaped like a lamb, covered with golden fleece”. An “immortal bird that makes its nest in the tree of science”. And the “Pinnacle Grouse”, which has only one wing and can fly only in a circle around a mountaintop.
Many of these wild, rich beings, though, are simply rearrangements of creatures that already exist. Why is someone with one of the richest imaginations in fiction limited to that which is already known? Familiarity bias.
A lot of recent neuroscience research is telling us that, in some ways, Borges had no choice. Our ability to remember and our ability to imagine the future are undeniably linked.
Could this be why people are historically awful at predicting what will happen with the emergence of a new technology? (I don’t mean predicting when a technology will arrive, à la Kurzweil. I mean what a technology will become.)
In starting Kernel, I have had to acknowledge my own familiarity and future biases; our task is to directly engage in our cognitive evolution. I am consumed with the question of what currently exists, or could exist, that I can’t imagine. This adventure into the cognitive unknown is, I think, the single most exciting and epic adventure in history.
In my next post I will propose a solution to the familiarity bias: Future Literacy. Stay tuned.
Reprinted with permission from the author.