Rebooting the Brain

By Bryan Johnson | 14 November 2017

(Credit: Bryan Johnson / Medium)

Web Summit, Nov. 6th 2017. Lisbon, Portugal. Lightly edited.

Hi, everybody. What a privilege it is to be here tonight, especially having Stephen Hawking speak before me. Paddy, thank you for that kind introduction. I’m here to speak to you tonight about what Stephen was talking about. That is the future of the human race. It’s something that I think is pressing. In fact, I invested $100 million in Kernel because I believe that we are about to enter the most consequential revolution in the history of the human race, and specifically, that we are going to build the tools to read and write our neural code so that we can take control of our cognitive evolution.

Before I get into this, I want to tell you a little bit about how this got started. It was actually 20 years ago, when I was in Ecuador working as a Mormon missionary, living among extreme poverty for two years. I returned to the States and I had this overwhelming desire to try to improve the lives of others. I looked around at the options I had and I couldn’t really find anything I thought was a fit. So I thought, “You know what? I’ll become an entrepreneur. I’ll make a whole bunch of money by the age of 30. Then, with an abundance of money and freedom of time, I’ll do something useful.”

That was my naïve 21-year-old mind, and lucky me, it happened. I sold my company, Braintree, to PayPal in 2013. With a fresh perspective, I got to ask this question: “Now that I’m in this privileged position, I have the money, I don’t need the permission of anybody to do what I want to do, what do I do?” The question was, “How do I create the most value for the human race?” So I surveyed the world.

Before I get into it, let me get a feel of all of you here. When you look at the future and you think about all the problems we have, but also all of the wonderful things we have going on, how many of you think, “You know what? We’ve got this. We can solve this”? Show of hands. Okay. Second question, how many of you look at the same set of problems and challenges and say, “Ah! I don’t really know if we can do this”? Show of hands. Okay. I’m with this latter group. I think the data does show that the world is getting better. We’ve never had less violence, and the standard of living has never been higher. We’re generally trending in the right direction. Yet, I think the future is complex for a number of reasons.

First, the pace of change is accelerating. That creates stressors on our society, things we have to deal with that we otherwise didn’t have. When things move really fast, it puts us in a difficult spot. Number two, emergent complexity. It’s become increasingly hard to manage our societal complexity. The only image that comes to my mind when I think about the future is a category five hurricane bearing down on us with tremendous force. I have images of it leaving us without power and water, unable to cooperate.

The single greatest thing we can do as a species is to work on our adaptability, our ability to evolve and adapt to this change. Now, when I went through these problems, I assessed the world and I formed my conclusion, and I thought, “You know what? Maybe I’m off. Maybe I really don’t understand the reality.” So I thought, “I’ll do the most reasonable thing anyone would do.” I asked my seven-year-old daughter, “Baby, what’s on your mind? What do you think about the future?” And we got this list.

Her number one concern: monsters. Number two, her parents dying. I guess that’s me. Three, our snake, Nagini, coming back to life — who, sadly, happens to be dying in the walls. Fourth, animals taking over the world. Now, why does she think this is the case? Because monsters want to kill us, kill her, and take over the world, because they want to be rich, to have a slide in their backyards, have bumpy cars, and then force people to play with them.

Now, this seems cute and funny, but when I started thinking about her concerns, I actually found that my concerns are pretty similar to hers. Let’s take monsters, for example. I, too, am scared of monsters — just not monsters under my bed, but monsters inside my head. My father suffered from drug addiction for the first 25 years of my life. My stepfather has signs of Alzheimer’s. It’s devastating to watch someone you love lose their humanhood right before your eyes. I suffered from chronic depression for a decade. I wanted, with the greatest intensity I can explain, to just cease to exist. I had children, so that wasn’t a way out, but I can tell you I wouldn’t wish that on anybody.

Number two, parents dying. I, too, am scared about dying. I grew up in this Mormon household where I was told there’s a God and this God has a plan and if you obey the rules of this God, you get this amazing afterlife. It was an amazing deal, so I obeyed the rules.

As I grew older, I decided I didn’t know if I believed this anymore. I had this shocking realization that I had to reconstruct my existence from scratch. Why do I exist? What is my purpose? What is my meaning? Now it’s entirely my responsibility to make this life as amazing as possible, not the next life.

Number three, Nagini coming back to life. Even though the ball python was, well, in the wall, we recognize this fear: the things we don’t see, the things that come back to bite us, the things that blindside us, the things we create in the name of progress — such as social media — that come back and influence elections toward an undesirable outcome.

Fourth, animals taking over the world. It was a wonderful opportunity to have Stephen Hawking speak about AI. 2017 seems like the year when we all became a little frantic about the realization that AI may take over the human race. I don’t think those concerns should be discredited. I think we should be very thoughtful about these things. At the same time, I personally am much more concerned about human behavior than I am about AI. The power of a single individual to create a biological threat, or to knock us off the electrical grid, is devastating. We cannot let single individuals create so much destruction in society.

So in fact, in thinking about what Stephen Hawking spoke about, I drew this picture on a napkin this morning.

Let me see if I can pull this up. Whatever you believe about AI — let’s say you believe the growth of AI is exponential, let’s say you believe it’s more of a linear curve, let’s say you think it’s punctuated equilibrium — whatever you believe, you can say it’s up and to the right. If we look at human ability, it’s flatlined.

Meanwhile, we have gone from clubbing each other with stones to shooting each other with automatic weapons. So at what point do we look at the difference between the capability set of our tools, or AI, and our own abilities and say, “We feel really uncomfortable with this”? How much lead time do we need, once we’ve made that realization, to make our own evolution — increasing our own abilities — the number one priority of the human race?

So in thinking about all this — the problems we have in the world, the things we need to do about them — this is why I started Kernel. When I looked at the world, I tried to assess who’s investing in the brain, who’s trying to improve its abilities, who’s putting the capital there. Is the government doing it? Are entrepreneurs doing it? Are venture capitalists funding it? How often do we talk about it? In fact, I had 12 dinners over the past year, and I would start the dinners by saying, “Let’s imagine we’re living in the world in 2050 and we’re very happy. We’ve built this remarkable world. What did we focus on in 2017 that allowed us to build this amazing world?” Without fail, I got the same answers every time: climate science, education, AI, security. The same answers every single time.

You know what? Not a single person mentioned the brain. Yet everything we are, everything we build, and everything we aspire to become stems from the brain. So why is the brain such a blind spot in our society? Kernel is an attempt, a first attempt, at trying to make a dent in the tools we have to interface with the brain. So what is the state of play? What can we do? We have MRI machines to image our brains. We have EEG to record brain activity when we meditate or sleep. We have tDCS to stimulate the brain to treat addiction. These tools are amazing — I don’t mean to belittle them at all — but they lack the power we need to read and write our cognition.

So when I say read and write neural code, what does that mean? What are some examples to make that tangible? Imagine if I had a tool to interface with my brain, where I could walk a mile in someone else’s shoes. What if I could feel what it was like to be you? What if I could understand your contextual framework? What if I could understand your memories and your emotions? Would that change the way we deal with each other, the way we cooperate, the way we make decisions? Would that change our creative abilities?

What if, instead of destroying our enemies, we destroyed the concept of enemies? What if our imagination about the potential of our own cognitive abilities is much like the 1800s? If you went back then and said, “Dear Sir or Madam, could you please give me a rousing speech on the potential of electricity? What will it power one day?” they probably would have said, “Lights?” That’s the key. Our imagination constrains us to what we’re familiar with.

Now, in building these technologies, as in any revolution, it is not guaranteed that things are going to go well. We’ve seen that happen many times. Revolutions come, a narrow window opens up, the old is pushed aside, and the new takes hold. So how might we be thoughtful in building these technologies? Let me give you one example. When I use Facebook today, I log in and I socialize with my friends and my family.

In exchange for that, they acquire as much data as they possibly can about me. They know me better than my girlfriend does. That’s the same relationship I have with Google and Twitter and the government, which is also collecting everything it can about me. I really dislike this relationship. Now, let’s imagine we have these interfaces and we’re streaming our thoughts and our secrets and our imaginations and our fantasies in real time. I would really dislike that relationship.

So what if we had the power of these tools, and what if we could drop this data into the blockchain, and what if we said that human data privacy was a human right? In doing that, we could leverage these tools to rebuild societal structures on trust and security, and work on things like terrorism or mass shootings. If you are represented on the blockchain, with all your information secured, and you grant permission to others, then people could build security algorithms that query your blockchain and ask, “Is this person safe to enter Web Summit?” We can rebuild societal trust mechanisms. The idea that a centralized government can keep society safe is insane. We have to rebuild these fundamental architectures.

Two final thoughts. We broadly agree as a society that basic literacy — reading and writing — is a fundamental need for all of us. It allows us to communicate, cooperate, and share information. There’s another form of literacy that we need as a society, and that is future literacy. I was recently in Saudi Arabia with a gentleman. He was telling me about his 2030 goal for his country. I thought that’s amazing, because that’s 13 years away. The world is going to change 100 times over between now and then.

I said, “Let’s play a game here. Let’s imagine that we have a robot and we’re trying to get it to the far end of that sand hill over there.” So we program the robot. We make a topographical map of the area and we program the robot to get to that sand dune. The problem with that plan is that the moment the robot begins, the sands shift, and now the robot is stuck.

The better way to do it would be to program the robot to respond in real time to anything that comes its way. So no matter how the sands shift, it can move and hit the endpoint. We as a society need to become that adaptable. The key to becoming that adaptable is building the baseline set of technologies to intervene in our own cognition.

In short, a revolution on a scale we’ve never seen is coming. It’s going to be on our front doorstep in 15 to 20 years. A window is going to open up and we’re going to have a shot at building something remarkable. Once we’ve built it, it’s going to close. I’m going to go home to my daughter. I’m going to say, “Baby, I know you’ve got a lot on your mind. I get it. Monsters are scary. Nagini is in the wall. Animals are taking over. I hear you, but guess what? I’m working on these things, and I just talked to 15,000 people in Lisbon and they’re working on it too.” Guess what? We’ll do this. So let’s not fuck this up.

Thank you.

Reprinted with permission from the author.

Bryan Johnson is founder of Kernel, OS Fund and Braintree. In 2016, he founded Kernel, investing $100M to build advanced neural interfaces to treat disease and dysfunction, illuminate the mechanisms of intelligence, and extend cognition. Kernel is on a mission to dramatically increase our quality of life as healthy lifespans extend. He believes that the future of humanity will be defined by the combination of human and artificial intelligence (HI+AI). In 2014, Johnson invested $100M to start OS Fund, which invests in entrepreneurs commercializing breakthrough discoveries in genomics, synthetic biology, artificial intelligence, precision automation, and new materials development. In 2007, he founded Braintree (which acquired Venmo) and sold it to PayPal in 2013 for $800M. He is an outdoor-adventure enthusiast, pilot, and author of a children’s book, Code 7. You can follow his work on his Future Literacy publication on Medium and on Twitter.
