Can humanity survive in the age of AI?

We speak to Max Tegmark, AI researcher and co-founder of the Future of Life Institute, about his book, Life 3.0, and the future of artificial intelligence.

Published: October 8, 2018 at 1:00 am

Artificial intelligence (AI) is changing the world around us. From automated factories that build everything without human intervention, to computer systems capable of beating world champions at some of the most complex games, AI is powering our society into the future - but what happens when this artificial intelligence becomes greater than ours? Should we fear automated weapons turning on us, or Hollywood-style “skull-stomping robots”?

We spoke to Max Tegmark, an MIT professor and co-founder of the Future of Life Institute, about his book, Life 3.0, in which he answers some of the key questions we need to address to make the future of artificial intelligence one that benefits all of humankind.

Can you describe your book in a nutshell?

There’s been a lot of talk about AI disrupting the job market and enabling new weapons, but very few scientists talk seriously about the elephant in the room: what will happen once machines outsmart us at all tasks? I want to prepare readers to join what I think is the most important conversation of our time, around questions like: “Will superhuman artificial intelligence arrive in our lifetime?”, “Can humanity survive in the age of AI, and if so, how can we find meaning and purpose if super-intelligent machines provide all our needs and make all our contributions superfluous?” and, above all, “What sort of future should we wish for?”

I feel that we’re on the cusp of the most transformative technology ever and this can be the best thing ever to happen to humanity or the worst, depending on how we prepare for it. I’m an optimist. We can create a great future with AI, and I want to influence people to plan and prepare for it properly.

Why do you think the matter of artificial intelligence is an important conversation to be having now?

Because it’s really only in the last few years that a large number of leading AI researchers have started taking seriously the idea that this might actually happen within decades. There’s been enormous progress in this field. Take one example: when would computers be able to beat humans at the game of Go? Just a couple of years ago, most experts thought this would take at least ten years. Last year, it happened. In area after area, things that people thought were going to take ages are happening a lot sooner, which is a sign of how much progress there is in the field.

I feel that the conversation is still missing the elephant in the room because people talk a great deal about disruption of the job market, mass unemployment, stuff like this, but there are almost no scientists who talk seriously about what comes after that. Machines keep getting better and better, but will they get better than us at everything, and if so, what then? We have traditionally thought of intelligence as something mysterious that can only exist in biological organisms, especially humans. From my perspective as a physicist, intelligence is simply a certain type of information processing, performed by elementary particles moving around. There’s no law of physics that says we can’t build machines more intelligent than us in all ways. To me, this suggests that we’ve only seen the tip of the intelligence iceberg and there’s this amazing potential to unlock the full intelligence that’s latent in nature and use it to help humanity flourish. In other words, I think most people are still totally underestimating the potential of AI.

How does your physics research apply to the study of artificial intelligence?

As a physicist, as I’ve said, I don’t think there’s any ‘secret sauce’ to human intelligence. I think we are a bunch of elementary particles arranged in some particular way that helps us process information. But that’s also what a computer is. Many people take for granted that machines can never get as smart as us because there’s something magical about humans. From my perspective as a physicist, that’s not the case.

Another way in which being a physicist shapes my thinking about this is that we physicists have often been told that something is impossible or science fiction, and then we’ve done it – it happened. If you went to somebody in 1915 and started telling them about nuclear weapons, they would dismiss you as a sci-fi dreamer who didn’t know what you were talking about. They would say “Why should I take this seriously when you can’t even show me one single video of one of these so-called nuclear explosions? That’s ridiculous.” Yet, thirty years later, it happened. In hindsight, it would have been good if we had planned ahead a little bit so we didn’t end up in a very destructive nuclear arms race. This time, I’m more optimistic that we can actually plan ahead and avoid the problems if we talk about them.

Traditionally, we’ve kept our wisdom ahead of our technology by learning from our mistakes. When we invented less powerful technologies like fire, we screwed up a bunch of times, and then invented the fire extinguisher. We screwed up a bunch of times with cars, and then we invented the seat belt. With more powerful technology, like nuclear weapons and artificial intelligence, you don’t want to learn from mistakes. You want to plan ahead and get things right the first time, because that might be the only time we have. This is the mindset that I’m advocating for in this book.

If vast numbers of jobs are automated, and a lot of things like manual labour no longer require human attention, how do you think that will change society and what benefits might it bring to us?

If we can automate all jobs, that could be a wonderful thing, or it could cause mass poverty, depending on what we do with all this wealth that’s produced. If we share it with everybody who needs it, then effectively everybody’s getting a paid vacation for the rest of their life, which I think a lot of people wouldn’t be opposed to at all.

I think actually the European countries are key here because, especially in Western Europe, there’s this tradition now – and especially since WWII – of having the government provide a lot of services to its people. One can imagine that, as increased automation generates all this wealth, you only need to bring a small fraction of it back to the government through taxes to provide fantastic services for those who need them and can’t get a job any more. Another question is: how can you organise your society so that people can feel a sense of purpose, even if they have no job? It’s really interesting to think about what sort of society we’re trying to create, where we can flourish with high tech, rather than flounder.

What do you think of the portrayal of AI in the media?

I think it’s usually atrocious. I think, first of all, there’s much more focus on the downside than on the upside because fear sells. Secondly, if you look at Hollywood flicks that scare you about AI, they usually scare you about the wrong things. They make you worry about machines turning evil, when the real concern is not malice but competence: intelligent machines whose goals aren’t aligned with ours. They also lack imagination, to a large extent. If you look at movies like The Terminator, for example: those robots weren’t even all that smart. They were certainly not super-intelligent.

There are very few films where you actually get a sense that these machines are as much smarter than us as we are smarter than snails. I think the media, unfortunately, obsesses about robots just because they make nice eye candy, when the elephant in the room isn’t robots. That’s old technology: a bunch of hinges and motors and stuff. Rather, what’s new here is the intelligence itself. That’s what we really have to focus on. We found it really frustrating in our work that whenever we tried to do anything serious, the British tabloids would invariably put a picture of a skull-stomping robot next to it.

Do you think the portrayal of AI in the media is getting in the way of having a meaningful discussion?

Absolutely. In fact, the reason we put so much effort into organising conferences with the Future of Life Institute and giving out research grants is that we wanted to transform the debate from dysfunctional and polarised to constructive and productive. When we held these conferences, we deliberately banned journalists for that reason: we felt the debate was so dysfunctional because a lot of the serious AI researchers didn’t want to talk about this at all, afraid it was going to end up in the newspaper next to a skull-stomping robot.

People who had genuine concerns, in turn, felt ignored. I was very happy that, when we were actually able to bring AI researchers together in a private, safe setting, we ended up with a very collaborative and productive discourse where everybody agreed that these are real issues, and that the thing to do about them is not to panic, but to plan ahead: make a list of the questions we need answers to, and start doing the hard work of getting those answers so we have them by the time we need them. I feel that things are going in that direction, but we need to go further.

Do you think that we can ever really know if an AI can be conscious in a way that we can understand?

Maybe. First of all, from my perspective as a physicist, you are just food, rearranged. More specifically, you are a bunch of elementary particles moving around in certain complicated ways which process information and make you do intelligent things. So, we know that some arrangements of particles have this subjective experience that we call consciousness, experiencing colours, sounds, smells, emotions. But there are also other arrangements of particles of the same kinds of quarks and electrons that presumably don’t experience anything at all, like your shoe. So what’s the difference, exactly, between a conscious blob of particles and an unconscious one?

If you have any kind of theory for what makes the difference then, as I argue in the book, you can test that on yourself. You can put yourself in a brain scanner and have a computer look at the data from your brain and predict, at each moment, what you’re actually experiencing. Then you can compare those predictions with what you actually experienced, because you know that. If those predictions are wrong, that theory goes in the trash, whereas if those predictions are always correct, you start to take that theory very seriously. Now you can apply that theory not just to your brain, but to the brains of your friends, and to computers as well. If the theory says that this computer is experiencing something, you take it very seriously. I think cracking this puzzle is not science fiction, it’s very doable. It’s definitely something we can and should do.

Life 3.0 by Max Tegmark is available now (£9.99, Allen Lane)

This interview was first published in August 2017
