We speak to Max Tegmark, AI researcher and co-founder of the Future of Life Institute, about his book, Life 3.0, and the future of artificial intelligence.
Artificial intelligence (AI) is changing the world around us. From automated factories that build everything without human intervention, to computer systems capable of beating world champions at some of the most complex games, AI is powering our society into the future – but what happens when this artificial intelligence becomes greater than ours? Should we fear automated weapons turning on us, or Hollywood-style “skull-stomping robots”?
We spoke to Max Tegmark, an MIT professor and co-founder of the Future of Life Institute, about his book, Life 3.0, in which he answers some of the key questions we need to solve to make the future of artificial intelligence one that benefits all of humankind.
Can you describe your book in a nutshell?
There’s been a lot of talk about AI disrupting the job market and enabling new weapons, but very few scientists talk seriously about the elephant in the room: what will happen once machines outsmart us at all tasks? I want to prepare readers to join what I think is the most important conversation of our time. Questions like: “Will superhuman artificial intelligence arrive in our lifetime?”, “Can humanity survive in the age of AI – and if so, how can we find meaning and purpose if super-intelligent machines provide all our needs and make all our contributions superfluous?”, and above all, “What sort of future should we wish for?”
I feel that we’re on the cusp of the most transformative technology ever and this can be the best thing ever to happen to humanity or the worst, depending on how we prepare for it. I’m an optimist. We can create a great future with AI, and I want to influence people to plan and prepare for it properly.
Why do you think the matter of artificial intelligence is an important conversation to be having now?
Because it’s really only in the last few years that a large number of leading AI researchers have started taking seriously the idea that this might actually happen within decades. There’s been enormous progress in this field. Take one example: when would computers be able to beat humans at the game of Go? Just a couple of years ago, most experts thought it would take at least ten years. Last year, it happened. In area after area, things that people thought would take ages are happening much sooner, which is a sign of how much progress there is in the field.
I feel that the conversation is still missing the elephant in the room because people talk a great deal about disruption of the job market, mass unemployment, stuff like this, but there are almost no scientists who talk seriously about what comes after that. Machines keep getting better and better, but will they get better than us at everything, and if so, what then? We have traditionally thought of intelligence as something mysterious that can only exist in biological organisms, especially humans. From my perspective as a physicist, intelligence is simply a certain type of information processing, performed by elementary particles moving around. There’s no law of physics that says we can’t build machines more intelligent than us in all ways. To me, this suggests that we’ve only seen the tip of the intelligence iceberg and there’s this amazing potential to unlock the full intelligence that’s latent in nature and use it to help humanity flourish. In other words, I think most people are still totally underestimating the potential of AI.
If vast numbers of jobs are automated, and a lot of things like manual labour no longer require human attention, how do you think that will change society and what benefits might it bring to us?
If we can automate all jobs, that could be a wonderful thing, or it could cause mass poverty, depending on what we do with all this wealth that’s produced. If we share it with everybody who needs it, then effectively everybody’s getting a paid vacation for the rest of their life, which I think a lot of people wouldn’t be opposed to at all.
I think the European countries are actually key here because, especially in Western Europe, there’s a tradition now – particularly since WWII – of having the government provide a lot of services to its people. One can imagine that, as increased automation generates all this wealth, you only need to bring a small fraction of that wealth back to the government through taxes to provide fantastic services for those who need them and can’t get a job any more. Another question is: how can you organise your society so that people can feel a sense of purpose, even if they have no job? It’s really interesting to think about what sort of society we’re trying to create, where we can flourish with high tech, rather than flounder.
What do you think of the portrayal of AI in the media?
I think it’s usually atrocious. I think, first of all, there’s much more focus on the downside than on the upside because fear sells. Secondly, if you look at Hollywood flicks that scare you about AI, they usually scare you about the wrong things. They make you worry about machines turning evil, when the real concern is not malice but competence: intelligent machines whose goals aren’t aligned with ours. They also lack imagination, to a large extent. If you look at movies like The Terminator, for example: those robots weren’t even all that smart. They were certainly not super-intelligent.
There are very few films where you actually get a sense that these machines are as much smarter than us as we are smarter than snails. I think the media, unfortunately, obsesses about robots just because they make nice eye candy, when the elephant in the room isn’t robots. That’s old technology: a bunch of hinges and motors and stuff. Rather, what’s new here is the intelligence itself. That’s what we really have to focus on. We found it really frustrating in our work that whenever we tried to do anything serious, British tabloids would invariably put a picture of a skull-stomping robot next to it.
Do you think the portrayal of AI in the media is getting in the way of having a meaningful discussion?
Absolutely. In fact, the reason we put so much effort into organising conferences with the Future of Life Institute and doing research grants is that we wanted to transform the debate from dysfunctional and polarised to constructive and productive. When we had these conferences, we deliberately banned journalists for that reason: we felt that the reason it was so dysfunctional was because a lot of the serious AI researchers didn’t want to talk about this at all because they were afraid it was going to end up in the newspaper next to a skull-stomping robot.
People who had genuine concerns in turn felt ignored. I was very happy that, when we were actually able to bring AI researchers together in a private, safe setting, we ended up with a very collaborative and productive discourse where everybody agreed that these are actually real issues, but that the thing to do about them is not to panic, but rather to plan ahead: make a list of the questions we need answers to and start doing the hard work of getting those answers so we have them by the time we need them. I feel that things are going in that direction, but we need to go further.