
Artificial intelligence and the nature of consciousness, with Hod Lipson

Hod Lipson (hodlipson.com) is a roboticist who works in the areas of artificial intelligence and digital manufacturing. I talk to Hod about the nature of self-awareness. Topics discussed include: how close we are to self-conscious machines; what he views as likely building strategies that will yield self-aware machines; what it takes for something to be considered self-aware; how artificial intelligence research might help us better understand the structure of our own minds and how we behave; and what he sees as the risks of AI.

A transcript is below. 

TRANSCRIPT

[Note: transcripts will contain errors.]

Zach: Hello and welcome to the People Who Read People podcast, with me Zachary Elwood. This is a podcast about understanding other people and understanding ourselves. You can learn more about it and get episode descriptions at www.behavior-podcast.com.

On today’s episode, recorded December 20th 2021, I talk to Hod Lipson about artificial intelligence: we talk about how close he thinks we are to self-aware machines, what he views as likely strategies that will yield self-aware machines, and how artificial intelligence research might help us better understand the structure of our own minds and how we behave.

Hod is a professor of engineering and data science at Columbia University. His website is at www.hodlipson.com, that's H-O-D L-I-P-S-O-N. I'll read a little bit about him from that site:

Hod Lipson is a roboticist who works in the areas of artificial intelligence and digital manufacturing. He and his students love designing and building robots that do what you’d least expect robots to do: Self replicate, self-reflect, ask questions, and even be creative.
Hod’s research asks questions such as: Can robots ultimately design and make other robots? Can machines be curious and creative? Will robots ever be truly self-aware? Answers to these questions can help illuminate life’s big mysteries.
An award-winning researcher, teacher, and communicator, Lipson enjoys sharing the beauty of robotics through his books, essays, public lectures, and radio and television appearances. Hod is a professor of Mechanical Engineering at Columbia University, where he directs the Creative Machines Lab, which pioneers new ways to make machines that create, and machines that are creative.
End quote

Okay, here’s the interview with Hod Lipson:

Zach: Hi Hod. Thanks for coming on.

Hod: My pleasure.

Zach: Maybe a good place to start [00:02:00] is this: when it comes to an artificial creation that could approach human-like abilities, either in terms of creativity or self-awareness, what's the most impressive artificial intelligence feat that you've seen so far?

Hod: Oh, you know, it’s very hard to, to choose because, uh, these feats keep coming, uh, at a. It’s an increasing rate. I mean, uh, if you’d asked me a few years ago, I’d say the fact that, uh, an AI can tell a difference between a cat and a dog, Hmm. That’s an incredible feat that we, the community have been struggling with for decades, uh, with no hope until 2012.

Uh, but since then, other things I’ve have come, uh, you’ve seen AI that can create, uh, can, can write, can summarize tasks. Can answer questions in a dialogue and you see robots doing back flips. So it’s really, it’s all over the place.

Zach: Mm-hmm. It’s a wide space. Yeah. What’s the project [00:03:00] you’ve worked on that you’re the most proud of?

Hod: Again, hard to choose. There's a lot happening, and especially since some of my students might be listening, I don't wanna pick one over the other. But really, when you look at AI and robotics, we're seeing amazing things, from robots that can build robots, to machines we've created that can sort of begin to understand what they are. This is a very exciting area that I hope we'll see more and more of.

We've seen networks that learn better by listening to acoustics rather than being told what the answer is. All kinds of surprises, where we have neural networks, for example, that learn how to paint, that make art that for most people is as good as human-created art.

Mm-hmm. I mean, it’s all over the place. [00:04:00]

Zach: When it comes to the general public's perception of how far along AI is to being something like the self-aware, sci-fi vision of AI that we've seen in movies and TV and such, how far do you think the gap is between what we're capable of now and people's perception of how far along we are to reaching something like that?

Hod: You know, the gap today is pretty large, but here's the interesting thing: the time to close that gap is short. In other words, even though the gap is large, because the technology is accelerating, the rate of progress that we've seen over the past couple of decades is not representative of what's gonna happen next.

I believe in the next decade we'll make so much more progress than we've done in the past century in terms of [00:05:00] machine intelligence. So even though the gap is large, and machines today are nowhere near human-level consciousness for various reasons, we'll close that gap sooner than you would expect.

And in fact, I think we underestimate where this is going. Most people tend to ask at what point machines will reach human-level intelligence, but we don't understand that human-level intelligence is not the ultimate thing. I mean, it'll just keep on going. It'll rush by us and go somewhere we can't even imagine.

Zach: Let’s say there was a, an artificially constructed entity that showed evidence of, of being self-aware. In other words, it showed evidence of having an eye point of view of some sort. Do you think it’s, it’s necessarily the case that that entity would always have some sort of internal perceptual experience, or do you believe it’s possible that something could [00:06:00] express, um, an awareness of self without having an internal reality?

Hod: Absolutely. Any machine, any human, any animal can express all kinds of external cues that make it look like it has self-awareness or consciousness or whatever it is that you care to define, but it's really not the same as what you and I have. It's absolutely possible.

In fact, you know, philosophers have debated this for millennia: how do I know you are conscious? How do you know I am? I mean, we're making a lot of assumptions, but the reality is you never know. And it's been the same in artificial intelligence; there's been this debate about strong AI versus weak AI: is the AI really smart, or is it just imitating it? [00:07:00]

Not just for consciousness; it has existed for almost any form of intelligence you can imagine. How do you know that an AI really understands what it's translating, or is just translating with an extraordinarily long dictionary? How do you know that a chatbot really understands your question, or is just looking up answers? The reality is, theoretically speaking, you can never tell. But at some point it becomes so close that you just have to acknowledge it's indistinguishable, and it's effectively the same thing.

Zach: Yeah. It seems like at some point, if you're talking about behavior that was not programmed in, that was kind of emergent in some black-box way, it seems like Occam's razor might say that if it's doing all of these things that are hard to understand and expressing a sense of self, it's [00:08:00] probably more likely that it has a sense of self than that it's just somehow learned to fake having a sense of self. If that makes sense.

Hod: It, it’s true. And um, I think that, uh, one of the misconceptions about artificial intelligence, there’s a couple, but one of them is, is that it’s going to, uh, sort of imitate. Humans. In other words, it’s gonna have the same kind of self-awareness or consciousness as humans have now. You have to remember, we humans have evolved our self-consciousness for in, in response to sort of pressures of survival.

It, it is, our self-awareness is good at. Certain things that help us give us an evolutionary advantage, and it’s not good at other things. So we have a very, very specific form of self modeling, our ability to see ourselves, um, that is unique to our environment, to our evolutionary history, to our bodies, to our sensations.

We [00:09:00] only sense so many things, but when you look at AI and robots in general, AI in the, in the real world. Robots will have different sensations, different sensors, different uh, abilities, different environments, and therefore they will have a very different kind of sense of self, a very different kind of self-awareness.

So it’s not going to be that sort of, uh. You know, this Android, uh, science fiction

Zach: Mm-hmm. A human-like machine...

Hod: that wants to be human and, you know, craves that. No, it's gonna be its own thing. It's gonna be like a different animal that has a self-awareness that is in some ways like us, in some ways different.

And I think we have to let go a bit of that human-centric view of self-awareness and recognize that there are a lot of types of self-awareness, and it's gonna be really interesting to see what happens.

Zach: right. It’s probably more akin to, in the same way that it’s hard for us to even imagine, you know, what a, what an insect or a bad is like, or whatever.

There’s [00:10:00] that similar problem and it won’t be exactly like apples to apples kind of comparison. Yeah, exactly. The reason I was interested in talking to you specifically with your work is because in my admittedly, very amateur thinking about artificial intelligence and how one might go about creating a self-aware entity, it seems to me that having a physical body would be an important I.

Part of developing that, because it seems like a core component to self-awareness is having some kind of boundary between oneself and one’s environment. That’s right. So that’s one can define itself against a changing environment and gets various kinds of sensory inputs that are out of its control. So I think, uh, it’s hard for me to imagine that happening in a digital only environment inside a computer.

So that’s why I find your work so interesting, and I’m curious. Is that basically why you’re interested in robotics? Did I sum up basically what you believe? Or if I, if I was off base, maybe you could correct what I said.

Hod: Yes, you nailed it. I mean, I think you have to have [00:11:00] what we call an embodied agent: an agent that has a boundary, that has sensors, and that can take actions in an environment. Now, in principle, you could have a simulated robot in a simulated environment; it doesn't necessarily have to be physical. But the physicality and the richness of the physical interaction, and the sort of open-endedness of physical interactions, really give it that kind of richness of information from which it can start modeling. It's the substrate in which self-awareness can grow, and this is really what we're trying to do. But I don't think it's limited to that; it's just, for me, a very convenient place to start looking for it, or to expect it to grow.

Zach: And I’ve seen you describe. Your work or that the strategy behind this type of work is being bottom up instead of top [00:12:00] down.

And can you talk a little bit about what that means and how you see that as being important?

Hod: Yeah, so first of all, I have to say: self-awareness, consciousness, sentience, emotions, all these grand words, are things that philosophers and theologians and neuroscientists and people generally have been thinking about for millennia. It's one of the big three questions, I think, and it's a question we haven't quite answered: what it really is, where it's coming from, what really drives it. So I don't have a particularly good answer to that question, but our approach is a little bit different.

Whereas in the past, people sort of started by asking what human self-awareness is. They start at the top, and I think humans are the most self-aware. [00:13:00] Starting at the top is really, really hard, because if you start at the very complex example of self-awareness, the chances that you'll really understand it are slim: it's so complicated, it also involves introspection, we bring baggage into it. Very difficult. So we're taking a very different approach.

We're saying, okay, self-awareness is not a black-and-white thing, where either it's human-level or it's nothing. There are all kinds of levels, all the way from the self-awareness of a worm all the way up to a human, and maybe beyond. And we're gonna start very small; we're gonna start from the bottom.

We're gonna start very simple, and I think this approach, if not solving the puzzle, might allow us a different kind of window to understand what it is. So we're trying to build it from the ground up in a very, very simple way, nothing even close to human-level [00:14:00] performance, and build it up.

And our hypothesis is actually very, very simple. It's that self-awareness, I believe, is really nothing but self-simulation: the ability to see yourself in the future, to imagine the sensations you're going to feel in the future, to imagine actions you can take and the consequences those actions will have.

In other words, the ability to simulate yourself into the future. If you can imagine yourself walking on the beach tomorrow, if you can imagine the taste of a meal that you're going to eat tomorrow, that is self-awareness. And a dog can do that also, but we humans can probably do it further along. We can do it into the distant future; we can imagine what it's gonna be like to retire, for example. We can really imagine ourselves in the long term, and that ability, how far into the future and how accurately you can imagine yourself, [00:15:00] is the level to which you are self-aware.

This is a very, very crude definition. Philosophers might argue with it, neuroscientists might argue with it, but we're taking that model and we're trying to build it up from the ground up with machines.
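[Aside: one way to picture the self-simulation hypothesis Hod describes above is a learned forward model: a function that takes the machine's current state and a candidate action and predicts the machine's own next state, which can then be rolled out several steps to "imagine" possible futures. The sketch below is a hypothetical Python illustration written for this transcript; the toy dynamics, the least-squares model, and all the names are my own assumptions, not code from Lipson's lab.]

```python
# A minimal self-simulation sketch: an agent learns a model of its own dynamics
# from (state, action, next_state) experience, then "imagines" futures by
# rolling that model forward without touching the real body.
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(state, action):
    """Stand-in for the robot's real (unknown) body: a noisy linear response."""
    A = np.array([[0.9, 0.1], [0.0, 0.95]])
    B = np.array([[0.5], [1.0]])
    return A @ state + (B @ action).ravel() + rng.normal(0, 0.01, size=2)

# 1. Gather experience by "motor babbling" in the world.
states, actions, next_states = [], [], []
s = np.zeros(2)
for _ in range(2000):
    a = rng.uniform(-1, 1, size=1)
    s_next = true_dynamics(s, a)
    states.append(s); actions.append(a); next_states.append(s_next)
    s = s_next

# 2. Fit a self-model: predict the next state from (state, action) by least squares.
X = np.hstack([np.array(states), np.array(actions)])   # shape (N, 3)
Y = np.array(next_states)                               # shape (N, 2)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)               # shape (3, 2)

def self_model(state, action):
    return np.concatenate([state, action]) @ W

# 3. Self-simulate: imagine the consequences of a plan several steps ahead.
def imagine(state, plan):
    trajectory = [state]
    for a in plan:
        state = self_model(state, np.atleast_1d(a))
        trajectory.append(state)
    return trajectory

plan = [0.8, 0.8, -0.2, 0.0]
for t, predicted in enumerate(imagine(np.zeros(2), plan)):
    print(f"imagined step {t}: state = {np.round(predicted, 3)}")
```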

Zach: Mm-hmm. Yeah, I wanna come back to that self-simulation idea, but I wanted to go back to the bottom-up approach, which just makes so much sense to me. Like you said, attempting to do this top-down, as in we're gonna design this complex model of what consciousness is, seems almost to miss just how much we don't know.

And that's why your work makes so much sense to me, because it seems like eventually you're gonna stumble across some really emergent behavior, some sort of construct that's quite simple that will lead to some pretty amazing behavior. And I think your work's already shown some of that [00:16:00] self-organizing or self-referential behavior.

Hod: Right? I, I think, I think there’s a, exactly like you said, there’s this, if you look at other big questions like the origin of life, uh, like the origin of the universe, all these are very grand questions. And what we like to do is it turns out. For example, origin of Life has a very, very simple explanation.

With evolution, it’s actually, you don’t need these grand theories, actually, a very simple, yet powerful process that gives rise to everything we see around us. Same thing, uh, origin of the universe. If you. That the rules of physics are relatively simple, and yet they give rise to all this complexity we see in the universe.

And to me, I’m, I feel there must be a simple, a really, really simple mechanism that gives rise to self-awareness. It’s not something complicated. Mm-hmm. It’s not gonna be some extraordinary quote unquote algorithm somehow. Takes everything and creates this amazing, it’s going to be, the answer [00:17:00] is going to be something very, very simple.

A set of few little rules that give rise when executed enough. Mm-hmm. With enough in a rich enough environment, they’re gonna give rise to, to self-awareness and beyond. And, uh, I, you know, the question is, what are these rules? What’s, what are these little set of ingredients that gonna give rise to. I’m after.

Zach: Mm-hmm. Yeah, in support of that, it just seems evident that even the simplest animals, you know, like amoebas and that level, or maybe not the most simple ones but not far up that chain, have so much complexity. There's just so much self-awareness and flexibility and tenacity and creativity in how even the simplest creatures can solve problems.

And so that's why I'm on your side, in terms of it seeming like only a matter of time before you stumble across some relatively simple architecture of some sort that leads to really [00:18:00] surprising behaviors, and then it's just a matter of giving that space to grow and letting it evolve and iterate over many, many iterations to reach something more emergent.

Hod: Exactly. That’s, that’s, that’s exactly it. And you can see analogies of that in, in artificial intelligence. If you look at the history of artificial intelligence, people have tried to create AI from the top down with expert systems and rules and logic.

Mm-hmm. And all kinds of sophisticated, uh, statistical processes. And, uh, in the end. It seems to be that, you know, all of, almost all of AI right now seems to be all the breakthroughs are converging into neural networks, which are really a very, very simple idea, but it just scaled up to massive numbers. You take these very simple neurons, very simple rules, and you pack billions of them together, and suddenly you get.

You knowis that can, that can understand the complete videos and understand what’s [00:19:00] around them, and make all write poetry and, and create art and do amazing things, all with this very, very simple device with a neuron that’s, that’s just, uh, skilled in a massive way. So history of AI is full of attempts to do things from the top down that have failed, and then some bottom up thing that just takes over.

Uh, so I hope it’s gonna be sort of the same thing, but we’re looking for.

Zach: Now there’s the concept of the the strange loop idea, which was popularized by Douglas Hofstadter, where there’s something about the, the action of self modeling, of creating a map of one’s environment and then oneself is part of that environment.

So it creates this kind of, sort of like a regression of a mirror reflecting another mirror kind of property. And I’ve seen that idea discussed and some people think it’s been kind of shown to be not that. Important idea, but I’m curious, are there, are there elements of that in, in how you view, uh, how we’ll reach something like self-awareness?

Hod: Yeah, I [00:20:00] think what we're doing right now is an example of that, where we're not just self-aware, we're meta self-aware: we're talking about what self-awareness is and how to create it. And will that self-awareness be self-aware enough to debate self-awareness? You can do all of that in a sort of infinite regress, and that loop can and does happen. And, you know, if it works, it's gonna happen to these artificial systems as well, scaling up from something like a worm, and then a rat, and then a dog, and then other primates, and then humans and beyond. So that level of sophistication and self-regression is gonna happen, probably. But I think it's a piece of the puzzle; it's not the whole story.

Zach: I might not be wording it correctly. To put it another way, what I was trying to [00:21:00] communicate is that even for a simple creature, like say a worm or something at that level, it seems like potentially its complexity, even its limited self-awareness, is due in some sense to it creating a map of the world and then having itself be part of that map, you know? So it has to map the world, it has to map itself.

Hod: Yeah.

Zach: And in mapping itself, it also has to map the world again. It's like that kind of regression. Maybe I didn't phrase it right, but yeah.

Hod: I understand what you’re saying. Then. It’s, um, in order to, to create a, a model of everything, you have to include yourself and, uh, in, in my mind.

That’s sort of an indirect way to, to have the, the, the self in there. But what’s beautiful about self-awareness is that you sort of, the individual, the self-aware individual is able to separate themselves from the environment. So it’s not just a model of everything that includes [00:22:00] the entity that is self-aware.

It’s a strict separation between the entity and the rest of the world. Mm-hmm. Separation. Is, is really the difference between just a, uh, I don’t know, a self-driving car that’s just modeling everything, uh, versus something that is aware of what part of the dynamics of the world is. Is within its own boundary and what’s outside the boundary, and that separation is really key.

In fact, I believe that separation endows a big evolutionary advantage, because once you understand the boundary between yourself and the environment, you understand that if you change environments, you only need to relearn the environmental part; you yourself have not changed.

So it's a shortcut. You can learn a lot faster once you can separate yourself out from the environment, because when you move to a new [00:23:00] environment, you're not changing; only the environment is, so you only have a little bit to learn. For example, if you're playing tennis, you learn how to move your arm and you learn the rules of the game, and now you go and play a different game, say badminton. You don't need to learn how to move your arm again, because the model of yourself remains the same. You only need to learn the new rules of the game; only the game has changed, but you yourself have not changed. So that separation of self and environment is really, really important. Animals that can't do that just have to learn the whole map of the world and themselves together, like you say.

But once you can separate them, it's a huge advantage. And I think that's the origin of why self-awareness is so powerful in nature and why we see it mostly with complex creatures like humans: they have a lot of versatility in what they can do, and they can glean a lot of advantages from this separation.
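[Aside: the tennis-versus-badminton shortcut can be made concrete by factoring an agent's predictive model into a reusable "self" part and a swappable "environment" part, so that only the environment part has to be relearned when the world changes. The following is a hypothetical Python sketch of that decomposition; the classes, parameters, and numbers are invented for illustration and are not from Lipson's work.]

```python
# Hypothetical sketch: splitting an agent's world model into a reusable
# self-model and a replaceable environment-model, so switching environments
# only requires relearning the environment part.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SelfModel:
    """How my own body responds to motor commands; unchanged across games."""
    arm_speed: float = 1.0

    def swing(self, effort: float) -> float:
        # Racquet-head speed produced by a given effort.
        return self.arm_speed * effort

@dataclass
class EnvironmentModel:
    """Rules and physics of the particular game; this is what gets relearned."""
    name: str
    projectile_drag: float   # how quickly the ball or shuttle slows down
    court_length: float      # meters

    def predict_landing(self, launch_speed: float) -> float:
        return launch_speed * self.court_length / (1.0 + self.projectile_drag)

class Agent:
    def __init__(self, self_model: SelfModel):
        self.self_model = self_model                     # learned once, reused everywhere
        self.environment: Optional[EnvironmentModel] = None

    def enter(self, environment: EnvironmentModel):
        # Only this piece changes between games.
        self.environment = environment

    def imagine_shot(self, effort: float) -> float:
        speed = self.self_model.swing(effort)
        return self.environment.predict_landing(speed)

agent = Agent(SelfModel(arm_speed=1.2))

agent.enter(EnvironmentModel("tennis", projectile_drag=0.1, court_length=23.8))
print("imagined tennis shot lands at", round(agent.imagine_shot(0.8), 1), "m")

# Switch games: the self-model carries over untouched.
agent.enter(EnvironmentModel("badminton", projectile_drag=0.9, court_length=13.4))
print("imagined badminton shot lands at", round(agent.imagine_shot(0.8), 1), "m")
```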

Zach: [00:24:00] Another thing that has struck me, in my very amateur thinking, as being important for developing something that approaches self-awareness is the ability to experience some sort of equivalent of the visceral sensations that creatures have, like pain and hunger. In other words, it's hard for me to imagine a system that would be motivated to improve and evolve in some way, whether we're talking across iterations or just within itself over time, without some sort of analog of physical feelings to drive it to improve in its environment. And I'm curious, is that something you also believe? And maybe you could talk a little bit about how one would even create those kinds of impacting sensations in an artificial being.

Hod: Your question has a couple of really big sub-questions in it, at least two. One: you're asking about feelings and emotions. And the other one, which you touched upon, I believe, is the [00:25:00] question of free will.

Zach: And to be clear, I was mainly talking about the pain and the visceral sensations. And by motivations I meant more just the things that impact a creature from its environment. To tie it into Pentti Haikonen's ideas, he called it self-explanatory information, which is information that doesn't need processing; there's no symbolic meaning, it's just acting as a negative or a positive from the environment. If that clarifies my question at all.

Hod: Okay. I think so. There’s a, there’s a lot to unpack there, but I think if, if, if you talk about pain for example, you know, sort of emotions that are more immediate as opposed to, let’s say long, uh, worrying about something or concern about long-term things, I think, I think these are all, again, on a continuum of.

Predictions [00:26:00] about the self, really. So everything from pain to concern, to happiness, to love, to passion, to all these different things are sort of, uh, in a, in a very unromantic way. They’re boiled down to sort of predictions in the self simulation. Uh, if it’s, uh, if it’s imminent danger, then it could be fear If it’s long-term, a danger, it could be more sort of concern or, or anxiety about something.

Uh, if it’s, uh, future good fortune, it’s gonna be, you know, sort of happiness kind of thing. So, so a, a lot of our emotions, I think. Predictions of this self model, uh, and, and where we see it. So now we have evolved to react based on these predictions. So if we see a prediction, uh, of, uh, imminent, uh, danger, we have policies in our brain that’ll make us, uh, react in [00:27:00] some way, fight or flight, or.

Whatever it is we need to do. So I think this, this, uh, evolution plays into how we respond to these emotions, but these emotions themselves are predictions, uh, about the future. And machines can have these types of predictions, just like, uh, humans and animals do. There’s nothing, uh, magical about it.
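[Aside: the mapping Hod sketches here, where the valence of a predicted outcome plus its time horizon roughly determines the emotion, can be illustrated with a toy labeler over a self-model's predictions. This is purely a hypothetical sketch; the thresholds and labels are invented for the example.]

```python
# Toy illustration: tagging a self-model's predictions with emotion-like labels
# based on predicted valence (good or bad outcome) and how far away it is.
from dataclasses import dataclass

@dataclass
class Prediction:
    description: str
    valence: float      # negative = harmful outcome, positive = beneficial
    steps_ahead: int    # how far into the imagined future

def label(pred: Prediction, near_horizon: int = 10) -> str:
    imminent = pred.steps_ahead <= near_horizon
    if pred.valence < 0:
        return "fear" if imminent else "anxiety"        # imminent vs. long-term danger
    if pred.valence > 0:
        return "excitement" if imminent else "hope"     # imminent vs. long-term good fortune
    return "neutral"

rollout = [
    Prediction("obstacle collision in 2 steps", valence=-0.9, steps_ahead=2),
    Prediction("battery depleted next week", valence=-0.6, steps_ahead=500),
    Prediction("task completed soon", valence=+0.8, steps_ahead=5),
    Prediction("upgrade scheduled next month", valence=+0.4, steps_ahead=2000),
]

for pred in rollout:
    print(f"{pred.description:35s} -> {label(pred)}")
```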

Zach: So, sounds like you’re saying, I almost have it kind of backwards, whereas you, you, it sounds like you’re saying the more advanced the consciousness becomes, then the more.

Whatever factors are are affecting, its. Consciousness will be perceived as pain or other. Exactly. Um,

Hod: that’s exactly true. Yeah. I think you phrased it better than I did. So you can feel pain about the state of, uh, the human condition. Alright. Uh, around the plant. That’s a, that’s a different kind of pain than touching something that’s too hot.

But, uh, it, it pains you in a different way. But it’s, it’s, it’s, it’s the same. It’s still [00:28:00] pain. Yeah, it’s still pain and it’s a pain. It’s a different way, but it has to do with our ability to model the future of the world and ourselves in it and understand the consequences of that, that are negative, the negative consequences, and that translates into pain.

So I think a lot of, and the same thing, by the way, can happen to happiness, to enthusiasm, to, to excitement, to all the whole spectrum. And, and I think, uh, that’s exactly it. It’s, it’s, uh, it, it boils down and then we, we learn how to react. You know, we, we react to these emotions in different ways, uh, that are themselves, uh, evolved policies.

Zach: So do you think it’s accurate to say then that if, if we did create some sort of artificial, self-aware in some way, entity. That it would naturally come to have, um, something that, that was akin to its own version of, of pain. Even though, like you said, it, it, these things will likely not be in any way, uh, comparable to us even if they, [00:29:00] they reach that stage.

But it sounds like you’re saying they would naturally come to have, you know, just, just from ha having negative. Positive reinforcements of various sorts. Those things would theoretically translate as a version of, of pain maybe.

Hod: Yes, I think that will happen. And again, this is exciting, but it also means we won't necessarily be able to understand a lot of these emotions.

Because, again, we understand emotions in the context of our own lives, of things like mating. I mean, we have a human context for emotions. It's very, very difficult to understand an AI that has different sensations, different sensors, different environments. But it is going to have those kinds of emotions, and in fact, I think it's gonna be part and parcel of being intelligent. There is no intelligence [00:30:00] at that level without emotions. I think that's one mistake: a lot of people who talk about AI like to separate emotions out, but I think they're inseparable.

Even issues like conflicting emotions, which are, you know, part of human existence, are gonna play out with AI: conflicting predictions, short term versus long term, cost versus benefit. All these conflicts are gonna play out in a similar yet different way for AIs, and they're gonna give rise to all kinds of emotions that we can't imagine and for which we don't even have words.

Zach: That kind of maps over a bit to the existential philosophy school of ideas: we all have these influences on us based on just the pure facts of existing in the world, like fear of death, fear of isolation, fear of [00:31:00] meaninglessness, fear of freedom, the ability to act.

So yeah, that kind of maps over to these core stresses that you can imagine an artificial being having, any existing being theoretically having: just the stresses of existence and the conflicts that that produces. It's possible to imagine that, yeah.

Hod: And so it, it, it’s also likely therefore that, uh, a future AI will be just as emotional and sentient and rich. Its poetry will will come, its art will arise from, from these conflicts that, from the struggles that it experiences. I mean, a lot of a, everything that we sort of, uh, are. Feel that we’re unique, I think is gonna happen pretty soon with AI as well.

And yet that’s something that’s very, very difficult for us humans to share. I can’t tell you how many emails I get from people who say, you know, but we, humans we’re different. The machine cannot have [00:32:00] emotions. A machine cannot feel, a machine cannot have, uh, desires. A machine cannot, uh, be creative because it’s.

The same way as humans are. And I think that’s, um, that’s this simply, it’s a very narrow view. It’s a, a very human-centric view. But once you, once you sort of let go of that, you’ll see that, that we are actually going to experience an incredible moment where we will not be the only intelligent, uh, sentient beings on the forms of sentient.

I for one. Uh, I’m looking forward to that because I, I, I want to see what, uh, what else is out there. How, how else can you experience the world in ways that, that we, humans can’t, I think, I think we’re only seeing a tiny corner of the, of the, of the world this way.

Zach: Yeah. That’s what interests me in it. Even just the, the idea of how thinking about these things or the advances in the.

In these [00:33:00] industries, shed some light on the human condition, you know? Mm-hmm. That the fact that there are these, you know, our, our, our minds are, are logical things and the stresses on them are, are logical and understandable. And that, that’s what interests me anyway, that, that tie in. Uh, so getting into the.

The public’s perception of some of the dangerous aspects, theoretical aspects of artificial intelligence, the the risks involved based on how far from self-awareness properties that the current artificial intelligence work seems. Personally, in my opinion, I think that humanity will likely destroy ourselves as a species within not too long, because I think.

For various reasons, like weapons to come that we really don’t have an idea of, of what they’ll be like in the next few decades. And we’ll have, uh, you know, the ability to create manmade diseases, things like this. Uh, personally I think I, I’m pretty pessimistic about our chances of survival. So I, I actually am not too [00:34:00] scared of ai ’cause in the realm of possible threats, I see it as being farther out.

Mm-hmm. But I’m curious, do you see this self-aware, artificial being. How far away do you see it in the future, and do you see it as, as much of a threat comparison to other things?

Hod: Well, I share your perspective. The way I like to see it is that the danger is not what AI is gonna do to people, but what people are gonna do to people using AI. That's the more immediate threat. I'm a little bit more optimistic, I would say. We've encountered many powerful technologies in the past which we thought were gonna usher in the end of the world, from nuclear to genetics to fire. I mean, there have been a lot of discoveries that could end the world, but they haven't.

And it's not that there weren't bad actors, but somehow our better nature prevailed, and we were able to use these technologies [00:35:00] for good, for the most part. It's not clean and simple, but for the most part. So I'm optimistic we'll use this technology for good, and that will allow us to eventually reach that point where AI is superhuman. And, you know, lots of books have been written about that, about that moment of singularity where the AI makes decisions in its own interest and takes over the world, or whatever it might be.

I think the reality is gonna be very different. The reality will be more like we are gonna co-evolve with lots of AIs. This is another misconception, I think, around artificial intelligence: there is this idea that AI is one thing, and it's gonna do this or that, it's gonna take over the world or not take over the world. But the reality is, I like to think about AI as the sixth kingdom, [00:36:00] another kingdom of life. Just like we have plants and we have animals and we have fungi, the five kingdoms, we will also have a sixth one, and that's gonna be lots of different types of AI: big ones and small ones, smart ones and not-so-smart ones, all shapes and sizes. And we will co-evolve with this. Just like we can deal with different animals with different capabilities, some stronger than us and some faster than us, there are gonna be lots of different AIs and we'll co-evolve. And I think it's gonna be okay, because it's not gonna happen overnight.

So I'm a little bit more optimistic about it. But I think one thing we can agree upon is that this is gonna happen pretty soon in evolutionary time: maybe a few decades, maybe a century, but you know, our grandchildren are gonna live with this.

Zach: Getting back to the idea that these things could theoretically happen quite quickly, even though I see [00:37:00] it as farther out: I also see the potential for somebody like yourself doing work where you're able to isolate some interesting property, and then very quickly it, like, self-educates and reaches some pretty amazing states. I can totally imagine that happening too. Do you see that as possible, for someone like yourself or someone doing similar work, to go from finding this interesting property of a certain architecture to, within just a few weeks, months, or years, having something that's suddenly quite self-aware, in a very quick way like that?

Hod: Yeah. I think what’ll, what we’ll have is, uh, it will, it will take, uh, again, a few decades, which, which is nothing in human evolution mm-hmm. But a few decades is enough for us to sort of adapt as humans as well, uh, under, and again, it’s not gonna be just one thing.

It’s gonna be lots of them. They’re gonna compete, AI are gonna compete with other ais more than compete with human. And in terms of resources and things like that. [00:38:00] So it’s, again, it’s like an ecosystem of, uh, you know, bacteria compete, uh, you know, viruses. As we all know, affect humans, but, but they’re not sort of hell bent on destroying humans.

They’re just competing with other viruses mostly. And, uh, and the same thing is gonna happen with ai. It’s, it’s, it’s an ecosystem and I think will happen quickly, but slowly enough that we can keep, uh, keep up with it and, uh, have. Allies and use AI to create checks and balances on other ais and things like that.

There’s a lot of ways out, uh, from, uh, out of this that are not as, uh, dark as you know, this AI that takes over the world.

Zach: One reason I’m interested in artificial intelligence topics is what it might tell us about our own minds. For example, I sometimes think of different kinds of mental illness as being.

Possibly due to manifestations of faulty models of the world, because what our [00:39:00] human minds do is just so. Complex and having to keep track of so many things. Having to keep track of say, the external physical environment, having to keep track of our physical selves in that world. Having to keep track of our internal mental model of ourselves, having to keep track of the concept of other entities around us who are like ourselves, having to keep track of the, the social world where, which is this abstract world of symbols and meanings that we share with these other beings around us and that help us.

You know, function in ways that are considered normal and socially acceptable. So it’s just a tremendously complex set of models that have to relate to each other. And all these very specific ways have to function in very aligned and exact ways for us to, you know, be considered normal and functional.

And it’s possible to see different types of mental disorders, mental stresses as various kinds of breakdowns in these models and how they relate to each other. And I’m curious, is that something that you’ve thought about or, or that you’ve seen? Talked about [00:40:00] it in, in the realm of like how theories of self-awareness or consciousness map over to the real human world.

Hod: Yeah, no, that’s a very good point. We’ve actually seen this happen with robots, uh, in a very small way. Uh, disorders, if you like, for example, I’ll give you a a a simple example. We have a robot that creates a, a self model, a, a model of its own physical body, uh, from experience. And this robot has, let’s say four legs.

On occasion, and this doesn’t happen frequently, but on occasion it will create a model of itself that, let’s say, has one of the legs in the wrong place. And it just, it just insists that that leg is. It’s self model is just wrong. We, we can tell from the outside that it sees itself incorrectly, and yet because of that self model that’s a little bit incorrect.

It can, it predicts that it’s gonna walk faster than it really can. Right? So it has a, if you like, if it has a, [00:41:00] an inflated self model. And because of that, it’s, its predictions are off and yet it learns how to walk, uh, using that self image. In other words, sometimes even if your self model is wrong, it’s inflated, it can still lead, help you make the right decision.

So, so you can see sort of an inflated self model. You can have a deflated self model. The self model can be incorrect. So you can see a lot of parallels like that with, uh, with the way humans see themselves. Mm-hmm. And, uh, we’ve seen situations where a robot loses a leg in reality, but it fails to adapt itself model.

So it has something that’s equivalent to a phantom limb syndrome. Mm. Uh, if you like. Now we usually chalk this up to a quote unquote bug in the, in the software, but reality is exactly what you say. These, these, these, these are very complex systems with lots of ways to go wrong. And when they go wrong, they go wrong in, in, in, in many ways that give.

All kinds of, uh, [00:42:00] results to the self models. Sometimes a positive, like an inflated self image can be useful in some circumstances. Uh, sometimes, uh, they’re painful because the model, the mismatch with reality causes, uh, problems. This also means that when we look forward into what these ais will actually look like and what they will.

Feel they will have problems. They will have disorders just like humans and sometimes they will be depressed, for example, because they are looking for, you know, their meaning. They will not be a, your driverless car is not gonna be happy taking the same route every day. It’s gonna be say, why am I doing this?

I wanna do something more interesting. I mean, I dunno where it’s gonna end, but.

Pure calculating lo cold, logical machines that are gonna just, uh, you know, do the same thing. They’re gonna be very sort of, uh, rich and complex for better and for [00:43:00] worse.
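[Aside: a rough way to picture the "wrong leg" and phantom-limb failures described above is a robot that predicts its sensor readings from its self-model and compares them with what it actually observes; a persistent mismatch means the self-model has drifted from the real body. The sketch below is a hypothetical illustration; the body representation, thresholds, and names are my own assumptions, not the lab's actual code.]

```python
# Hypothetical sketch: detecting when a robot's self-model no longer matches
# its real body, e.g. after losing a leg (a "phantom limb" in the self-model).
import numpy as np

rng = np.random.default_rng(1)

# Self-model: where the robot believes its four feet sit, relative to its body.
believed_feet = {
    "front_left":  np.array([ 0.2,  0.15]),
    "front_right": np.array([ 0.2, -0.15]),
    "back_left":   np.array([-0.2,  0.15]),
    "back_right":  np.array([-0.2, -0.15]),
}

def read_foot_sensors(real_feet):
    """Noisy contact-position readings from the feet that actually exist."""
    return {name: pos + rng.normal(0, 0.01, size=2) for name, pos in real_feet.items()}

def self_model_errors(believed, observed, missing_penalty=1.0):
    """Per-leg discrepancy between the self-model and what the sensors report."""
    errors = {}
    for name, believed_pos in believed.items():
        if name in observed:
            errors[name] = float(np.linalg.norm(observed[name] - believed_pos))
        else:
            errors[name] = missing_penalty   # the model expects a leg that isn't there
    return errors

# Reality: the back-right leg has been lost, but the self-model still includes it.
real_feet = {name: pos for name, pos in believed_feet.items() if name != "back_right"}

errors = self_model_errors(believed_feet, read_foot_sensors(real_feet))
for leg, err in errors.items():
    status = "self-model mismatch, re-learn" if err > 0.05 else "ok"
    print(f"{leg:12s} error = {err:.3f}  {status}")
```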

Zach: Yeah, that’s what strikes me about. Existing in the world is the fact that you have to, there’s so much to balance between the, the modeling of yourself and the modeling of the environment.

And for one example of, of something I’ve thought about in this area is psychosis could be seen. I. As a kind of turning inward, a, a falling apart of one’s model of the external world and one one’s model of of other entities, and a focus on one’s internal symbols. Uh, so that one’s internal world and, and internal sensations and concepts start to take the place of the signals that used to come from other external sources, you know, that that would be there in a more, uh, well-functioning set of models.

So that’s just. One example and, and, uh, right. Yeah. It just seems like there’s so much complexity Yeah. Of things that have to be aligned, you know?

Hod: Exactly. I mean, look, the self-simulation that we have in our minds is not only used for making grand decisions; it's also used to make a decision about where I'm gonna put my foot [00:44:00] next when I'm walking.

This self-simulation happens at all levels, at all scales, with all kinds of details. And when that self-simulation goes wrong, when it's inaccurate, when it isn't adapted to reality fast enough, and so on, then you make bad decisions and bad things happen. That can manifest in long-term bad decisions or in the short term, in falling over physically or falling over in more abstract ways. That's exactly the complexity: when the boundaries and the accuracy begin to fail, all kinds of things happen.

Zach: Yeah. It gets into, you know, the negative phrase of someone being "self-conscious," as in being too aware of themselves. That phrase we use almost reflects an awareness that we have to keep the balance between the outside world and ourselves on some sort of even footing. But that's easier [00:45:00] said than done when things aren't lined up right.

Hod: and I think it’s also important to understand, uh, we humans probably don’t have a single self model. We have many self

Zach: models. Mm-hmm. Competing self models,

Hod: Not just competing; models at different scales, models at different times, different situations. And we pick and choose which one we're gonna use, and sometimes we pick the wrong one, or we have competing ones for the same situation, and we pick and choose based on all kinds of cues. So in the same way, AI is not gonna be this monolithic self-awareness. It's gonna be, again, a forest, if you like, of self-aware models, and it will pick and choose, and they will compete or cooperate in lots of interesting ways. Again, lots of opportunities for things to go wrong, but lots of opportunities for something very powerful.

Zach: It seems like part of the challenge for [00:46:00] someone doing your kind of work is just staying up to date on all the many advances that are happening.

And I’m curious, what is a way that you stay up to date on all the studies that come out, come out on this, these things?

Hod: Well, first of all, I'm lucky to have a lot of students, and they burst into my office, sometimes virtually, and say, did you see this new result? This happened, that happened. And so they keep me abreast. But it's true, there's a lot happening in the field of AI. Luckily, there are quite a few repositories and podcasts like your own and blogs that try to digest a lot of what's happening and make it possible to keep up, because there is a lot happening.

Absolutely.

Zach: Is there anything you’d like to mention that we haven’t touched on?

Hod: Yeah, I mean, the, the, there’s the, the big elephant in the room here is I think, uh, which we haven’t talked about is the ethics of all of this. You know, is this something, you know, we don’t. [00:47:00] I have self aware machines, period. And we’re just gonna shut down this whole line of research again.

That’s a, that’s a, that’s a question I get by email quite frequently for people who are legitimately concerned. You know, are we playing with fire here? And it’s dangerous and. Um, I don’t wanna sound like I’ve, I, um, I’m confident that, uh, with the answer here, or that I’m complacent with, with the dangers of this technology, it’s, it’s possibly the most powerful technology we will ever develop.

And I’m not the, the first or last to say that. Uh, and so we need, do, need to proceed with caution. And, uh, I think that part of the reason why I’m talking to you and why I talk to other places about this topic, I don’t think this is stuff that, that should be done, uh, behind closed doors. It’s something that we should, the entire public should understand.

This is unfolding. It’s part and parcel, I believe, of making intelligent ai. We’re [00:48:00] not gonna be able to develop a very intelligent AI system for, I dunno, for managing. Finances without it. Eventually having these self modeling capabilities is gonna be part and parcel of any system we develop and everybody needs to be aware of it, and we should understand the consequences and make a decision together.

So I’m glad, uh, we’re discussing this and, and I think, uh, everybody should be aware. It’s, it’s, uh, it’s uh, it’s coming fast and, uh mm-hmm. It’s right around the curve.

Zach: Okay, thanks a lot for coming on, Hod. It was very interesting. Thank you.

Hod: It was my pleasure. Thank you.

Zach: This has been an interview with Hod Lipson.

You can learn more about his work at hodlipson.com. If you wanna see some resources on topics referenced in this interview, and some resources that I used for research, you can find those at my website, behavior-podcast.com. I don't make any money on this podcast, and I spend a good deal of time on it. If you've enjoyed this interview or the podcast generally, please consider leaving me a review on iTunes or [00:49:00] the platform you listen on, and consider sharing links to my podcast on Facebook or Twitter or other social media.

I greatly appreciate it. If you happen to play poker, you might also like to check out my work on poker tells, which you can find through my poker tells website. Okay, thanks for listening.