Does AI have a soul?

Meghan O’Gieblyn

Photo illustration by Mark Riechers/Firefly (Original image via Meghan O’Gieblyn)


Does AI have a fundamentally different kind of intelligence than the human mind? Essayist Meghan O’Gieblyn is fascinated by this question. The author of "God Human Animal Machine," she’s one of the most acute observers of the whole world of AI. O’Gieblyn's investigation into machine intelligence became a very personal journey, which made her wonder about her own creativity as a writer and even led her down the rabbit hole into questions about the soul and the nature of transcendence.

Meghan lives just a few miles from our studio in Madison, Wisconsin, so Steve Paulson went over to her house to talk about AI and the worlds it's opened up for us.

This transcript has been edited for clarity and length.

Steve Paulson: Why did you go to a hypnotist and try automatic writing?

Meghan O’Gieblyn: I was going through a period of writer’s block, which I had never really experienced before. It was during the pandemic. I was working on a book about technology, and I was reading about these new language models. GPT-3 had just been released to researchers, and the algorithmic text was so wildly creative and poetic.

SP: So you wanted to see if you could do this, without using an AI model?

MO: Yeah, I became really curious about what it means to produce language without consciousness. As my own critical faculty was getting in the way of my creativity, it seemed really appealing to see what it would be like to just write without overthinking everything. I was thinking a lot about the Surrealists and different avant-garde traditions where writers or artists would do exercises either through hypnosis or some sort of random collaborative game. The point was to try to unlock some unconscious creative capacity within you. And it seemed like that was, in a way, what the large language models were doing.

SP: You have an unusual background for a writer about technology. You grew up in a Christian fundamentalist family.

MO: My parents were evangelical Christians. My whole extended family are born again Christians. Everybody I knew growing up believed what we did. I was homeschooled along with all my siblings, so most of our social life revolved around church. When I was 18, I went to Moody Bible Institute in Chicago to study theology. I was planning to go into full-time ministry.

SP: But then you left your faith.

MO: I had a faith crisis when I was in Bible school, which metastasized into a series of doubts about the validity of the Bible and the Christian God. I dropped out of Bible school after two years and pretty much left the faith. I began identifying as agnostic almost right away.

SP: But my sense is you’re still extremely interested in questions of transcendence and the spiritual life.

MO: Absolutely. I don’t think anyone who grew up in that world ever totally leaves it behind. And my interest in technology grew out of those larger questions. What does it mean to be human? What does it mean to have a soul? 

A couple of years after I left Bible school, I read "The Age of Spiritual Machines," Ray Kurzweil’s book about the singularity and transhumanism. He had this idea that humans could use technology to further our evolution into a new species, what he called post-humanity. It was this incredible vision of transcendence. We were essentially going to become immortal.

SP: There are some similarities to your Christian upbringing.

MO: As a 25-year-old who was just starting to believe that I wasn’t going to live forever in heaven, this was incredibly appealing to think that maybe science and technology could bring about a similar transformation. It was a secular form of transcendence. I started wondering: What does it mean to be a self or a thinking mind? Kurzweil was saying our selfhood is basically just a pattern of mental activity that you could upload into digital form.

SP: So Kurzweil’s argument was that machines could do anything that the human mind can do, and more.

MO: Essentially. But there was a question that was always elided: Is there going to be some sort of first-person experience? And this comes into play with mind-uploading. If I transform my mind into digital form, am I still going to be me or is it just going to be an empty replica that talks and acts like me, with no subjective experience? 

Nobody has a good answer for that because nobody knows what consciousness is. That’s what got me really interested in AI, because that’s the area in which we’re playing out these questions now. What is first-person experience? How is that related to intelligence?

SP: Isn’t the assumption that AI has no consciousness or first-person experience? Isn’t that the fundamental difference between artificial intelligence and the human mind?

MO: That is definitely the consensus, but how can you prove it? We really don’t know what’s happening inside these models because they’re black boxes: neural networks with many hidden layers. It’s a kind of alchemy.

SP: A sophisticated large language model like ChatGPT has accumulated a vast reservoir of language by scraping the internet, but does it have any sense of meaning?

MO: It depends on how you define meaning. That’s tricky because meaning is a concept we invented, and the definition is contested. For the past hundred years or so, linguists have held that meaning depends on embodied reference to the real world. To know what the word “dog” means, you have to have seen a dog and belong to a linguistic community where that has some collective meaning.

Language models don’t have access to the real world, so they’re using language in a very different way. They’re drawing on statistical probabilities to create outputs that sound convincingly human and often appear very intelligent. And some computational linguists say, “Well, that is meaning. You don’t need any real-world experience to have meaning.”

SP: These language models are constructing sentences that make a lot of sense, but is it just algorithmic wordplay?

MO: Emily Bender and some engineers at Google came up with the term “stochastic parrots.” “Stochastic” refers to statistical probabilities with a certain amount of randomness, and they’re “parrots” because they’re mimicking human speech. These models were trained on an enormous amount of real-world human text, and they’re able to predict what the next word is going to be in a certain context.
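The next-word prediction she describes can be illustrated with a toy sketch. Real language models use neural networks over subword tokens and vast corpora; this bigram counter (with an invented ten-word corpus) only shows the underlying statistical idea of continuing text by sampling what usually comes next.

```python
import random
from collections import Counter, defaultdict

# Toy "stochastic parrot": learn next-word frequencies from a tiny
# corpus, then sample continuations. Purely illustrative -- real
# models learn far richer patterns than word-pair counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word, rng=random.Random(0)):
    """Sample the next word in proportion to observed frequency."""
    counts = following[word]
    if not counts:  # word never seen with a follower; pick any word
        return rng.choice(corpus)
    words = list(counts)
    weights = list(counts.values())
    return rng.choices(words, weights=weights)[0]

# Generate a short continuation starting from "the".
word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

However fluent the output of a real model looks, it is produced by this kind of sampling over learned probabilities rather than by reference to the world.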

To me, that feels very different than how humans use language. We typically use language when we’re trying to create meaning with other people.

SP: In that interpretation, the human mind is fundamentally different than AI.

MO: I think it is. But there are people like Sam Altman, the CEO of OpenAI, who famously tweeted, “I am a stochastic parrot, and so r u.” There are people creating this technology who believe there’s really no difference between how these models use language and how humans use language.

SP: We think we have all these original ideas, but are we just rearranging the chairs on the deck?

MO: I recently asked a computer scientist, “What do you think creativity is?” And he said, “Oh, that’s easy. It’s just randomness.” And if you know how these models work, there is a certain amount of correlation between randomness and creativity. A lot of the models have what’s called a temperature gauge. If you turn up the temperature, the output becomes more random and it seems much more creative. My feeling is that there’s a certain amount of randomness in human creativity, but I don’t think that’s all there is.

SP: As a writer, how do you think about creativity and originality? 

MO: I think about modernist writers like James Joyce or Virginia Woolf, who completely changed literature. They created a form of a consciousness on the page that felt nothing like what had come before in the history of the novel. That’s not just because they randomly recombined everything they had read. The nature of human experience was changing during that time, and they found a way to capture what that felt like. I think creativity has to have that inner subjective quality. It comes back to the idea of meaning, which is created between two minds.

SP: It’s commonly assumed that AI has no thinking mind or subjective experience, but how would we even know if these AI models are conscious?

MO: I have no idea. My intuition is that it would have to say something convincing enough to show that it has experience, which includes emotion but also self-awareness. But we’ve already had instances where the models have spoken in very convincing terms about having an inner life. There was a Google engineer, Blake Lemoine, who was convinced that the chatbot he was working on was sentient. This is going to be fiercely debated.

SP: A lot of these chatbots do seem to have self-awareness. 

MO: They’re designed to appear that way. There’s been so much money poured into emotional AI. This is a whole subfield of AI—creating chatbots that can convincingly emote and respond to human emotion. It’s about maximizing engagement with the technology. 

SP: Do you think a very advanced AI would have godlike capacities? Will machines become so sophisticated that we can’t distinguish between them and more conventional religious ideas of God?

MO: That’s certainly the goal for a lot of people developing this technology. Sam Altman, Elon Musk—they’ve all absorbed the Kurzweil idea of the singularity. They are essentially trying to create a god with AGI—artificial general intelligence. It’s AI that can do everything we can and surpass human intelligence.

SP: But isn’t intelligence, no matter how advanced, different than God? 

MO: The thinking is that once it gets to the level of human intelligence, it can start doing what we’re doing, modifying and improving itself. At that point it becomes a recursive process where there’s going to be some sort of intelligence explosion. This is the belief. 

But there’s another question: What are we trying to design? If you want to create a tool that helps people cure cancer or find solutions to climate change, you can do that with a very narrowly trained AI. But the fact that we are now working toward artificial general intelligence is different. That’s creating something that’s essentially going to be like a god.

SP: Why do you think Elon Musk and Sam Altman want to create this?

MO: I think they read a lot of sci-fi as kids. [Laughs] I mean, I don’t know. There’s something very deeply human in this idea of, “Well, we have this capacity, so we’re going to do it.” It’s scary, though. That’s why it’s called the singularity. You can’t see beyond it. It’s an event horizon. Once you create something like that, there’s really no way to tell what it will look like until it’s in the world. 

I do feel like people are trying to create a system that’s going to give answers that are difficult to come by through ordinary human thought. That’s the main appeal of creating artificial general intelligence. It’s some sort of godlike figure that can give us the answers to persistent political conflicts and moral debates.

SP: If it’s smart enough, can AI solve the problems that we imperfect humans cannot?

MO: I don’t think so. It’s similar to what I was looking for in automatic writing, which is a source of meaning that’s external to my experience. Life is infinitely complex, and every situation is different. That requires a constant process of meaning-making.

Hannah Arendt talks about thinking and then thinking again. You’re constantly making and unmaking thought as you experience the world. Machines are rigid. They’re trained on the whole corpus of human history. They’re like a mirror, reflecting back to us a lot of our own beliefs. But I don’t think they can give us that sense of meaning that we’re looking for as humans. That’s something that we ultimately have to create for ourselves. 

This interview was also published in Nautilus Magazine.