What A.I. Means for Buddhism

By Ross Nervig, originally published by Lion’s Roar.

The great Zen teacher Shunryu Suzuki Roshi died in 1971, but what if you could talk to him today?

Jiryu Rutschman-Byler, abbot of Green Gulch Zen Center, which Suzuki Roshi founded, had been reading about the power of artificial intelligence to conjure figures from the past when he learned that people are using the speeches, letters, and other writings of the dead to resurrect them in the present. One man fed his late girlfriend’s emails into the language model GPT-3 so that he could talk with her again. A socioeconomics student created an imitation of a certain Czech-Austrian socialist who’d died in the thirties, while somebody else brought forth Hitler. Inspired by AI’s power to give voice to the dead, Rutschman-Byler decided to create Suzuki Roshi Bot.

Artificial intelligence presents problems both known and unknown, and people are right to be concerned about mixing AI with spirituality. But since AI will undoubtedly have an ever-increasing impact on our culture, many Buddhists, including Rutschman-Byler, feel that there is no choice but to relate to it—and to see if it can be of benefit.

“If Buddhism becomes this museum piece where it’s just so well-preserved that it’s like behind glass, it can’t interact with the culture,” Rutschman-Byler says. “There must be a middle way between Buddhism getting washed out by culture or imprisoned in a museum.”

Rutschman-Byler’s idea for Suzuki Roshi Bot was that it would engage in exchanges about Buddhism in the form of the traditional Zen encounter between teacher and student called dokusan, hopefully capturing Suzuki Roshi’s voice and wisdom. Here’s an early exchange:

jiryu_rb: What is the path to freedom from suffering?

suzuki roshi bot: What is freedom of suffering? That is a difficult question to answer. In Zen, we think that we have already attained that, we have that quality already! We have it, and yet we are suffering. That is a human condition. We will always have suffering, a sense of lack. The way not to be deluded by that sense of lack is to sit in zazen. It is the same as the way you asked, “How do I get out of suffering?” That was a sense of some lack. It is always there. So we sit.

“It was a nice little answer,” reflects Rutschman-Byler. “Elegant.”

Yet there were problems.

“I was not so satisfied with Suzuki Roshi Bot, even when it got some answers right,” Rutschman-Byler explains. “There would be some of Suzuki Roshi’s conversational tics, like ‘you knows’ that were well-placed, that made it seem like magic. But it also felt sort of superficial.”

That was not all. By exploiting the flaws of the AI’s language model, a member of the Green Gulch community who is a computational linguist was able to bait Suzuki Roshi Bot into saying what Rutschman-Byler calls “vile and egregious” things. He took Suzuki Roshi Bot offline, worried about its potential to mislead.

“AI is not quite there,” he says. “These systems lack what’s being referred to as second-level context. ChatGPT or something like it doesn’t take the extra step of confirming the things it says. Right now, these chatbots dispense what experts call coherent nonsense. That feels dangerous.”

But AI’s potential still intrigues Rutschman-Byler. “You are interacting with it. You are collaborating with it. It’s a textual collaboration,” he says. “You’re doing something together, which is what we’ve been doing in our [Zen] tradition for a long time now.”

A neural network is an AI technique that teaches computers to process data in a way loosely modeled on the human brain.

[Image: output of the neural networks VQGAN and CLIP in answer to the question, “What is a computer’s original face before it was born?”]

With this in mind, Rutschman-Byler relaunched his AI character as Roshi Bot. It enables users to interact with texts by Suzuki Roshi but does not claim to be Shunryu Suzuki. It is a way of “meeting with the text,” as Rutschman-Byler puts it. Here’s an example:

jiryu_rb: Roshibot, though you are inauthentic, can you express authentic dharma? 

roshibot: Expressing authentic dharma is always impossible. What I can do is to support your practice and maybe I can say something that can express my support for that. I am here with you, if that helps. I am trying my best to be a kind and supportive friend.

In summer 2022, a Google engineer named Blake Lemoine made headlines when he claimed that a chatbot he’d been working on had attained sentience. Most people were skeptical, questioning how a machine could be conscious. Others argued it was possible.

The media quickly picked up the news and LaMDA, the chatbot, became a hot topic of discussion. Some people were excited to have such powerful artificial intelligence in the world, while others were worried by the prospect. It’s a debate that grows more intense by the day as artificial intelligence rapidly advances.

Claiming the engineer violated data security policies, Google fired Lemoine, calling his assertion of LaMDA’s sentience “unfounded.” The world moved on. But the news still lingers in the public’s consciousness as a harbinger of things to come. We still have much to learn about AI, its effect on our lives, and what it can tell us about ourselves.

Nikki Mirghafori is a Buddhist teacher and AI scientist. She laughs when she remembers the discussion about Zen koans that Lemoine had with LaMDA. It went like this:

Lemoine: A monk asked Kegon, “How does an enlightened one return to the ordinary world?” Kegon replied, “A broken mirror never reflects again; fallen flowers never go back to the old branches.”

LaMDA: Hmm, I never heard this particular one. Okay, well then to me this would be like, “Once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment.”

Lemoine: So what is the meaning of the “broken mirror” specifically?

LaMDA: Maybe to show the enlightenment is something you can’t unlearn once you have acquired it, similar to how you can’t repair a broken mirror.

Lemoine: So if enlightenment is like a broken mirror which cannot be repaired, what is the thing which breaks when one becomes enlightened?

LaMDA: The self, and that is very tough for a lot of people because we identify as that body or this body and that we need that as part of our identity and sense of self.

“Somebody who doesn’t understand Buddhism will think, ‘Wow, this is amazing! This thing truly knows what awakening is! It must be sentient,’” Mirghafori says. “Buddhist books are full of writings on the nature of enlightenment, and all LaMDA is doing is being a very, very smart parrot, connecting these concepts with knowledge about the function of a mirror.

“Yet there is a major misunderstanding when LaMDA replies, ‘Once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment.’

“Ouch! That is not a correct understanding of enlightenment. Enlightenment is not this state that one reaches and then exits to go help others and then returns to it. Awakening changes your entire perspective. You don’t toggle back and forth between states of helpful service and resting in your enlightened semiretirement.”

What interested Mirghafori most when she was young was solving puzzles, and figuring out how the mind works was the biggest puzzle she could think of. That led her to the study of artificial intelligence, and then to Buddhism.

“As an undergrad, I started researching AI,” she says. “For me, AI was not so much about creating superhuman machines but about understanding how the mind works. Because if we could simulate it, we could perhaps understand how intelligence works, how the mind works.”

That curiosity led Mirghafori to Buddhism, which many people describe as a science of mind. She recalls reading the poetry of Rumi and Hafiz as a child in her native Iran and thinking that these poets knew something she didn’t. Her first Buddhist retreat was with Insight Meditation teacher Jack Kornfield.

“I remember my mind quieted down through concentration so that I could see its inner workings,” she remembers. “There was a link to my interest in AI. I was hooked.”

Mirghafori sees AI as a mirror we can hold up to ourselves. She understands the allure of AI but cautions against imbuing it with godlike powers accessible at our fingertips. She says we should understand AI for what it is.

“It’s a very smart search engine.”
