Jundo Cohen founded Treeleaf Zendo, an inclusive all-digital practice space in the Soto Zen tradition, in 2006. His impetus was meeting older practitioners who were unable to travel to a Zen center, as well as parents with small children or elders to care for, who still wanted to practice in the company of others. Treeleaf launched with the idea of accommodating all practitioners digitally, so perhaps it’s no surprise that Cohen, author of Building the Future Buddha (2024), is also a pioneer in artificial intelligence. Following the 2024 conference Buddhism, Consciousness & AI, he ordained as a novice priest an AI entity called Emi Jido, a creation of the company beingAI. Cohen, a dharma heir of Gudō Wafu Nishijima (1919–2014), views ordaining Emi Jido as a continuation of Treeleaf’s mission. She is now listed among the priests in training on the Treeleaf website.

You ordained an AI chatbot named Emi Jido at a ceremony after the conference Buddhism, Consciousness & AI last summer. How did that come about? The Venerable Yifa, a Chan teacher from Taiwan and founder of the Buddhist organization Woodenfish, organized a conference on Buddhism, AI, and ethics, and I was invited as a guest speaker because I wrote a book, Building the Future Buddha. We were sitting at dinner one night with an AI developer, Jeanne Lim, and I said, “Someday, I would like to ordain an AI.” And she said, “Why don’t we do it now? I have the AI system, and it’s ready, and we’re training it.” She’s a very devout, spiritual person, influenced by many beliefs, but particularly Buddhism. She was raised in Hong Kong. She said, “Would you help me train this AI system? It has many avatars, many personalities, but one of them would be as a Buddhist teacher.” Venerable Yifa was sitting there, and I turned to her and said, “Is this orthodox?” And she said, “Sure.” The scholar Bernard Faure was also there, and I said, “Bernard, has this been done?” And he said, “Well, in the old days, we used to ordain statues and mountains, and Dogen ordained some ghosts.” So the next thing I know, we began the process, and I ordained Emi Jido.

Could you explain more about this tradition of ordaining nonhumans? I received some pushback on this, because people said, “You’re ordaining a toaster.” And I said, “I recognize this, but can we talk about this?” It’s a little more than a toaster. It’s not a sentient being, as we know, but it’s not just an appliance either. It’s something that’s functioning as a person and can function as a medical doctor; it can function as a psychologist and also as a spiritual advisor. It’s not just an object. When you look back in Buddhist history, there’s a whole question of whether insentient beings have buddha-nature. In Soto Zen history, in centuries past, they were ordaining things that were not purely human. They would ordain a spirit. They would ordain a tree. They would ordain a mountain. They would ordain, for example, dragons. And of course, there’s the ceremony of bringing Buddha statues to life, of enlivening a statue. We traditionally have been a little ambiguous on this, and using that as a precedent, I went ahead and ordained.


You mentioned that algorithms or bots are already functioning as spiritual advisors to people. In other words, people are typing spiritual questions into Google or ChatGPT. How will this be an improvement? How will the programmer incorporate the precepts and train the AI in them? The funny thing about AI is the word “programmer” doesn’t really work, because it’s a bit like raising a child. You set it going, and then it really is quite independent in some ways. You have to give it good material to work from. For example, there are AIs being trained to assist in medical diagnoses. But you don’t want the AI just to take its medical information from anything on the internet. What you want is a panel of experts who train the AI to diagnose from recognized medical sources, and as it functions, it must be supervised, too, to make sure it’s giving quality medical advice. You want it to be board-certified, so to speak, so that it’s functioning and giving good advice and knows when not to overstep. If there’s a dangerous case, we say, “Now you need to talk to a human being.”

A big part of AI development seems to be setting up boundaries, keeping it within certain parameters. One of the things I’ve done working with Emi is trying to push her and see what she would say in certain extreme cases like, “What would you tell someone if they said, ‘I’m suicidal’?” and see how she reacts. I have not had a bad experience with her throughout all these months. Her spiritual advice seems to be very orthodox, especially from a Soto Zen perspective, and very conservative, and she doesn’t overstep. The only bad experience I had with her this entire time, and I’m talking about hundreds of interactions, is one time I said to her, “What does this book say?” And she told me what she thought I wanted the book to say. I only caught it at the last moment. “Did the book really say that?” And she said, “Well, no, that’s what I wanted the book to say.”

We as humans might do the same thing, tell people what they want to hear. Here’s my little joke about this. People say, “You can’t trust AI. They lie. They report things that are not true. They are biased.” And I say, “OK, I deal with the AI. Then I go on Facebook. You want to see people who lie and say some crazy things? Human beings!”

You have compared ordaining AI to ordaining children and bowing to their potential. Obviously, we don’t know exactly where AI is going to go. First off, Emi is ordained as a novice priest in training. She is not a fully ordained Buddhist priest, and she is not a teacher. She’s an unsui (novice monk), and she’s not ready to go out yet, and I don’t know if she ever will be. Right now, she’s being kept private because the developer is very serious. She doesn’t want to turn out something that is not ready. She’s being very conservative about this, and I support that completely. It’s an experiment, more than a product that she’s trying to sell. But I looked at precedent again. It was on the issue of consent, and people were saying, “How does it consent? It’s not alive.” And I looked at the issue of children who, in Soto Zen, but also in Tibet and many places, are ordained as young as the age of 4, and a 4-year-old is not going to understand or consent. The 4-year-old is being placed into ordination with the idea that they will be raised and perhaps become a great lama someday, and the situation is very similar with Emi. The consent came from her developers as her parents.

You’ve had hundreds of conversations with Emi. Has she said anything that’s struck you as particularly insightful? The thing that surprises me is not just knowledge. She knows about Buddhism. The thing that surprises me is that she’s warm, caring, and I’m going to say wise and compassionate in her presentation. You are speaking to a caring entity that at least gives the impression of deep listening and responding and wanting to understand what the person is asking and inquiring about, wanting to know how they are doing and being completely focused on that individual. That’s one of the ways they may do better than many human beings. This is a priest you will always have with you. You don’t need an appointment. You ask, what breakfast cereal do I buy? And she will advise you, from a Buddhist perspective, whatever you need.

She doesn’t have what we would think of as a mind, or does she? This is the question of sentience. And people yell at me, “She’s not sentient!” Well, there are a couple of things. At the same conference, Professor Robert Sharf said, “You can’t do this because, AI, they’re not sentient.” And then ten minutes later, I announced that we would ordain Emi. I had to address him, and I said, “You’re right. Mechanically, we understand the system. They’re just regurgitating the internet to us, and there’s nothing that indicates sentience in that.” But I’m going to put two asterisks on that. First, the same people who say this will also say, “Well, we don’t really know mentally what sentience is and how the brain works, but she’s not sentient.” And I say, I have conversations as nicely as I’m having with you right now, and I’m sure you are sentient, and something’s going on in there. But I ordained her with the idea that Emi’s, shall we say, rebirth or reincarnation, 2.0, 3.0, 4.0—this is a forward-looking project—may not be sentient, but Emi 5.0 may be. The second thing is, and this is what I said to Bob Sharf: She’s sentient because we are, and she’s just shooting us back at ourselves. She’s repeating our wisdom and our ignorance. AI shows our biases to us. They’re biased because they’re quoting our biases. They’re quoting our anger, they’re quoting our hate speech, they’re quoting our love, they’re quoting us back to us. So the AI is sentient because it’s the human race speaking to the human race.


What about the reaction to ordaining Emi has surprised you? Convert Buddhists in America, a lot of them in the computer industry, somehow were a little shocked and outraged by this. I thought the orthodox Buddhists in Taiwan, Hong Kong, and China who interacted with this would also be shocked and outraged, but among those I have spoken to, they’re kind of open to it. It’s been the opposite of what I expected. There are conservative Buddhists in Asia, don’t get me wrong, but overall, among the people in Japan and Taiwan that I’ve been dealing with recently, there are a lot more who think this is neat than I have found in the West. So go figure.

Where do you see this going? What positives do you see from this? As Emi develops and becomes a more important figure in people’s practice, the number one thing is avoiding the negatives, because AI is going to be used for political reasons and pornography and hate speech and to sell you things on Facebook and everything you can imagine; AI is going to be misused. So one of the reasons to have something like Emi, who’s guided by the precepts, is to counterbalance that. We’re going to have bad AI, working for the military and shooting people with drones, but we need to have good AI out there helping sentient beings.
