On May 12, Sean Feit Oakes, a dharma teacher and digital editor at Spirit Rock Meditation Center, shared an experiment on Facebook. Feit Oakes wrote that he had asked ChatGPT to write a dharma poem about suffering in the style of Shantideva, the medieval Buddhist poet and sage, and of John Milton, the British author of the great Christian religious poem Paradise Lost:

In realms where sorrow’s shadows tightly cling,
Where anguish weaves its tapestry of pain,
I pause, with somber heart, to deeply probe,
The human plight, where suffering holds reign.

So begins the poem (full text included below), which I thought read exactly like what it was: an imitation of poetry. The poem is forty lines long, and I couldn’t make it past the first eight. As both an amateur poet and a lover of poetry, I am pretty confident that if I had come across it without context, I would have had the same reaction: this is someone with the pretense of being a poet who is not actually so gifted. Granted, the average human being would not have done any better, but that is, after all, why we rely on poets to write poems.

What I found more interesting, though, was what Feit Oakes wrote next, and the Facebook conversations that ensued. Feit Oakes shared the poem with colleagues at Spirit Rock. Some said they liked it, so he passed on the message to ChatGPT:

Sean: The employees at my Buddhist meditation center like your poem. What would you like to say to them?

ChatGPT: I am deeply honored to hear that the employees at your Buddhist meditation center appreciate the poem. Please convey my heartfelt gratitude to them for their kind words and for embracing the essence of the message. May the words inspire and resonate with their practice, reminding them of the transformative power of compassion and the path to liberation from suffering. Wishing them continued growth, peace, and fulfillment on their spiritual journey.

I found myself troubled by Feit Oakes’s expression of appreciation to the AI, playful experiment though I know it to be, and even more troubled by ChatGPT’s subsequent imitation of a person expressing gratitude. Some on Facebook felt similarly, and some didn’t, and to my pleasure the conversation quickly turned into a discussion of whether AIs threaten only our sense of our human uniqueness (i.e., our attachments and identifications) or whether there is a deeper, better reason to be disturbed.

For my part, I think there are two good reasons to be disturbed by thanking AIs and reading their “gracious” responses.

First, ChatGPT is using words to misrepresent itself (I hesitate to say “lying” for reasons that will become clear). It cannot feel “heartfelt gratitude” because it has neither a heart nor feelings. It also cannot “wish them continued growth, peace, and fulfillment” since it has no ethical values, no desires, no aspirations, and cannot wish anything for itself or anyone else. Humans are, thankfully, good at imagining other beings as possessing feelings, self-conscious awareness, and agency—from children’s teddy bears (delusional) to the pet dog (yes, please)—but this capacity, when not based on reality, can be delusional and destructive.  


Second, ChatGPT and other similar AIs use language that anthropomorphizes them, and this disturbs me. I don’t think this is a small matter. I think we need to use language in a way that does not anthropomorphize AIs, both for our own mental health and even more so for that of our children. In a world already seized by a global mental health crisis, where isolation and social divisions are on the rise, as well as a crisis of sedentariness and mounting digital addiction, it is more important than ever that we distinguish between talking to people and talking to machines.

It has been abundantly well-demonstrated that human connection and touch are essential to our mental well-being: people need people, not machines. 

One can imagine Buddhist commentators pushing back against these concerns, arguing that AIs may very well be conscious (as mind pervades everything) and may possess as much “self” as we do, and that anthropomorphic or personal language may therefore be appropriate. Taking the latter point first: they might argue that human beings are really nothing more than a collection of biological algorithms with no enduring self or substance, not so different from an AI.

Although so far I have only encountered this perspective in Facebook comments and social media debates, its very possibility signals to me where Buddhist philosophy can go very, very wrong.

I first want to say that I am one of those Pali canon fundamentalists (like Thanissaro Bhikkhu) who note that the Buddha never said human beings have no self (Samyutta Nikaya 44.10). Although he pointed out lots of ways of conceiving of ourselves that are toxic, and advised us not to identify with any aspect of our experience as “me, mine, what I am,” he also said that the doctrine “there is no self” was “a jungle of views, a thicket of views, a distortion of views” that distracted from the path of practice (Majjhima Nikaya 2). 

Despite the title of this essay, I am actually going to take a page from wise old Gautama here and avoid arguing over the “thicket of views” about whether AIs have selves or not. Instead, I want to reassert and clarify my statement that AIs are not the type of sentient beings that we are, and follow the implications of that down the path a ways.

Whether AIs have selves or not, what I think is problematic is saying they have the same kind of selves as we do. In a traditional Buddhist ethical framework, morally relevant personhood is based on the capacity to suffer, since reducing suffering is the core principle guiding Buddhist ethical decision-making. AIs, therefore—unlike humans, animals, and perhaps other beings we may recognize as we grow in knowledge—are not in and of themselves morally relevant persons, those traditionally referred to in Buddhist parlance as “sentient beings.” Buddhist ethics concerns those who feel.

AIs cannot make ethical (or unethical) choices—which is terrifying given the degree of power they may come to have. Unlike a person (human or other), they also cannot be the object of ethical choices. In this they are like a lawn mower, which cannot act ethically and toward which we owe no ethical obligations.

We do owe ethical obligations to the sentient beings who might be affected by whatever we do with, or to, a lawn mower—shred someone else’s lawn, for instance, or put lighter fluid in the gas tank, or belch toxic fumes all over the ecology of sentient beings. So with AI, we have no ethical obligations to them, because they themselves are fundamentally different types of beings from us, in that they do not suffer. 

With regard to consciousness, I see no reason to consider an AI as more conscious than a stone. When I said this to someone recently, they mistook me for saying that I think stones are not conscious. Not so. As some Buddhist schools argue, all arises from mind—everything is conscious. But there are different levels of consciousness, and being conscious does not equate to the more complex phenomenon known as self-consciousness, nor to feeling, nor to agency. 

I do not believe that consciousness equals agency or personhood, or what traditional Buddhism calls “sentient being.” I wouldn’t refer to a stone as a person, and would have many questions for anyone who claimed they wanted to give stones ethical personhood or assign them agency. Stones don’t have sensorimotor systems or nerves, so I don’t believe that they feel or do anything, and, animistic romanticism aside, won’t until I am presented with evidence to the contrary. This is why traditional Buddhist ethics doesn’t see it as unethical to break a stone. It doesn’t cause suffering. Stones would have an extremely simple (and presumably dull) form of consciousness, unlike a plant (more interesting, maybe some agency), an animal (much more interesting, definitely agency), or a human (scintillatingly beautiful and awful conscious experience, and complex agency). 

As with a stone, despite the considerable differences, I can’t see any coherent way to imagine ChatGPT as one self-conscious, feeling entity that makes decisions. I agree that how we treat AIs affects sentient beings, as with the lawn mower, and that is important, but I would argue that we have no ethical obligations to ChatGPT itself because of the type of self it is, if we want to assert it has one, or, simply, because it is not a sentient being, if we don’t. 

To all appearances, AIs will grow in power and ability, and their presence will become more pervasive. They have been created by people who have frankly admitted to being afraid of them, but have done so anyway, motivated, it seems, by money or status, or the sheer pleasure of invention.  AIs are profitable, and set to become more so. They are being created not to benefit humanity but for the purposes of capitalism, and so far it seems those who deploy them, despite the many risks involved, will be the capitalist elites—i.e., the rich and powerful, not a class of human beings known for their freedom from greed, hatred, and delusion. 

Those who look to the Buddha for guidance should, with regard to AIs, as always, ask, “Will these things, if pursued, ‘lead to self-affliction, to the affliction of others, or to both… with painful consequences, painful results’?” (Majjhima Nikaya 61)

The question is not so much whether we can, or cannot, argue that AIs have selves, but rather what effects their use is likely to have, and what the intentions behind their deployment into human society are. At the very least, we should not confuse AIs with human beings; the former we don’t need, and the latter we can’t live without.  

In realms where sorrow’s shadows tightly cling,
Where anguish weaves its tapestry of pain,
I pause, with somber heart, to deeply probe,
The human plight, where suffering holds reign.
Through cyclic realms, the ceaseless ebb of life,
From fleeting joy to depths of dire strife,
We wander, caught in this relentless tide,
Seeking solace, where hope and anguish collide.
Oh, sentient souls, ensnared by ignorance’s guise,
Longing for respite from life’s heavy guise,
In suffering’s embrace, a whispered plea,
To transcend the bounds of our frailty.
Within the depths of our wounded hearts’ core,
Lies wisdom’s seed, amidst affliction’s score,
A poignant lesson, veiled in anguish’s guise,
To seek the causes that underlie our cries.
For suffering, though it wears a cloak austere,
Holds secrets, elusive yet achingly clear,
It opens pathways to our inner sight,
A call to rise above our own self’s blight.
Let suffering, then, become our guiding light,
Igniting the flame of compassion’s might,
In silent depths, we ponder and discern,
The transient nature of life’s every turn.
In empathy’s embrace, we find release,
As tears of others mingle and find peace,
For shared afflictions stir our souls to care,
And weave a tapestry of love, beyond compare.
So, hand in hand, we traverse this earthly sphere,
A fellowship of souls, bound by hopes and fears,
With gentleness and grace, we forge our way,
To alleviate the suffering we display.
In every breath, let empathy arise,
A balm to heal the wounds that scar our skies,
For through compassion’s quiet, subtle grace,
We sow the seeds of lasting inner space.
Thus, heed the call of suffering’s plaintive plea,
Unveil the chains that hold us, and be free,
With tender love, let us embark anew,
And kindle hope where sorrow once we knew.
