In 2016, viewers of HBO’s Westworld were treated to a sci-fi thriller in which participants in a Wild West–themed immersion experience—an amusement park of sorts—robbed, killed, fell in love, and lived out a host of other adventures with artificially intelligent robots who were almost indistinguishable from human beings. The blockbuster show was fodder for TV-loving Buddhists, some of whom opined that the endless loop of the robots’ lives and their attempts to break free by “waking up” made a good metaphor for samsara, the cycle of worldly existence. Although the show was certainly not the first pop culture creation to explore the ascension of AI beings and the question of those beings’ sentience, its “futuristic” setting seems to be on its way to becoming reality—making the relationship between this technology and humanity’s highest values and biggest mysteries ever more urgent.

Investment in AI has tripled over the past three years, and AI technologies are being researched and introduced in fields as wide-ranging as health care and construction. There are AI airport guides, AI tutors for babies and children, AI pesticide sprayers, AI stock traders, AI paralegals, and AI gamers (who are often better than human ones). AI systems can perform tasks as mundane as providing on-demand cat facts (as any Alexa user will tell you) or as serious as analyzing tissue slides for cancerous cells. It’s clear that AI’s vast potential, as well as its current rate of progress, demands attention. As Steve Omohundro, a scientist and writer internationally recognized for his work on artificial intelligence and strategies for its beneficial development, puts it, “We’re at a critical moment in human history, where this technology is in the process of transforming everything. We don’t want the decisions about where it goes to be made purely by technologists or capitalists. It needs a broader perspective, particularly a spiritual and psychological one.”

The dialogue that follows was recorded on November 2, 2017, at the California Institute of Integral Studies as part of the institute’s Technology & Consciousness Series. In it, Omohundro speaks with Nikki Mirghafori, an artificial intelligence scientist and Buddhist teacher at Spirit Rock Meditation Center in Woodacre, California, and the Insight Meditation Center in Barre, Massachusetts, about the intersection of artificial intelligence and the Buddhist idea of karma: more specifically, the AI future we’re heading toward, and how our intentions might shape it. It may make the difference, they say, between a world in which a corporation serves up a venue for massively wealthy individuals to enact their most selfish fantasies—and one that leads to the flourishing of humanity’s best potential.

Emma Varvaloucas, Executive Editor

 

Nikki Mirghafori (NM): Steve, what is artificial intelligence (AI)? Give us a historical perspective.

Steve Omohundro (SO): An AI system is a computer program that makes decisions to achieve a goal. When you look at what it does, it appears to be smart: it’s pretty good at achieving its goal. The term “artificial intelligence” was coined in the mid-1950s, but the idea of machines that might think was introduced in the 1940s, when two of the early inventors of computers, Alan Turing and John von Neumann, wrote about how the brain works and discussed whether computers could mimic it.

AI as a field has had many ups and downs. In the late 1950s and early ’60s, researchers were ecstatic about the possibilities. They thought the machines were going to reach human-level performance in just a few years—we’d be able to get rid of human labor and make wonderful changes to the world. But the technology didn’t advance the way they hoped it would, and people became pessimistic. There was the first of several “AI winters,” when the funding for research dried up and scientists in other disciplines started to say, “This is a garbage field; there’s nothing real here.”

There has also been a pendulum swinging back and forth between two ways of thinking about how intelligence works. One is the symbolic approach, where thoughts are viewed as made of symbols and thinking as a kind of mathematical proof. The other is the neural network approach: human brains consist of 86 billion neurons, little cells that transmit signals among one another. They’re connected in a complicated way, and they are able to learn from experience. In that approach, you just throw a bunch of computational elements together and hope that the system is able to learn by itself.

The symbolic approach used to be viewed as more promising. Then, in the 1980s, the neural network advocates got a burst of energy when they figured out how to train networks with multiple layers of units—the most common were three-layer networks—that could solve harder problems than had previously been possible. But those networks reached a plateau, causing another AI winter, until around 2012, when researchers started having great success with “deep neural networks,” which have many more layers, say 10 or 100. In the last five years these networks have started solving many problems we couldn’t solve back in the ’80s. For example, they have outperformed the older networks—and sometimes even humans—on tests of speech recognition and image recognition.
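For readers curious what such a network looks like in practice, here is a minimal sketch (illustrative only, not any specific historical system): a three-layer network with two inputs, four hidden units, and one output, trained by backpropagation on XOR, a toy problem that single-layer networks famously cannot solve. Every size and parameter here is an arbitrary choice.

```python
import numpy as np

# Toy three-layer network: 2 inputs -> 4 hidden units -> 1 output,
# trained on XOR. All sizes and the learning rate are arbitrary.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate
for _ in range(10_000):
    # Forward pass: hidden activations, then the prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]]
```

A “deep” network repeats this same pattern across tens or hundreds of layers, which is why the large training sets and fast hardware discussed next matter so much.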

One of the great lessons from all of this is that to train large networks you need large training sets. The Internet has made the entire world’s transactions available for training: the 200 billion tweets people send per year can be used to train algorithms that model human verbal responses, and YouTube has more than a billion videos that can be used to train visual models for everything from facial expressions to car crashes.

Nikki Mirghafori | Photo by Ellen Burke

NM: But really what has made this an AI spring, if we can call it that, is computational power. Now the same old algorithms can run a lot faster. And the price of data storage has dropped dramatically, so it’s also possible to store a lot of collected data. You can do all your computational modeling on a lot more data, and the models can become smarter because they have so much data available to them. In fact, we have a saying in my field: “There is no data like more data.”

SO: So that’s a quick summary of the current state of AI. Nikki, what is karma, and where does it fit into all of this?


NM: In Buddhism it has to do with action; the word actually means “action.” And karma has to be understood in terms of intention: the same action done with different intention can have different karmic consequences. Imagine a scenario where one person slits open another person’s stomach. In one case the person is a thief; in another, a doctor. It’s the same action but a different karmic intentionality and result.

Karma is also a teaching about empowerment, because this moment’s actions condition the next moment. When we say “actions” in Buddhism, we include thinking and speech as well as physical action—thoughts are considered actions of the mind. For example, if I think thoughts of gratitude in this moment—I’m so glad to be here with my good old friend Steve—it brings a good, wholesome state of mind, but it also predisposes me to be kind and say kind words. My heart rate will go down, and I’ll become relaxed. But if instead I remember that one time you didn’t lend me the book I wanted—I’m making this up, by the way—I’ll get angry, into an unwholesome state of mind. My heart rate goes up, cortisol levels rise, and I’m all tight. I might say something vengeful that has negative repercussions.

SO: We’re facing what is probably the most powerful technology that humanity has ever created, so our intentions matter a lot. One of my big goals is to get us thinking clearly about that.

From its inception, AI has been funded primarily by the military. The Defense Department immediately saw the potential for robot soldiers. It may be that wars with our robots fighting their robots are far preferable to our people fighting their people. But what happens when you have robot soldiers everywhere? Where does that lead?

Once the technologies started working well—and this is happening right now—big business suddenly realized that it could dramatically improve productivity and make lots of money. The consulting firm McKinsey estimated that over the next ten years robotics and AI could create $50 trillion of value. That’s a huge number. The entire United States gross domestic product is about $18 trillion a year. So we’re talking about a massive tsunami on the world economic stage caused by these technologies. There are now something like 1,500 AI startup companies funded at around $15 billion. The Japanese company SoftBank recently announced a $100 billion investment fund for these technologies, a previously unheard-of amount. And then, in October 2017, they said, “No, that wasn’t big enough. We’re going to up it”—to about $880 billion of investment. Not to mention that China has committed to becoming the world leader in AI over the next five years. The race is on!

Steve Omohundro | Photo by Pat Chan Photography

I would like to see us being very conscious about what it is we’re trying to create. The technologists are excited mostly about the technology itself. The businesses are excited about making money. But we need somebody holding up the highest values of humanity to say, “This is a vision for where we would like to end up.” For me that is the karmic aspect: what are our intentions as we move forward?

NM: Exactly. And whether we’re technologists or consumers, we do have a say as voters. We can start by thinking about what we want our society to look like—which is not so different from the thinking we need to be doing anyway: what do we want to manifest in the world? AI makes things more intense by orders of magnitude, so the impact that people can have either as individuals or as societies in general is amplified. We could have a wealthy person controlling lots and lots of AI soldiers. Or we could have lots and lots of AI nurses.

SO: Our current political system has a pretty coarse feedback path from the population to our government. In the United States we vote once every two years, and our votes are for A or for B. You’re not really expressing the depth of your humanity in a way that our government can hear. But with AI systems it may be possible to create voting systems whereby citizens can communicate exactly what they care about and how much they care about it. Potentially, if you do it right, these systems could aggregate the intentions and goals of an entire population and help politicians make policies that really serve the entire population rather than a few special interest groups.
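As a loose illustration of that idea (nothing like this exists as a deployed system; every name and number below is hypothetical), imagine each citizen submitting weighted concerns rather than a single A-or-B choice, with candidate policies scored against the aggregate:

```python
from collections import defaultdict

# Hypothetical sketch of richer voting: each ballot is a set of weighted
# concerns, and policies are scored by how well they cover the aggregate.
ballots = [
    {"healthcare": 0.6, "climate": 0.3, "jobs": 0.1},
    {"jobs": 0.8, "healthcare": 0.2},
    {"climate": 0.5, "jobs": 0.5},
]

# How strongly each (invented) policy addresses each issue, on a 0-1 scale.
policies = {
    "policy_A": {"healthcare": 0.9, "jobs": 0.2},
    "policy_B": {"climate": 0.8, "jobs": 0.6},
}

def aggregate(ballots):
    """Sum each issue's weight across all ballots."""
    totals = defaultdict(float)
    for ballot in ballots:
        for issue, weight in ballot.items():
            totals[issue] += weight
    return totals

def score(policy, totals):
    """Score a policy by how well it covers the aggregated concerns."""
    return sum(policy.get(issue, 0.0) * w for issue, w in totals.items())

totals = aggregate(ballots)
for name, policy in policies.items():
    print(name, round(score(policy, totals), 2))
```

The sketch shows only the aggregation step; a real system would also have to contend with privacy, strategic voting, and fairness.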

NM: We’ve mentioned a few different verticals where AI has been making advances and changes. It would be interesting to talk about some of them in more detail.

SO: It’s fascinating to look at almost every business that’s driving the economy today. For instance, a bunch of companies are developing self-driving cars and trucks. A world with self-driving vehicles could be far more efficient and have far fewer auto accidents. So that’s amazing, wonderful, great—except that truck driving is the most common job in almost every state in the U.S. We’re going to have big socioeconomic shifts as a result of that kind of enormous increase in efficiency, and we need to make sure that those efficiency improvements are well distributed throughout society.

In another big economic area, health care, there are many ways AI can make a difference, such as robotic surgeons. About a year ago I saw a talk by a company that is building a robot to do hair transplants. Apparently hair transplant surgery is pretty straightforward: you take tufts of hair from one place on the head and put them someplace else. But it takes hours, and human surgeons get bored and make mistakes, whereas for a robot, it’s no problem. Another example is brain surgery: robots can align the location of a scalpel with what shows up on an MRI with great precision.

And then there are more ambiguous areas, like marketing. Marketers love AI because it allows them to know exactly what buttons to push. You think things are addictive today? Imagine when you have an AI that knows exactly what you like, that can precisely generate new images that will be the most seductive thing for you at that moment. Is this good? Is this bad? Let’s say some marketer is trying to convince you to buy their new car. They show you yourself driving the car, so you can see just how much better your life is with this car. Oh my God, you’re going to go buy the car. But then you could have your own private AI that will watch what’s going on and say, “Now Steve, you told me that should you ever get into this frenzy where you’re about to impulsively buy a car, you’d want me to come in and show you what is really going on here.” So possibly individuals will have AIs that serve as a defense against manipulation by corporate AIs.

There’s a company called Cambridge Analytica, based in England, that took credit for the Trump win, the Brexit win, and a number of other elections. They built personality models of every person in the United States based on their Facebook “likes” and targeted political messages based on those models. There’s controversy over whether they really had that much impact, but they were certainly involved. The ability to manipulate the emotions and thinking of a population—that’s huge. How do we ensure it’s done with positive intentions?

NM: It would also be interesting to talk about some of the more positive effects of AI. Eco-policing, for example.

SO: Yes. We have a horrible pollution problem right now. There’s a floating bunch of garbage the size of Texas in the middle of the ocean, you know. You could stop it if you had enough people monitoring the ocean, but that’s impractical. On the other hand, AIs will be cheap and plentiful. In certain ocean ecosystems, massive numbers of jellyfish are coming in, crowding out other species, killing coral, and causing all kinds of problems. So someone developed a little jellyfish-eating robot. [Laughs.] It works like a vacuum cleaner.

In terms of global warming in particular, AI systems can go in and fix a lot of the problems that our earlier technologies created. Simple AI systems running on used cell phones are keeping rogue loggers from cutting down trees in the rain forest, for instance. AI is also learning how to more accurately predict weather patterns and optimize energy use, and it may help us create better solar cells and batteries.

NM: Wealth distribution becomes an urgent and important question as well: whether an AI-powered world would create a very small percentage of “haves” and a huge population of “have-nots.”

SO: There’s certainly a possible dystopian future: the robot owners are the ones who own everything. But there’s also a possible utopia. Today only about 2 percent of the population does the work we need for sustenance. And so with the rise of robots, from one perspective we could ask: why should a human have to do any job that a robot can do? Potentially we could have a new flowering of human creativity, of connection, of love. But we have to structure things so that will be the outcome.

In Britain, for instance, they’ve started floating the idea that maybe there should be a robot tax. And there’s also the idea of a universal basic income, that every citizen should be paid a certain amount that covers necessary costs. Another view suggests that since this technology will create so much productivity, everyone should get shares in it; the economic power of all this new AI and robotics is part of the human endowment, and you should be paid dividends that may support you over your whole lifetime.

NM: Another place of intersection for us to consider is AI as human prosthetics. More and more companies are coming out with chip implants for various parts of the body—to increase your memory, perhaps, or the power of your thinking or communication. Although we already are kind of partly machine, right? We’re already carrying smartphones and wearing various kinds of technologies that empower us in different ways.

What are the karmic implications of being partly machine, partly human? There’s a lot of fear about completely intelligent robots that are just like human beings taking over. What are the karmic consequences of interacting with these robots? What if you kill a robot? And do robots have karma themselves? We can wax philosophical about this.

In terms of human prosthetics, I think as long as we’re still mostly human with our consciousness intact, intention is still what determines the karmic results. In terms of interacting with machines, we can talk more about whether fully humanlike machines are possible—there are different takes on that, depending on how you define consciousness—but assuming it might be possible at some point, I would say again that for a human being karma still rests in the intention. Are you killing or unplugging a machine because you want to rob a bank, or because you want to stop that machine from doing harm?

In terms of potentially intelligent beings having karma, I’d surmise that the karmic results rest with the creators, even if those results are being manifested in the world by AI programs, because there isn’t a sense of intentionality in automatic systems as there is in humans.

SO: I totally agree that a person with a cell phone is a very different creature than that person alone. Ride on any major transit line and you’ll see that we’ve got a lot of those creatures around, right? [Laughs.] So this technology, which is pretty low in intelligence, is already dramatically changing us. One of the effects that maybe wasn’t so obviously going to happen is that people offload tasks that they used to do themselves, like navigating. A lot of people don’t know directions anymore. It’s just, “Uh, my phone tells me that.” We’ve lost some of our capacity in that way.

I’m also thinking about Alexa, the Amazon voice assistant that sits in your house. I have one. I like it; it’s nice. Kids love it, because you can talk to Alexa, you can ask Alexa to tell you jokes, and Alexa never gets mad. [Laughs.] But Alexa does not require you to say please or thank you. And some kids tend to slip into a commanding tone: “Alexa, tell me a joke now.”

Then they get used to doing that, and they do that with their friends too, and then they start doing it with their parents. And so parents are saying, “Oh my God, Alexa is turning my kid into a jerk.” [Laughs.] There are secondary consequences of interacting with these things as they begin to take on more roles.

NM: The Alexa example in particular brings up another thought for me as it relates to karma. An aspect of karma is habits of the mind; habits are really karmic tendencies. If you get angry once, for example, that will predispose you to becoming angry again. That ties in with neuroscience as well, actually—the neurons that fire together, wire together, and these grooves get set in your mind. Karmic patterns get set as well, leading you to behave in the same angry way over and over again. And then your state of mind, and all your actions and interactions, become anger-ridden. So if kids start to set this pattern of rudeness with Alexa, that will become their karmic tendency through life. That’s something to really consider about the way that we interact with computers and artificial intelligence systems.
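Mirghafori’s “fire together, wire together” aside refers to Hebbian learning, which in its simplest textbook form says the connection between two neurons strengthens each time they are active together. A toy sketch, with arbitrary values:

```python
# Simplest Hebbian update: dw = eta * pre * post. Each co-activation
# strengthens the connection, carving a deeper "groove" with repetition.
eta = 0.1  # learning rate (arbitrary)
w = 0.0    # connection strength between two neurons
for episode in range(10):
    pre, post = 1.0, 1.0   # both neurons fire together
    w += eta * pre * post  # the Hebbian rule
    print(f"after episode {episode + 1}: w = {w:.1f}")
```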

SO: Of course, we can use that in a positive way too. I do something called nonviolent communication, which is a simple but beautiful way of being more empathetic. But it’s hard. If you’re not used to that way of speaking, it’s a challenge. I can imagine AI systems that would help you learn to do that—that would give you feedback in real time so you can develop those desired habits.

NM: Basically what our conversation is demonstrating is that the karma of an action doesn’t really depend on the technology. The technology itself is neutral. It depends on how it is used, for good or for evil.

SO: Today systems like Alexa pretty much do what their programmers intend, though they may exhibit some unexpected behaviors. There’s a program called AlphaGo that beat the world champion in Go [a strategy board game traditionally associated with Buddhism]. In Asia, Go is viewed as a quintessential human game, where human creativity is absolutely essential. And when AlphaGo beat the world champion, friends from Korea told me that people were crying in the streets. It had an enormous impact. It readjusted people’s sense of what is quintessentially human. That program played moves that no human has ever played. Human experts in Go ended up studying the AlphaGo games to learn how to play Go better.

That’s an example where the general thrust of the program was determined by programmers, but not the individual moves. I think that’s going to happen more generally: we’re going to end up with such systems, where if the robot kills somebody, you can’t say it was the programmer who did it. You have to assign culpability to that robot. How is our legal system going to handle that? Already there are some funny examples. In Switzerland they set up a bot, gave it some bitcoin, and hooked it up to the dark web, where all sorts of nefarious things happen. They had it just randomly surf the dark web and order stuff. So it got ecstasy pills, guns—all kinds of stuff came flowing in. They had an art exhibit where they hung it all up on the wall. They wanted to see how the police were going to deal with it. And the police, I thought, were actually quite brilliant. They let the art exhibit go on, but when it closed, they came in and arrested the robot. [Laughs.]

But it’s possible we’re going to have to rethink what responsibility and culpability are. These systems are going to get more and more intelligent; they’re going to be able to solve harder and harder problems. When it gets to more human things, like consciousness or qualia, the sense of what an experience is, then it’s more iffy. I think we won’t know until these things are built. And then when you talk about past lives or multiple lives, all those things, I think we get even more speculative. But how are people going to respond when you have a system that says, “I’m conscious. I’m just as conscious as you. What makes you think I’m not conscious?” What is that going to do to our own sense of consciousness, and what is that going to do to our view of this entity?

NM: It definitely comes down to the question of qualia and what consciousness is. Some people are materialists and claim that consciousness gets created when this machine or this set of neurons works together. Some near-death studies have been published about people on the operating table who became clinically dead, with no brain activity and no blood flowing to the brain, and whose eyes and ears were blocked during the operation. After they came back to life, they reported what they had seen and heard during their surgery, reports that were corroborated by the staff. That really throws into question what consciousness is and whether it is dependent on this machinery. Because yes, we’ll have machinery with semi-intelligent beings. But consciousness? I don’t think so. I’m going to plant a flag in the sand. This reflection is not so much from the perspective of a scientist but from the perspective of a practitioner who has practiced in silence in various states of consciousness, where the mind can open in ways I didn’t know could possibly exist. From that perspective, I don’t think a material thing can have access to states of consciousness in the way that we can as human beings.

SO: I’m very convinced that material objects can be intelligent in the sense of making choices that lead to desired outcomes. But whether at some point we will be able to ascribe consciousness to these AI programs, I don’t know. We’ll have to see what happens.
