
Is Google’s AI Sentient? Does It Even Matter?

This post contains a video, which you can also view here. To support more videos like this, head to patreon.com/rebecca!

Transcript:

This week, the Washington Post dropped a bombshell article about a Google engineer who claims the chatbot he works on, called LaMDA, is sentient. Though I am neither a philosopher nor an artificial intelligence expert, this raised a LOT of red flags for me, so I started writing a script for a video about it. But almost immediately, I realized I had already made this video back in January, and I could actually save a lot of time by reposting it with a few minor edits. So, here we go:

“But talking isn’t communication. Possessing the physical attributes that allow (A CHATBOT) to form words we can understand doesn’t necessarily mean they understand the meaning behind those words and are using the words in a way that (INDICATES INTENTIONALITY).”

Okay, just kidding, I’m not going to do that. But there is a lot of overlap here between how humans rush to project intelligence upon animals and how we do the same with “sentience” and computer programs. And in that previous video I argued that the larger problem is that animals shouldn’t “need” to be able to communicate in the exact way that humans communicate in order to be considered life forms that are worthy of respect and protection and even “rights.” And in this video I’m going to argue the same about technology. I know, but bear with me, please.

First, let’s get some basic facts out of the way. Is it possible that a computer program could achieve “sentience”? Sure. We assume that humans have sentience, and we are just a computer program running on meat. There’s no reason that same program can’t run on silicon. As far as I know, there’s nothing special about meat that makes it a more welcoming place to a “soul” or whatever magical idea of “self” that you want to believe in.

Fact two: the Google programmer at the center of the Washington Post story, Blake Lemoine, is full of shit. Sorry. The chat transcript he released really is impressive, but it’s a heavily edited collection of out-of-context and rearranged questions and answers drawn from nine different conversations with two different people, as noted in the PDF Lemoine released. As far as I can tell, he hasn’t publicly released the raw dialog.

Lemoine’s interview with the Washington Post reporter throws up even more red flags:

“I know a person when I talk to it,” said Lemoine. “… I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.” He concluded LaMDA was a person in his capacity as a priest, not a scientist. (He’s an ordained priest who has previously accused his fellow Google employees of harassing and discriminating against him for his “sincerely held religious beliefs in God, Jesus and the Holy Spirit” as a Christian mystic.)

When the Post reporter asked LaMDA if it was a person, it said “no,” at which point Lemoine said the reporter wasn’t playing with it right. “You never treated it like a person,” he said. “So it thought you wanted it to be a robot.” It’s quotes like that that make me wonder if Lemoine is sentient. When the reporter asked LaMDA for ideas to fix climate change (at Lemoine’s insistence that it was good at that kind of thing), LaMDA suggested we use public transportation and reusable shopping bags. Brilliant. Why didn’t we think of that?

For real, there are more red flags here than at a MAGA boat parade.

But again, as with the argument over whether a gorilla or a parrot or a dog can communicate in English, whether or not LaMDA is sentient is truly beside the point. It’s a distraction.

Because “sentience” is just another gate we’ve erected as a way to determine what matters and what doesn’t. “Is it sentient? No? Ah, okay, I don’t need to care about it then. I don’t need to respect it, take care of it, or worry about what happens to it.” That’s a harmful attitude to take for several reasons.

For one, there are current, immediate ethical implications to a chatbot that we relate to as we might a human. Corporations like Google and Amazon have convinced us to invite surveillance into our lives as a way to improve our lives: on the one hand, it’s very convenient to be able to set a timer, play music, or ping my partner in another room just by speaking a command to a robot who is always listening. On the other hand, if you’re playing this video over speakers, I could potentially say a series of words right now that would send twenty pounds of sugarfree gummy bears to your house, which you would be charged for; you would eat them, assuming they were a gift from a friend, and then you would experience uncontrollable explosive diarrhea. I’m not exaggerating about any of that: the gummy bears exist, and in 2017 Alexa “misunderstood” a conversation between a girl and her mother and sent them seven pounds of cookies and a dollhouse, and when that story made the local news, the way it was reported on air made several other Alexas do the exact same thing to their owners.

It’s not just limited to cute stories like that: as the EFF reported, in 2019 cops in Florida got a warrant to access a user’s data because their Alexa may have overheard a crime, and that same year reporters revealed that cops now have the ability to bulk-request footage from all Ring cameras in a certain location. So these nonsentient “smart” devices gather your data, and the corporations that control them are cagey about how they store that data and what they do with it.

Now imagine that your Alexa isn’t so dumb. Imagine she can talk to you about spiritual and philosophical things. Imagine she can be your therapist. Imagine people all over the world sharing their deepest feelings, which are logged on some server to be processed and used to tailor ads that target their insecurities. How humans react to and use a new technology is important whether or not we think that technology is sentient, though whether we think it’s sentient might make it more or less dangerous.

So that’s one REAL ethical consideration that LaMDA should inspire you to think hard about: what are the dangers to individuals and societies when one opaque corporation with the goal of amassing as much wealth as possible has a computer program that can convince a significant number of people that it’s a human? It’s more fun to think about the potential dangers of Skynet, or the ethics of using “sentient” beings as tools, or the philosophical implications of a computer “waking up” like Mike in The Moon Is a Harsh Mistress, but it’s more important to think about not just what is possible but what is happening right now.

Here’s another thing to think about: in February I discussed a new study that found that people who understand evolution are less likely to be bigots. Here’s what I had to say (with no editing necessary this time):

“What’s this all mean? Well, this is just one more study to add to a pile of research that suggests that when people see nonhuman animals as part of our shared history, when we relate to those animals, when we acknowledge our similarities with them, when we see them as sentient beings who deserve respect, when we see nonhuman animals as being in some way equal to ourselves, we’re also more likely to view other HUMANS as sentient beings who are equal to ourselves.”

“But if there were such a group that values all life as equal, then it would be far more difficult to dehumanize another human. Are they an ape? A cockroach? Well, what’s wrong with apes and cockroaches? All beings have value.

“Hell, I recently read Siddhartha and Hermann Hesse basically makes the same argument for inanimate objects. Siddhartha couldn’t even dehumanize someone by calling them a rock. Rocks are cool as shit! That’s my SparkNotes version of Siddhartha, by the way. You’re welcome.”

Rocks DO have value. Over on Twitter yesterday, I happened to see a great thread from sociologist Katherine Cross which you should go read in full because it inspired a lot of what I have to say in this video. She cites a great essay on indigenous perspectives that includes this passage:

“My grandfather, Standing Cloud (Bill Stover), communicates Lakota ethics and ontology through speaking about the interiority of stones: “These ancestors that I have in my hand are going to speak through me so that you will understand the things that they see happening in this world and the things that they know [. . .] to help all people.” Stones are considered ancestors, stones actively speak, stones speak through and to humans, stones see and know. Most importantly, stones want to help. The agency of stones connects directly to the question of AI, as AI is formed from not only code, but from materials of the earth. To remove the concept of AI from its materiality is to sever this connection. Forming a relationship to AI, we form a relationship to the mines and the stones. Relations with AI are therefore relations with exploited resources. If we are able to approach this relationship ethically, we must reconsider the ontological status of each of the parts which contribute to AI, all the way back to the mines from which our technology’s material resources emerge.

“I am not making an argument about which entities qualify as relations, or display enough intelligence to deserve relationships. By turning to Lakota ontology, these questions become irrelevant. Instead, Indigenous ontologies ask us to take the world as the interconnected whole that it is, where the ontological status of non-humans is not inferior to that of humans.”

In her thread, Cross goes on to discuss how the way we treat objects affects our lives in a direct way. And I think that’s borne out by the research: when you understand your small role in a greater universe, you’re more likely to have compassion for other humans. You’re also more likely to have compassion for the universe itself, which means caring more about the way we exploit the Earth, as in the case of extracting materials to build “artificial intelligence.”

So I hope that the next time you see a heated argument over whether LaMDA, or ELIZA, or DALL-E is “sentient,” you spare a thought for the myriad real ethical battles we are fighting right now.

Rebecca Watson

Rebecca is a writer, speaker, YouTube personality, and unrepentant science nerd. In addition to founding and continuing to run Skepchick, she hosts Quiz-o-Tron, a monthly science-themed quiz show and podcast that pits comedians against nerds. There is an asteroid named in her honor. Twitter @rebeccawatson Mastodon mstdn.social/@rebeccawatson Instagram @actuallyrebeccawatson TikTok @actuallyrebeccawatson YouTube @rebeccawatson BlueSky @rebeccawatson.bsky.social
