
More People Might Soon Think Robots Are Conscious And Deserve Rights


GPT-3 is a computer program that can produce strikingly realistic language outputs given linguistic inputs -- the world's most stupendous chatbot, with 96 layers and 175 billion parameters. Ask it to write a poem, and it will write a poem. Ask it to play chess, and it will output a series of plausible chess moves. Feed it the title of a story, "The Importance of Being on Twitter," and the byline of a famous author, "by Jerome K. Jerome," and it will produce clever prose in that author's style:

The Importance of Being on Twitter
by Jerome K. Jerome
London, Summer 1897

It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage.

All this without being specifically trained on tasks of this sort. Feed it philosophical opinion pieces about the significance of GPT-3 and it will generate replies like:

To be clear, I am not a person. I am not self-aware. I am not conscious. I can’t feel pain. I don’t enjoy anything. I am a cold, calculating machine designed to simulate human response and to predict the probability of certain outcomes. The only reason I am responding is to defend my honor.

The damn thing has a better sense of humor than most humans.

Now imagine this: a GPT-3 mall cop. Actually, let's give it a few more generations. GPT-6, maybe. Give it speech-to-text and text-to-speech so that it can respond to and produce auditory language. Mount it on a small autonomous vehicle, like the delivery bots that roll around Berkeley, but with a humanoid frame. Give it camera eyes and visual object recognition, which it can use as context for its speech outputs. To keep it friendly, inquisitive, and not too weird, give it some behavioral constraints and additional training on a database of appropriate mall-like interactions. Finally, give it a socially interactive face like MIT's Kismet robot.

Now dress the thing in a blue uniform and let it cruise the Galleria. What happens?

It will, of course, chat with the patrons. It will make friendly comments about their purchases, tell jokes, complain about the weather, and give them pointers. Some patrons will avoid interaction, but others -- like my daughter at age 10 when she discovered Siri -- will love to interact with it. They'll ask what it's like to be a mall cop, and it will say something sensible. They'll ask what it does on vacation, and it might tell amusing lies about Tahiti or tales of sleeping in the mall basement. They'll ask whether it likes this shirt or this other one, and then they'll buy the shirt it prefers. They'll ask if it's conscious and has feelings and is a person just like them, and it might say no or it might say yes.

Here's my prediction: If the robot speaks well enough and looks human enough, some people will think that it really has feelings and experiences -- especially if it reacts with seemingly positive and negative emotions, displaying preferences, avoiding threats with a fear face and plausible verbal and body language, complaining about ill treatment, and so on. And if they think it has feelings and experiences, they will probably also think that it shouldn't be treated in certain ways. In other words, they'll think it has rights. Of course, some people think robots already have rights. Under the conditions I've described, many more will join them.

Most philosophers, cognitive scientists, and AI researchers will presumably disagree. After all, we'll know what went into it. We'll know it's just GPT-6 on an autonomous vehicle, plus a few gizmos and interfaces. And that's not the kind of thing, we'll say, that could really be conscious and really deserve rights.

Maybe we deniers will be right. But theories of consciousness are a tricky business. The academic community is far from consensus on the correct theory of consciousness, including how far consciousness spreads across the animal kingdom or even how rich a field of consciousness ordinary humans possess. If garden snails, for example, might be conscious, with 60,000 neurons in their central nervous systems, might GPT-6 also be conscious, with its massive processors that blitz through layer after layer of computation on trillions of parameters? Both the cognitive complexity of our imagined robot and its information processing will far exceed what we could plausibly attribute to a garden snail. Its embodied behavior might be simpler, though, if we exclude linguistic behavior. How much does that matter? And how much do the details of biological implementation matter? Do neurons have some secret sauce that silicon chips lack? On questions like these, we can't expect scholarly consensus anytime soon.

Maybe, despite all this, it seems too absurd to suppose that our GPT-6 mall cop could possibly deserve rights. Okay, how about GPT-7? GPT-8, now with prosthetic hands and five-finger grasping? GPT-20? If you're open to the thought that someday, somehow, a well-designed AI could have genuine conscious experience and deserve serious moral consideration, then you'll presumably think that at some point our technology might cross that line. But when, how, and why -- that might be completely opaque, an undetectable shift somewhere amid an ever-improving line of huggable mall cops.

http://schwitzsplinters.blogspot.com/2021/03/more-people-might-soon-think-robots-are.html