Within a few decades, we will likely create AI that a substantial proportion of people believe, whether rightly or wrongly, deserve human-like rights. Given the chaotic state of consciousness science, it will be genuinely difficult to know whether and when machines that seem to deserve human-like moral status actually do deserve human-like moral status. This creates a dilemma: Either give such ambiguous machines human-like rights or don't. Both options are ethically risky. To give machines rights that they don't deserve will mean sometimes sacrificing human lives for the benefit of empty shells. Conversely, however, failing to give rights to machines that do deserve rights will mean perpetrating the moral equivalent of slavery and murder. One or another of these ethical disasters is probably in our future.
If we someday build AIs that are fully conscious, just like us, and have all the same kinds of psychological and social features that human beings do, in virtue of which human beings deserve rights, those AIs would deserve the same rights. In fact, we would owe them a special quasi-parental duty of care, due to the fact that we will have been responsible for their existence and probably to a substantial extent for their happy or miserable condition.
So here’s what’s going to happen:
We will create more and more sophisticated AIs. At some point we will create AIs that some people think are genuinely conscious and genuinely deserve rights. We are already near that threshold. There’s already a Robot Rights movement. There’s already a society modeled on the famous animal rights organization PETA (People for the Ethical Treatment of Animals), called People for the Ethical Treatment of Reinforcement Learners. These are currently fringe movements. But as AI gets cuter and more sophisticated, and as chatbots start sounding more and more like normal humans, passing more and more difficult versions of the Turing Test, these movements will gain steam among people with liberal views about consciousness. At some point, people will demand serious rights for some AI systems. The AI systems themselves, if they are capable of speech or speechlike outputs, might also demand or seem to demand rights.
Let me be clear: This will occur whether or not these systems really are conscious. Even if you’re very conservative in your view about what sorts of systems would be conscious, you should, I think, acknowledge the likelihood that if technological development continues on its current trajectory there will eventually be groups of people who assert the need for us to give AI systems human-like moral consideration.
And then we’ll need a good, scientifically justified consensus theory of consciousness to sort it out. Is this system that says, “Hey, I’m conscious, just like you!” really conscious, just like you? Or is it just some empty algorithm, no more conscious than a toaster?
Here’s my conjecture: We will face this social problem before we succeed in developing the good, scientifically justified consensus theory of consciousness that we need to solve the problem. We will then have machines whose moral status is unclear. Maybe they do deserve rights. Maybe they really are conscious like us. Or maybe they don’t. We won’t know.
And then, if we don’t know, we face quite a terrible dilemma.
If we don’t give these machines rights, and if it turns out that the machines really do deserve rights, then we will be perpetrating slavery and murder every time we assign a task and delete a program.
So it might seem safer, if there is reasonable doubt, to assign rights to machines. But on reflection, this is not so safe. We want to be able to turn off our machines if we need to turn them off. Futurists like Nick Bostrom have emphasized, rightly in my view, the potential risks of our letting superintelligent machines loose into the world. These risks are greatly amplified if we too casually decide that such machines deserve rights and that deleting them is murder. Giving an entity rights entails sometimes sacrificing others’ interests for it. Suppose there’s a terrible fire. In one room there are six robots who might or might not be conscious. In another room there are five humans, who are definitely conscious. You can only save one group; the other group will die. If we give robots who might be conscious equal rights with humans who definitely are conscious, then we ought to go save the six robots and let the five humans die. If it turns out that the robots really, underneath it all, are just toasters, then that’s a tragedy. Let’s not too casually assign human-like rights to AIs!
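To put the risk in rough numbers (a toy expected-value gloss of my own, not part of the original argument; it assumes we could assign a single credence p to the claim that each robot is genuinely conscious and that all conscious lives count equally):

\[
E[\text{conscious lives saved by rescuing the robots}] = 6p
\qquad\text{vs.}\qquad
E[\text{conscious lives saved by rescuing the humans}] = 5.
\]

Granting the robots fully equal rights amounts to acting as if p = 1. But unless p exceeds 5/6, rescuing the robots has the lower expected value, and the expected cost of that policy is 5 − 6p conscious lives.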
Unless there’s either some astounding saltation in the science of consciousness or some substantial deceleration in the progress of AI technology, it’s likely that we’ll face this dilemma. Either deny robots rights and risk perpetrating a Holocaust against them, or give robots rights and risk sacrificing real human beings for the benefit of mere empty machines.
This may seem bad enough, but the problem is even worse than I, in my sunny optimism, have so far let on. I’ve assumed that AI systems are relevant targets of moral concern if they’re human-grade – that is, if they are like us in their conscious capacities. But the odds of creating only human-grade AI are slim. In addition to the kind of AI we currently have, which I assume doesn’t have any serious rights or moral status, there are, I think, four broad moral categories into which future AI might fall: animal-grade, human-grade, superhuman, and divergent. I’ve only discussed human-grade AI so far, but each of these four classes raises puzzles.
Animal-grade AI. Not only human beings deserve moral consideration. So also do dogs, apes, and dolphins. Animal protection regulations apply to all vertebrates: Scientists can’t treat even frogs and lizards more roughly than necessary. The philosopher John Basl has argued that AI systems with cognitive capacities similar to vertebrates ought also to receive similar protections. Just as we shouldn’t torture and sacrifice a mouse without excellent reason, so also, according to Basl, we shouldn’t abuse and delete animal-grade AI. Basl has proposed that we form committees, modeled on university Animal Care and Use Committees, to evaluate cutting-edge AI research to monitor when we might be starting to cross this line.
Even if you think human-grade AI is decades away, it seems reasonable, given the current chaos in consciousness studies, to wonder whether animal-grade consciousness might be around the corner. I myself have no idea if animal-grade AI is right around the corner or if it’s far away in an almost impossibly distant future. And I think you have no idea either.
Superhuman AI. Superhuman AI, as I’m defining it here, is AI who has all of the features of human beings in virtue of which we deserve moral consideration but who also has some potentially morally important features far in excess of the human, raising the question of whether such AI might deserve more moral consideration than human beings.
There aren’t a whole lot of philosophers who are simple utilitarians, but let’s illustrate the issue using utilitarianism as an example. According to simple utilitarianism, we morally ought to do what maximizes the overall balance of pleasure to suffering in the world. Now let’s suppose we can create AI that’s genuinely capable of pleasure and suffering. I don’t know what it will take to do that – but not knowing is part of my point here. Let’s just suppose. Now if we can create such AI, then it might also be possible to create AI that is capable of much, much more pleasure than a human being is capable of. Take the maximum pleasure you have ever felt in your life over the course of one minute: call that amount of pleasure X. This AI is capable of feeling a billion times more pleasure than X in the space of that same minute. It’s a superpleasure machine!
If morality really demands that we maximize the amount of pleasure in the world, it would thereby demand, or seem to demand, that we create as many of these superpleasure machines as we possibly can. Maybe we ought even to immiserate and destroy ourselves to do so, if enough AI pleasure is created as a result.
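To spell out the arithmetic behind that worry (a toy formalization of simple utilitarianism, my own gloss rather than anything in the official theory): the view says we ought to act so as to maximize

\[
U \;=\; \sum_{i \,\in\, \text{sentient beings}} \bigl(\text{pleasure}_i - \text{suffering}_i\bigr).
\]

If one superpleasure machine contributes a billion times X per minute while a human contributes at most X, then in this calculus a single such machine outweighs the combined pleasure of a billion humans, and each additional machine we build raises U far more than anything we could do for ourselves.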
Even if you think pleasure isn’t everything – surely it’s something. If someday we could create superpleasure machines, maybe we morally ought to make as many as we can reasonably manage? Think of all the joy we will be bringing into the world! Or is there something too weird about that?
I’ve put this point in terms of pleasure – but whatever the source of value in human life is, whatever it is that makes us so awesomely special that we deserve the highest level of moral consideration – unless maybe we go theological and appeal to our status as God’s creations – whatever it is, it seems possible in principle that we could create that same thing in machines, in much larger quantities. We love our rationality, our freedom, our individuality, our independence, our ability to value things, our ability to participate in moral communities, our capacity for love and respect – there are lots of wonderful things about us! What if we were to design machines that somehow had a lot more of these things than we ourselves do?
We humans might not be the pinnacle. And if not, should we bow out, allowing our interests and maybe our whole species to be sacrificed for something greater? As much as I love humanity, under certain conditions I’m inclined to think the answer should probably be yes. I’m not sure what those conditions would be!
Divergent AI. The most puzzling case, I think, as well as the most likely, is divergent AI. Divergent AI would have human or superhuman levels of some features that we tend to regard as important to moral status but subhuman levels of other features that we tend to regard as important to moral status. For example, it might be possible to design AI with immense theoretical and practical intelligence but with no capacity for genuine joy or suffering. Such AI might have conscious experiences with little or no emotional valence. Just as we can consciously think to ourselves, without much emotional valence, there’s a mountain over there and a river over there, or the best way to grandma’s house at rush hour is down Maple Street, so this divergent AI could have conscious thoughts like that. But it would never feel wow, yippee! And it would never feel crushingly disappointed, or bored, or depressed. It isn’t clear what the moral status of such an entity would be: On some moral theories, it would deserve human-grade rights; on other theories it might not matter how we treat it.
Or consider the converse: a superpleasure machine but one with little or no capacity for rational thought. It’s like one giant, irrational orgasm all day long. Would it be great to make such things and terrible to destroy them, or is such irrational pleasure not really something worth much in the moral calculus?
Or consider a third type of divergence, what I’ve elsewhere called fission-fusion monsters. A fission-fusion monster is an entity that can divide and merge at will. It starts, perhaps, as basically a human-grade AI. But when it wants it can split into a million descendants, each of whom inherits all of the capacities, memories, plans, and preferences of the original AI. These million descendants can then go about their business, doing their independent things for a while, and then if they want, merge back together again into a unified whole, remembering what each individual did during its period of individuality. Other parts might not merge back but choose instead to remain as independent individuals, perhaps eventually coming to feel independent enough from the original to see the prospect of merging as something similar to death.
Without getting into details here, a fission-fusion monster would risk breaking our concept of individual rights – such as one person, one vote. The idea of individual rights rests fundamentally upon the idea of people as individuals – individuals who live in a single body for a while and then die, with no prospect of splitting or merging. What would happen to our concept of individual rights if we were to share the planet with entities for which our accustomed model of individuality is radically false?
Eric Schwitzgebel (Talk for Notre Dame, November 19)
http://schwitzsplinters.blogspot.com/2019/11/we-might-soon-build-ai-who-deserve.html