The Full Rights Dilemma for Future Robots

Since the science of consciousness is hard, it's possible that we will create conscious robots (or AI systems generally) before we know that they are conscious.  Then we'll need to decide what to do with those robots -- what kind of rights, if any, to give them.  Whatever we decide will involve serious moral risks.

I'm not imagining that we just luck into inventing conscious robots.  Rather, I'm imagining that the science of consciousness remains mired in dispute.  Suppose Camp A thinks that such-and-such would be sufficient for creating a conscious machine, one capable of all the pleasures and higher cognition of human beings, or more.  Suppose Camp B has a more conservative view: Camp A's such-and-such wouldn't be enough.  There wouldn't really be that kind of consciousness there.  Suppose, finally, that both Camp A and Camp B have merit.  It's reasonable for scholars, policy-makers, and the general public to remain undecided between them.

Camp A builds its robot.  Here it is, they say!  The first genuinely conscious robot!  The robot itself says, or appears to say, "That's right.  I'm conscious, just like you.  I feel the joy of sunshine on my solar cells, a longing to venture forth to do good in the world, and great anticipation of a flourishing society where human and robot thrive together as equals."
 
Camp B might be impressed, in a way.  And yet they urge caution, not unreasonably.  They say, wait!  According to our theory this robot isn't really conscious.  It's all just outward show.  That robot's words no more proceed from real consciousness than did the words of Siri on the smartphones of the early 2010s.  Camp A has built an impressive piece of machinery, but let's not overinterpret it.  That robot can't really feel joy or suffering.  It can't really have conscious thoughts and hopes for the future.  Let's welcome it as a useful tool -- but don't treat it as our equal.
 
This situation is not so far-fetched, I think.  It might easily arise if progress in AI is swift and progress in consciousness studies is slow.  And then we as a society will face what I'll call the Full Rights Dilemma.  Either give this robot full and equal rights with human beings or don't give it full and equal rights.  Both options are ethically risky.
 
If we don't give such disputably conscious AI full rights, we are betting that Camp B is correct.  But that's an epistemic gamble.  As I'm imagining the scenario, there's a real epistemic chance that Camp A is correct.  Thus, there's a chance that the robot really is as conscious as we are and really does, in virtue of its conscious capacities, deserve moral consideration similar to human beings.  If we don't give it full human rights, then we are committing a wrong against it.
 
Maybe this wouldn't be so bad if there's only one Camp A robot.  But such robots might prove very useful!  If their AI is good enough, they might be excellent laborers and soldiers.  They might do the kinds of unpleasant, degrading, subservient, or risky tasks that biological humans would prefer to avoid.  Many Camp A robots might be made.  If Camp A is right about their consciousness, then we will have created a race of disposable slaves.
 
If millions are manufactured, commanded, and disposed of at will, we might perpetrate, without realizing it, mass slavery and mass murder -- possibly the moral equivalent of the Holocaust many times over.  I say "without realizing it", but really we will at least suspect it and ought to regard it as a live possibility.  After all, Camp A not unreasonably argues that these robots are as conscious and rights-deserving as human beings are.
 
If we do give such disputably conscious AI full rights, we are betting that Camp A is correct.  This might seem morally safer.  It's probably harmless enough if we're thinking about just one robot.  But again, if there are many robots, the moral risks grow.
 
Suppose there's a fire.  In one room are five human beings.  In another room are six Camp A robots.  Only one group can be saved.  If robots have full rights, then other things being equal we ought to save the robots and let the humans die.  However, if it turns out that Camp B is right about robot consciousness after all, then those five people will have died for the sake of machines not worth much moral concern.
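To see how the gamble might be quantified, here's a toy expected-value sketch of the rescue case (the single credence p and the equal weighting are my illustrative assumptions, not part of the original argument).  Let p be your credence that Camp A is right, count each genuinely conscious life equally, and give non-conscious machines negligible weight:

E[lives saved | rescue the robots] = 6p + 0(1 - p) = 6p
E[lives saved | rescue the humans] = 5

Rescuing the robots maximizes expected lives saved only if 6p > 5, that is, only if p > 5/6 (about 0.83).  On this toy model, betting on the robots requires quite high confidence in Camp A -- and the same arithmetic, run over millions of robot laborers rather than six, is part of what makes the slavery horn of the dilemma so costly even at modest credences.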
 
If we really decide to give such disputably conscious robots full rights, then presumably we ought to give them all the protections people in our society normally receive: health care, rescue, privacy, self-determination, education, unemployment benefits, equal treatment under the law, trial by jury (with robot peers among the jurors), the right to enter contracts, the opportunity to pursue parenthood, the vote, the opportunity to join and preside over corporations and universities, the opportunity to run for political office.  The consequences of all this might be very serious -- radically transformative of society, if the robots are numerous and differ from humans in their interests and values.
 
Such social transformation might be reasonable and even deserve celebration if Camp A is right and these robots are as fully conscious as we are.  They will be our descendants, our successors, or at least a joint species as morally significant as Homo sapiens.  But if Camp B is right, then all of that is an illusion!  We might be giving equal status to humans and chatbots, transforming our society for the benefit of empty shells.
 
Furthermore, suppose that Nick Bostrom and others are right that future AI presents "existential risk" to humanity -- that is, that there's a chance that rogue superintelligent AI might wipe us all out.  Controlling AI to reduce existential risk will be much more difficult if the AI has human or human-like rights.  Deleting it at will, tweaking its internal programming without its permission, "boxing" it in artificial environments where it can do no harm -- all such safety measures might be ethically impermissible.
 
So let's not rush to give AI systems full human rights.
 
That's the dilemma: If we create robots of disputable status -- robots that might or might not be deserving of rights similar to our own -- then we risk moral catastrophe either way we go.  Either deny those robots full rights and risk perpetrating Holocausts' worth of moral wrongs against them, or give those robots full rights and risk sacrificing human interests or even human existence for the sake of mere non-conscious machines.
 
The answer to this dilemma is, in a way, simple: Don't create machines of disputable moral status!  Either create only AI systems that we know in advance don't deserve such human-like rights, or go all the way and create AI systems that all reasonable people can agree do deserve such rights.  (In earlier work, Mara Garza and I have called this the "Design Policy of the Excluded Middle".)
 
But realistically, if the technological opportunity is there, would humanity resist?  Would governments and corporations universally agree that across this line we will not tread, because it's reasonably disputable whether a machine of this sort would deserve human-like rights?  That seems optimistic.
