To build a slippery slope argument concerning the presence of consciousness, do this:
* First, take some obviously conscious [or non-conscious] system as an anchor point -- such as an ordinary adult human being (clearly conscious) or an ordinary proton (obviously(?) non-conscious).
* Second, imagine a series of small changes, at the far end of which is a system that some people might view as a case of the opposite sort. For example, subtract one molecule at a time from the human until you have only one proton left. (Note: This is a toy example; for more attractive versions of the argument, see below.)
* Third, highlight the implausibility of the idea that consciousness suddenly winks out [winks in] at any one of these little steps.
* Finally, conclude that the disputable system at the end of the series is also conscious [non-conscious].
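The skeleton of the argument can be made vivid with a toy model. Below is a minimal sketch (Python, and entirely my own stipulation rather than anything in the argument itself): treat the series as steps 0 through N and consciousness as a strict yes/no predicate anchored True at step 0 and False at step N. On those assumptions, some single step must flip the predicate, and that forced flip is exactly what the third move invites us to find implausible.

```python
# Toy sketch (an illustration, not part of the original argument):
# model the series as steps 0..N and consciousness as a strict
# yes/no predicate, True at step 0 and False at step N.

def forced_transition(conscious, n_steps):
    """Return the k where conscious(k) is True but conscious(k + 1) is False.

    Given the anchors -- conscious(0) True, conscious(n_steps) False --
    some such k must exist along the finite series.
    """
    for k in range(n_steps):
        if conscious(k) and not conscious(k + 1):
            return k
    raise ValueError("anchors not satisfied: no transition found")

# A stipulated toy predicate: consciousness vanishes once more than 60
# of 100 "molecules" have been removed. The number 60 is arbitrary --
# and that arbitrariness is the point the argument presses on.
def toy_conscious(step):
    return step <= 60

print(forced_transition(toy_conscious, 100))  # -> 60: one exact winking-out step
```

Any strict yes/no predicate over such a series must have a step like this; the argument's force comes from how unmotivated any particular choice of step looks.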
Now, slippery slope arguments are generally misleading for vague predicates like "red". Even if we can't finger an exact point of transition from red to non-red in a series of shades from red to blue, it doesn't follow that blue is red. Red is a vague predicate, so it ought to admit of vague, in-betweenish cases. (There are some fun logical puzzles about vague predicates, of course, but I trust that our community of capable logicians will eventually sort that stuff out.)
However, unlike redness, the presence or absence of consciousness seems to be a discrete all-or-nothing affair, which makes slippery-slope arguments more tempting. As John Searle says somewhere (hm... where?), having consciousness is like having money: You can have a little of it or a lot of it -- a penny or a million bucks -- but there's a discrete difference between having only a little and having not a single cent's worth. Consider sensory experience, for example. You can have a richly detailed visual field, or you can have an impoverished visual field, but there is, or at least seems to be, a discrete difference between having a tiny wisp of sensory experience (e.g., a brief gray dot, the sensory equivalent of a penny) and having no sensory experience at all. We normally think of subjects of experience as discrete, countable entities. Except as a joke, most of us wouldn't say that there are two-and-a-half conscious entities in the room or that an entity has 3/8 of a stream of experience. An entity either is a subject of conscious experience (however limited their experience is) or has no conscious experience at all.
Consider these three familiar slippery slopes.
(1.) Across the animal kingdom. We normally assume that humans, dogs, and apes are genuinely, richly phenomenally conscious. We can imagine a series of less and less sophisticated animals all the way down to the simplest animals or even down into unicellular life. It doesn't seem that there's a plausible place to draw a bright line, on one side of which the animals are conscious and on the other side of which they are not. (I did once hear an ethologist suggest that the line was exactly between toads (conscious) and frogs (non-conscious); but even if you accept that, we can construct a fine-grained toad-frog series.)
(2.) Across human development. The fertilized egg is presumably not conscious; the cute baby presumably is conscious. The moment of birth is important -- but it's not clear that it's so neurologically important that it is the bright line between an entirely non-conscious fetus and a conscious baby. Nor does there seem to be any other obvious sharp transition point.
(3.) Neural replacement. Tom Cuda and David Chalmers imagine replacing someone's biological neurons one by one with functionally equivalent artificial neurons. A sudden wink-out between N and N+1 replaced neurons doesn't seem intuitively plausible. (Nor does it seem intuitively plausible that there's a gradual fading away of consciousness while outward behavior, such as verbal reports, stays the same.) Cuda and Chalmers conclude that swapping out biological neurons for functionally similar artificial neurons would preserve consciousness.
Less familiar, but potentially just as troubling, are group consciousness cases. I've argued, for example, that Giulio Tononi's influential Integrated Information Theory of consciousness runs into trouble in employing a threshold across a slippery slope (e.g., here and Section 2 here). Here the slippery slope isn't between zero and one conscious subjects, but rather between one and N subjects (N > 1).
(4.) Group consciousness. At one end, anchor with N discretely distinct conscious entities and presumably no additional stream of consciousness at the group level. At the other end, anchor with a single conscious entity with parts none of which, presumably, is an individual subject of experience. Any particular way of making this more concrete will have some tricky assumptions, but we might suppose an Ann Leckie "ancillary" case with a hundred humanoid AIs in contact with a central computer on a ship. As the "distinct entities" anchor, imagine that the AIs are as independent as ordinary human beings are, and the central computer is just a communications relay. Intermediate steps involve more and more information transfer and central influence or control. The anchor case on the other end is one in which the humanoid AIs are just individually nonconscious limbs of a single fully integrated system (though spatially discontinuous). Alternatively, if you like your thought experiments brainy, anchor on one end with normally brained humans, then construct a series in which these brains are slowly neurally wired together and perhaps shrunk, until there's a single integrated brain again as the anchor on the other end.
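To see how a threshold theory behaves on this slope, here is another minimal sketch (again Python, and again a toy of my own: a single stipulated integration score and cutoff, not Tononi's actual phi calculation). An IIT-style exclusion rule counts subjects: below the cutoff, each part is its own subject; at or above it, only the maximally integrated whole is. A tiny nudge in integration then collapses a hundred subjects into one.

```python
# Toy threshold model (my stipulation; not Tononi's actual phi math):
# N subsystems share a tunable "integration" level in [0, 1]. An
# IIT-style exclusion rule determines how many conscious subjects
# exist at each level.

N = 100            # e.g., a hundred humanoid AIs on the ship
CUTOFF = 0.5       # stipulated stand-in for the theory's threshold

def subject_count(integration):
    """Count distinct conscious subjects at a given integration level."""
    if integration >= CUTOFF:
        return 1   # one group subject; the parts are excluded
    return N       # N individual subjects; no group subject

for level in (0.499, 0.500, 0.501):
    print(level, subject_count(level))
# 0.499 -> 100 subjects; 0.500 -> 1 subject. One tiny step on the
# slope, and ninety-nine streams of experience vanish at once.
```

That discontinuity, writ small, is the sharp-transition worry pressed in the options below.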
Although the group consciousness cases are pretty high-flying as thought experiments, they render the countability issue wonderfully stark. If streams of consciousness really are countably discrete, then you must do one of the following:
(a.) Deny one of the anchors. There was group consciousness all along, perhaps!
(b.) Affirm that there's a sharp transition point at which adding just a single bit's worth of integration suddenly shifts the whole system from N distinct conscious entities to only one conscious entity, despite the seemingly very minor structural difference (as on Tononi's view).
(c.) Try to wiggle out of the sharp transition with some intermediate number between N and 1. Maybe this humanoid winks out first while this other virtually identical humanoid still has a stream of consciousness -- though that's also rather strange and doesn't fully escape the problem.
(d.) Deny that conscious subjects, or streams of conscious experience, really must come in discretely countable packages.
I'm increasingly drawn to (d), though I'm not sure I can quite wrap my head around that possibility yet or fully appreciate its consequences.
Eric Schwitzgebel
passages from: http://schwitzsplinters.blogspot.com/2018/06/slippery-slope-arguments-and-discretely.html