
The Ethics Of Algorithms


[Image credit: pixabay.com]

Strictly speaking, an algorithm is not a moral agent. It’s just a step-by-step, foolproof, mechanical procedure for computing some mathematical function; think of long division. From this perspective, it may seem a bit like a category mistake, or a contradiction in terms, to talk about the ethics or morality of algorithms.

But what’s really relevant isn’t the strict mathematical notion of an algorithm, but a broader notion that is roughly coextensive with whatever it is that computers do. And as it happens, we are delegating more and more morally fraught decisions to computers and their algorithms. In the strict sense of the term ‘algorithm’, there is no algorithm that would allow us to precisely compute the value of a human life in a mechanical, step-by-step, foolproof manner. But that doesn’t stop us from programming a computer to assign weights to various factors (say, age, income, race, or health status), perform some calculations, and spit out a number that tells us whether or not we ought to give a person a potentially life-saving treatment. Many find the prospect of such a thing truly alarming. It’s hard to blame them for that. After all, how many of us would be willing to trust our own lives to a computer algorithm?

But in fact, we do so all the time—every time we fly on an airplane, for example. The air traffic control system is basically the domain of the computer. It’s a domain in which human pilots are called upon to act only on an as-needed basis. And thanks to the wonders of modern computation, human intervention is, in fact, rarely needed. The remarkable thing is that air travel is much safer as a result than it otherwise would be.

Now one might object that air traffic control is a relatively easy case. Basically, all the computers have to do is to keep the planes far enough apart to get them from point A to point B without bumping into each other. But we are approaching new frontiers of algorithmic decision making that are both much more computationally difficult and morally fraught than air traffic control. Think of the coming onslaught of self-driving cars. All those crowded city streets, with cars and pedestrians and cyclists traveling every which way in much closer proximity to each other than planes ever get. Do we really want computers to decide when a car should swerve and kill its passenger in order to save a pedestrian?  

Unfortunately, I doubt that it matters what any of us want. The day is coming, and fast, when the degree of computer automation in what we might call the ground traffic control system will rival or exceed the degree of automation in the air traffic control system. Some will say that the day is coming much too fast and in too many spheres. Computers are already in almost complete control of the stock market. They’re gradually taking over medical diagnosis. Some even want to turn sentencing decisions over to them. Perhaps things are getting out of control.   

But let’s not forget that we humans aren’t exactly infallible decision makers ourselves. Think of the mess that our human-centered sentencing system has become. Given the mess that judges have made, why should we trust them over a well-programmed computer? I grant that human judges have something that computers lack. Judges are living, breathing human beings, with a sense of duty, responsibility, and empathy. But those very same judges can also be full of racial biases, hidden political agendas, and overblown emotional reactions. With a computer, we just encode the sentencing guidelines specified by the law into the algorithm and let the computer decide without fear or favoritism.

If only it were really that simple. Unfortunately, it is not. Until we develop fully autonomous, fully self-programmed computers, we’re stuck with human beings, with our racial biases and hidden agendas, doing the bulk of the programming. And although some programmers may think of themselves as young Mr. Spocks—all logic and no emotion—in the end they are just humans too, unfortunately just as prone to bias and blinders as the rest of us. Nor can we easily train them to simply avoid writing their own biases into their algorithms. Most humans aren’t even aware of their biases.

In the old days, the days of so-called good old-fashioned AI, this might not have been such a big deal. Even if we couldn’t eliminate the biases from the programmers, we could still test, debug, and tweak their programs. By contrast, you can’t rewrite a judge’s neural code when you discover he’s got a thing against black people. Looked at that way, who wouldn’t still take the computer over the judge any day?

Unfortunately, good old-fashioned AI programming (GOFAI), the kind where you had to explicitly program every single line of code in order to stuff humanlike knowledge into the computer, sort of by “brute force,” is quickly becoming a thing of the past. GOFAI is rapidly giving way to machine learning algorithms in many, many spheres. With this kind of computational architecture, instead of trying to stuff the knowledge into the computer line by line, you basically give the computer a problem and let it figure out how to solve it on its own. In particular, you give the machine a bunch of training data, and it tries to come up with the right answer. If it gets the wrong answer, you (or the world) give it an error signal in response. The network then spontaneously adjusts its weights, basically without human intervention, until it gets the right answers on all the data in the training set. Then we turn it loose on the world to confront brand new instances of the problem category not in the original training set. It’s a beautiful and powerful technique.
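
To make that picture a bit more concrete, here is a minimal, purely illustrative sketch of the learning loop just described: a toy model guesses, receives an error signal when it guesses wrong, and adjusts its own weights, with no human spelling out the rules. The data and the model (a single artificial “neuron” trained with a perceptron-style update) are invented for illustration, not drawn from any real system.

```python
# A minimal sketch of the learning loop described above: instead of
# hand-coding rules (GOFAI-style), we show a tiny model labeled examples,
# let it guess, and nudge its weights whenever it gets an error signal.
# The data and model here are hypothetical toys.

import random

# Toy training set: (features, correct label) pairs.
training_data = [
    ([0.2, 0.9], 1),
    ([0.8, 0.1], 0),
    ([0.3, 0.7], 1),
    ([0.9, 0.3], 0),
]

weights = [random.uniform(-0.5, 0.5) for _ in range(2)]
bias = 0.0
learning_rate = 0.1

def predict(features):
    """Threshold a weighted sum -- a single artificial 'neuron'."""
    total = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if total > 0 else 0

for epoch in range(100):
    errors = 0
    for features, label in training_data:
        error = label - predict(features)  # the "error signal"
        if error != 0:
            errors += 1
            # Adjust weights toward the right answer -- no human tells
            # the model *how* to represent the problem.
            for i, x in enumerate(features):
                weights[i] += learning_rate * error * x
            bias += learning_rate * error
    if errors == 0:  # the model now fits the whole training set
        break

print(weights, bias)  # whatever internal solution the model settled on
```

Notice that the final weights are simply whatever numbers happened to make the errors go away on the training data; nothing in the loop guarantees they reflect anything we would endorse.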

But now suppose you’ve got some tech bros in Silicon Valley training a machine to do, say, face recognition. Maybe they pick a bunch of their friends to be the training set. We can be pretty sure the training set won’t be representative of the population at large! This means that, at the very least, if we don’t want to introduce biases into the network’s representation of the problem domain, we have to make sure to use statistically sound methods to design our training sets. But that, as we discuss in the episode, is much easier said than done, at least in the general case. That’s because the only data reasonably available to us, in, for example, the case of sentencing decisions, may be riddled with the effects of a history of bias.
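
For concreteness, here is one very rough sketch of the kind of statistical sanity check that paragraph gestures at: comparing the group composition of a hypothetical training set against the population the system will actually be used on. The groups, numbers, and threshold are all invented for illustration; a real audit would be far more involved.

```python
# A rough, purely illustrative check of training-set representativeness.
# The groups, counts, and population shares below are invented.

from collections import Counter

# Hypothetical training set: who the model learned from.
training_groups = ["A"] * 90 + ["B"] * 10

# Hypothetical population: who the model will actually be used on.
population_share = {"A": 0.60, "B": 0.40}

counts = Counter(training_groups)
total = sum(counts.values())

for group, target in population_share.items():
    actual = counts.get(group, 0) / total
    if actual < 0.5 * target:  # crude threshold for "badly underrepresented"
        print(f"Group {group}: {actual:.0%} of training data vs "
              f"{target:.0%} of the population -- likely to be poorly served")
```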

But there’s an even harder problem. These networks can sometimes be inscrutable black boxes. That’s because the network basically decides on its own how to partition and represent the data and what weights to assign to what factors. And these “decisions” may be totally opaque to their human “teachers.” That means that if something goes wrong, we can’t even get in there and debug and tweak the network, as with old-fashioned AI. At least with those systems, we knew exactly what the algorithm was supposed to be doing.

Now I don’t want to sound like a luddite. I recognize the decided upsides of moving from human decision making to automated decision making. And even though I still have a soft spot for old-style knowledge representation from the heyday of GOFAI, I appreciate the amazing success of newfangled machine learning architectures. Still, I’m not really in a hurry to farm out too much of our moral agency to machines. Before rushing pell-mell into the breach, we need to slow down and think this through much more systematically.

Kenneth Taylor

https://www.philosophytalk.org/blog/ethics-algorithms
