From the Roomba that vacuums the carpet, to the robots that build our cars, to the systems that drive them, the expanding use of artificial intelligence is set to revolutionize the whole of life. Yet in 2014, physicist Stephen Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.” Is this a baseless fear? Or should we act before some new development goes rogue?
Every day artificial intelligence, or AI, is changing our lives for the better. Many applications not only save time and effort but also lead to better outcomes. Britain’s Astronomer Royal, Martin Rees, is cofounder of the Center for the Study of Existential Risk, where AI is one of the many aspects of our world under evaluation.
Rees: “We’ve benefited hugely from information technology. The fact that someone in the middle of Africa has access to the world’s information, and we’re all in contact with each other, is a wonderful thing. And of course, if we look further ahead, we wonder about whether AI will achieve anything approaching human intelligence, and that raises a whole new set of questions. Machines and robots are taking over more and more segments [of the labor market]. And of course, it’s not just factory work they’re taking over; they’re taking over many professional jobs: routine accountancy, medical diagnostics and surgery, and legal work will be taken over.”
Most of these areas pose no concern at the moment. Yet there is the possibility of existential disaster. Cambridge University professor Seán Ó hÉigeartaigh studies the pros and cons of AI.
Hulme: “According to one of your colleagues, Nick Bostrom, ‘the transition to the machine intelligence era looks like a momentous event and one associated with significant existential risk.’ If it’s true, what is the risk, and is it existential?”
Ó hÉigeartaigh: “When Nick Bostrom talks about the transition to machine intelligence and existential risk, he’s not speaking about the artificial intelligence systems that we have in the world today, or tomorrow, or even next year. He’s looking forward to the advent of what we might term ‘artificial general intelligence,’ the kind of general reasoning, problem-solving intelligence that allows us to dominate our environment in the way that the human species has. We’re currently nowhere near that, and experts are divided on how long it will take us to achieve that. However, were we to achieve this, it would undoubtedly change the world in more ways than we can imagine. If we were to create intelligence that was equivalent or perhaps even greater, then it would undoubtedly change the world even more, and it would be imprudent of us to assume that that would go very well for us.”
Along with 25 other experts, Ó hÉigeartaigh published a report in February 2018, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.”
Hulme: “You say this: ‘Artificial intelligence is a game changer.’ So what’s been the reaction from governments, corporations, individuals?”
Ó hÉigeartaigh: “The reaction has actually been very positive. We were very pleased. We had an initial concern that we would put out this report and people would be concerned that maybe it’s scaremongering or unnecessary concern. That was very much not our intention with the report. One of the main recommendations that we were trying to bring across in the report is that with artificial intelligence developing quite quickly, and in particular artificial intelligence being applied to so many different safety-critical systems in our world, this means that we need to start working together more closely between machine-learning experts, who really understand the state-of-the-art of the technology; policymakers and lawyers, who need to legislate around it and who need to guide governmental strategy around it; and the civil infrastructure people and social scientists, who need to think about the impact of it.”
One of the concerns is that if machines are allowed to learn and achieve artificial general intelligence, one or more of them could go rogue. The stuff of science fiction, you may say. But we all know that once something becomes possible, even if it remains ethically undesirable, we can’t prevent its eventual use by someone.
Commenting on this possibility, author William Poundstone writes, “There is going to be interest in creating machines with will, whose interests are not our own. . . . I think the notion of Frankensteinian AI, which turns on its creators, is something worth taking seriously.”
Is there a time to unitedly say no to some kinds of development? This is the first of two questions that come to mind from a biblical perspective. They’re not “religious” in the sense that makes people recoil from anything tied to belief; they’re reality-based, because they speak to our physical existence and survival.
This first question relates to a time when human beings had achieved much technologically through common purpose and common language. In early urbanized Babylonian society, an account of the building of a high tower, which would symbolically challenge God’s domain, ends with the forestalling of further development because “this is what they begin to do; now nothing that they propose to do will be withheld from them” (Genesis 11:6, emphasis added). Left unchecked, those early builders would recognize no limits. They could have taken a different path and chosen not to “play God.” But as a result of their overreaching, outside control had to be asserted; the people were scattered and their language confused.
The second question that comes to mind is how to determine right from wrong in human endeavor. Is there a universal standard by which we can distinguish right from wrong action? Today we can readily admit that the problem with technologies that can be used for good or ill (so-called dual-use) arises from the selfish side of human nature.
Such a global code lies within the law of love, defined in the Bible as love toward God as Creator and toward fellow human beings as neighbors. Artificial intelligence can certainly augment human intelligence, but only the mind of God at work in humanity can provide us with spiritual intelligence so that we live ethically and at peace with all.
David Hulme
http://www.vision.org/insight-video-artificial-intelligence-rogue-robots-8598
Rory Cellan-Jones, “Stephen Hawking Warns Artificial Intelligence Could End Mankind,” BBC News (December 2, 2014).