Should we be worried about the future of artificial intelligence? Some of the biggest names in science and technology have recently made headlines by suggesting that yes, we should. Microsoft founder Bill Gates has joined this school of thought, saying he is “in the camp that is concerned about super intelligence,” and predicting that in mere decades, AI could become “strong enough to be a concern.”
Summoning the Demon
Gates’ comments echo what several other forward thinkers have said about this topic. Elon Musk, founder of SpaceX, Tesla Motors, SolarCity, and PayPal, has called AI our “biggest existential threat.” For him, the development of computers that think like humans has the potential to play out like a sci-fi movie. If we lost control of our creation, it could be a case of “summoning the demon.” Despite his concerns, Musk has invested in AI companies, including DeepMind and Vicarious. This, he says, is because he hopes to “keep an eye on what’s going on.” Musk has also stressed the need for stricter regulation in this area, “just to make sure that we don’t do something very foolish.”
Yet the starkest words of warning have come from physicist Stephen Hawking. The renowned scientist, who relies almost entirely on machines for his basic movement and speech, has claimed that AI “could spell the end of the human race.” If we can create independent sentient life, then, according to Hawking, it would eventually “take off on its own, and redesign itself.”
The Ethical Dilemma
Clearly, a future using AI in daily life would pose many new challenges, and some vendors are already weighing the ethical implications of such technologies. When DeepMind was sold to Google, for example, its founders insisted that the technology never be used for military purposes. Google itself has established an ethics committee to consider such issues. More recently, the Brookings Institution has made a case for a Federal Robotics Commission to deal with the “harms robotics enables.”
But the idea is nothing new. People have long been wary of technological development. In Jewish folklore, a 16th-century Bohemian rabbi is said to have created a golem, which he destroyed for fear that it could not be controlled. The anxiety that we might inadvertently “summon a demon” through technology dates back at least to 1818, when Mary Shelley’s novel Frankenstein was published. Written at the dawn of the industrial revolution, the book captured the fear of “playing God” that was prevalent at the time. Perhaps a more salient question is this: should we have forfeited the benefits to welfare, medicine, and education that we have enjoyed since that era in order to address Mary Shelley’s fear? And if the answer is no, should we forfeit the future benefits of AI because of the concerns of some of our contemporaries?
Can a Computer Have a Soul?
Complicating the debate is the question of how AI differs from types of automation that have gone before. If a machine can think like a person, does that make it a person? And most fundamentally, can a machine have a soul?
Of course, if the answer were yes, then we really would be playing God, and this is at odds with many of our culture’s most deep-seated religious beliefs. Yet AI makes the questions of what a soul is and where it resides relevant and pressing. The ancient Mesopotamians thought the soul was located in the liver, the largest organ in the body. During the European Middle Ages, the soul was considered to rest in the heart, because when the heart stopped, death quickly followed. Nowadays we tend to think of the soul as residing in the brain, giving us our sense of self. Is it any wonder, then, that a link is being drawn between creating artificial intelligence and creating artificial souls? While many questions remain unanswered, it seems doubtful that we are really capable of making a sentient being out of sand.
The Real Threat
Along with these religious concerns, many suspect there will be insurmountable technological limitations. Computer scientist and SwiftKey CTO Ben Medlock argues that AI could not accurately replicate the brain in the foreseeable future, because “we dramatically underestimate the complexity of the natural world and the human mind.”
So, if a “machine with a soul” capable of turning on its human creators is still only the stuff of sci-fi, do we need to be concerned? Yes, says computer scientist Andrew Ng, but not for our lives: for our jobs. If cars can drive themselves, and if we can each have our very own robot butler, then the demand for human labor is going to change dramatically. In the end, the greatest threat to humanity from AI may be that it kills us with kindness.