The Heartless System
For Professor Noel Sharkey, the greatest danger posed by AI is not sentience but the lack of it. As warfare, policing and healthcare become increasingly automated and computer-powered, these systems' lack of emotion and empathy could create significant problems.
"Eldercare robotics is being developed quite rapidly in Japan," Sharkey said. "Robots could be greatly beneficial in keeping us out of care homes in our old age, performing many dull duties for us and aiding in tasks that failing memories make difficult. But it is a trade-off. My big concern is that once the robots have been tried and tested, it may be tempting to leave us entirely in their care. Like all humans, the elderly need love and human contact, and this often only comes from visiting carers."
The GeckoSystems CareBot. The future of caring for the elderly?
This lack of empathy could become particularly problematic in the theatre of war. Defence organisations in the US and elsewhere are pouring considerable resources into autonomous technology. "Recent US planning documents show there is a drive towards developing autonomous killing machines," Sharkey said. "There is no way for any AI system to discriminate between a combatant and an innocent. Claims that such a system is coming soon are unsupportable and irresponsible.
“It is likely to accelerate our progress towards a dystopian world in which wars, policing and care of the vulnerable are carried out by technological artefacts that have no possibility of empathy, compassion or understanding.”
If the idea of software-powered killing machines isn't nightmarish enough, then some of science's darker predictions for the future of AI certainly are. As far back as the 1960s, when AI research was still in its earliest stages, the mathematician Irving Good posited the idea that, if a sufficiently advanced form of artificial intelligence were created, it could continue to improve itself in what he termed an 'intelligence explosion'.
While Good's supposition that an 'ultraintelligent' machine would be invented in the 20th century was wide of the mark, his theory exposed an exciting and potentially worrying possibility: that a superior artificial intellect could render human intelligence obsolete.
In 1993, mathematics professor, computer scientist and SF writer Vernor Vinge wrote a celebrated and oft-quoted paper that pointed out the darker implications in Good's thinking. "Good has captured the essence of the runaway, but does not pursue its most disturbing consequences," Vinge wrote. "Any intelligent machine of the sort he describes would not be humankind's 'tool' – any more than humans are the tools of rabbits or robins or chimpanzees."
Vinge's opening abstract was more ominous still: "Within 30 years," he wrote, "we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."
Vinge called this rise of superhuman intelligence the 'singularity', an ever-accelerating feedback loop of technological improvement with potentially unexpected side effects. "When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities - on a still-shorter time scale."
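Vinge's accelerating feedback loop can be made concrete with a toy model (our illustration, not anything from his paper): if each round of improvement arrives in, say, half the time of the previous one, then infinitely many rounds fit inside a finite span of time. The interval figures below are hypothetical, chosen only to echo the 30-year horizon in Vinge's abstract.

```python
# Toy geometric-series model of an 'intelligence explosion': each new
# generation of machine intelligence appears in half the time of the last,
# so the cumulative timeline converges on a finite limit rather than
# stretching out indefinitely.

def time_to_generation(n, first_interval=30.0, speedup=2.0):
    """Years elapsed before generation n appears (sum of a geometric series)."""
    return sum(first_interval / speedup**k for k in range(n))

# With a hypothetical 30-year first step and a 2x speedup per round, the
# series sums towards 30 / (1 - 1/2) = 60 years -- every generation,
# however advanced, arrives before that horizon.
limit = 30.0 / (1 - 1 / 2.0)
print(time_to_generation(1))   # 30.0 years to the first generation
print(time_to_generation(20))  # close to the 60-year limit
```

The numbers are arbitrary; the point is structural. So long as each cycle is faster than the last by a fixed factor, progress piles up against a finite horizon, which is the mathematical intuition behind calling it a 'singularity'.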
Literature and history are littered with bleak predictions and dire warnings of scientific hubris. In the nineteenth century, the invention of the steam engine brought forth predictions of broken necks. In the eighties, emerging interest in nanotechnology sparked fears of 'grey goo', a civilisation-destroying plague of tiny locust-like machines. Most recently, the construction of the Large Hadron Collider prompted rumours that an artificially created black hole could destroy the entire planet.
The end of the world?
Fears of a 'negative singularity', in which the human race is rendered obsolete or enslaved by a superior being of its own creation, follow the same alarmist tradition. That, at least, is the view of figures such as Ray Kurzweil, who foresees a positive outcome for the coming singularity.
According to Kurzweil, the singularity won't arrive as a tidal wave, but as a gradual integration. Just as mobile phones and the internet have revolutionised communication and the spread of information, so will our society absorb future technology. "This will not be an alien invasion of intelligent machines," Kurzweil wrote in the foreword to James Gardner's 2006 book, The Intelligent Universe. "It will be an expression of our own civilisation, as we have always used our technology to extend our physical and mental reach."
Theories Or Fairy Tales?
While it's tempting to conclude the story of artificial intelligence on a note of drama and mystery - with humanity standing on the brink of godlike immortality or technological slavery - the certainty of the singularity's advent is not universally held.
Professor Sharkey contends that greater-than-human computer intelligence may never occur: human brains and computers may be so fundamentally different that the one can never be successfully replicated in the other. "It is often forgotten that the idea of mind or brain as computational is merely an assumption, not a truth," Sharkey said in a 2009 interview with New Scientist. He's particularly suspicious of the predictions of scientific figures such as Moravec and Kurzweil. Their theories are, he argues, "fairy tales".
"Roboticist Hans Moravec says that computer processing speed will eventually overtake that of the human brain and make them our superiors," Sharkey said. "The inventor Ray Kurzweil says humans will merge with machines and live forever by 2045. To me these are just fairy tales. I don't see any sign of it happening. These ideas are based on the assumption that intelligence is computational. It might be, and equally it might not be. My work is on immediate problems in AI, and there is no evidence that machines will ever overtake us or gain sentience."
The Computer Awakes
The story of AI is one of remarkable discoveries and absurd claims, of great leaps forward and abrupt dead ends. What began as a chess algorithm on a piece of paper grew, within a few short years, into an entire field of research, one that went on to spawn important breakthroughs in computer technology and neuroscience.
Yet even now, the first sentient machines, thought to be so imminent in those early years of research, appear to be as remote as they were more than half a century ago. And like artificial sentience, the singularity predicted by scientists such as Ray Kurzweil and Vernor Vinge remains little more than a distant possibility, a mirage forever receding into the future.
Nevertheless, there remains the possibility that, as technology advances and science discovers new insights into the human mind, we may one day see the creation of the first sentient machine. And whatever the outcome of that first awakening - whether we're transfigured by it, as Kurzweil believes, or enslaved, as Bill Joy fears - we should be mindful of one thing: that it all began with a quiet game of chess.