THE title of Tom Chivers’s new book, The AI Does Not Hate You, sounds reassuring. But the full quote, taken from the founder of the Machine Intelligence Research Institute, Eliezer Yudkowsky, continues: “nor does it love you, but you are made out of atoms which it can use for something else”. It is one of several chilling observations cited by Chivers, who found himself in tears during the course of his research, haunted by an extraordinary prophecy from a Google engineer: “I don’t expect your children to die of old age.” Surveys of AI researchers suggest that about one fifth believe that the development of artificial general intelligence — a computer that can do all the mental tasks of humans — would result in existential catastrophe: that is, human extinction. These researchers reckon that machines will reach this capacity within the next 50 years.
Apocalyptic, dystopian visions of Artificial Intelligence have long haunted fiction. One trope is its unnoticed adoption in our homes, one of several dark themes in the recent BBC1 series Years and Years. Of course, the machines are already installed: heating systems that know how long a house takes to heat up; dishwashers that know how dirty plates are; and, of course, search engines that think they know what people would like to buy next. New technology is sold with the idea that it enables people, freed from the menial tasks of mental arithmetic and memorising, to be more intelligent, more capable. The opposite might be true, however. Take the use of a satnav. It clearly makes a journey simpler; but, by removing the exercise of memory, landmark-spotting, and direction-finding, does it also make the driver simpler? This might seem a trivial example, but the education world is grappling with this issue on a grand scale: what price imperfect, adolescent thought when a finished, computer-generated essay can be downloaded instantly?
Perhaps this is too gloomily Luddite. Intelligent machines have the capacity to save humankind as well as destroy it. Smart motorways that regulate speed to reduce pollution; farm fertilising systems that use GPS to maximise yield; storm predictors and earthquake early-warning systems — benign examples of AI abound. What is troubling is the lack of a coherent ethical framework. The flaw in this brave new world is that AI depends on the breadth of vision and farsightedness of its programmers. The Fantasia brooms in Chivers’s example (page 17) suggest what might go wrong if even one eventuality is overlooked. Programmers are bright people; but AI is designed for complex scenarios, and these are precisely the circumstances in which the unforeseen happens. Enthusiasm for AI, therefore, might be tempered by the knowledge that some of its keenest proponents can be found in the armaments industry. The Bishop of Oxford’s description of AI as a mirror counsels us not to expect more of AI than we expect of ourselves. The writer of the Epistle of James might be prepared to judge AI by its works. But, when all these terabytes are being crashed together, is it too much to wonder where grace might be found?