HOBBES’s dismissal of the religious dreams and visions that plagued the ignorant and the pious was predicated on his conviction, spelled out in the opening pages of Leviathan, that “life is but a motion of Limbs.” The heart, he reasoned, was “but a Spring, the Nerves, but so many Strings”. That being so, could we not say that “all Automata . . . have an artificial life”, or that human ingenuity might “make an Artificial Animal”? The human body was, after all, just a complex machine.
Much the same could be said of the human mind. In his Elements of Philosophy, written a few years later, he reasoned that “ratiocination” — reasoned thought — was, at heart, “computation”. What went on in the head was ultimately no more than “addition and subtraction . . . multiplication and division”.
The implication of these two convictions was momentous. If the body was no more than springs and joints, and the mind no more than addition and subtraction, it should, in theory, be possible to build a human from scratch. The idea remained a dream even as, fulfilling Hobbes’s prophecy 200 years later, the logician George Boole firmly established “the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities”.
A century after Boole’s work was published, the computer scientist John McCarthy coined the phrase “artificial intelligence” at a conference at Dartmouth College, New Hampshire, and helped to establish a discipline that promised to turn Hobbes’s vision into reality. If neuroscience was able to deconstruct all that was quintessentially human into signals in the brain, there was, in principle, no reason why computer science couldn’t reconstruct it, one bit at a time.
IT WAS decades before AI could boast of anything more than an aptitude for chess, but, by the time AlphaGo, a program produced by Google's DeepMind, beat Lee Sedol, the 18-time world champion, at the fiendishly difficult game of Go, it was beginning to look as if the game was up.
For some, the prospect positively pulsed with potential. Technology offered the opportunity for the plodding, irrational, limited, analogue human to be augmented and transformed. Medicine (or rather, medicine and public health) had more than doubled life expectancy (in some countries) over the course of the 20th century. Gene therapy promised to extend it further in the 21st, and brain-computer interfaces offered the possibility of cheating death altogether by scanning, digitising, and uploading the entire brain into a storage facility from which it could be downloaded and re-embodied at some point in the future (if desired).
Whether or not this particularly ambitious goal was achievable, humans might be radically improved in other ways. AI offered the potential to break out from the constraints imposed by evolutionary biology, and re-engineer our frail, faulty humanity. Humans could use information technology to improve their memory, their mental processing, and even their morality. As Ray Kurzweil, the American futurist, who was the St Paul of this transhumanist gospel, claimed, with puppy-dog enthusiasm: “We’re going to get more neocortex, we’re going to be funnier, we’re going to be better at music. We’re going to be sexier. . . We’re really going to exemplify all the things that we value in humans to a greater degree.” Amen.
Re-engineered as a kind of super-species that was capable of self-virtualisation, humans might then escape the limitations of earth, and take their new form beyond the solar system, spreading out across the cosmos in some kind of interstellar mission.
Such a human transformation was part of a wider, cosmic transformation that made the Soviets’ “new man” seem positively pedestrian. Intelligent machines could engineer other intelligent machines. Liberated by ever-faster processing power, AI would grow exponentially until it arrived at “The Singularity”, the point at which the new superintelligence would leave behind humans and their ponderous sublunary lives, and reshape itself, us, and the earth as it saw fit.
“Humans are merely tools for creating the Internet-of-All-Things, which may eventually spread out from planet Earth to pervade the whole galaxy and even the whole universe,” Yuval Noah Harari wrote in Homo Deus: A brief history of tomorrow (Vintage, 2016). “This cosmic data-processing system would be like God. It will be everywhere and will control everything.”
THE idea that all this might not work out so well for humans themselves occurred to more than just the professional dystopians. Stephen Hawking warned that, once this singularity was reached, humans would not be able to compete: "The development of full artificial intelligence could spell the end of the human race."
The idea that AI could be humans’ “final invention” became commonplace. The Astronomer Royal, Martin Rees, wrote about how genetic modification, combined with cyborg technology, would lead to a world dominated by “inorganics — intelligent electronic robots — [which] will eventually gain dominance”.
Harari followed up his hugely popular Sapiens with the equally popular Homo Deus, which suggested that spirituality, humanity, liberty, and morality would be superseded by "the data religion", in which all experience was reduced to data patterns. Humanity, he wrote, would turn out "to have been just a ripple within the cosmic data flow", with the ensuing "Internet-of-all-Things . . . [becoming] sacred in its own right".
The religious resonances within all this were painfully obvious. The world of AI seemed inexorably to gravitate to religious issues and language: post-mortem existence, immortality, re-embodiment (a kind of secular resurrection), human transformation and improvement, not to mention the possibility of a post-human “mission” to the rest of the cosmos — and, indeed, wholesale cosmic transformation.
EARLY Jewish and Christian apocalyptic visions were exemplified by three notable characteristics: alienation within the world, desire for the establishment of a heavenly new world, and the transformation of human beings so that they might live in a perfect, new creation. More or less the same factors characterised visions of the AIpocalypse. “Having downloaded their consciousnesses into machines, human beings will possess enhanced mental abilities and, through their infinite replicability, immortality,” Robert M. Geraci wrote in The Journal of the American Academy of Religion.
For some, this was just another staging post in the long history of the way in which science and religion repeatedly found themselves in conflict. Science, now in the form of AI, offered humanity redemption, transformation, and eternity, an overpowering eschatological vision to replace the old promise of new heavens and a new earth. Once again, religion was pensioned off. As one writer in The Atlantic put it, "AI may be the greatest threat to Christian theology since Charles Darwin's On the Origin of Species."
That there was potential for conflict was beyond doubt. One BBC report on whether AI would transform religions presented various priests and believers with examples of roboticised religiosity. The faithful tended to receive them with qualified enthusiasm, but their qualification was a firm one. AI was fine as far as it went, but that could only be so far because AI did not and could not have “a soul”.
There was a certain circularity in the reasoning here: only humans can have a soul, which is why robots don’t have a soul. Combine this with the popular pseudo-neuroscientific view — that, as Harari expressed it, when “scientists opened up the [human] black box, they discovered there neither soul, nor free will, nor ‘self’ — but only genes, hormones and neurons” — and all the ingredients for a perfect confrontation were there.
The headlines were ready. "Religious groups fight AI research because 'it threatens the soul'." Precisely because science and religion overlap in their claims about the human, the potential for conflict between them is always a live one, whether the subject is algorithms in the 21st century or astrology in the fourth.
And yet, if the long history of science and religion has anything to teach us, it is that this conflict is only potential, not inevitable.
Indeed, if the main argument of my book is right, and it is the complex, multilayered, and varied natures of the human beast that lie at the heart of so many interactions between science and religion, then it is just possible that the age of AI might open up a space for enriching dialogue rather than closing it down in the face of defensive argument.
Nick Spencer is Senior Fellow at Theos. www.theosthinktank.co.uk
This is an edited extract from his new book Magisteria: The entangled histories of science and religion, published by Oneworld Publications at £25 (Church Times Bookshop £20); 978-0-86154-461-5.