THE philosopher David Edmonds, from the well-regarded Uehiro Centre for Practical Ethics in Oxford, has gathered together 20 colleagues, most from his own university — but, alas, not including a theologian — to discuss ethical challenges posed by the recent phenomenal development of artificial intelligence (AI).
Their essays are impressive. This is an up-to-date, jargon-free and thoughtful survey of a challenge of which anyone who banks or buys online, let alone attempts to negotiate with an automated utility provider, will be only too aware. In doing so, we face a version of the Turing dilemma, namely: are we negotiating with machine “intelligence” or with a living human being?
Typically, the essays in this collection start by outlining the actual or aspirational advantages of AI in a particular area, before setting out in detail its disadvantages. For example, we all use pocket calculators, but does a generation brought up using them now largely lack facility in mental arithmetic, and so find itself unable to check the plausibility of a particular calculation? When trying to negotiate with an AI insurance provider (the previous provider having doubled your premium), do you fail to frame questions that computers can understand? And do you fear being hacked every time you check your online bank account?
Some articles, understandably, discuss the dystopian horrors of ransomware attacks on hospitals. In his introduction, Edmonds adds these AI challenges: an erosion of personal autonomy; the built-in bias of algorithms; the absence of accountability and the claim that it is just a “blip” in the machine, or, even worse, that the machine is “always right”, as seen in the Horizon IT scandal; and, inevitably, the elimination of jobs.
Several contributors mention that previous technological developments raised similar challenges. Printing made books available to a much wider public, but books soon proved threatening: they eroded oral traditions of transmitting stories, and they made monastic scribes redundant. Mechanical transport hugely facilitated travel, but it eroded riding skills and horses themselves.
Nuclear fission produces abundant energy to heat our homes, but it did nothing to improve Hiroshima and Nagasaki. Drones make land surveys and searches easier, but they are now used indiscriminately in war, as we have witnessed so horrifically in Ukraine.
The late, great Ian Barbour argued that technology could be used or misused, since technology was power. No one here argues that AI can be reversed. It is a power that is with us, for good or ill. In the final and very thoughtful essay, Ruth Chang, Professor of Jurisprudence at Oxford, argues that, since AI is now a fixture, we must keep humans firmly “in the loop”. It is not good enough to delegate final decisions to machines (Horizon again): we should, indeed, use them wisely before making a significant decision affecting the vulnerable; but it is we, as moral beings, who must finally make that decision.
Uehiro Centre’s Professor Peter Millican insists that machine “intelligence”, with its prodigious ability to process vast amounts of information, is not to be confused with humanity’s additional sentient, often intuitive, intelligence. And Dr John Zerilli, Chancellor’s Fellow at Edinburgh, writes a fine article arguing that scholars in science and the humanities, despite ongoing differences of approach, can all make a significant contribution to the debate about AI, not least by encouraging AI scientists to recognise that their discipline does need to listen carefully to ethical challenges.
This is a distinguished, well-priced, and timely collection.
Canon Robin Gill is Emeritus Professor of Applied Theology at the University of Kent and the Editor of Theology.
AI Morality
David Edmonds, editor
OUP £14.99
(978-0-19-887643-4)
Church Times Bookshop £13.49