
Leader comment: Risk and control: where AI can go badly wrong

01 September 2023

THE ability to absorb data and do something clever with it ought to be universally welcomed. If society can steer itself away from guessing and getting things wrong, or calculating on insufficient data and getting things almost right, vast amounts of time and resources could be saved. One can imagine how much improved social-support systems, such as the clumsy application of Universal Credit rules, might be if they could take account of all the nuances of human lives. But this immediately points to the chief source of anxiety about artificial intelligence: that it has been developed by, and will be used by, people with the wrong motives.

There are two particular areas of concern. The first is risk. If drivers in a particular postcode have a poor recorded history, all new drivers to that area will struggle to get insurance, regardless of character. If too many houses or flats of a particular construction suffer damage, similar buildings will be unsaleable because mortgages will be denied. If people with a particular financial profile apply for a loan, they will be turned down — and Erin Green on these pages suggests the sorts of people that this will happen to. None of these scenarios is new. Risk-assessment businesses have been calculating odds for years. But the imprecision of their calculations hitherto has allowed a degree of leeway. No longer. “Computer says no” is becoming unarguable, as commercial companies develop a certainty that will remove any vestiges of negotiation from their transactions.

The second concern is control. The ability of a state to record the activities of its citizens has been available for many years. The difficulty has been finding a means of processing so much information. With the development of AI, human limitations can be removed, and restrictions imposed on those whose patterns of behaviour are deemed antithetical to the state authorities. The example of China looms large, as a state that combines technology and autocracy.

The current debate about AI thus seems to veer between two poles: that it is too clever, or not clever enough. It is, of course, both: it can aggregate knowledge in a way never before experienced; but what it does to that knowledge depends — for the time being, at least — on human instruction. If a program is found to be acting “inhumanly”, it is because it has not been coded with the same tolerances as we expect of the best humans. As followers of a faith that enjoins them to be perfect, Christians find it hard to argue for a system based on fallibility; and yet that is what humanity is. AI will be a boon if its imperfections are acknowledged by those who employ it.
