THE ability to absorb data and do something clever with it ought to be universally welcomed. If society can steer itself away from guessing and getting things wrong, or calculating on insufficient data and getting things almost right, vast amounts of time and resources could be saved. One can imagine how much social-support systems, at present exemplified by the clumsy application of Universal Credit rules, might be improved if they could take account of all the nuances of human lives. But this immediately points to the chief source of anxiety about artificial intelligence: that it has been developed by, and will be used by, people with the wrong motives.
There are two particular areas of concern. The first is risk. If drivers in a particular postcode have a poor recorded history, all new drivers to that area will struggle to get insurance, regardless of character. If too many houses or flats of a particular construction suffer damage, similar buildings will be unsaleable because mortgages will be denied. If people with a particular financial profile apply for a loan, they will be turned down — and Erin Green on these pages suggests the sorts of people that this will happen to. None of these scenarios is new. Risk-assessment businesses have been calculating odds for years. But the imprecision of their calculations hitherto has allowed a degree of leeway. No longer. “Computer says no” is becoming unarguable, as commercial companies develop a certainty that will remove any vestiges of negotiation from their transactions.
The second concern is control. States have had the means to record the activities of their citizens for many years; the difficulty has been processing so much information. With the development of AI, that human limitation can be removed, and restrictions imposed on those whose patterns of behaviour are deemed antithetical to the interests of the state. The example of China looms large, as a state that combines technology and autocracy.
The current debate about AI thus seems to veer between two poles: that it is too clever, or not clever enough. It is, of course, both: it can aggregate knowledge in a way never before experienced; but what it does to that knowledge depends — for the time being, at least — on human instruction. If a program is found to be acting “inhumanly”, it is because it has not been coded with the same tolerances as we expect of the best humans. As followers of a faith that enjoins them to be perfect, Christians find it hard to argue for a system based on fallibility; and yet that is what humanity is. AI will be a boon if its imperfections are acknowledged by those who employ it.