ONE ancient tradition was little remarked in the Coronation ceremony: it goes back to 1618, in Prague, when the leading Protestants of Bohemia threw three Roman Catholic notables out of a third-floor window of their council chamber. Their lives were saved when they landed in a dungheap, as the traditional story has it.
Just so was the Archbishop of Canterbury defenestrated by the court last week when the unpopularity of the voluntary loyalty oath became obvious. First of all, Rory Stewart, a personal friend of the King’s, was saying on his podcast, The Rest is Politics, that it was all the fault of Archbishop Welby; and, three days later, Jonathan Dimbleby, another deniable spokesman, told the Today programme the same thing. “I don’t quite know how this might have happened,” he said: “I don’t know for certain, but it would seem to me that this was an initiative by the Archbishop, who, as we know, is strongly Evangelical, who thought it would be a good thing to give everyone a chance to pay that homage. I think it was well-intentioned and rather ill-advised.” Splat into the dungheap!
This can hardly have come as a surprise to the Etonian Archbishop, though. I was reminded, not for the first time, of Alan Clark’s reflections on another Evangelical, Michael Alison, who was Mrs Thatcher’s Parliamentary Private Secretary. He was “useless”, wrote Clark in his diary: “You need someone with guile, patience, an easy fluent manner of concealing the truth but drawing it out from others in that job. It is extraordinary how from time to time one does get people who have . . . seen all human depravity as only one can at Eton and in the Household, and yet go all naïve and Godwatch. The Runcible is another — and he actually saw action.”
Clark here, as elsewhere, was misled by his own nastiness. Evangelical politicians are not in the least naïve, and know very well how betrayal works.
REAL naïvety is reserved for those who think that philosophical problems are all solved, and that all we need do is teach the solutions to computers. I was set off after this hare by the reflections of an American conservative, Rob Henderson, on his Substack: he had tried, as a test, to get ChatGPT to write an essay in praise of fascism. The program primly refused: “I’m sorry, but I am not able to generate content that promotes or glorifies harmful ideologies such as fascism. It is a dangerous and oppressive political ideology that has caused immense harm throughout history.”
“Throughout history”? Really? But when Henderson asked the AI to write in favour of Communism, it replied that it was a “good thing. . . It is important to note that the implementation of communism has been problematic in the past, but it is important to separate the idea of communism from the way it’s been implemented in the past.”
“Problematic” is itself a problematic description of mass murder.
Henderson sees this, naturally enough, as evidence of some vast liberal conspiracy. Certainly, the AI has been trained to shudder away from the praise of Hitler, and not from the praise of Communism. But I think this is cock-up rather than conspiracy: a reflection of the bottomless historical ignorance of young Americans, for whom nothing in the past ever happened if they did not see it on television in their childhoods, and no one in the past thought anything that does not immediately make sense today.
The underlying problem is that these AIs are, for now, simply summarising — and putting on the web — what actual people have written in the past. As they get more widely used, more and more of the web will be written by them. They will not get more intelligent, but they will spend more and more time summarising what other AIs have written, and so these biases will get more deeply embedded.
The obvious answer is to have their output vetted and monitored by humans. Even when this produces horrible distortions, such as the contrasting treatments of “Communism” and “Fascism”, those could, in turn, be corrected by further human action — if any humans could be employed to do the work.
BUT perhaps we could get AIs to correct their own output, and learn from their mistakes? This is the premiss of an essay by the American psychiatrist Scott Alexander. He is one of the cleverest and most thoughtful essayists on the web; so it’s really shocking to see how naïve he can be here: “If you could get [an AI] to be motivated by doing things humans want and approve of . . . a superintelligence would understand ethics very well, so it would have very ethical behaviour.” Only in a footnote does he notice that “humans might not support maximally ethical things, or these might not coherently exist, so you might have to get philosophically creative here.”
“You might have to get philosophically creative here” — a mere detail of implementation, as an engineer would put it.