I HAVE nothing particularly well informed or interesting to say about the Coronation; so that’s what I’ll say. This is not just my problem. All across the country now, there are people scowling at their screens, trying to think of any aspect of this largely incomprehensible service which has not been covered 30 times over already.
One exception is Gaby Hinsliff, in The Observer, who put her finger on the weakest point of the whole ceremony: the exceptionally silly idea of a voluntary pledge of loyalty. This is almost a contradiction in terms; for either everyone is loyal, or some people are potential traitors. It really is best not to be too specific about such things. As Hinsliff says: “While the fuss over the oath will doubtless be forgotten soon enough, a wise king would take this as a warning not to push his luck.”
THOSE wretches currently desperate for something to write about the Coronation will soon have larger problems. Although the “large language models” currently referred to as “AI” can’t and won’t become intelligent, self-conscious, and autonomous entities that could replace humans in all their aspects, they can certainly replace an awful lot of content-providers. The underlying problem here is that a great many people currently employed to type at their screens are not required to be intelligent, self-conscious, and autonomous themselves. If anything, their jobs require the deliberate suppression of all such qualities. But the artificial stupidity so prized in the job market can now be better and more cheaply produced by “artificial intelligence”.
Consider this example from Microsoft’s helpful directions to programmers learning to use its system. From the prompt “Headline: Coach confident injury won’t derail Warriors”, the program produces this lead paragraph: “The coach is confident that the injury won’t derail the Warriors’ season. The team is still focused on their goals and that they will continue to work hard to achieve them.”
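For those who wonder what this looks like from the programmer’s side, here is a minimal sketch of how such a completion might be requested. It assumes the openai Python package (its older, pre-1.0 interface) and an illustrative model name; only the prompt comes from Microsoft’s example.

    import openai  # assumes the pre-1.0 openai package; API key read from OPENAI_API_KEY

    # Send the example headline and ask the model to continue with a lead paragraph.
    response = openai.Completion.create(
        model="text-davinci-003",  # illustrative model name
        prompt=(
            "Headline: Coach confident injury won't derail Warriors\n"
            "Lead paragraph:"
        ),
        max_tokens=60,    # room for a couple of bland sentences
        temperature=0.7,  # mild randomness; lower yields even staler prose
    )
    print(response.choices[0].text.strip())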
So much for the jobs of sports writers and their sub-editors. The program works by harvesting the most used, stalest, and most juiceless clichés from all the billions of words on the open internet, and — one way or another — that’s what hundreds of thousands of people are paid to do today.
The model is not telepathic. It responds to precision and clarity much as we hope humans do. Microsoft’s hints for best performance include: “Be Specific. Leave as little to interpretation as possible. Be Descriptive. Use analogies. Order Matters. The order in which you present information to the model may impact the output.” (Note that it was presumably a human operator who took the single, sufficient word “matters” and waffled it up into “may impact the output”.)
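To see what “Be Specific” means in practice, compare these two prompts for the same job; both strings are invented for illustration, given in Python for compactness.

    # A vague request leaves everything to interpretation.
    vague_prompt = "Write about the match."

    # A specific, descriptive request, with the instructions given first,
    # in the spirit of Microsoft's ordering advice.
    specific_prompt = (
        "Write a two-sentence lead paragraph for a sports news story.\n"
        "Tone: neutral newspaper English. Do not invent facts beyond the headline.\n"
        "Headline: Coach confident injury won't derail Warriors"
    )

    print(specific_prompt)

The second prompt gives the model far less room to guess, which is precisely the quality a good human brief has too.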
One profession in which these unnatural qualities are common — because nothing works otherwise — is programming. I got a taste of this myself when I was trying to write a small, useful, and necessary script to retrieve an email disaster. Microsoft’s code editor now has a degree of AI built into its completion mechanism, and the results can appear completely telepathic. But, because these completions only reproduce the patterns that other programmers have used in similar situations, they can also introduce infuriating errors with equal confidence and dispatch.
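A purely hypothetical illustration of the kind of slip I mean, again in Python: asked to total the messages in a mailbox index, a completion engine can pattern-match on similar-looking code and suggest, with perfect fluency, a line that cannot work. Everything below, names included, is invented for the example.

    def total_messages(counts: dict[str, int]) -> int:
        # The suggested completion: it looks right, but sum() over a dict
        # iterates over its keys (strings here), so calling this raises
        # a TypeError at run time.
        return sum(counts)

    def total_messages_fixed(counts: dict[str, int]) -> int:
        # What was actually wanted: sum the values, not the keys.
        return sum(counts.values())

The point is not this particular mistake, which I have made up, but that the wrong version arrives with exactly the same confidence as the right one.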
THE irrelevance of truth and falsehood is the quality that the philosopher Harry Frankfurt used in his foundational work On Bullshit (Princeton University Press, 2005) to distinguish bullshit from straightforward lying. In the attention economy, all that matters is that some vague concept be salient for a moment or two.
Rafael Behr points out in Politics: A Survivor’s Guide (Atlantic Books), his excellent analysis of the collapse of trust in the British political system and in the wider world, that the claim on the side of the Brexit bus that “We send the EU £350 million a week” worked as bullshit partly because it was exposed as a lie. The more that Remainers complained about its dishonesty, the more firmly they fixed in the public mind the idea that we gave some large sum to the EU, even if no one knew quite what it was.
The web is full of bullshit in Frankfurt’s sense: material whose only function is to grab attention. But this problem is about to get much worse, perhaps a million times worse. Already, one organisation, NewsGuard, has identified 49 websites in seven languages publishing hundreds of articles a day on all kinds of subjects, all of them generated entirely by AI. The purpose, of course, is to get revenue from programmatic advertising. We are fast approaching the world in which “Nothing is true and everything is possible,” as Peter Pomerantsev described Putin’s Russia.
YET powerful people still need to know the truth. There is a lovely anecdote in the latest Vanity Fair about the collapse of Rupert Murdoch’s latest engagement, to Ann Lesley Smith. “One source close to Murdoch said he had become increasingly uncomfortable with Smith’s outspoken evangelical views. ‘She said Tucker Carlson is a messenger from God, and he said nope,’ the source said. A spokesperson for Murdoch declined to comment.”
Tucker Carlson, America’s best-paid cable news host, was sacked without warning or explanation a month or so afterwards.