LET us start with a slightly old but still rather shocking story from The Guardian, which last month reviewed a British Museum exhibition about 19th-century China.
The review, by Jonathan Jones, mentioned the Opium Wars as one of the wicked things that Britain did to China, but then went on to deal with the most evil and destructive Western import. You’ll never guess, but “it was an older import, Christianity, that unleashed the most devastating cataclysm of 19th-century China. Christianity . . . inspired Hong Xiuquan, a village school teacher who failed his civil service exams, to declare that he was the brother of Jesus Christ and make himself Heavenly King: in 1851 he led his Taiping Heavenly Kingdom to war against the Qing. ‘Christ and I were begotten by the Father’, affirms Hong Xiuquan in his own correction of a British missionary’s letter on view here.”
So, an actual Christian writes to the Heavenly King pointing out that his beliefs are not Christian — yet Christianity still gets the blame. On this basis, you might as well blame Christianity for every crime committed in the name of Islam, and — while you’re about it — blame Judaism for the Crusades.
It isn’t very surprising that The Guardian would be outraged by the last two errors. What shocks me is that no sub read that paragraph and asked whether it made sense.
IN DEFENCE of Jonathan Jones, you could say that, at times of threatening change and widespread misery, all kinds of lunatic schemes are patched together on the basis of novel ideas — hello, Brexit! — and other papers do much worse. Take The Times’s lead on artificial intelligence on Tuesday morning: “AI systems will be powerful enough to ‘kill many humans’ within just two years, Rishi Sunak’s adviser [Matt Clifford] on artificial intelligence has warned.”
Mr Clifford is then quoted as saying: “You can have really very dangerous threats to humans that could kill many humans, not all humans, simply from where we’d expect models to be in two years time.”
Mr Clifford was sufficiently outraged that he put sections of the transcript on Twitter, and his full answer to the question of when all these horrible catastrophes would come about was this: “The truth is no one knows. There are very broad ranges of predictions among AI experts. I think two years would be at the very most bullish end of the spectrum, the closest moment. There are very credible people like Yann LeCun, the chief AI scientist at Facebook, who says we have no idea how to get there and it could be decades.”
This is almost the exact opposite of the view that The Times attributed to him. It will be interesting to see whether the paper runs a correction.
I find most of these discussions about AI incredibly frustrating, because — like the machine-learning systems they discuss — they can provide only a summary of the conventional wisdom, one that looks for agency in all the wrong places. It is not the programs themselves that will cause harm, but the organisations that direct them and the systems within which they work. Concentrating on the danger of AIs out of control ignores the more pressing and likely problem: what we need to fear are AIs thoroughly controlled by malevolent actors.
THE science fiction writer Ted Chiang was interviewed in the Financial Times and brought a melancholy humanism to the discussion.
“Chiang’s view is that . . . the technology underlying chatbots such as ChatGPT and Google’s Bard [is] useful mostly for producing filler text that no one necessarily wants to read or write, tasks that anthropologist David Graeber called ‘bullshit jobs’. AI-generated text is not delightful, but it could perhaps be useful in those certain areas, he concedes.”
Useful to whom, I wonder. The other night, a Cambridge professor told me that ChatGPT can already produce material for a respectable 2:1 degree. He was certain that some of his students were already using it. One of his colleagues had retaliated by getting the program to mark the students’ essays for him. So, already, these machines are evacuating all human content from what is supposedly an education. But they are doing so only because human beings instruct them to.
This story seems to me a much more credible apocalypse than the idea that some machine on its own will suddenly develop malevolence and competence. The terrifying thing about AI is that it just makes easier the normal human vice of bullshitting, or worse.
Take, for example, a story that appeared in the excellent Rest of World newsletter, about chat models built on religious texts. There are at least five which will answer questions in the persona of Krishna, the charioteer to Arjuna. They draw, it would appear, on the Bhagavad Gita, but they show a remarkable, perhaps supernatural, knowledge of contemporary Indian politics. They praise Narendra Modi and disparage his political opponents. BibleGPT, in contrast, told me that I should love my Canaanite neighbours. It really isn’t the technology that’s the danger, but some of the technologists.