THERE are three main worries about the economic — and hence the ethical — impact of generative AI.
In ascending order, the first is whether the people whose work has been used to train these machines will be fairly compensated for its value. If a machine that has been trained on the work of a particular writer can then imitate the writer’s style, should the writer be paid a share of the profits? That is what the writers’ strike in Hollywood is largely about.
The Society of Authors in the UK is similarly worried, and has put out an excellent position paper on the subject. The singer Grimes, once a partner of Elon Musk, has ingeniously offered a 50:50 split of the royalties to anyone who uses her voice in a commercially successful AI-generated song. But a singer’s voice is much easier to identify than a writer’s.
The second worry is whether people doing white-collar jobs will find themselves replaced by generative AI. The historian Peter Turchin has quoted a projection that one third of lawyers’ jobs in the US will disappear in this process.
But generative AI is not limited to words. It produces pictures, both static and moving ones, and music, too. That is why actors are on strike in Hollywood, as well. They foresee a future in which their precious selves are replaced by cheaper and infinitely biddable simulacra.
The third worry is whether generative AI will destroy the internet as a source of information. Generative AI is already being used to produce fake news, fake reviews, fake tweets, and even whole fake books on a scale that the gatekeepers of the internet, Google, Amazon, and so forth, won’t be able to cope with.
One organisation, NewsGuard, has found 421 sites, operating in 14 languages, generating hundreds of thousands of fake news stories — mere word salad at best, actively malevolent disinformation at worst.
On Amazon, entirely fake books are appearing, some of them apparently by real authors. On Twitter, the efforts of Russian bot farms look puny beside the possibilities of AI-generated posts and responses — which, of course, every organisation will have to adopt to protect itself from the efforts of its enemies. Trolling is now weaponised and automated.
The thriving business of fake reviews, both for products and places, will move into overdrive with these technologies. As Gary Marcus, one of the most level-headed experts on the subject, has written, “Cesspools of automatically generated fake websites, rather than ChatGPT search, may ultimately come to be the single biggest threat that Google ever faces.
“After all, if users are left sifting through sewers full of useless misinformation, the value of search would go to zero — potentially killing the company. For the company that invented Transformers — the major technical advance underlying the large language model revolution — that would be a strange irony indeed.”
That is not the only irony in the situation. Google, Amazon, and all the other businesses now threatened by the rising tide of AI-generated sewage are themselves built on earlier iterations of AI, the ones that people used to call “the algorithms”. But the one thing that no AI can do is distinguish truth from falsehood, or material that humans write from material that other machines produce.
Last month, The Irish Times published a piece headlined: “Irish women’s obsession with fake tans is problematic” — and had to apologise 24 hours later when it emerged that the piece had been largely generated by AI. But in a world where that sort of mindless trend piece is acceptable when written by a human, it is difficult to see why having a computer generate it makes things worse.
It might seem that in journalism, as in many areas of life, the problem is not so much the threat of artificial intelligence as that of artificial stupidity. There is nothing wicked that AI can generate that humans are not presently being paid, however inadequately, to produce.
But quantity really does alter quality. The very idea of truth can disappear in a cloud of nonsense. Perhaps the likeliest outcome is that generative AI will destroy the assumption that we all rely on to make sense of a world of electronic media: that anything we read, or see, or hear, was made by a human being for recognisable human purposes.
Some of the problems that this poses for the Church are theological, or at least apologetic. Some are much less elevated. Why toil over a sermon if you can get AI to write it? But if you find that your sermons could just as well be written by AI, that is a problem with your sermon-writing, not with the technology.
It is perhaps more interesting to ask whether the programs could improve the delivery rather than the words. Perhaps a voice model constructed with generative AI could be a far more effective preacher or reader than most human clergy. Can we teach rhetorical skills to an AI voice? Could such a voice, once trained, teach us in turn? It is only once we start playing with them in unpredictable ways that we can begin to use them as tools for human creativity.
The basic mistake that people make about AIs is to treat them as if they were other people. They are not. They have no human motives. They produce lies with just as much confidence as truths: you must never trust their results without checking. But they are not “lying”, because they have no concept of truth. They are not even “hallucinating”, because they have no consciousness. Instead, they are answering the question: “What word found on the internet is statistically most likely to follow the previous words in this sentence, given the sentences (the context) that surround it?”
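The statistical game described here can be illustrated with a deliberately simplified sketch in Python — a toy bigram model that counts which word follows which (real large language models use neural networks trained on vast corpora, not raw counts, but the underlying question is the same):

```python
from collections import Counter

# A tiny corpus standing in for "words found on the internet".
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows each word (a bigram model).
following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, Counter())[nxt] += 1

def most_likely_next(word):
    """Return the word that most often follows `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# After "the", the corpus contains "cat" twice and "mat" once,
# so the model confidently predicts "cat" -- whether or not
# "the cat" is true of anything in the world.
print(most_likely_next("the"))
```

The point of the toy is that nothing in it knows or cares what a cat is: it simply emits whatever continuation is most frequent, which is why confidence and truth come apart.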
And because they are not people, they cannot create new things. They can only recombine old ones. Of course, it is true that a great deal of human activity is similarly uncreative. When it is, we readily dismiss it as boring, poorly executed, or derivative. When a machine does the same thing, but in bizarre and unpredictable ways, we should not mistake it for creativity.
Details of the Church Times AI webinar here