EVERYTHING interesting in the papers has been about journalistic ethics this week, and very little has been about the Church of England. I will take that as a sign.
The subjects overlap in a long Financial Times “Alphaville” piece about Wonga. This comes close to arguing that there is no such thing as an entirely ethical investment, because the modern economy is so complex. “In terms of its previous investment in Wonga, the Church held a stake in a venture capital fund which had in turn invested in the business (two degrees of separation). If some of the money from a green bond eventually flows through a supply chain to an oil company via, say, the use of unleaded petrol in company cars, the connection might pass through three or four degrees.”
Since it is obviously impossible to calculate all these ramifications, the argument says that decisions can be based only on reputation. “Wonga had become a household name, which completely changed the reputational implications for any investor. If anyone did come up with a set of calculations that demonstrated Wonga sat on the wrong side of the ethical interest rate line, then the application of that model would have all kinds of unintended consequences, and force ethical funds to divest from many other businesses, in many other unpredictable ways, either now or in the future. Instead, Wonga had a kind of infamy that is easy to recognise, but hard to quantify.”
This strikes me as rather cynical, but difficult to argue with. Nor is it wholly cynical. It may be true that the mobile phone that I use depends on rare minerals extracted at gunpoint by slaves in the Congo; but there is still a useful ethical distinction between buying a mobile phone and investing in either guns or the mercenaries who use them.
MOBILE phones, though, raise their own ethical issues. For news reporters, these are chiefly problems about the speed with which lies now travel round the world.
I was struck by a gruesome little story in Alan Rusbridger’s book on journalism in general and the travails of The Guardian in particular: Breaking News (Canongate). He had spotted that the former UKIP MEP Godfrey Bloom had retweeted a tweet by an extreme right-winger who called himself “PeterSweden7” about a gang rape in Malmö, in Sweden, said to have concluded with the attackers’ spraying lighter fluid on their victim’s crotch and setting light to it.
The assumption, of course, was that the attackers were Muslim refugees. “PeterSweden7” said that he had found it on the Facebook page of a Swedish economist, who wrote there that he had heard this from police sources.
But the police had not come out with the story publicly. One senior policeman had referred to “an assault amounting to torture”, but without being more precise. This had been the girl’s original story. But, when the results of her medical examination came back to the police, it was clear that she had been lying. There were no burns. In fact, there was no evidence of rape. Two months later, after she had confessed to making the story up, she was quietly charged with making a false statement.
But, of course, the original lie was by then firmly established, and it is still the first thing you find if you Google the story. Rusbridger’s fact-checkers had themselves missed the complete retraction, which came much later.
Obviously, some of the actors were malign and interested in spreading fear and suspicion of Muslims. But what part did Google and Facebook play? No human judgement is brought to bear on the stories that their algorithms spread. The measurement that the companies care about is how many clicks a story gets, and how long any reader lingers in front of an advertisement. That can be expressed mathematically in a way that the truthfulness of a story simply cannot be, even when it can be known.
BESIDES, the idea that either Google or Facebook should become the global arbiter of truth has problems of its own, even if it were physically possible, which it is not.
One answer comes in a story from The New York Times. Craig Newmark, whose website Craigslist destroyed classified advertising, and with it the economic base of newspapers, all across the United States, has given $20 million to The Markup, a start-up that will investigate the effects of algorithms on society. The reporters involved have impressive track records in exposing the hidden bias of apparently neutral programs. When access to health insurance and housing, and even the length of prison sentences, can all be determined by opaque algorithms, this is a really urgent journalistic task.
Perhaps the FT is more right than it meant to be, and all we can ultimately rely on is reputation. But who is to decide on a source’s reputation? What criteria will they use? Some readers, after all, show a very clear preference for being deceived. It may be frightening to consider that the future of ethical journalism is in the hands of journalists. I find that it adds to the terror to suppose that it’s in the hands of readers, too.