CHARLIE WILSON has died. As editor of The Times in the later 1980s, he did so much to make The Independent a success. He had been appointed by Rupert Murdoch to drive the paper downmarket and boost its circulation. He succeeded in both aims: at one stage, 50 journalists resigned on the same day to join The Independent. Murdoch later sacked him and put in Simon Jenkins to bring the paper back upmarket; ten years on, in the mid-1990s, the Mirror group hired Wilson to run The Independent, where he laboured to undo his earlier achievement.
I liked him. He was decent to people who stood up to him. He once tried to set me on to an exposé of Holy Trinity, Brompton (HTB), as a cult that was stealing away the children of the aristocracy — he had been seated at a dinner next to some countess who had been horrified by the change in a friend’s children. He believed that we were talking about the Brompton Oratory, which made our conversation even more confusing for a while. But he did relent when I explained just how grand HTB was itself.
When The Independent started, our ambitions were thoroughly elitist. “We are to be the paper The Times would be if it were still a proper newspaper” was one popular slogan; the science editor used to say “We want to be the paper the permanent under-secretaries read on their way into the office.” The only paper left with that level of ambition — to tell the people who make decisions things that they don’t know or haven’t understood rather than to tell the people who don’t make decisions whatever they want to hear — is, I think, the FT.
This is partly a consequence of the increasing powerlessness of our politicians as the country slides into insignificance. With a loss of power has come a loss of authority, and into that vacuum the strange beliefs come rushing.
ONE of the strangest consequences has been the deification of machines, and of computers in particular. The utilitarian idea that moral choices are subject to calculation rather than judgement is odd in all sorts of ways — not least because a machine has no moral character to be affected in the future by the choices that it makes today.
Yet there is an influential school of academic philosophy which holds that moral questions can, in principle, be answered by a sufficiently powerful computer that could perform utilitarian calculus more accurately and dispassionately than fallible humans. In the place of the Christian argument that the imperfections consequent on original sin make us poor judges of morality comes the idea that the imperfections of our evolved brains make moral truth difficult for us to grasp.
The Boston Review has a long review by the MIT philosopher Kieran Setiya of a book by a “long-termist” Oxford philosopher, William MacAskill, who takes to an extreme the view that a sufficiently advanced and well-programmed computer could come up with the right answer to all the really big ethical questions. As Professor Setiya points out, this conclusion is more plausible when applied to the really huge questions, such as “Should the human species colonise the universe?”, which no one is actually facing, than to the small ones facing individual humans. But it is influential in the “effective altruism” movement, which is, in turn, attractive to billionaires such as Bill Gates, whose money gives them a great deal of power and who are — some of them — concerned to use it wisely and well.
Professor MacAskill’s argument, Professor Setiya says, builds on the premise that we should make utilitarian calculations based not just on the happiness or well-being of people alive today, but on those who will — other things being equal — come to live in the future. In some forms, as when we look at the moral imperatives of the climate emergency, this seems obviously true. But what happens when the calculations are run to their conclusion?
“MacAskill values survival in the long term over a decrease of suffering and death in the near future. This is the sharp end of longtermism. Most of us agree that (1) world peace is better than (2) the death of 99 percent of the world’s population, which is better in turn than (3) human extinction. But how much better? Where many would see a greater gap between (1) and (2) than between (2) and (3), the longtermist disagrees. The gap between (1) and (2) is a temporary loss of population from which we will (or at least may) bounce back; the gap between (2) and (3) is ‘trillions upon trillions of people who would otherwise have been born.’ This is the ‘insight’ MacAskill credits to the iconic moral philosopher Derek Parfit. It’s the ethical crux of the most alarming claims in MacAskill’s book.”
MEANWHILE, the Revd Alice Goodman has a column in Prospect magazine about the moral dilemmas of life as a parish priest. She closes with an anecdote about a student who rang up wanting to talk on the day when Ms Goodman was moving to a new vicarage. She put him off until the following day, but he missed that appointment because he had killed himself. “And I go on”, her column ends, “carrying his memory and my guilt, understanding the intractable pain of where we are now.”