IN A reductio ad absurdum of the notion of content moderation, a ruling was made this week on Facebook footage of President Biden which had been doctored to suggest that he was inappropriately touching the breasts of his young grand-daughter. It was captioned “sick pedophile”. Yet Facebook was quite right not to take down the fake video, we were told by the Meta Oversight Board, an independent panel of lawyers and academics appointed to hold the social-media giant to account.
This was because, the Board ruled, the video had not been created using artificial intelligence. Moreover, it did not depict the US President saying something that he had not said: it just showed him doing something that he did not do. So, it did not violate Mark Zuckerberg’s “manipulated media” policy.
Even the co-chair of the Board, Helle Thorning-Schmidt, a former prime minister of Denmark, admitted that this was incoherent. The solution, however, she said, was not to remove the video from the internet, but to label it as “significantly altered”. As for an AI-generated phone call in which the President purportedly called for Democrats not to vote in a primary, that was OK, too, because Facebook’s rules applied only to video, not audio.
The world wide web was dubbed “the Wild West Web” this week by the mother of the murdered teenager Brianna Ghey, after she heard evidence that her daughter’s 15-year-old killers had gained access to videos of real-life torture and suicide on the dark web, a hidden part of the internet which requires a special browser. She called for under-16s to be allowed only “safe” phones, stripped of all social-media apps and fitted with software to alert parents if their children searched for harmful keywords.
In theory, the new Online Safety Act ought to bring in what the Prime Minister calls “tough new powers” to curb the spread of illegal and harmful content on social media. Those who break the rules could be liable for swingeing fines of ten per cent of global turnover, perhaps $12 billion in the case of Meta. Yet lawyers have questioned how workable the law will be in practice, as campaigners for child safety, on the one hand, and for free speech, on the other, bring potentially contradictory legal challenges.
Privately, ministers concede that the aims of good governance and of innovation can pull in opposing directions. And considerations of freedom of expression are not insignificant: a video may be doctored for legitimate satirical purposes. The law can be a blunt instrument.
Perhaps a solution can be found elsewhere. Publishers are liable to prosecution if their content is illegal. Yet internet giants such as Facebook and Google argue that they are not publishers, but platforms. In 1996, President Clinton signed into law the Communications Decency Act, which sought to criminalise “obscene” or “indecent” content on the internet. But Section 230 of the same legislation exempted online platforms from liability for content created by their users.
Since then, social-media organisations have developed algorithms designed to serve up content that amplifies strong emotions among their users. The business model of Big Tech is to monetise those emotions through increased advertising revenue. It is a conscious and wilful approach. Perhaps if Meta and co. were legally reclassified in other countries as publishers rather than platforms, they would be subject to a greater discipline.