Fake news: A real problem


Is mass ignorance better than knowing half-truths, or being misinformed? Is removing infringing speech the only way to tackle the issue? Dibyojyoti Mainak provides some insights

The issue of fake news has dominated news cycles in major democracies over the past few years. It is interesting to note that while governments across the world, including those of the US and Canada, have attempted to criminalize false information, such laws have been struck down for excessively curbing free speech in Uganda, Zimbabwe and Antigua and Barbuda, in addition to the US and Canada.

So far it appears that court-based approaches to reining in fake news are at best inadequate. Recently, India’s Ministry of Information and Broadcasting first released, and then quickly withdrew, guidelines governing the accreditation of journalists in an attempt to control the menace of fake news. These guidelines, however, would have done little to punish websites like Postcard News, whose founder was arrested for inciting violence, but would have suspended the accreditation of practising journalists on mere accusations of fake news.


There are compelling arguments that such laws can have a chilling effect on the ideal of free speech and can lead to self-censorship. Legitimate concerns also exist about excessive state paternalism and its effects on the flow of information, not least because concepts like “facts” and “truth” are hard to define in the context of criminal law.

For example, it would probably be unfair to restrain activists from reporting that a certain nuclear plant is endangering the lives of millions in a neighbouring settlement merely because that danger cannot be scientifically established before a court of law. A further issue with regulating so-called “fake news” is the bias of the regulator in question, and the possibility of partisan politics becoming the guiding principle behind the censuring of news and social media items.

At the same time, though, the rampant proliferation of fake news, whether through the malicious, divisive activities of so-called internet trolls or through carefully guided propaganda spread by highly sophisticated “consulting firms”, harms diverse sets of victims and creates “echo chambers” of misinformation that further entrench biases. Traditional news companies are increasingly finding their position as primary disseminators of credible (and viral) news challenged by the mass influx of “news” and “posts” on social media, an influx they have little means to counter.

The regulation and prosecution of the spread of false information has so far taken a somewhat symptomatic approach, i.e. one based on the impact of such (mis)information. Yet the proliferation of false information is so pervasive, and its effects so diffuse, that resorting to punitive criminal law is at best useless, and at worst excessive.

Existing court-based approaches are simply unequipped to tackle thousands of online trolls sharing hundreds of individual pieces of false information on social media, boosted by traction-based collaborative filtering algorithms, where the damage is caused not by any individual act but by the compounding effect of how these systems work.
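To illustrate that compounding effect, consider a minimal, purely hypothetical sketch in Python. The initial_shares, boost_factor and rounds values are invented for illustration and do not describe any platform’s actual ranking system; the point is simply that a feed which amplifies items in proportion to the traction they already have can turn many small acts of sharing into mass exposure.

```python
# Minimal, purely illustrative sketch of the compounding effect described above:
# when a feed boosts items in proportion to the traction they already have,
# many small individual shares snowball into mass exposure.
# initial_shares, boost_factor and rounds are invented parameters, not real data.

def simulated_reach(initial_shares: int, boost_factor: float, rounds: int) -> int:
    """Each ranking cycle, the boost exposes the item to more users,
    a fraction of whom share it again, feeding the next cycle's boost."""
    shares = initial_shares
    for _ in range(rounds):
        shares += int(shares * boost_factor)  # traction begets more traction
    return shares

# A single troll post with 50 shares, amplified over ten ranking cycles.
print(simulated_reach(initial_shares=50, boost_factor=0.8, rounds=10))  # ~17,700
# The same post when the algorithm gives existing traction no extra weight.
print(simulated_reach(initial_shares=50, boost_factor=0.0, rounds=10))  # stays at 50
```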

Given the 24-hour news cycles of our times, by the time any remedial measure like a retraction or court-awarded damages takes effect, the outrage has already boiled over and the harm has become permanent. In addition to the constitutional issues in criminalizing speech alluded to above, the special context of social media means that we need to think beyond criminal or, indeed, punitive legal action. Our focus needs to be on the dissemination itself.

In that regard, the onus should be placed on the social media platforms themselves, which derive financial benefits from the virality of content, to sufficiently warn their users about the veracity (or lack thereof) of any content on their platforms. This does not have to mean private censorship by Facebook, but could instead mean several things, for example authenticating news stories from reputable outlets with a “tick”, similar to Twitter’s blue tick for verified accounts.

It could mean tweaking algorithms so that pages or accounts with a history of sharing clearly identifiable false information find less traction (i.e. ensuring such content does not appear at the top of news feeds). It could also mean encouraging, or even mandating, initiatives that require social media websites to partner with organizations like the International Fact-Checking Network. Finally, it means protecting social media websites when they act as “good Samaritans” in trying to curb fake news, while not allowing them carte blanche immunity when they claim passive intermediary defences for letting their algorithms run amok.
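The first of these suggestions, giving less traction to sources with a record of sharing false information, can be made concrete with a minimal sketch. Nothing below reflects any platform’s actual system: the FeedItem structure, the FACT_CHECK_STRIKES table (imagined as being fed by a fact-checking partner) and the penalty_per_strike parameter are all hypothetical, but they show how a history of fact-check failures could dampen the reach of a source’s posts.

```python
# Minimal sketch (not any platform's actual system): downranking feed items
# from sources with a history of sharing identified false information.
from dataclasses import dataclass

@dataclass
class FeedItem:
    source: str          # page or account that shared the item
    engagement: float    # raw traction score (likes, shares, comments)

# Hypothetical record of fact-check strikes per source, imagined as supplied
# by a partner such as the International Fact-Checking Network.
FACT_CHECK_STRIKES = {
    "reliable_outlet": 0,
    "repeat_offender_page": 7,
}

def ranking_score(item: FeedItem, penalty_per_strike: float = 0.25) -> float:
    """Engagement score dampened by the source's record of false content."""
    strikes = FACT_CHECK_STRIKES.get(item.source, 0)
    # Each strike multiplicatively reduces reach, so heavy offenders sink in the feed.
    return item.engagement / (1.0 + penalty_per_strike * strikes)

items = [
    FeedItem("repeat_offender_page", engagement=900.0),
    FeedItem("reliable_outlet", engagement=400.0),
]
for item in sorted(items, key=ranking_score, reverse=True):
    print(item.source, round(ranking_score(item), 1))
```

In this toy example, the repeat offender’s post ranks below the reliable outlet’s despite having more than twice the raw engagement, which is the behaviour the proposal asks of feed algorithms.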

The realistic solution to curbing fake news lies not so much in punishing infringers as in informing readers that what they are reading may be completely false to begin with.

DIBYOJYOTI MAINAK is general counsel at InShorts news app