New article: Foundational questions for the regulation of digital disinformation

What do we need to know before we boldly venture forth and solve the problem of disinformation?

In a new article for the peer-reviewed Journal of Media Law, Andreas Jungherr poses questions that those calling for greater control of digital communication spaces by corporations, experts, and governments should answer:

First, they need to be clear about what they mean by disinformation, how they go about establishing it, and how they make sure not to become participants in legitimate political competition, favoring one side or the other.

Second, public discourse tends to treat disinformation largely as a problem of unruly, unreliable, and unstable publics that lend themselves to manipulation. But what if disinformation is a problem of unruly, unreliable, and unstable political elites?

Third, what are the reach and effects of digital disinformation anyway? If disinformation is such a threat that we severely restrict important structures of the public arena and political speech, there should be evidence of that threat. But current evidence largely points to limited reach and effects.

The risks of regulatory overreach are well documented. For one, overreach can stifle unruly but legitimate political speech. In addition, alarmist warnings that exaggerate the dangers of disinformation can lower satisfaction with democracy, increase support for restrictive regulation, and erode confidence in news and information, true or false.

So yes, there are severe risks in overplaying the actual dangers of disinformation. Before accepting these adverse effects, we should be very sure about the actual impact of digital disinformation.

To be clear: this is not a claim that disinformation does not exist. There clearly is disinformation in digital communication environments, and it is somewhat harder to police there than in traditional ones.

But, as with any form of communication, the effects of disinformation appear to be limited and highly context dependent. Harms are possible but likely embedded in other factors, such as economic or cultural insecurity or relative deprivation.

We must make sure that the empirical basis for the supposed dangers of disinformation is sound before we boldly innovate and increase central corporate and governmental control, which can result in chilling legitimate speech that contests authority and the powerful.

Looking back at the conceptual shambles and empirical elusiveness of prior digital fears like echo chambers, filter bubbles, bots, or psychometric targeting, we need to start demanding better conceptual and empirical foundations for the regulation of digital communication spaces.

Abstract: The threat of digital disinformation is a staple in public discourse. News media feature examples of digital disinformation prominently. Politicians regularly accuse opponents of slinging disinformation. Regulators justify initiatives to increase corporate and state control over digital communication environments with the threats disinformation poses to democracies. But responsible regulation means establishing a balance between the risks of disinformation and the risks of regulatory interventions. This calls for a solid, empirically grounded understanding of the reach and effects of digital disinformation and the underlying mechanisms. This article provides a set of questions that a responsible approach to the regulation of disinformation needs to address.

Andreas Jungherr. 2024. Foundational questions for the regulation of digital disinformation. Journal of Media Law. Online first. doi: 10.1080/17577632.2024.2362484