Fake News! Propaganda! Manipulation! Conspiracy! The digital realm is plagued by content that is accidentally or intentionally false, harmful – or both. This chapter addresses how we can begin to make sense of the different diseases and their symptoms – to support our digital health.
A narrow approach to information disorder focuses on verifiably false information. This form is relatively easy to identify and can be countered by hiring fact-checkers, tagging suspicious postings, removing false news posts, and so on. A more difficult malaise to diagnose emerges when we address deliberate attempts to distort news in order to promote ideologies, confuse audiences, create polarisation, or earn money through disinformation. While many of these activities are politically motivated, they can also take the form of clickbait practices and the intentional filtering of news for commercial purposes, to attract particular audiences. This approach is harder to study and verify empirically, as it pertains to the economic models of news markets and variations in the quality of news.
To help us understand different dimensions of false content online, Claire Wardle and Hossein Derakhshan created a framework of information disorder (Figure) (1). It makes a distinction between different types of content based on their intended purposes:
- Misinformation – false or misleading content, such as false connections or misleading headlines, that is shared without the intent to cause harm. This includes content that the sharer believes to be true and worth making public for the common good, even if its veracity has not been checked;
- Disinformation – false content that is intentionally created to deceive or cause harm, including deliberately fabricated conspiracy theories; and
- Malinformation – genuine content that is purposely used to cause harm, or content otherwise deployed for malicious purposes.
Figure: Types of Information Disorder (2022) (2)
For audiences, the distinction between different types might not always be apparent – but for those attempting to remedy these disorders it matters. The framework of information disorder is now widely used by journalists, policy-makers, and researchers as their roadmap to false content online. Naturally, these actors need to focus on the truly harmful content. From a legal perspective, two things matter: the intentions of the content creator, and how untrue the content is. A journalist may accidentally include inaccurate information in a piece of news. In contrast, a propagandist may deliberately create fully fabricated content meant to deceive its audiences. (3)
In practice, then, information disorder can take many forms. As an example, the European Union (EU) multi-stakeholder High-Level Expert Group (HLEG) on Fake News and Online Disinformation identifies practices that go well beyond anything resembling “news”: automated accounts, networks of fake followers, fabricated or manipulated videos, targeted advertising, organized trolling, visual memes, and so on.
Similarly, information disorder includes many types of actions. In addition to the process of creating false content, disinformation is circulated in many ways, including posting, commenting, sharing, tweeting, and retweeting.
Finally, information disorder is not a disease without a cause. It is the product of actions by different stakeholders who help to inflame or remedy online harms. Online platforms and their underlying networks, protocols, and algorithms make the dissemination of mis-, dis-, and malinformation easy and viral. Because global platforms make money with user data, curbing the spread of false information is not in their interest as long as that information attracts eyeballs, likes, and shares. Additionally, various state and non-state political actors, for-profit actors, citizens individually or in groups, and infrastructures of circulation and amplification (including news media) may want to stop false information – or may want to create and spread it widely. (4)
References:
(1) Wardle, C., & Derakhshan, H. (2017). Information Disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe. https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
(2) Updated version by Wardle in 2022; see, e.g., https://faktabaari.fi/tapahtumat/claire-wardle-massive-problems-are-tackled-with-a-minimal-budget/
(3) Möller, J., Hameleers, M., & Ferreau, F. (n.d.). Types of disinformation and misinformation: Various types of disinformation and their dissemination from a communication science and legal perspective. https://www.die-medienanstalten.de/fileadmin/user_upload/die_medienanstalten/Publikationen/Weitere_Veroeffentlichungen/GVK_Sum
Minna Aslama Horowitz is a Docent at the University of Helsinki, a researcher at the Nordic Observatory for Digital Media and Information Disorder (NORDIS), a Fellow at St. John’s University, New York, and an Expert on Advocacy and Digital Rights at the Central European University, Vienna. She is also a member of the Think Tank of the Nordic Council of Ministers to address platformisation in the Nordics. Horowitz researches (public media) policies, digital rights, and media activism.
Artwork: Lumi Pönkä
Download the Digital Information Literacy Guide (PDF).