
Mapping Disinformation: Analysing the diffusion network of fake news and fact-checks in Italy during the COVID19 pandemic


Academic year: 2022



Abstract

In recent years, disinformation circulating on the internet and especially on social media has become a widespread concern. The urgency of the fake news problem lies in the fact that decisions taken on false or misleading information risk impacting democratic processes negatively. This is especially true during a global health crisis, when the misinformation in question concerns scientific facts and informs the way people act in society. Focusing on the relational aspect of fake news, new insights and hypotheses can be generated with a relatively novel method, social network analysis. This research provides an example of the method applied to political problems by analysing the misinformation and fact-checking diffusion network in the Italian Twitterverse during the second wave of COVID19. The network shows a tight core of misinformation and a peripheral fact-checking region approximating a spanning tree. Although some level of polarisation is observed, the resulting network shows no evidence of echo chambers hindering interaction between the misinformation and fact-checking clusters. Actor-level analysis revealed that the majority of the users interacting in the network are human and that the most influential and active users share misinformation only. The findings of this work are presented to show how network analysis can contribute both to mitigation strategies in particular and to social and political science research in general.

Keywords: fake news, disinformation, fact-checking, social network analysis, diffusion network.

Word count: 18885


To JM and to all those who have been there, on the other side of a computer screen


TABLE OF CONTENTS

INTRODUCTION

AIM, RATIONALE, AND SCOPE

OUTLINE

POLITICAL COMMUNICATION AND FAKE NEWS

THE FAKE NEWS EXPLOSION

FAKE NEWS, MISINFORMATION, DISINFORMATION

STUDYING FAKE NEWS

FIGHTING FAKE NEWS

FAKE NEWS AND POLITICAL BEHAVIOUR

POLITICS OF FALSEHOOD AND THREAT TO DEMOCRACY?

SOCIAL NETWORK ANALYSIS

OVERVIEW

THEORETICAL FRAMEWORK

NETWORK-LEVEL ANALYSIS

NODE-LEVEL ANALYSIS

FAKE NEWS AND SOCIAL NETWORK ANALYSIS

METHODOLOGY

RESEARCH DESIGN

CHOICE OF TIMEFRAME AND BACKGROUND

LIST OF SOURCES

DATA COLLECTION

RESULTS

BUILDING THE NETWORK FROM COLLECTED DATA

RQ1: ANATOMY OF THE DIFFUSION NETWORK

RQ2: LOW-CREDIBILITY VS FACT-CHECKING

RQ3: PROMINENT NODES

DISCUSSION

CONCLUSION

BIBLIOGRAPHY


Introduction

Originally welcomed as democratising tools, social media platforms have in recent years come under heavy criticism regarding, among other things, the quality of the information spread through them. While on one hand they have been praised for mobilising and liberating, on the other they have been scrutinised over the quality of the ‘news’ circulating as a result of self-broadcasting. In an environment of increasing political polarisation, what started as considerations of partisanship soon became a condemnation of biased relayed information. In the months leading up to the 2016 US presidential election, the term ‘fake news’ became a buzzword used to discredit unfavourable information divulged against one or the other side of the political spectrum. As allegations of foreign involvement in spreading misleading or false information to support one political side or the other were confirmed to be founded, the topic became a staple of communication as well as political research.

Multidisciplinary in nature, fake news has been explored in various academic fields and with various methods. Because of the relational nature of the phenomenon, its social dimension constitutes a perfect example of a political problem that can be studied with the aid of network science, a largely underexploited method in the social and political sciences.

Aim, rationale, and scope

The ultimate goal of this work is to show how network science, and its application in social network analysis, can be an extremely valuable method for political scientists, especially as an increasingly large slice of the public debate takes place on online platforms where data is readily collectable. This is particularly true when trying to map online information diffusion networks on social media platforms, as particular diffusion patterns of particular types of information have been seen to correlate with political outcomes.

Network research linked to political science and political events, when present, is mostly focused on the United States and/or limited to mapping anglophone communities in these diffusion networks by searching specific hashtags or relying on US-based datasets to design search queries (see for example Shao et al. 2018; Grinberg et al. 2019). The European environment has not drawn as much attention in the scholarship, and the few studies that have been carried out concentrate only on timeframes surrounding elections (see for example Desigaud et al. 2017 and Ferrara 2017 on the 2017 French presidential election; Hedman et al. 2018 on the 2018 Swedish general election; and Giglietto et al. 2018 on the 2018 Italian general election). Yet these are not the only cases where political polarisation can be witnessed in the public sphere. As research on conspiracy theories and scientific news shows, “many health issues generate intense political conflict” (Edy and Risley-Baird 2016, 588), and this is particularly true in the wake of the pandemic outbreak of the novel coronavirus in 2020, the scope of which was unprecedented in recent history.

In the hope of contributing both to the field of fake news research and to that of online social network analysis applied to political problems, this research offers an exploratory analysis of the misinformation and fact-checking diffusion network in the Italian Twitterverse as the country entered the second wave of contagion in the COVID19 pandemic. As a tool to guide the analysis, the research uses three research questions covering both network- and node-level analysis by focusing on the diffusion network, the two multigraphs, and some specific actors. This research’s contribution to the scholarship is twofold. First, it furthers the characterisation of online environments in languages other than English by presenting an analysis of a misinformation network in the Italian online environment. Second, in contrast with previous studies on Italy, which focused on referenda and elections, this research focuses on a different type of political event, related to the government’s still extremely relevant handling of the COVID19 pandemic. Because of how anticipated the second lockdown measures were, the chosen timeframe for this analysis runs from the 23rd of October 2020, when the first local curfew was reintroduced in Lombardy (Agenzia ANSA 2020), sparking public discussion about possible nationwide measures, to the 3rd of November 2020, when the decree enforcing the de facto second national lockdown was signed (Decreto Del Presidente Del Consiglio Dei Ministri 3 Novembre 2020 2020).

Outline

The first two sections of this work provide a literature review and theoretical background both for fake news as a research topic in political science and for social network analysis as a method. After an overview of the term fake news, the first theory section will delve into the scholarly discourse surrounding fake news as a phenomenon and how it can be studied, with a focus on the social aspect. The overview of the nature and role of fake news will include its relationship with bots as a spreading device and with fact-checking as a counteraction measure.


Lastly, the links between fake news and political behaviour will be briefly touched upon, with a final consideration on the kind of discourse behind how fake news is usually approached.

The chapter on social network analysis is designed with the specific goal of introducing the method to a readership that might not be acquainted with network science in general. To ensure a sufficient understanding of the implications of a somewhat novel method, the chapter will begin with an overview of social network analysis and how to make sense of network data.

Next, it will draw a theoretical framework from network science theory that can be helpful in the study of disinformation with social network analysis methods. The theoretical framework will include two parts, one covering the network-level analysis and one covering the actor-level analysis. Finally, the chapter will review the literature on fake news that employs social network analysis as a method to situate this research in the scholarship.

The methodology section will focus first on the research design, with a brief background of the case in question and the chosen timeframe. As a source-based approach, it will expand on the chosen sources and describe the data collection process. This section will introduce the results chapter, where the research findings will be illustrated by tackling each research question separately, moving from the whole network to the subgraphs and, finally, to the actor-level analysis. The last section before the conclusion will discuss the results of the analysis together with its limitations, while also highlighting the potential of research employing social network analysis.


Political communication and fake news

The fake news explosion

The internet, and social media with it, was initially welcomed as a powerful tool for aiding democracy. It was seen as the ultimate step in breaking the communication monopoly of the broadcasting society, taking control over what is shared away from mass media as gatekeepers and giving it back to the masses (Bastos, Raimundo, and Travitzki 2013, 261). The chance that self-mass communication gave the individual to broadcast oneself (as the original YouTube slogan put it) was embraced as a promise of including oppressed groups, of breaking oppressive censorship, and of eventually actioning social change. The hope for positive change deriving from this democratisation of communication was encouraged by accounts of meaningful and transformative collective action that started with the help of social media information cascades (Weidmann and Rød 2019). Stories of the Twitter revolutions, especially in the direct aftermath of the Arab Spring, did nothing but confirm the belief in the absolute positivity of social media.

However, as news consumption shifted from traditional mass media towards online news and in particular social media, claims that news quality was suffering started spreading (Allcott and Gentzkow 2017). The issue of news quality online, and especially on social media, is not a negligible detail: as the Pew Research Center reports, in 2020 half of the US population said they get “news from social media ‘often’ or ‘sometimes’,” with Facebook, YouTube, and Twitter leading the list as regular sources of news (Shearer and Mitchell 2021). In an environment of rising concern over news quality, the term fake news became the buzzword of the 2016 US presidential election, as Donald Trump popularised it to accuse his opponents of spreading biased news. When evidence was found of ‘fake news farms’ where disinformation was manufactured and spread to influence democratic processes (Dawson and Innes 2019), the fake news explosion extended beyond the purely political discourse, approached with an often unmotivated sense of doom and despair for democracy and functioning society as a whole.

Fake news, misinformation, disinformation

Fake news is certainly not a recent phenomenon and is arguably as old as news itself. A synonym, dezinformatsiya (disinformation), was originally coined in Stalinist USSR to mean specifically secret police propaganda that blended falsehoods with truths to support a specific agenda. An example of disinformation is the Soviet campaign claiming that the HIV virus causing AIDS had been created by the US and released as a biological weapon (Schoen and Lamb 2012, 6).

However, despite truth manipulation being neither a novel activity nor one specific to the political environment, the term fake news has emerged in recent years as a heavily political one. During the 2016 US election it was employed on either side of the political spectrum to accuse either candidate of fostering an environment where falsehoods, constructed facts, and altered realities were spread to further political agendas (Giusti and Piras 2021, 3). Of course, the practice was arguably not peculiar to either of the candidates involved, nor necessarily more employed than in the past, but the campaign marked the mainstreaming of the term, the concept, and possibly an escalation of the phenomenon.

As there is no universal consensus on the definition of the term fake news, it is useful to describe it by comparing it to other similar terms which pertain to the domain of information of dubious reliability that spreads beyond the single consumer. These related and, at times, overlapping concepts include terms such as false news, deceptive news (or deception), disinformation, misinformation, clickbait, and rumours. In their work about how to study fake news, computer scientists Zhou and Zafarani identify three main characteristics by which these terms can be categorised: their authenticity, the intention behind their diffusion, and whether the information is news (Zhou and Zafarani 2020, 3). The abovementioned concepts encompass all variations of these three characteristics, both factual and non-factual, intentionally misleading or not, consisting of news or not. For example, a rumour could be anywhere on the spectrum of all three categories while deception is categorised as non-factual and intentionally misleading news. Disinformation and misinformation are both non-factual pieces of information that do not necessarily consist of news, but differ in their intention, with disinformation being intentionally misleading and misinformation being potentially in good faith.
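The three-way categorisation above can be made concrete with a small sketch. The encoding below is a simplified, illustrative reading of the Zhou and Zafarani typology, not the authors' exact table; the term-to-attribute mapping is an assumption made for illustration only.

```python
# A rough encoding of the Zhou and Zafarani (2020) typology; None means
# "can be either". This is a simplified reading, not the authors' exact table.
typology = {
    #                 (authentic, intentionally_misleading, is_news)
    "fake news":      (False, True, True),
    "deception":      (False, True, True),
    "disinformation": (False, True, None),
    "misinformation": (False, None, None),   # intent potentially in good faith
    "rumour":         (None, None, None),    # anywhere on all three spectra
}

def matches(term, authentic=None, misleading=None, news=None):
    """Check whether a concrete case is compatible with a term's definition."""
    spec = typology[term]
    for required, observed in zip(spec, (authentic, misleading, news)):
        if required is not None and observed is not None and required != observed:
            return False
    return True

# A false, intentionally misleading non-news post fits "disinformation",
# but not "deception", which is categorised as news.
print(matches("disinformation", authentic=False, misleading=True, news=False))  # True
print(matches("deception", authentic=False, misleading=True, news=False))       # False
```

The `None` wildcard captures the point made above that a rumour can sit anywhere on the spectrum of all three characteristics.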

However, fake news has come to mean any of those concepts, depending on the context of each utterance (Tandoc, Lim, and Ling 2018). This is because fake news as a term and as a phenomenon has been co-opted by politicians to label opposing views from traditional media, the press, and rival politicians, regardless of its accuracy in describing the specific piece of information (Brummette et al. 2018, 499). Relayed to the public, the term has gained emotional valence and has been discussed in “emotionally charged and ideologically similar” environments as a tool to potentially hinder the free flow of information rather than to describe a specific phenomenon (Brummette et al. 2018, 510).

Studying fake news

The ecosystem in which the fake news phenomenon operates, according to Shu et al., consists of three dimensions that have to be studied through separate means. First of all, fake news has a content dimension, in which the content of the single pieces of ‘news’ can be analysed to look for patterns (Shu, Bernard, and Liu 2018, 2). This is a valuable and important dimension of fake news studies: its development taps into journalism and propaganda research as well as fake news mitigation research, where it is used to train AI to recognise and flag fake news (Volkova et al. 2017). However, because of the diffusion element in the fake news phenomenon, there are two other dimensions to consider: the temporal dimension and the social dimension.

The temporal dimension looks at when the information is being shared and how the sharing patterns evolve over time, to investigate the behavioural aspect of the phenomenon. Finally, the social dimension focuses on who shares the information and how the information spreads; this includes the relationship between the various types of users in the phenomenon (publishers, diffusers, consumers) and how the spreading network is configured (Shu, Bernard, and Liu 2018, 2).

Analysis of the social dimension of fake news can be divided into two macro-areas. The first is the analysis of the diffusion network in its entirety, that is, the web of relations through which the piece of fake news has travelled and which constitutes the environment in which the news has been consumed. Analysing the social dimension of fake news at the network level can reveal information about this environment, such as whether the diffusion takes place inside an echo chamber or whether it is possible to identify mechanisms of diffusion that can explain how the news spreads (Shu, Bernard, and Liu 2018, 3–16).
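As a minimal illustration of what network-level analysis looks like in practice, the sketch below builds a toy retweet diffusion network in plain Python (no graph library) and computes two of the simplest network-level quantities, density and weakly connected components. All node names and edges are invented for illustration.

```python
from collections import defaultdict

# Hypothetical retweet edges: (retweeter, original_poster). Two separate
# cascades: one around "sourceX" and one around "sourceY".
edges = [
    ("userA", "sourceX"), ("userB", "sourceX"), ("userC", "userA"),
    ("userD", "sourceY"), ("userE", "userD"),
]

def density(edges):
    """Directed density: observed edges over all possible ordered pairs."""
    nodes = {n for e in edges for n in e}
    n = len(nodes)
    return len(set(edges)) / (n * (n - 1)) if n > 1 else 0.0

def weakly_connected_components(edges):
    """Components of the undirected version of the graph, via traversal."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

print(density(edges))                           # sparse: 5 of 42 possible edges
print(len(weakly_connected_components(edges)))  # prints 2: two separate cascades
```

Real diffusion networks are built the same way, only with edges extracted from collected retweet, reply, and mention data rather than hard-coded.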

The second macro-area of studying the ‘who’ in fake news research focuses on the user level. User-level analysis can help characterise consumers of fake news as well as highlight key spreaders, both to understand patterns of diffusion and to optimise mitigation. When looking at the user level of online fake news, and especially on social media, it is important to consider the role of social bots in the spreading process. A social bot is defined as a “computer algorithm that automatically produces content and interacts with humans on social media” for reasons that vary from efficiently aggregating information, to better customer care, to harmful purposes (Ferrara et al. 2016, 97). Research on the reach of social bots in social media networks displays an important correlation between very influential users and likely automated accounts in controversial online debates, such as the one around vaccines (Ferrara et al. 2016, 98). As bot-detection strategies develop, so do the algorithms trying to escape them, with bots on one side developing strategies to pass as humans (Ferrara et al. 2016, 99) and, on the other, the rise of cyborgs, semi-automated accounts “embedded within human social networks” (Grinberg et al. 2019, 4). While some research shows that “humans are mostly to blame for spreading falsehoods, rather than bots” (Scheufele and Krause 2019, 7666), the scalability potential that bots hold in the spread of fake news is not negligible, and research on how they populate the fake news diffusion environment online is invaluable, especially for detection and mitigation methods.
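At the node level, the simplest prominence measure is degree centrality. The sketch below, with invented account names, normalises in-degree in a toy retweet network to flag the most retweeted account; it is an illustration of the idea, not a bot-detection method.

```python
from collections import Counter

# Hypothetical retweet edges: (retweeter, original_poster); names invented.
retweets = [
    ("u1", "spreader"), ("u2", "spreader"), ("u3", "spreader"),
    ("u1", "factchecker"), ("u4", "u1"),
]

def in_degree_centrality(edges):
    """In-degree (times retweeted) divided by the number of other nodes."""
    nodes = {n for e in edges for n in e}
    counts = Counter(target for _, target in edges)
    denom = len(nodes) - 1
    return {n: counts.get(n, 0) / denom for n in nodes}

centrality = in_degree_centrality(retweets)
top = max(centrality, key=centrality.get)
print(top)  # prints "spreader": the most retweeted account
```

In an actual analysis, accounts flagged this way would then be inspected further, for instance with bot-likelihood scoring, to characterise who the prominent spreaders are.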

The social dimension of fake news analysis has extremely wide potential in research related to disinformation. Analysis of the social dimension of fake news diffusion can look at the phenomenon in its early stages to map the spread, with unique insights both on the properties of the connections and on possible mitigation strategies, and with applications beyond the content of the single piece of disinformation.

Fighting fake news

Studying fake news with the goal of developing detection and mitigation strategies cannot shy away from covering the practice most often associated with fighting fake news. Fact-checking, as the term clearly describes, is concerned with checking the facts presented in a piece of news to determine whether it contains truthful information. As journalism scholar Lucas Graves points out, many aspects of fact-checking have always been part of journalism and are indeed at the core of its precepts (Graves 2013, 112). However, with the advent of internet blogs and independent journalism, the online world witnessed a so-called “fact-checking explosion,” in which the role of fact-checking shifted from journalistic practice to a form of press criticism (Spivak 2018). It is with this slightly shifted focus that fact-checking and its synonyms, myth-busting and debunking, are understood in the context of fake news studies. As the Fact-Checkers’ Code of Principles from the International Fact-Checking Network states, fact-checking is the ideally fair, non-partisan, and transparent practice of independent journalists (Poynter 2018) committed “to publicis[ing] errors and falsehoods” (Graves 2013, 3) in political discourse as well as in the media in general and “to sorting fact from fiction” (Silverman 2015, 71).

However, fact-checking has not been exempt from criticism, both on an epistemological level and on a more practical level concerning its effectiveness as a strategy to fight fake news.

Epistemologically, the biggest issue with fact-checking practice has to do with the role of arbiter of factuality that it takes upon itself and the claims of fairness and non-partisanship it builds its image around (Uscinski and Butler 2013). Even granting that every piece of fact-checking is unbiased in its contents, the very choice of what to fact-check and what not to fact-check carries some level of selection bias that cannot be avoided. This becomes even more evident when value-laden claims, such as political statements, are checked despite it being impossible to test them “for their correspondence to reality” in the way fact-checking claims to operate (Graves 2017, 520). When engaging in this kind of practice, of which even fact-checking giant PolitiFact has been found guilty (Nieminen and Sankari 2021), fact-checkers place themselves as part of the political discourse while also maintaining their self-proclaimed role as holders of the truth, at the risk of degenerating into truth policing and censorship (Farkas and Schou 2019, 136).

Even so, it is not yet clear whether fact-checking is effective in fighting fake news. Research on rumours, conspiracy theories, and misinformation has found evidence of a “backfire effect,” first observed in the analysis of political misperceptions (Nyhan and Reifler 2010, 307). Factual corrections of misperceptions, i.e. fact-checks or debunks, are found to have counterproductive results, specifically regarding issues that are perceived as controversial. Further research has found this to be the case even in the scientific and medical sphere of public discourse. A study on the misperceptions surrounding the flu vaccine shows that “corrective information […] decreased vaccination intent” (Nyhan and Reifler 2015, 463). Similarly, work on the scientifically refuted belief that vaccines cause autism found that, when confronted with corrective information, rumour communities “respond[ed] not only with psychological resistance, but by publicly […] counterarguing debunking messages,” thus strengthening the community around the rumour (Edy and Risley-Baird 2016, 591).

Despite the criticism fact-checking has received as a mitigation strategy, in taking on the role of watchdog of news production and diffusion it has developed a somewhat symbiotic relationship with the disinformation phenomenon, which makes the two practices particularly similar in terms of how they can be studied. Like the fake news they fight, fact-checkers operate in the online and social media environment as well. Because of the virality element that characterises the spreading of fake news online, fact-checkers are led to follow the same diffusion patterns as the phenomenon they are trying to mitigate, in the hope of eventually exposing to the fact-check the same people who were exposed to the fake piece of news (Silverman 2015, 138). It follows that an important element in fact-checking is its social dimension, which can inform and refine fake news mitigation practices. However, the literature on fact-checking sits mostly on the qualitative and ethnographic side of the spectrum, often focusing on case studies of specific independent fact-checking organisations (see for example Silverman 2015; Graves 2017; or Haigh, Haigh, and Kozak 2017). Where it exists, research that tackles the social dimension of fact-checking in conjunction with fake news has mostly focused on the case of the US and merely political fact-checking (see for example the Hoaxy algorithm in Shao et al. 2018).

Fake news and political behaviour

As fake news came to prominence with Donald Trump during his campaign for the 2016 US election, most fake news research has focused on the nexus with politics, and in particular elections. After allegations of Russian interference in foreign national democratic processes were demonstrated to be founded (Office of the Director of National Intelligence 2017), studies on fake news circulating during the 2016 US presidential election found a strong positive correlation between fake news and populism (Allcott and Gentzkow 2017).

In the field, this correlation has consistently been found to hold and in many cases was associated with a rise in votes for the populist party in question. For example, during the Italian general election in 2018, circulating misinformation was seen to favour the populist Lega and M5S candidates (Giglietto et al. 2018), which then became the two largest parties in the Italian parliament, constituting the first populist majority in Western Europe (D’Alimonte 2019).

However, evidence of a correlation is not sufficient to establish a causal connection between fake news and the rise of populism, because of selection bias. The causal link is hard to explore, since those who are exposed to populist fake news could be self-selecting into the polarised discussions in which the fake news circulates because of a pre-existing preference for and adherence to populist discourse. Furthermore, in experiments designed to avoid these biases, fake news exposure does not seem to explain much of the correlation with populist voting (Cantarella, Fraccaroli, and Volpe 2020, 34).

While a direct causal relationship seems hard to prove, a good deal of research on fake news and its impact on politics has been carried out in connection with another concept, that of echo chambers, or filter bubbles (Pariser 2014). First pointed out as a mechanism in conservative news media outlets, the term echo chamber describes the segregation of the ideological frames to which one is exposed. In its original connotation, the chamber was an enclosed media space that constituted the only news media a person consumed, and the echo was produced by the outlets that reiterated the same frames of reference, producing positive feedback loops reinforcing the information that was passed on (Jamieson and Cappella 2008, 76).

The debate surrounding echo chambers or filter bubbles online is a rather complex one, as they are both an inadvertent and a sought-after result of how social media works. Social platforms’ main goal is to keep the user on the platform as long as possible, and their algorithms are therefore naturally designed to meet user preferences. Regardless of how the platform acquired this type of information, social media tries to offer the user as personalised an experience as possible. Ads are targeted and personalised based on user characteristics and search history, the content proposed mimics past behaviour in content consumption, and the suggestions of new users to follow/befriend/interact with are based on shared connections, shared interests, or shared beliefs (Sasahara et al. 2020). On these platforms, users have the power to completely cut ties with other users, a practice that research shows is more often correlated with those users’ online behaviour than with offline components (Sibona and Walczak 2011). These mechanisms, although designed to make time spent on the platform more enjoyable and entertaining, have the (hopefully unintended) result of creating artificial online environments where no content around us disagrees with our worldviews and we are far away (in terms of click paths, at least) from those who might hold different beliefs.
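The suggestion mechanism based on shared connections can be sketched as a simple triadic-closure heuristic: recommend the accounts most followed by one's own followees. The graph and account names below are invented, and real platforms combine far richer signals; this is only a minimal sketch of the principle.

```python
from collections import Counter
from itertools import chain

# Hypothetical follow graph: user -> set of accounts they follow.
follows = {
    "alice": {"bob", "carol"},
    "bob": {"carol", "dave"},
    "carol": {"dave"},
    "dave": set(),
}

def suggest(user, follows, k=1):
    """Rank unfollowed accounts by how many of `user`'s followees follow them."""
    already = follows[user] | {user}
    candidates = Counter(
        chain.from_iterable(follows[f] for f in follows[user])
    )
    return [a for a, _ in candidates.most_common() if a not in already][:k]

print(suggest("alice", follows))  # prints ['dave']: followed by both bob and carol
```

Because the heuristic only ever surfaces accounts already embedded in one's existing neighbourhood, repeated application reinforces the homophily that the echo-chamber literature describes.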

As fake news operates in polarised discourses, its connection with echo chambers is particularly important. Even though proof that these bubbles exacerbate polarisation is lacking (Guess et al. 2018), echo chambers are correlated with other phenomena that might affect democratic processes negatively. Research on politicians’ activity on social media shows that political opinion leaders tend to use the algorithms behind the creation of echo chambers to determine which political discussion to adhere to, thus reinforcing homogeneous ideas within polarised groups (Druckman, Levendusky, and McLain 2018). Furthermore, even if they are not the cause of discourse polarisation online, echo chambers can work as amplifiers of fringe narratives, conspiracy theories, and disinformation, making them reach mainstream media, where wider exposure might result in further radicalisation (Tollefson 2021).

Ultimately, it is difficult to determine to what extent fake news impacts political behaviour, but it is clear that the relationship is worth exploring further. Despite not finding compelling evidence of a causal link between misinformation consumption and political behaviour, research on rumours shows that the diffusion of misinformation can create communities of misperception where incorrect or dangerous information is spread and reiterated, and that these communities can ultimately contribute to political culture, changing the level on which political discussions take place and thus impacting political balances (Edy and Risley-Baird 2016, 588).

Politics of falsehood and threat to democracy?

The discussion on fake news is often approached with a sense of doom for the world as we know it. The discourse revolves almost uniquely around the idea that we have entered the age of post-truth and that the spread of disinformation is, in the words of French President Emmanuel Macron, “an attempt to corrode the very spirit of our democracies” (Macron 2018, as cited in Farkas and Schou 2019, 1). In the public sphere and in political communication, opinions diverge on which side of the political spectrum is thought to be perpetuating this post-truth state. The claim, though, is essentially the same on either side: as Habermas already discussed in 2006, “a post truth democracy […] would no longer be a democracy” (Habermas 2006, 18).

As much as social media had been welcomed with overemphasised excitement about its democratising powers at the turn of the century, the public debate soon enough recognised that user-generated content came with drawbacks as well. On either side of the political spectrum, these drawbacks were identified with fake news, which, in the aftermath of the 2016 American presidential election, became the scapegoat for the rise of alt-right groups in Western countries and the related decline of liberal democracies. However, research on conspiracy theories shows that they can have positive social consequences and that the current narrative with which they are depicted can lead to an othering of the political voices that express such phenomena, which is detrimental to democracy (Dentith 2019, 99).


The narrative that casts democracy as threatened by fake news and its ubiquity on social media, though, conceals a rather normative belief about democracy itself. In this understanding, facts are inherent in the concept of democracy, and this relatively sudden decay of a culture of facts in political communication is therefore seen as a decay in the quality of Western liberal democracies. However, democracy is an essentially contested concept, whose understanding has changed through time and space and is not necessarily shared by everyone in the same community (Kekes 1977, 71). What one understands of an essentially contested concept informs research, for better and for worse, even if that research seeks to shy away from concept analysis, and this makes it necessary to clarify what understandings are at the root of one’s inquiry. A concept analysis of democracy as a means to correctly frame fake news as a phenomenon is well beyond the scope of this research. It is, however, important to point out that in political discourse and academic scholarship alike, the statements connecting fake news with democratic collapse are weighted down by a specific narrative.

The same narrative that lies behind this type of discourse also informs the range of solutions proposed for tackling the post-truth and fake news issue. However much fake news and the misinformed masses can be blamed for the increasing popularity of right-wing rhetoric, quickly labelling right-wing supporters as misinformed fosters a discourse where democratic participation is ultimately seen as undesirable and the true roots of populist success are overlooked (Mouffe 2005, 71). Moreover, despite the commendable work that fact-checkers do online, debunking can and does backfire (Silverman 2015, 47), and counterarguments can have the effect of strengthening rumour communities around their beliefs (Edy and Risley-Baird 2016, 596; Scheufele and Krause 2019, 7665). Policing the truth (whether by human fact-checkers or by fact-checking algorithms on social media platforms) presents several epistemological issues (Uscinski and Butler 2013) that risk degenerating into something more problematic (Farkas and Schou 2019, 136). The framing that emerges from these attempts at dealing with fake news risks losing perspective by “drawing attention to epistemic rather than directly political questions” that are at stake (Moore 2019, 111). Facing fake news with the understanding that it is corroding truth as the very pillar of democracy seems to lead to solutions just as corrosive for the defended democracy.

Instead, fake news can be understood as potentially damaging to democratic processes in that it can be used as a means of manipulating collective political action. In research on information warfare and propaganda, organised information campaigns have the main goal of creating confusion, disorder, and mistrust within the campaign targets (Jowett and O’Donnell 2012). Misinformation and disinformation, even when not malicious, have the similar effect of “scrambling and dividing public opinion in order to benefit from the resultant chaos” (Loveless 2021, 65). The benefit is not necessarily of a political nature; it can just as well be merely financial (click-bait for website traffic is one example), but the divisive result stays the same.

Fake news achieves this division by appealing to emotion and by operating in an environment conducive to self-selected exposure to information. As social media enhance patterns of selective news consumption through various mechanisms in their functioning (see the discussion on echo chambers and filter bubbles above), fake news furthers polarization by shifting the level on which matters are discussed from the factual to the emotional. In this sense, fake news is potentially a weapon with intended and unintended repercussions on democratic processes (Loveless 2021, 69). In democratic processes, individuals or groups can influence policy and legislation in ways that have an impact on society as a whole. When beliefs disconnected from shared reality inform actions, these actions have the potential of going against what would collectively make sense for a society. This is the case, for example, of fake news dividing public opinion on climate change and thus hindering coordinated responses that would be more likely to tackle the issue (Loveless 2021, 69).

This work does not investigate fake news because it considers it the cause of democratic decline. The feelings to which fake news appeals are very real, and it is imperative to treat them as such when studying the effects of fake news and their correlation with political events. However, the specific discussion on whether democracy will crumble in the age of post-truth is beyond the scope of this research. Instead, the stance held in this work is that, in the words of Farkas and Schou, “the current democratic crisis is in fact neither sudden nor linked to issues of factuality or reason alone” (Farkas and Schou 2019, 126).

Fake news is not threatening democracy, at least not in the sense it is normally understood to do so, but it should not be overlooked for this reason. Decisions that are taken on false or misleading information can affect society negatively, especially when the misinformation in question concerns scientific facts that can be demonstrated. However, as mentioned above, the discourse about fake news has a prominent political dimension, which includes the narrative of it being a threat to democracy, from which it cannot be isolated. Because of this political and emotional connotation of the term, this research understands and operationalises fake news as the phenomenon where non-factual, disprovable, or extremely deceivingly portrayed information is relayed as news, with or without the intention to deceive. In line with the design of this research and following Zhou and Zafarani’s conceptual mapping work, the preferred term of this work is misinformation, where no assumption is made about the intention behind the piece of information spread. Other terms on the fake news spectrum, however politically connotated they might also be, can help focus attention on an understanding of the phenomenon itself rather than on the effect it has on information diffusion and on the correct functioning of democratic processes.



Social Network Analysis

In the relatively long history of understanding deception in the internet age, fake news research has always been explicitly interdisciplinary in nature (Lazer et al. 2018, 1096). With the rise of social media and the gradual shift of news consumption from traditional media to such platforms came the realisation that social media platforms created a fertile environment “to accelerate fake news dissemination […] and encourage malicious entities to create, publish and spread fake news” (Zhou and Zafarani 2020, 2). Thanks to unprecedented access to big fake news data, analysis of fake news in conjunction with social media has taken various paths in various disciplines, with both a news-related approach and a user-related approach (Zhou and Zafarani 2020, 6).

Profiting from the possibility of gathering large amounts of relational data from, among other sources, social media, a method that has found great potential for application in fake news research is social network analysis (SNA) (Can and Alatas 2019). Since social network analysis considers the relations that connect users, be it friendship, sharing posts, retweeting, or commenting, it offers the opportunity to study at scale the social dimension of the news dissemination ecosystem on social media (Shu, Bernard, and Liu 2018, 2). However popular it might be with network scientists, the method is not as well known in much of social and political science research. Therefore, as most of it will be new material to the readership of this thesis, a comprehensive overview of the method is necessary before the methodology used in this work is presented.

Overview

Social network analysis is essentially the mapping and measuring of the “relationships and flows between people, groups, organizations, computers, or other information/knowledge processing entities” (McGuire et al. 2016). Born in behavioural science to study social group structures specifically, it later evolved into network science when it was coupled with pre-existing graph theory, which added the mathematical scaffolding of the field. With relational phenomena of any kind as its main focus, network science is naturally a very interdisciplinary field, profiting over the years from advancements in all disciplines, despite the different goals and challenges of each field (Barabási and Pósfai 2016, sec. 1.4). As online social networks developed and large amounts of social network data became widely available and more easily computable, SNA is once again growing as a method in the social sciences.

As social networks are the structures of actors and their relationships with each other, SNA is the analysis that aims at revealing the information that relates to these actors in their relationship network. The complexity in grasping network science lies in the fact that “the purposes and emphases of network research call for some different considerations” (Hanneman and Riddle 2005, chap. 1). Unlike conventional data, its focus is not how actors compare to each other based on their attributes. The focus is instead on the connections, uncovering the patterns in the overall network in which actors are embedded; actors are described by their relations rather than by their attributes. It is for this reason that the basic data structure of network data is a square array (or matrix) where rows and columns are the actors and the information in every cell is their relationship. In network science, the actors are called nodes (N) or vertices and their relationships, the connections between them, are called links or edges (E). These connections can be undirected or directed, i.e. either only denoting the link between two nodes or adding information on the direction of the connection, and binary or valued, that is, with or without a weight on the type of connection. Which data format is chosen depends, of course, on the kind of information the data convey and on whether one or the other format is more meaningful to the researcher.
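As an illustration of this data structure, the sketch below (hypothetical toy data, assuming the Python networkx library is available) builds the same directed, valued network both as an adjacency matrix and as a graph object. Rows are source actors, columns are target actors, and each cell holds the weight of the tie:

```python
# A minimal sketch: one directed, weighted interaction network stored
# both as a square adjacency matrix and as a networkx DiGraph.
import networkx as nx
import numpy as np

actors = ["A", "B", "C"]
# Cell (i, j) is the weight of the tie from actor i to actor j
# (e.g. number of retweets); 0 means no tie.
adjacency = np.array([
    [0, 2, 1],   # A retweeted B twice and C once
    [0, 0, 3],   # B retweeted C three times
    [0, 0, 0],   # C retweeted no one
])

G = nx.DiGraph()
G.add_nodes_from(actors)
for i, src in enumerate(actors):
    for j, dst in enumerate(actors):
        if adjacency[i, j] > 0:
            G.add_edge(src, dst, weight=int(adjacency[i, j]))

print(G.number_of_nodes(), G.number_of_edges())
```

Note that the matrix is not symmetric: the tie from A to B carries no implication of a tie from B to A, which is exactly the information a directed format preserves.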

Another fundamental difference between conventional quantitative social science data and network data is how it relates to the application of inferential statistics. Questions about stability, reproducibility, or generalizability of results that in conventional social science data tie back to probability sampling are usually not of interest in network research. While sampling in network analysis is never drawn with probability methods, this is deemed irrelevant, as network research aims at mapping the full network (with the possibility of missing data) of the population of interest and rarely needs to generalise to a larger population. Furthermore, the logic of hypothesis testing and standard errors is problematic in network analysis, as “network observations are almost always non-independent, by definition” (Hanneman and Riddle 2005, chap. 1), and conventional inferential formulas based on the standard deviation of independent random samples will therefore not apply to network data. Instead, the interest in network statistics is in the probability that a parameter is not the product of chance, which is assessed through simulations of edge redistribution on networks with similar characteristics.
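The simulation logic just described can be sketched as follows. This is a minimal illustration, assuming the Python networkx library; a degree-preserving rewiring of a benchmark graph stands in for “edge redistribution on networks with similar characteristics”, and the number of replicates is kept deliberately small:

```python
# Sketch: compare an observed network statistic against the same statistic
# on degree-preserving rewirings to estimate whether it could be random.
import random
import networkx as nx

random.seed(42)
G = nx.karate_club_graph()            # stand-in for an observed network
observed = nx.transitivity(G)         # observed clustering statistic

null_values = []
for _ in range(50):
    R = G.copy()
    # double_edge_swap rewires links while preserving every node's degree
    nx.double_edge_swap(R, nswap=200, max_tries=10000,
                        seed=random.randint(0, 10**6))
    null_values.append(nx.transitivity(R))

# Empirical p-value: share of rewired networks at least as clustered
# as the observed one
p_value = sum(v >= observed for v in null_values) / len(null_values)
print(round(observed, 3), round(p_value, 3))
```

A small p-value here means the observed clustering is unlikely to arise by chance in networks with the same degree sequence, which is the sense of “non-randomness” the paragraph above refers to.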

While causal inference might not be the specific goal of this work, it is necessary to point out these peculiarities in network research. The theoretical and methodological challenges that network data presents to conventional approaches to quantitative research are not to be interpreted as faults in the method. They instead highlight the importance of network research in other theoretical aspects of the analysis such as mechanisms, processes, and the role of the communities in relational phenomena, without taking away the fundamentally empirical and data-driven essence of the method.

Theoretical framework

Employing social network analysis for fake news research can give insights on the social dimension of fake news that other methods do not necessarily explore. In drawing a theoretical framework within network science (and SNA) that can be fruitful in disinformation research, it is necessary to distinguish between two different levels of analysis that SNA can encompass and that will be covered in this research: the network level and the actor level.

Network-level analysis

At the network level, the first measurable element that can give insights on social dynamics is the type of network and its topology, that is, the shape (or shapes) in which the network presents itself.

Network topology, and network visualisation in general, is extremely powerful. While network metrics can tell the story of the network to those within the field, network visualisation can highlight structures, patterns, and groups at first glance in a way that is understandable for everyone. Even to the network scientist, visualisation that brings out the information encoded in network topology is fundamental, since the topology of a network “determines the web’s connectivity and consequently how effectively we can locate information on it” (Réka, Hawoong, and Barabási 1999, 130).

In real (empirical) networks, the connections of every node are (almost) never random, meaning also that real networks do not follow random network behaviour. For example, most real complex networks do not follow a Poisson distribution for the number of connections that nodes have, and instead of eventually becoming fully connected (complete), they tend to stay in a supercritical situation, with components of different sizes (Barabási and Pósfai 2016, sec. 3.5). Unlike what happens with random network models, therefore, network structure in real networks can expose the mechanisms through which the specific network is built, since this information is encoded in the very fabric of formed ties.

Real networks that present some highly connected nodes and several other nodes with very few connections, for example, seem to fit a different type of network structure model than the random network. Instead of following a Poisson distribution, the number of connections per node in these networks follows a power law: they are scale-free networks (see Figure 2) (Barabási and Pósfai 2016, sec. 4.2). While not all real networks are scale-free, many of them are, and they display interesting properties and patterns, such as the existence of hubs, highly connected nodes that thanks to their positions and their connections greatly reduce the distances between any two nodes in the network. Non-random networks are also non-neutral: their wiring (the connection pattern) follows preferred mixing, which in turn means that network structure and topology will be greatly influenced by the mechanisms that lie behind that preference. Political networks, for example, are never neutral (Barabási and Pósfai 2016, sec. 7.8).
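The contrast between the two degree distributions can be illustrated with networkx's built-in generators. This is a sketch with arbitrary model parameters: a preferential-attachment (Barabási–Albert) graph is compared with a random (Erdős–Rényi) graph of the same size and density:

```python
# Sketch: a scale-free model grows hubs, a random model does not.
import networkx as nx

n = 2000
ba = nx.barabasi_albert_graph(n, m=2, seed=1)              # preferential attachment
er = nx.gnm_random_graph(n, ba.number_of_edges(), seed=1)  # same density, random wiring

ba_max = max(dict(ba.degree()).values())
er_max = max(dict(er.degree()).values())
# Both graphs have the same average degree, but the scale-free model's
# maximum degree (its biggest hub) far exceeds the random graph's.
print(ba_max, er_max)
```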

These different patterns make predictions of spreading drastically different between random networks and scale-free networks. The connection between network topology and the dynamics of the spreading process is particularly relevant in so-called diffusion networks, which find wide application in epidemiology and information flow studies alike. Especially for models that aim at understanding fake news with the goal of developing strategies to control its dissemination, the connection between information flows and epidemiology is particularly relevant, since “limiting the spread of fake news can be seen as analogous to inoculation in the face of an epidemic” (Shu, Bernard, and Liu 2018, 19). From epidemic diffusion network research, we can identify four main preferred-mixing models of network topology: the core infection model, the inverse core model, the spanning tree, and disjoint populations with bridges (Bearman, Moody, and Stovel 2004, 49).

The core infection model (Figure 3a) presents itself as a high-activity core that appears as a hairball and that diffuses the information to (‘infects’) the less densely populated peripheral population. The inverse core model, instead, has a central core that is not as dense and interconnected within itself but is directed outwards to peripheral nodes (Figure 3b) (Bearman, Moody, and Stovel 2004, 50). Both core and inverse core models have similar diffusion potentials, and their interconnectedness makes them extremely robust to random failures (the break of a link), since damage to peripheral links or nodes does not affect the network as a whole and the hubs are the only points of failure.

The bridging model (Figure 3c) presents two mostly separate populations that engage in different behaviour (in epidemic models one is high-risk and the other is low-risk) and that are connected by a few actors. These actors are not necessarily hubs (although they often have this role as well) but, because of their position, are bridges connecting two otherwise disjointed components (Bearman, Moody, and Stovel 2004, 51). In these networks, connectedness is dependent on these bridging nodes, and loss of such nodes or their connections can make the network completely segregated, hindering any kind of spreading between the two populations.

The spanning tree (Figure 3d) is, as the name suggests, a network that expands as a tree and, unlike the core models, has very low redundancy. This means that it contains only the minimum number of links needed to be a connected graph and that it does not close cycles (it is acyclic). In a tree graph where A is connected to B and to C, the connection between B and C, which would close the cycle, is considered superfluous because both B and C can already be reached through A (Wasserman and Faust 1994, 119). As a result, this model configuration is sparse and every node is potentially a bridge, making diffusion very effective but prone to failure (Bearman, Moody, and Stovel 2004, 51).
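The defining tree properties above (minimum number of links, acyclicity, every link a bridge) can be checked directly with networkx. A minimal sketch on a small synthetic tree:

```python
# Sketch: a tree on N nodes is connected, acyclic, and has exactly N - 1 links.
import networkx as nx

G = nx.balanced_tree(r=2, h=3)   # small tree: branching factor 2, height 3
n = G.number_of_nodes()
print(nx.is_tree(G), G.number_of_edges() == n - 1)

# In a tree every link is a bridge: removing any one disconnects the graph
print(len(list(nx.bridges(G))) == G.number_of_edges())

# Adding any extra link closes a cycle, i.e. introduces redundancy
H = G.copy()
leaves = [v for v in H if H.degree(v) == 1]
H.add_edge(leaves[0], leaves[1])
print(nx.is_tree(H))  # False: the new link is superfluous for connectivity
```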

The mechanisms through which nodes connect in a network are explained by various theories of tie formation that can have to do with the network’s self-organisation (encoded in the network’s own properties), with the actors’ attributes, or with exogenous factors (Lusher, Koskinen, and Robins 2013, 24). For example, the existence of hubs, highly connected nodes that shorten distances in the network, is explained by a network self-organisation mechanism that Barabási and Pósfai call preferential attachment, that is the preference of new nodes to connect to these hubs rather than less connected nodes (Barabási and Pósfai 2016, sec. 5.2). As a network evolves, new nodes tend to connect themselves to the most popular ones, contributing to a degree distribution in the network that resembles a power law, creating a scale-free network.

Preferential attachment is connected to another property of scale-free networks: assortativity. When wiring is non-neutral (i.e. not random), the mechanisms of popularity attachment among nodes can display two patterns of preferred mixing. The first is that of assortative networks (Figure 4a), where hubs tend to connect with other hubs, leaving small-degree nodes isolated. The second is the opposite mechanism, disassortativity (Figure 4c), where hubs tend not to connect to other hubs and instead create star-like motifs by connecting to small-degree nodes in the network. Disassortative networks present a hub-and-spoke topology, where hubs connect to small-degree nodes that depend on the hub for information transmission (Barabási and Pósfai 2016, sec. 7.2).
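Assortative mixing can be quantified with a single coefficient: the degree assortativity coefficient is positive when hubs link to hubs and negative for hub-and-spoke mixing. As a minimal sketch (assuming networkx), a pure star is the extreme disassortative case, since every tie joins the hub to a small-degree node:

```python
# Sketch: degree assortativity of a pure hub-and-spoke (star) network.
import networkx as nx

star = nx.star_graph(20)   # one hub tied to 20 small-degree nodes
r = nx.degree_assortativity_coefficient(star)
print(round(r, 2))         # close to -1: strongly disassortative
```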


Another mechanism of tie formation that is particularly relevant in the study of disinformation is that of homophily. As the term suggests, homophily refers to the mechanism by which nodes are more likely to connect with nodes with which they have the most attributes in common (Lusher, Koskinen, and Robins 2013, 18). In networks that map political attitudes, for example, this means in practice that actors will be more connected to other actors with similar attitudes. Unfortunately, the relationship between tie formation and the actor’s attributes is endogenous, and causality cannot be established on simple networks (Bail et al. 2018, 9216). However, this mechanism resonates with the very well-known concept of echo chambers, whose existence and impact in the online environment is widely discussed and researched (Balsamo et al. 2019, 2).

In network analysis that focuses on polarisation online, echo chambers are understood as segregated clusters where homogenous information is spread. As actors are more likely to connect with those most similar to them in beliefs as well as other attributes (homophily), they also tend to close triads, that is, connect with the connections of their connections, creating strong redundancy within clusters of likely similar actors (Sasahara et al. 2020). This redundancy in the network pulls the nodes together and away from others, clustering groups of actors where the circulating information is picked up and repeated multiple times: it “echoes.” Due to this segregation from areas of the network where different content is shared, the redundancy of connections, and the homophily that characterises in-chamber connections, confirmation bias and selective exposure can result in a vicious cycle of polarisation and further segregation (Sasahara et al. 2020).

However, echo chambers represent only one type of what network analysis calls communities: clusters that can play a role in the functioning of the network. While clustering is natural in networks, organisational clustering that plays a role in processes such as opinion formation and reinforcement can be identified through community detection methods if it is encoded in the network topology. The social meaning of communities can vary depending on the network-specific context, but in general communities detected with data-driven approaches on information diffusion networks can be interpreted in three ways. The weakest relationship criterion is that of topical similarity, where the only thing that brings the actors together is the topic, not necessarily their attitude towards it (Stoltenberg, Maier, and Waldherr 2019, 122). Going up in cohesion levels, communities can also be understood as ideological associations or, at the highest level of cohesion, as strategic alliances (Stoltenberg, Maier, and Waldherr 2019, 123). While strategic alliances indicate collective action because of the strong relationship they represent, ideological association is the most common understanding of communities, and it encompasses the concept of echo chambers as well. Without necessarily having the segregation component (which is instead a fundamental element of the echo chamber definition), ideological association implies shared beliefs and positions among the nodes in the community. It represents a discourse coalition, that is, a common position on a certain narrative, which in segregated communities of this type can degenerate into the “spiral of segregation” that echo chambers attempt to describe.
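One common data-driven approach of the kind mentioned above is modularity-based community detection. A minimal sketch with networkx's built-in greedy algorithm, run on a classic benchmark graph rather than real Twitter data:

```python
# Sketch: modularity-based community detection on a benchmark graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()
communities = greedy_modularity_communities(G)

# Every node lands in exactly one cluster; whether a cluster reflects
# topical similarity, ideological association, or a strategic alliance
# is a separate, context-dependent interpretive step.
print(len(communities), [len(c) for c in communities])
```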

Another interesting insight on social dynamics encoded in the network is the phenomenon opposite to clustering: holes in the structural fabric of the network. Structural holes, as sociologist Ronald Burt called them, are parts of the network’s web where connections are sparse (as opposed to clusters, where they are dense) and the network is held together by a few extremely important nodes that act as bridges between otherwise disconnected groups. The argument for the importance of structural holes in a network draws on the insight of Mark Granovetter, who first in the field recognised that weak ties (those that are not redundant) are in fact stronger than strong ties (redundant ties within a group), since they constitute the only type of tie that can bridge communities together (Granovetter 1973, 1366). In Burt’s understanding, these weak ties are holes in the structure of the network that give certain actors the “opportunity to broker the flow of information” between groups (Burt 2001, 208). Because of the existence of clustered groups within which information travels faster than it does across groups, the presence or absence of brokerage across these structural holes is fundamental to the analysis of information diffusion. As noted above, a diffusion network with disjointed populations that engage in different behaviour will be only as robust as its bridging nodes, as their failure (or the failure of their weak ties across populations) will isolate the two populations, stopping the information flow between them.

Node-level analysis

While network topology can display the presence or absence of structural patterns, in complex networks it is necessary to move to the node level of analysis to be able to analyse the actors and understand what roles they occupy in the network structure. The most useful and relevant information that can be extracted from actor-level data in a network relates to the nodes’ centrality.

Centrality is understood in SNA as a “variety of measures designed to highlight the differences between important and unimportant actors” (Wasserman and Faust 1994, 169). As the term suggests, centrality measures derive the level of importance of a node from the location it occupies, whether it resides in the periphery of the network or is central to it. Furthermore, since a node can occupy various locations in a network and be central from different points of view, there are different measures of centrality, depicting the different roles that central nodes can have. While centrality measures draw graph-related conclusions on the importance or prominence of these nodes, it is important to point out that the roles that actors have in the real network, and which centrality measures are relevant to the study in question, will always be context-dependent.

In general, the main actor centrality measures are degree centrality, closeness centrality, eigenvector centrality, and betweenness centrality (see Figure 5). Degree centrality measures the number of connections an actor has. In directed graphs these connections can be of two kinds, incoming or outgoing, and indicate two different roles that the actor can play in the network. Out-degree is the number of outgoing connections, denoting how active a node is (Ward, Stovel, and Sacks 2011, 250). In Twitter networks, out-degree is an important measure of participation. On Twitter, an account cannot control the retweets or mentions that it receives, which means that “a node can be mentioned without actually taking part in the conversation. However, to make a mention, one has to tweet about the subject” (Recuero, Zago, and Soares 2019, 4). The higher the out-degree compared to the rest of the network, the more active the node is on the platform. Higher out-degrees are therefore connected to the concept of superparticipants or superspreaders, overly active nodes that, despite not being hubs per se, are capable of affecting conversations (such as political hashtags) because they generate “highly replicated messages” (Bastos, Raimundo, and Travitzki 2013, 268). In-degree, instead, is traditionally connected with popularity (Wasserman and Faust 1994, 202). In Twitter networks it marks the number of mentions, retweets, and replies an account gets and is therefore associated with the idea of opinion leaders, people who spark conversations and have the ability to influence them because of their reputation or authority (Recuero, Zago, and Soares 2019, 4).

The other three centrality measures have to do with the position of the node with respect to other nodes. At the actor level, closeness centrality is based on the path lengths between the node and all the other nodes in the network. An actor with a high closeness centrality score has shorter paths to most nodes than average and is therefore closer to other actors; an actor with closeness index 0 is an isolate, as it is not connected with anyone and the distance between said actor and any other actor in the network is infinite (Wasserman and Faust 1994, 188). Eigenvector centrality instead measures the extent to which an actor is well connected to other well-connected actors, thus capturing prestige and influence (Ward, Stovel, and Sacks 2011, 250). A derivative of the eigenvector centrality algorithm is PageRank, the first and best-known algorithm that Google uses to rank web pages in Google Search results based on their relative importance for the specific search (Brin and Page 1998). The last centrality measure is betweenness centrality, the extent to which an actor lies on the shortest paths between two other actors (Freeman 1977). In Freeman’s words, “a point falling between two others can facilitate, block, distort or falsify communication between the two. It can more or less completely control the communication” (Freeman 1977, 36). This centrality measure, therefore, is deeply connected with the network property of structural holes discussed in the previous section, and Freeman’s insight in designing it contributed to Burt’s concept. Betweenness centrality captures the importance of nodes in terms of how fundamental they are to other nodes’ access to information, and the value is higher the more often the node is present on the paths between two other nodes. While all of the abovementioned centrality measures, including betweenness, are often positively correlated, high betweenness centrality is not a given in nodes with otherwise high centrality indices, since this measure takes into account the quality of connections in terms of control of the information flow (Wasserman and Faust 1994, 192).
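All of these measures are available in networkx. The sketch below computes them on a tiny, hypothetical directed mention network (the account names are invented for illustration), where an edge from u to v means that u mentioned or retweeted v:

```python
# Sketch: the main centrality measures on a toy directed mention network.
import networkx as nx

G = nx.DiGraph([
    ("alice", "news_hub"), ("bob", "news_hub"), ("carol", "news_hub"),
    ("news_hub", "factchecker"), ("dave", "carol"), ("carol", "bob"),
])

in_deg = nx.in_degree_centrality(G)      # popularity: being mentioned
out_deg = nx.out_degree_centrality(G)    # activity: mentioning others
close = nx.closeness_centrality(G)       # proximity to the rest of the network
between = nx.betweenness_centrality(G)   # brokerage along shortest paths
pagerank = nx.pagerank(G)                # eigenvector-family influence score

most_popular = max(in_deg, key=in_deg.get)
top_broker = max(between, key=between.get)
print(most_popular, top_broker)
```

In this toy network the same account happens to top both rankings, but as the text stresses, in empirical networks the measures need not coincide.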

The presence of brokers and bridges in the network’s structural holes can also be measured with more specific metrics: constraint and bridging. Constraint is the algorithm designed by Burt himself to measure brokerage. As “more constrained networks span fewer structural holes,” an actor whose connections reside in a densely connected group will have above-average constraint and belong to the category of nonbrokers, while an actor with lower constraint has across-group connections, allowing a higher chance of intergroup information flow, and is therefore more likely to be a broker (Burt 2004, 362). Bridging measures a similar phenomenon, that is, the degree to which a node in a network occupies a strategic position “such that changes in links to or from this node have maximal impact on the overall structure of the network […] changing network cohesion” (Valente and Fujimoto 2010, 9). As noted by Granovetter and Burt, it is the weak ties that ensure network cohesion, and therefore, especially in clustered networks, it is the bridges between clusters that prevent complete segregation. What Valente and Fujimoto’s bridging measure does, then, is to evaluate which nodes are the most structurally fundamental to the network by systematically deleting links and checking whether each link contributed to the network’s stability. This does not only find the bridges in a network; it also has great application in network intervention, as it points out the nodes that can interrupt diffusion in certain cases or, as would be the case in online networks of polarized political beliefs, the nodes that are the only way for opposite views to flow in an otherwise segregated community.
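Both ideas can be sketched in networkx. Burt's constraint is implemented directly as `nx.constraint`; the `edge_bridging` function below is a deliberately crude stand-in written in the spirit of Valente and Fujimoto's link-deletion logic (it is not their exact algorithm), scoring each link by how much the network falls apart when the link is removed:

```python
# Sketch: Burt's constraint plus a simplified link-deletion bridging score.
import networkx as nx

# Two dense triangles joined only through node "b" (a toy structural hole)
G = nx.Graph()
G.add_edges_from([(1, 2), (1, 3), (2, 3), (4, 5), (4, 6), (5, 6),
                  (3, "b"), ("b", 4)])

constraint = nx.constraint(G)
# "b" spans the structural hole, so it is the least constrained node
least_constrained = min(constraint, key=constraint.get)

def edge_bridging(G):
    """Score each link by the rise in component count when it is deleted."""
    base = nx.number_connected_components(G)
    scores = {}
    for e in G.edges():
        H = G.copy()
        H.remove_edge(*e)
        scores[e] = nx.number_connected_components(H) - base
    return scores

scores = edge_bridging(G)
bridges = [e for e, s in scores.items() if s > 0]
print(least_constrained, bridges)
```

Only the two weak ties through "b" have a positive bridging score: deleting either one segregates the two triangles, exactly the fragility the text describes for bridge-dependent networks.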

Fake news and social network analysis

In recent years, fake news research employing SNA has tried to examine the diffusion networks of disinformation by applying various strategies from other disciplines such as epidemic models (Moreno, Nekovee, and Pacheco 2004), influence models, and complex contagion models (Lerman 2016). Because of the impact on political communication and political behaviour that fake news can have, much of the empirical research on disinformation networks has gravitated around elections as topical moments of attention on social media.

Much of this work has centred on the US, specifically on the 2016 presidential election that saw the victory of the Republican candidate Donald Trump, the very person who popularised the term fake news (Farkas and Schou 2019, 2). Grinberg et al., for example, merged a panel of voter registration data with the corresponding Twitter accounts and analysed the share of disinformation pieces in the diffusion network of political articles on Twitter (Grinberg et al. 2019). They discovered that although fake news articles accounted for a minimal percentage of the political articles, they were shared almost in their entirety by an incredibly restricted, extremely active, and clustered group of users. Of these, many were found to be cyborgs, human-operated, semi-automated accounts (Grinberg et al. 2019, 2). A similar result in terms of characterisation of accounts on the spectrum between fully automated bots and humans is found by Shao et al., who also analyse the 2016 US presidential election through their Hoaxy platform (Shao et al. 2018). In their diffusion network of low-credibility and fact-checking articles shared on Twitter, they find that 75% of users in the core are consistent with being humans, but that the high-centrality nodes actually lean towards the bot side of the spectrum and that the network’s robustness is highly dependent on them (Shao et al. 2018, 14, 17).

In the European context, similar strategies to those of Shao et al. have been applied to the 2016 Italian constitutional referendum (Guarino et al. 2020), the 2018 Italian general election (Giglietto et al. 2018), and the 2019 European election (Pierri, Artoni, and Ceri 2020). These works obtained results similar to those on networks on the other side of the Atlantic. The disinformation networks are always highly clustered, at times with clear indications of homophilous polarization in modularity-based communities, that is, groups of users whose causal mechanism for clustering can be identified in a characteristic the users have in common (Guarino et al. 2020, 19).
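Modularity-based community detection, and the homophily check that follows it, can be sketched as below. The retweet network and the source-type labels are synthetic placeholders, not data from the cited studies.

```python
# Detect modularity-based communities, then test whether they are
# homophilous: does every community contain users of a single source type?
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy retweet network: nodes 0-4 share low-credibility sources,
# nodes 5-9 share fact-checks; the groups are dense internally and
# joined by a single link.
g = nx.Graph()
g.add_edges_from([(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (1, 4)])
g.add_edges_from([(5, 6), (5, 7), (6, 7), (7, 8), (8, 9), (6, 9)])
g.add_edge(4, 5)   # the only bridge between the two groups

communities = greedy_modularity_communities(g)
label = {n: "misinfo" if n < 5 else "factcheck" for n in g}
# Homophilous clustering: each detected community carries one label only
pure = all(len({label[n] for n in c}) == 1 for c in communities)
```

When `pure` holds on empirical data, the common characteristic (here, the type of source shared) is a plausible causal mechanism for the observed clustering.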

Moreover, partisan and hyper-partisan sources catalysed a great deal of the interactions in the days leading up to the event in question (Giglietto et al. 2018, 11). Finally, in agreement with research on the US, the networks appeared not to be robust to the removal of central “super spreader” nodes (Pierri, Artoni, and Ceri 2020, 17). Interestingly, in a study of the interaction network on Twitter during the 2019 European election, Cinelli et al. find that despite the clustering, “disinformation outlets did not interact among themselves, but rather they exhibited a tendency towards self-mentions” (Cinelli et al. 2020, 10). Moving away from empirical data, in their simulation models of epidemic spread, Tambuscio and Ruffo found that fake news “easily becomes endemic and the debunking disappears”, but that fact-checkers placed in strategic positions within the network can “be applied and have partial success” (Tambuscio and Ruffo 2019, 15–16).
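The robustness result behind the “super spreader” finding rests on a standard targeted-attack test: remove nodes in order of degree centrality and track the size of the largest connected component. The network below is a toy stand-in for the empirical diffusion networks, and the removal fraction is an illustrative assumption.

```python
# Targeted attack: delete the highest-degree nodes (the putative
# "super spreaders") and measure how much the giant component shrinks.
import networkx as nx

def targeted_attack(g, fraction=0.05):
    g = g.copy()
    k = max(1, int(fraction * g.number_of_nodes()))
    # Highest-degree nodes first
    hubs = sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:k]
    g.remove_nodes_from(n for n, d in hubs)
    return len(max(nx.connected_components(g), key=len))

g = nx.barabasi_albert_graph(500, 2, seed=3)   # hub-dominated toy network
before = len(max(nx.connected_components(g), key=len))
after = targeted_attack(g, fraction=0.05)
shrinkage = 1 - after / before
```

A network is called non-robust in this sense when removing a small fraction of hubs produces a disproportionately large `shrinkage`, which is typical of hub-dominated diffusion networks.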

Some attention, although very limited compared to elections, has been given to conspiracy theories and science-related fake news. Similarly to research on elections, Del Vicario et al., using data gathered over the course of five years on conspiracy-theory and scientific-news networks on Facebook, found that users tended to ignore narratives alternative to the one they had selected, thus creating polarised and homogeneous clusters consistent with echo chamber theories (Del Vicario et al. 2016). Wood looked at the diffusion network of conspiracies regarding the Zika virus outbreak of 2015-2016 and noticed that the debunking subnetwork was more heavily centralised around a few highly influential accounts than that of the conspiracy propagators (Wood 2018). In light of the existing scholarship, the COVID19 pandemic might constitute an extremely fertile environment for research of this kind employing SNA and, although it is very early for more extensive work, some examples analysing the infodemic that spread together with the pandemic are already being published (see for example Ahmed et al. 2020 for work on the coronavirus and 5G conspiracy theory; or DeVerna et al. 2021 for the Covaxxy project on COVID19 vaccine disinformation).
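Comparisons like Wood’s, between a debunking subnetwork dominated by a few influential accounts and a more dispersed conspiracy subnetwork, are typically quantified with Freeman’s degree centralization. The formula and the toy graphs below are standard textbook material, not Wood’s data.

```python
# Freeman degree centralization: how much the network is dominated by
# its highest-degree node, normalised so a star graph scores 1.0.
import networkx as nx

def degree_centralization(g):
    n = g.number_of_nodes()
    degs = [d for _, d in g.degree()]
    dmax = max(degs)
    # Sum of hub-vs-others degree gaps over the maximum possible sum,
    # which is attained by a star graph on n nodes.
    return sum(dmax - d for d in degs) / ((n - 1) * (n - 2))

star = nx.star_graph(9)      # one hub, nine spokes: a subnetwork
                             # dominated by a single influential account
cycle = nx.cycle_graph(10)   # interactions spread evenly, no hub

central = degree_centralization(star)
diffuse = degree_centralization(cycle)
```

Computed separately on the debunking and conspiracy subnetworks of an empirical diffusion graph, the two scores give a direct, comparable measure of how hub-dependent each side of the conversation is.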

