
White Rabbit

The logic and proportion of conspiracy theory videos on YouTube: a Foucauldian discourse analysis

M K Sam Birch

Supervisor: Emil Stjernholm

MA Media & Communication Studies: Culture, Collaborative Media, and Creative Industries Thesis


Abstract

Conspiracy theories are everywhere. The internet has provided people with the tools for instantaneous, global communication and this has encouraged the spread of alternative narratives on a variety of platforms. This study is designed as a Foucauldian discourse analysis of how conspiracy theories disseminate and proliferate on YouTube, the pre-eminent provider of video streaming services. It aims to understand how the platform reflexively influences, and is influenced by, narratives of opposition and conflict, creating an online ‘reality’ which has the power to shape the world offline. Primarily, it addresses questions of whether YouTube promotes subversive thinking by accentuating conspiracy theory videos and recommending progressively extreme content. It also investigates how the design of the platform, including its powerful algorithm and embedded ‘social media’ functions, affects user experience and favours particular discourses over others. To achieve this, search terms were entered into YouTube, with recommended videos being studied and coded to reveal any extant bias. In addition to the content of the videos themselves, titles, descriptions, comments, further side-bar recommendations, likes, dislikes and view-counts were all recorded according to bias, providing an extensive overview of influences inherent to the YouTube platform. Results were analysed according to a Foucauldian discourse analysis which, following a multiperspectival approach, was subsequently summarised using a framework developed by Uldam & Kaun. Patterns were discovered indicating a propensity towards the propagation of increasingly extremist material. Furthermore, various discourses, including those of ambiguity, conflict and surprise, were found to proliferate on YouTube, with conspiracy theories actively benefitting from the algorithmic and thematic functions of the website. Overall, the study elucidates how power struggles enacted in online spaces are affected by their environments and, in turn, can have an effect on the beliefs and behaviour of people worldwide.

Keywords: Conspiracy theory, YouTube, video, recommendations, social media, algorithm, comments, extremism, Foucauldian discourse analysis, mixed methods, multiperspectivalism, content analysis, truth.

(3)

Page | ii

Contents

1 Introduction
2 (Background &) Literature Review
2.1 YouTube
2.1.1 YouTube and how it works
2.1.2 The YouTube recommendations system
2.1.3 Algorithms and the challenge of studying them
2.1.4 A balancing act
2.1.5 YouTube and conspiracy theories
2.1.6 YouTube’s community functions
2.2 Conspiracy theories
2.2.1 Conspiracy theories today
2.2.2 What is a conspiracy theory?
2.2.3 What are the effects of conspiracy theories?
2.2.4 Why do people believe in conspiracy theories?
2.2.5 What are common characteristics of conspiracy theories?
2.3 Background on Conspiracy theories Chosen for the Study
2.3.1 Climate change
2.3.2 Flat Earth theory
2.3.3 Fluoridation of water
2.3.4 HAARP
2.3.5 Denver airport
3 Theoretical Framework
3.1 Foucauldian Discourse Analysis (FDA)
3.1.1 Discourse
3.1.2 FDA and YouTube
3.1.3 FDA and conspiracy theories
3.2 Providing stability: a multiperspectival approach
4 Method
4.1 Research Questions
4.2 Research Process
4.2.1 Search terms used
4.2.2 Data collection
4.4 Ethics
5 Results and Analysis
5.1 Recommendations
5.1.1 Initial top ten search results
5.1.2 Further recommendations: following the side-bar
5.1.3 Side-bar recommendations
5.2 Views
5.3 Likes and dislikes
5.3.1 Likes/dislikes compared to views
5.4 Comments
5.4.1 Comment bias
5.5 Video titles
5.6 Analysis framed by Uldam & Kaun’s four dimensional model
6 Conclusion


1 Introduction

In January of this year, YouTube released a statement asserting that it would “[continue its] work to improve recommendations”1 on its platform. It proclaimed that it intended to reduce recommendations of “content that could misinform users,”2 giving the particular examples of popular conspiracy theory subjects such as flat earth theory, so-called miracle cures and the 9/11 terror attacks.3 This study investigates how conspiracy theories are highlighted by the platform’s algorithm, whether it acts as a gateway to more extreme ideas and if there are other ways in which the website (knowingly or otherwise) promotes the spread of potentially detrimental messages, theories and opinions.

Since its inception in 2005, YouTube has become an increasingly influential media outlet, with independent “produsers”4 of videos able to build and maintain massive global fanbases. Despite being a relatively recent development, YouTube reaches over 1 billion people per day5 and is the second most-used search engine on the internet.6 This gives it incredible power, allowing hosted videos to reach international audiences and impact upon global narratives. Furthermore, recent journalistic studies into apparent YouTube recommendation bias during the 2016 US Presidential Elections suggest that there was considerable partiality towards videos which favoured the eventual winner (Donald Trump).7 It is therefore important to consider the ways in which YouTube’s functionality might (intentionally or inadvertently) create, maintain or encourage alternative and potentially toxic world-views.

The development of online media channels has altered the traditional binary relationship between media producers and audiences, with the power of production being passed from established, often nationally-regulated groups (journalists, photographers, TV executives, etc) to ‘cottage industry’ individuals unbound by the same corporate or public-service considerations. Whilst television, radio, newspapers et al have never been immune from producing sensationalist material, now everyone is able to do so. Furthermore, they have platforms which offer access to global audiences and means of profiting financially from spouting opinions which might be untrue or perhaps even dangerous. The digitalisation of conspiracy theories, with their rapid online formation, dissemination and assumption into other conspiratorial narratives, makes them a potent and sometimes destructive force. Recent news items featuring sensitive subjects can quickly become ‘hijacked’ by intentionally disruptive or malicious parties pushing their own interests and ideologies: examples include the Sandy Hook Elementary School shooting, which was branded a ‘hoax’ enacted by crisis actors,8 the Nipsey Hussle murder, which was immediately linked to a conspiracy theory suggesting ‘Big Pharma’ was withholding cures for serious diseases,9 and the Christchurch mosque attacks, which drew aspersions from right-wing commentators suggesting it could be a ‘false flag’ action by the political left.10

1 YouTube (2019a)
2 Ibid
3 Ibid
4 Bruns (2007)
5 Nicas (2017)
6 Wagner (2017)
7 Lewis (2018); Lewis & McCormick (2018)
8 Svrluga (2019)
9

This thesis focuses on more established conspiracy theories, rather than those developing as reactionary responses to current affairs, and investigates how the YouTube processes which favour certain videos intersect with human propensities towards psychologically and socially validating these theories. It also examines the comments sections which accompany videos to understand how they might provide a breeding ground for misinformation by allowing people to disparage factually correct videos, to show support for content which spreads untruths and to make statements which connect otherwise disparate conspiracy theories. Furthermore, the study addresses the intentional use of ambivalence in both titles and content, which effectively lures audiences into watching videos which may be offensive or harmful.

Promoting the UK campaign to leave the European Union, Michael Gove MP asserted that “people in this country have had enough of experts”11 and contemporary society certainly seems more ready, if not actively willing, to reject the testimony of scientists, economists and eye-witnesses in favour of principles (and actions) that can harm themselves and others. This thesis is an attempt to understand how faulty narratives interact, disseminate and multiply. YouTube is a particularly useful area for study as it reflects modern values whilst also acting to shape them.

The results of this investigation are framed as a Foucauldian discourse analysis (FDA), albeit one that has been specifically oriented to face the particular challenges of interrogating an opaque algorithm and examining ‘social media’-type data. YouTube is a media platform which allows users not only to watch practically unlimited content, but to create it as well. There are accusations that the website’s recommendations algorithm is responsible for “radicalizing”12 its audience by proffering videos depicting increasingly extreme content, such as right-wing attitudes.13 However, whilst the algorithm might have significant influence over what is viewed on YouTube, the platform is primarily a ‘host’ and so any disruptive content must be produced by its users. Correspondingly, the recommendations system is informed by the behaviour of people visiting the website. Political power in this case is not a tool wielded by the media but rather negotiated through the availability of multiple discourses, the choices of the general public and the inscrutable machinations of the YouTube algorithm. FDA allows a ‘top-down’ approach to the issue of false narratives; one can investigate how the discourses are developing through their interaction before essentially ‘working backwards’ to the individual videos, or texts. This should help to form an overview of how conspiracy theories function in the framework of YouTube, which may then be extrapolated to other aspects of society. The methodological framework also provides a standpoint from which we can try to analyse the algorithm as a text, perhaps shedding light on its inner workings.

The study is comprised of a content analysis of material found on YouTube. Searches thematically related to conspiracy theories are made, with the results and subsequently recommended videos being assessed. For each text the title, description and content of the video is evaluated for bias, functions reflecting user-engagement (view-count, ‘like’ buttons, etc) are examined, as is the comments section on the webpage. A sample of side-bar recommendations for each video is then recorded, with those deemed relevant to conspiracy theories being consequently investigated to indicate whether there is a pattern to YouTube’s recommendations. These results are then analysed according to an FDA to delineate the dominant narratives in an attempt to understand how conspiracy theories function on YouTube: what makes them popular, how the public interacts with them, how they spread across the platform and, ultimately, how they impact upon wider society. Following a multiperspectival approach, these discursive findings are then contextualised by placing them within the four-dimensional framework developed by Uldam and Kaun for studying political participation in social media.14

10 Moritz-Rabson (2019)
11 Gove (2016)
12 Tufekci (2018)
13 Ibid
14
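As a minimal sketch of how each coded observation in this process might be structured (the field names here are hypothetical illustrations, not the study's actual coding sheet, which is described in the Method chapter):

```python
from dataclasses import dataclass, field

@dataclass
class CodedVideo:
    """One observation in the content analysis: a video reached via a
    conspiracy-related search or a side-bar recommendation, coded for bias."""
    search_term: str                 # e.g. "flat earth"
    title: str
    bias: str                        # e.g. "pro-conspiracy", "neutral", "debunking"
    views: int
    likes: int
    dislikes: int
    comment_count: int
    top_comment_codes: list = field(default_factory=list)  # bias codes for the 'most popular' comments
    sidebar_sample: list = field(default_factory=list)     # sampled side-bar recommendations (titles)
```

Aggregating records of this kind per search term is what allows the patterns reported in chapter 5 (recommendation bias, like/dislike ratios, comment bias) to be compared across the different conspiracy theories.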


2 (Background &) Literature Review

2.1 YouTube

2.1.1 YouTube and how it works

Since its inception in 2005,15 YouTube has become one of the goliaths of the internet. Owned by Google,16 it is the second most-visited website worldwide17 and the foremost provider of online video-streaming services.18 There are approximately 2.2 billion individual users of the platform19 who, combined, watch over a billion hours of content every single day.20

Although when “Me at the zoo”21 was uploaded on 23rd April 2005 the founders of YouTube had technically created 100% of the available content, the website was actually designed to host other people’s videos, providing “a convenient and usable platform for online video sharing.”22 Since that time, YouTube has gone from strength to strength, creating a symbiotic relationship with amateur video-makers worldwide leading to its pre-eminent position online and allowing it to affect multifarious global narratives. One key driver for this success is the huge variety of videos hosted by YouTube which ensures there is something to suit every taste and demographic. Michael Strangelove neatly describes the appeal of the website thus: “you would have to be dead inside not to find something emotionally or intellectually compelling on YouTube.”23 Through showcasing the antics, endeavours and productions of a global community of video-makers, YouTube naturally features content which reflects the interests, opinions, ambitions, partialities, prejudices and proclivities of a vast sample of humankind. Burgess & Green recognise that the practical ubiquity of YouTube is less related to its “topdown activities”24 than to the endeavours of its users: “various forms of cultural, social, and economic values are collectively produced by users en masse, via their consumption, evaluation, and entrepreneurial activities.”25 The nexus of YouTube and its users has been an ‘active agent’ in the development of media in the 21st century, shaping audience expectations as well as the type of content being produced.

Entertainment is big business and YouTube is no different; although official figures are not readily available, “analysts at Morgan Stanley estimate that the service’s revenue will top $22 billion in 2019.”26 Content hosted on the video-sharing website is monetised by Google, who use advertising embedded in the site (and pop-ups in the videos themselves) to promote commercial partners: “the longer people stay on YouTube, the more money Google makes.”27 However, it is not just YouTube that sees financial benefits from the attention of its audience, as individual users can also reap advertising revenue by choosing to monetise the videos they have created. In addition, the platform offers a number of services which create profit for the business whilst similarly providing a financial incentive to video-producers; these include channel memberships, a ‘merchandise shelf’ store, Super Chat income and money derived from YouTube Premium subscribers.28 Furthermore, there are practically unlimited ways in which YouTube users can make money from their videos without deriving it directly from Google, some examples of which are “links for direct donations in their video descriptions, their online merchandise stores, affiliate links for apps or paid mentions within the videos.”29 It is therefore in the interests of both YouTube and its money-making video producers to keep the audience engaged for as long as possible. This creates a dynamic whereby both the hosting website and the user-producers are incentivised to ensure that people watching the videos keep clicking on different content which, somewhat unsurprisingly, has encouraged some questionable practices.

15 Wikipedia (2019)
16 Ibid
17 Alexa (2019)
18 Nicas (2017)
19 Popken (2018)
20 Nicas (2017)
21 Karim (2005)
22 Burgess & Green (2009), p.11
23 Strangelove (2010), p.3
24 Burgess & Green (2009), p.11
25 Ibid, p.11-12
26 Shaw & Bergen (2018)
27

2.1.2 The YouTube recommendations system

A significant player in the ‘keep-ball’ game of audience retention is the YouTube algorithm. Described by the website’s engineers as one of the “largest scale and most sophisticated industrial recommendation systems in existence,”30 the algorithm is responsible for enticing viewers to ‘stay a while longer’ and watch more videos. When you conduct a search in YouTube, the results are shaped by the algorithm. When you are watching a video, the side-bar recommendations are chosen by the algorithm. When the video finishes, YouTube automatically enqueues and plays another video, chosen by the algorithm. The product of “using a field of artificial intelligence called machine learning to parse massive databases of user history,”31 the algorithm has been modelled on “human data”32 and thus utilises patterns discoverable from previous users’ behaviour to recommend content and predict future behaviour. Unfortunately, whilst YouTube state that they “update our recommendations system all the time... to make sure we’re suggesting videos that people actually want to watch,”33 there is plentiful evidence that the algorithm is less-than-perfect, recommending content that is potentially harmful to individual users and society as a whole.

Perhaps because it is partly based on an individual’s previous behaviour, the YouTube recommendation system can seem somewhat blinkered. At the more innocuous end of the spectrum, the influence of algorithms can lead to a lack of variety in our online entertainment: Christo Wilson (a computer-science professor) is quoted in the Wall Street Journal as saying “if I only watch heavy-metal videos, of course it's serving me more of those. But then I'm missing out on the diversity of culture that exists.”34 Attempted personalisation of content can become restrictive, creating a “filter bubble”35 around the user which removes “conflicting information that the algorithm deems unnecessary.”36 Essentially, a recommendation system is unlikely to suggest classical music, timber-sports, particle physics or polar bears if you have shown no interest in these (or closely-related) subjects previously, regardless of whether you might actually have a predisposition to be interested in them. Although boredom, or general lack of stimulation, might seem the most terrible outcome of such a system, the effects of a ‘filter bubble’ can be much more detrimental to an individual, skewing their world-view and even pushing them towards extreme ideologies.

28 YouTube Help (2019)
29 Popken (2018)
30 Lewis (2018)
31 Nicas (2017)
32 Popken (2018)
33 YouTube (2019a)
34 Nicas (2017)
35 Frangos et al (2018), p.259
36 Ibid
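To make the ‘filter bubble’ mechanism described above concrete, here is a deliberately simplified, hypothetical recommender (a sketch for exposition, not YouTube's actual system): it scores candidate videos purely by overlap with topics already watched, so subjects absent from the history can never surface.

```python
from collections import Counter

def recommend(watch_history, candidates, top_n=3):
    """Score candidates by overlap with topics already watched; videos
    whose topics never appeared in the history can never be recommended."""
    interests = Counter(topic for video in watch_history
                        for topic in video["topics"])
    scored = [(sum(interests[t] for t in video["topics"]), video["title"])
              for video in candidates]
    return [title for score, title in sorted(scored, reverse=True)[:top_n]
            if score > 0]

history = [{"title": "Wimbledon highlights", "topics": {"tennis", "sport"}},
           {"title": "Serve like a pro", "topics": {"tennis"}}]
candidates = [{"title": "Top 10 rallies", "topics": {"tennis", "sport"}},
              {"title": "Intro to particle physics", "topics": {"physics"}},
              {"title": "Polar bears in the wild", "topics": {"nature"}}]

print(recommend(history, candidates))  # ['Top 10 rallies']; physics and polar bears never surface
```

Because the only signal is past behaviour, every recommendation reinforces the existing profile, the same loop that the discussion of ‘confirmation bias’ below describes in psychological terms.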

Humans’ tendency to value and trust information that fits with their existing beliefs and opinions is a recognised psychological phenomenon known as ‘confirmation bias’ or ‘myside bias,’37 and it is this predilection which can be enhanced by the repetition of content pushed by particular algorithms. Put simply, if you like tennis, think it is important and watch a few online videos of the sport, then the (possibly endless) recommendation of further videos related to tennis will serve to ‘confirm’ your belief that tennis is both important and worthwhile. This can be more problematic when the subject is not racquet-sports, with evidence suggesting that algorithms can favour far-right politics, conspiracy theories and literally any other subject in a similar way, with recommendations instantly available to ‘confirm’ even the most tentative personal beliefs in potentially unsavoury ideologies. Furthermore, this ‘confirmation bias’ is not just enacted through repetition but, it is argued by commentators, through increasingly radical intensification of the message.

Critics have variously represented the process of following YouTube recommendations as going down a “radical”38 ‘rabbit hole’ “of extremism”39 or “of untruth.”40 This is because the algorithm seems to advance ever more extreme videos regardless of subject matter: in a newspaper article Tufekci describes how a preliminary study of YouTube recommendations “[appeared] to constantly up the stakes,”41 taking even fairly mundane interests to their outermost limits: “videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.”42 Lewis, a journalist for The Guardian, describes how his journey along the “conveyor belt of clips”43 made scheduled stops at “men mocking distraught teenage fans of Logan Paul... children stealing things and... children having their teeth pulled out with bizarre, homemade contraptions.”44 This trend may be significantly more problematic when transposed to political subjects. Kaiser & Rauchfleisch’s research project in the field of ‘disinformation studies’ connects far-right radicalisation with the ‘filter bubbles’ created within YouTube and the platform’s propensity towards recommending ever-more radical videos.45 Recently, a UK government Home Affairs Select Committee convened to question representatives from YouTube (and other platforms) about the proliferation of extremist and hate-related content being apparently prioritised by the website, with the chairperson, Yvette Cooper, visibly upset by the incrementally extreme far-right content (with seemingly escalating levels of racism) that had been proffered by the website.46

37 Weigmann (2018), p.2
38 Kaiser & Rauchfleisch (2018)
39 Tufekci (2018)
40 Shaw & Bergen (2018)
41 Tufekci (2018)
42 Ibid
43 Lewis (2018)
44 Ibid
45 Kaiser & Rauchfleisch (2018)
46

This inclination towards radical content is conceivably related to the ways in which users interact with YouTube, with the algorithm ‘learning’ what type of content makes people click on more videos and therefore simply ‘holding a mirror up’ to human behaviour (replete with its flaws and psychological weaknesses). Varshney describes how there is an increasing need for “surprise to capture attention”47 and, with people having vast amounts of data and choice at their fingertips, “highly surprising signals [are necessary] to get attention.”48 This creates an environment where facts devalue relative to the element of surprise and, furthermore, negative sentiments appreciate as they are considered more unusual (or ‘surprising’) than positive ones.49 The YouTube website already uses ‘position bias’ to highlight additional videos, placing them in algorithm-defined order of preference,50 in addition to ‘auto-playing’ the next recommended video. However, individual video producers must compete for their videos to be clicked, leading to an ‘arms race’ to generate surprise, with many instances of unrealistic thumbnail pictures, sensationalist titles and outlandish content being used to motivate viewers. Humans are hard-wired “to pay attention to danger, food, and sex in preference to other subjects”51 and these themes are often writ-large on YouTube thumbnails to entice the online audience, even where their appearance in a video is fleeting (or completely non-existent). YouTube insists that progress has been made in reducing so-called ‘clickbait’ from its recommendations, explaining how tweaks to its algorithm have improved the overall veracity of suggested videos:

“You might remember that a few years ago, viewers were getting frustrated with clickbaity videos with misleading titles and descriptions... We responded by updating our system to focus on viewer satisfaction instead of views, including measuring likes, dislikes, surveys, and time well spent, all while recommending clickbait videos less often.”52
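The shift YouTube describes in this statement, from raw clicks to satisfaction signals, can be pictured as a re-weighted ranking function. The following sketch is purely illustrative: the weights, field names and formula are assumptions for exposition, not YouTube's disclosed method.

```python
def satisfaction_score(video):
    """Toy ranking that rewards the signals YouTube says it now measures
    (likes/dislikes, surveys, 'time well spent') over raw view counts.
    All weights and field names are illustrative assumptions."""
    net_approval = (video["likes"] - video["dislikes"]) / max(video["views"], 1)
    return (0.5 * video["avg_watch_minutes"]       # proxy for 'time well spent'
            + 0.3 * net_approval
            + 0.2 * video["survey_satisfaction"])  # mean survey rating, 0 to 1

videos = [
    {"title": "SHOCKING truth!", "views": 9000, "likes": 50, "dislikes": 400,
     "avg_watch_minutes": 0.8, "survey_satisfaction": 0.2},
    {"title": "Measured explainer", "views": 3000, "likes": 200, "dislikes": 10,
     "avg_watch_minutes": 6.5, "survey_satisfaction": 0.9},
]
# A views-only ranking would favour the clickbait video; the satisfaction
# score ranks the explainer first instead.
print(max(videos, key=satisfaction_score)["title"])  # Measured explainer
```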

Regardless of these improvements, even a perfunctory surf through the YouTube website provides many examples of video recommendations promising the improbable, the fantastic or the shocking. This, unlike many of the claims of various video-producers on the website,53 should not be a huge surprise. The platform was designed to host user-generated videos and collect revenue from advertisements, so any social responsibilities that have arisen from the service’s popularity and the content provided by its user-base are always likely to be a secondary consideration.

2.1.3 Algorithms and the challenge of studying them

Algorithms are used for everything. They essentially ‘crunch the data’ to create credit scores,54 Spotify playlists,55 warnings that your bank card is being used in a ‘suspicious’ way56 and video recommendations on YouTube.57 They can contain vast reserves of data and, dependent on their individual remit, can be considered both powerful and influential. As discussed above, the YouTube algorithm is central to the website’s functioning; “organizing and gatekeeping”58 but also “personalising ‘recommendations’”59 which have a direct influence on the platform’s viewers. It is necessary in this thesis to investigate how YouTube’s algorithm affects both its own users and narratives occurring in wider society. Unfortunately, and for a variety of reasons, algorithms are notoriously difficult to interrogate. This section therefore concentrates on exploring the characteristics of these computational phenomena and how they might be better understood, relying on Bucher’s comprehensive research for significant insight into the matter.

47 Varshney (2019), p.82
48 Ibid, p.86
49 Ibid, p.82
50 Lerman (2016), p.5
51 Brodie (2009), p.72
52 YouTube (2019a)
53 e.g. ApexTV (2017); Interesting Facts (2018)
54 Zarsky (2016), p.126
55 Bucher (2018), p.54
56 Ibid, p.57
57 Bucher (2018), p.48

Firstly, when this study repeatedly refers to ‘the YouTube algorithm’ or ‘recommendation system,’ it is something of a misnomer. Algorithms are actually multiplicitous,60 with many different algorithms working simultaneously “to create a unified experience.”61 There is “not... one single algorithm”62 driving YouTube, but a complex “networked [system]”63 of “constantly changing”64 algorithms interacting to create audience recommendations and shape user experience. One cannot, therefore, think about dissecting a single algorithm, because they are legion and defined as much by their relationships to one another as by their own constituent parts.

Secondly, as Tufekci says in a TED Talk addressing this subject, “we no longer really understand how these complex algorithms work.”65 She cites the sheer quantity of ‘big data’ that they contain, explaining that trying to comprehend an algorithm’s rationale through looking at its “giant matrices... maybe millions of rows and columns”66 is like discerning what she is thinking through taking “a cross-section of my brain.”67 Bucher conducts an in-depth study of algorithms as “black boxes”68 which is a common analogy for “an object whose inner functioning cannot be known.”69 Whilst admitting that the ‘black box’ paradigm is useful for outlining the “seemingly inaccessible or secretive nature” of algorithms, Bucher questions whether depicting them as ‘unknowable’ is factually correct, or at all useful.70 Essentially, algorithms are produced by humans, the programmers who code them and the people whose data ‘feeds’ them, so their “general principles and operational logics”71 are, in many ways, already known. Their outputs are also observable, which is a key point developed later.

Although algorithms may be difficult to scrutinise, due to their multiplicity and seemingly ‘closed’ nature, they still have a distinctive character. Zarsky highlights two main attributes as being “opacity and... automation”72 with the former relating to the regular absence of transparency and the latter describing the automatic data analyses they perform.73 It is the mechanical function of data-processing that arguably makes the algorithm an amoral entity, “they are not inherently good or bad, neutral or biased,”74 they just process information according to a set of parameters. This means that whilst Bucher describes how many algorithms have “harmful or discriminatory effects”75 and Tufekci raises concerns about them constituting an “infrastructure of surveillance authoritarianism,”76 it is not the algorithm’s fault: its outputs are simply a result of its inputs. Bucher explains that this issue emphasises the difficulties in attributing “agency”77 to algorithms, as it is unclear where any (unpalatable) discrimination comes from: the “implicated”78 human influence or the calculations derived from “machine-learning.”79 Furthermore, algorithms are perhaps designed (albeit unintentionally) to produce objectionable results; a study by Schmitt et al illustrates how extremist media content is always likely to be conflated with antithetical ‘counter-messages’ because of their related topics80 whilst one can also easily predict that a processing system predicated on differentiating between subjects based on established patterns is inevitably going to produce discriminatory results. Whether positive, negative or neutral, it is hard to ascertain exactly from where this discrimination originates due to algorithms’ lack of transparency.81

58 Ibid, p.781
59 Ibid, p.785
60 Ibid, p.48
61 Ibid, p.47
62 Ibid, p.48
63 Ibid, p.47
64 Ibid, p.48
65 Tufekci (2017)
66 Ibid
67 Ibid
68 Bucher (2018), p.41
69 Ibid, p.43
70 Ibid, p.47
71 Ibid, p.57
72 Zarsky (2016), p.119

This thesis endeavours, in essence, to understand whether YouTube favours certain types of content over others and, with the website’s recommendations system being fundamental to its operation, it is imperative that one understands (as far as is possible) the algorithm(s) behind it. The traits mentioned above (multiplicity, complexity, ‘black box’ similarity, mutability) are not the only ones which make algorithms difficult to study. The creators of specific algorithms are unwilling to reveal the inner workings of their prize assets for commercial reasons (to safeguard intellectual property), for practical reasons (to stop people from “gaming the system”82) and also to protect themselves against accusations of unacceptable practices which they might then be pressured to change. The defence of ignorance, inherent in the description of algorithms as unknowable or autonomous, is useful; it provides developers and owners with a ‘get out of jail free card’ allowing them to deflect unsavoury accusations by claiming “that detection or prior knowledge was impossible.”83 Finally, the ‘personalisation’ of content that algorithms allow ensures everyone’s experience is bespoke, which makes any study even more complicated.

So can we learn anything about these oblique, multiple, ever-changing, heavily guarded, seemingly impenetrable, interrelated bodies of data? Bucher certainly believes so. She suggests it is not necessary to prise open ‘the box’ to gain a better understanding of these oddities,84 instead proposing that “the first step... is to unknow them.”85 This is a process of distancing, whereby the researcher stops focussing directly upon what the algorithm is, shifting their gaze to observe what effects it has,86 how it affects (or attracts the awareness of) people87 and what its purposes are.88 This approach is particularly relevant to the question of conspiracy theory proliferation on YouTube when approached via a Foucauldian discourse analysis. Bucher advocates a methodology where one avoids thinking about “why the algorithm came to a certain conclusion,”89 but rather concentrates on “what that conclusion suggests about the kinds of realities that are emerging because people are using algorithms.”90 This could be interpreted as an invitation to evaluate the discursive impacts of algorithmic process to decipher what it means to our social reality. In theoretical terms, discourse analysis seems to be a suitable way of attempting to indirectly decode the algorithm as text. In practical terms, Bucher’s suggestion that we stop trying to look inside algorithms and instead use “speculative experimentation”91 of “inputs and outputs”92 has certainly informed the method employed in this study. These are both explored in greater detail in the respective sections below.

73 Ibid
74 Bucher (2018), p.56
75 Ibid, p.45
76 Tufekci (2017)
77 Bucher (2018), p.51
78 Ibid, p.54
79 Ibid, p.53
80 Schmitt et al (2018), p.780
81 Zarsky (2016), p.126
82 Bucher (2018), p.44
83 Ibid, p.56
84 Bucher, p.58
85 Ibid, p.46
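In the spirit of this ‘speculative experimentation’ with inputs and outputs, the probing logic can be sketched as follows. This is a hypothetical outline: fetch_recommendations stands in for however the observer gathers ordered side-bar suggestions (in this study the equivalent data was recorded manually, not via any official API).

```python
def follow_recommendations(seed_query, fetch_recommendations, depth=5):
    """Black-box probe: start from a seed search term, repeatedly follow the
    top recommendation and record the trail of videos produced.
    fetch_recommendations is a stand-in for however the observer collects
    ordered side-bar suggestions (manual recording, scraping, etc.)."""
    trail = []
    current = seed_query
    for step in range(depth):
        recommended = fetch_recommendations(current)  # ordered list of video titles
        if not recommended:
            break
        current = recommended[0]  # follow the top suggestion, as a passive viewer might
        trail.append({"step": step + 1, "video": current,
                      "alternatives": recommended[1:4]})  # sample of other suggestions
    return trail
```

Repeating such trails across different seed terms and sessions, then coding each recorded video for bias, allows input/output patterns to be compared without ever opening the ‘black box’ itself.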

2.1.4 A balancing act

In her widely-disseminated piece for the New York Times, Tufekci’s assessment of YouTube’s situation is scathing: they “make so much money while potentially helping to radicalize billions of people, reaping the financial benefits while asking society to bear so many of the costs.”93 The platform is accused in many journalistic reports of being a breeding-ground for “lies, hoaxes and misinformation,”94 full of “videos light on facts but rife with wild speculation”95 where “fiction is outperforming reality.”96 By providing a platform which is open to everyone, but where popularity (and financial reward) is distributed according to ‘views,’ YouTube is implicated in promoting objectionable practices ranging from “how to make explosives,”97 through right-wing radicalisation98 to “rants by... Holocaust deniers.”99 To make matters worse, the company is profiting from the questionable content it contains.100 YouTube is aware that it harbours contentious material, with their user ‘terms of service’ waiving any rights as regards content which is “factually inaccurate, offensive, indecent, or otherwise objectionable.”101 The recent missive indicating that the company is “taking a closer look at how we can reduce the spread of content that comes close to—but doesn’t quite cross the line of—violating our Community Guidelines”102 purports that it is addressing issues around algorithmic recommendation of misleading information, whilst stopping short of banning it altogether: “to be clear, this will only affect recommendations of what videos to watch, not whether a video is available on YouTube.”103

86 Ibid, p.61
87 Ibid, p.63
88 Ibid, p.64
89 Ibid, p.58
90 Ibid, p.58
91 Ibid, p.61
92 Ibid, p.60
93 Tufekci (2018)
94 Ibid
95 Popken (2018)
96 Chaslot (2018)
97 Strangelove (2010), p.151
98 Kaiser & Rauchfleisch (2018)
99 Shaw & Bergen (2018)
100 Wakefield (2019)
101 YouTube (2019b)
102

This stated “balance between maintaining a platform for free speech and living up to our responsibility to users”104 is perfectly understandable from a company perspective, even though outside agencies (i.e. journalists and politicians) might find it frustrating. Lewis and McCormick describe how YouTube puts “a wall around its data... [protecting] itself from scrutiny,”105 yet as “the single most important engine of YouTube’s growth”106 the algorithm (and the information it is built upon) is undoubtedly of incredible value to the company. Secrecy in this case is justifiable from a commercial perspective. No business wants their most highly-prized assets in the public domain where they can be meddled with by external bodies or ‘ripped off’ by competitors. There is also a “fear of censorship within the Internet community”107 which has considerable influence online; if a social media site, or video-hosting platform, alienates a proportion of its users through heavy-handed proscriptive practices, entire communities can shift to other websites, domains and services. YouTube have a successful formula/algorithm and they are naturally unwilling to implement any practices which might compromise their dominant market position. Another issue, already intimated above, is that the algorithm itself may be functioning in a manner that is beyond the understanding (and thus the effective control) of the platform’s own programmers; a situation that would surely increase international scrutiny of the company.

To summarise, it is in the interests of the organisation to avoid being subject to wide-ranging regulation. Ensuring compliance with rules imposed by governments and regulatory bodies can be a significant additional expense, especially when a business operates internationally across innumerable jurisdictions. Although YouTube’s basic position is to “disclaim all and any liability in connection with Content”108 (effectively discharging responsibility to the content-producers) it has been pushed to make statements (and presumably changes) which indicate a greater social responsibility. Public announcements regarding the company’s “commitment and sense of responsibility to improve the recommendations experience”109 admit there is a public service aspect to the business whilst intimating that it is capable of effective self-regulation. However, for a corporation accused of “long [prioritizing] growth over safety”110 which has effectively established a dynamic media platform “because it was unfettered by producers, network executives, or regulators,”111 there still appears to be a significant schism between its public relations discourse and the types of content available on (and actively recommended by) the website.112

103 Ibid
104 Ibid
105 Lewis & McCormick (2018)
106 Lewis (2018)
107 Strangelove (2010), p.108
108 YouTube (2019b)
109 YouTube (2018a)
110 Shaw & Bergen (2018)
111 Ibid
112

2.1.5 YouTube and conspiracy theories

YouTube invites scrutiny because it hosts videos which contain objectionable material. In addition to “pornography, violence, and racism,”113 there is concern about hate speech and far-right content.114 Furthermore, the social media, or community, functions of the YouTube platform also seem to act as a highly-visible, largely-unregulated area, with the ‘public comments’ section “notorious for online trolling, flaming and abuse.”115

One of the most concerning features of YouTube is the proliferation of misinformation. The company already indemnifies itself against legal recourse from “content that is factually inaccurate”116 but that does nothing to protect the public from potential harm. In media reports, Popken states that “YouTube, as one of our primary windows into the world, is shaping our understanding of it”117 whilst Tufekci emphasises “how many... young people — turn to YouTube for information.”118 There are numerous detrimental effects of disseminating misleading information, which are delineated later in the study, but the availability of false narratives on YouTube is simultaneously undeniable and troubling.

A prime example of YouTube foregrounding questionable information is the abundance of videos related to conspiracy theories. In the company’s statement “Continuing our work to improve recommendations on YouTube,”119 the link between conspiracy theories and their possibly damaging effects is made explicit, with a few specific examples spotlighted:

“we’ll begin reducing recommendations of borderline content and content that could misinform users in harmful ways—such as videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11.”120

The relationship between misinformation, conspiracy theories and radicalisation has been illustrated by various commentators,121 with Lewis reporting that, even after YouTube had begun sanctioning a particular video for ‘violating its guidelines,’ it was simultaneously recommending it to viewers.122 The company has removed advertising revenue from “content like mass shootings”123 to inhibit people from profiting directly from tragedy (although other indirect forms of income are still possible), yet there appears to be a significant number of YouTube users who are making money by creating and distributing false information, such as implausible conspiracy theories. When “a jailed radical preacher ranks top for search term ‘British Muslim spokesman,’”124 independent evidence shows that during the 2016 US election “YouTube was six times more likely to recommend videos that aided Trump than his adversary”125 and the algorithm is accused of “systematically [amplifying] videos that are divisive, sensational and conspiratorial,”126 there certainly seems to be something disconcerting happening. With YouTube’s projected turnover for 2019 exceeding $22 billion127 from “[racking] up the ad sales”128 the company’s expressed principles become rather more unpalatable.

113 Ibid, p.106
114 Wakefield (2019)
115 Murthy & Sharma (2018), p.192
116 YouTube (2019b)
117 Popken (2018)
118 Tufekci (2018)
119 YouTube (2019)
120 Ibid
121 Kaiser & Rauchfleisch (2018); Shaw & Bergen (2018); Lewis (2018); Tufekci (2018)
122 Lewis (2018)
123 Popken (2018)
124 Wakefield (2019)
125 Lewis (2018)
126 Ibid

2.1.6 YouTube’s community functions

Whilst the primary purpose of YouTube is to allow users to share and watch videos, the platform also includes a number of embedded ancillary functions which provoke further user interaction. These “people-focussed features,”129 such as the ability to comment on videos, ‘like’ or ‘dislike’ content and subscribe to producer-run ‘channels,’ make YouTube a more immersive experience, akin to a social media site.130 A prominent component of the website is the comments forum that accompanies a video, although a minority of producers choose to ‘disable,’ or remove, it. As briefly mentioned earlier, these fora have a reputation for attracting divisive, inflammatory and abusive comments from other users.131 This tendency towards hostility is not exceptional to YouTube, with studies of newspapers’ online discussion boards also revealing a propensity for negativity,132 conflict133 and vitriolic discourse.134

This study included the collection of data related to use of the ‘like’ and ‘dislike’ buttons on the YouTube website, as well as recording the number of views of each video. Following Lerman’s assertion that “social influence bias, communicated through social signals, helps direct attention to online content that has been liked, shared or approved by many others,”135 the presence of a high number of ‘likes’ or views may influence user behaviour. Perhaps most importantly, there is an investigation into comments on every video studied, noting the total number posted and conducting a content analysis of those considered ‘most popular.’ Madden et al consider these fora to be “a large repository of user-generated information which may be mined”136 and the visible, yet semi-anonymous, opinions contained within them provide an important insight into the narratives surrounding each video.
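A minimal sketch of the kind of arithmetic behind comparing likes and dislikes to views (section 5.3.1); the field names are hypothetical, and the ratios shown are illustrative rather than the study's exact measures:

```python
def engagement_summary(video):
    """Compute simple engagement ratios of the kind recorded in this study.
    `video` is a dict with hypothetical keys: views, likes, dislikes."""
    views = video["views"]
    reactions = video["likes"] + video["dislikes"]
    return {
        "like_share": video["likes"] / reactions if reactions else None,  # approval among reactors
        "reaction_rate": reactions / views if views else None,            # how many viewers react at all
    }

sample = {"views": 120_000, "likes": 3_400, "dislikes": 600}
print(engagement_summary(sample))
# {'like_share': 0.85, 'reaction_rate': 0.0333...}
```

Comparing such ratios across videos coded as pro-conspiracy, neutral or debunking is what allows ‘social influence bias’ signals to be set against the bias of the content itself.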

2.2 Conspiracy theories

2.2.1 Conspiracy theories today

Conspiracy theories and the narratives around them have occurred throughout history. From “precursors in antiquity”137 conspiracy theories have been omnipresent, attaching themselves to “every major event of the last 2,000 years.”138

127 Shaw & Bergen (2018)
128 Tufekci (2018)
129 Madden et al (2012), p.694
130 Murthy & Sharma (2018), p.194
131 Murthy & Sharma (2018), p.192; Strangelove (2010), p.106; Madden et al (2012), p.699
132 Slavtcheva-Petkova (2016), p.1125
133 Ibid, p.1129
134 Ksiazek et al (2015), p.850
135 Lerman (2016), p.5
136 Madden et al (2012), p.698
137 Uscinski (2018d), p.33
138 Hunt (2008)

Despite “the increasing authority of science over our knowledge,”139 conspiracy theories remain part of the popular consciousness into the 21st century. If anything, with technological developments democratising the media landscape140 by decentralising power from traditional outlets141 and providing phenomenal instant global access to information,142 conspiracy theories have become more relevant to a modern audience; to paraphrase Uscinski, timelessness has become timeliness.143 Conspiracy theories are not just “alive and well,”144 they are thriving, with even comparatively small-scale conspiracy theories able to gain traction online, “instantly jump borders”145 and be accepted into “non-state-based alliances among global users.”146

Michael Wood posits that the vast quantity of information available in the internet age provides the “raw material”147 for conspiracy theorising. The apparent counterpart to this hypothesis is that, in a world where open access to data is considered the norm, when “the information stream available to the public... is demonstrably incomplete,”148 a by-product of suspicion is created which, in turn, causes people to investigate and speculate.

Recently, research into conspiracy theories has increased at a seemingly exponential rate149 with papers being published on the subject “nearly every day.”150 This drive to research has been catalysed by a post-truth151 world where ‘fake news’ and “credible falsehoods”152 permeate all aspects of social interaction; in brief, “conspiracy theories are everywhere.”153

2.2.2 What is a conspiracy theory?

Defining a ‘conspiracy theory’ is not straightforward. Although it is an enduring and well-known concept, and despite many previous attempts, there is still no overwhelming consensus on what the phrase means. This is partly because the term has been used pejoratively and, often, haphazardly to describe any unexplained event or phenomenon, regardless of merit.

Conspiracy theories have a “bad reputation”154 and, whilst that is at least partially justified, there is an increasing willingness in the academic community to move away from broad-brush rejection, dispensing with derogatory language and accepting that there are often good reasons for these beliefs (even if they say more about the ‘believers’ than the subject of their theory). Hofstadter’s article “The Paranoid Style in American Politics” justifiably continues to have a strong influence over studies in the field yet, even by the author’s own admission, the term ‘paranoid style’ is intentionally negative.155 Conspiracy theorists have subsequently been summarily excluded from discussions because they are “portrayed as suffering from crippled epistemologies [and] being paranoid(-esque) in their thinking.”156 The term ‘conspiracy theory’ had arguably become so toxic that association with it could undermine a person’s credibility.157 Orr & Husting highlight how disparaging someone who “[challenges] authority and power”158 as a conspiracy theorist allows “an accuser to ‘go meta’”159 on them, dodging the subject at hand by directly “impugning their character, intelligence, and... emotional maturity.”160 This inclination towards dismissing conspiracy theories ‘out of hand’ is perhaps why they are frequently associated with the oppressed, the powerless and the disenfranchised.

139 Fassin (2011), p.41
140 Strangelove (2010), p.158
141 Antony & Thomas (2010), p.1283
142 Slavtcheva-Petkova (2016), p.1116
143 Uscinski (2018), p.1
144 Goertzel (1994), p.738
145 Aaronovitch (2009)
146 Jacobs (2017), p.337
147 Wood (2013), p.32
148 Jacobs (2017), p.352
149 Uscinski (2018d), p.42
150 Uscinski (2018), p.2
151 OED (2019a)
152 Jacobs (2014), p.334
153 Uscinski (2018), p.10
154 Dentith (2018), p.196

Conspiracy theories often assume an anti-establishment stance or tone. This pits the theorists against hegemonic mechanisms of control which traditionally act as arbiters of knowledge, money and power. This is sometimes conceived as a ‘David and Goliath’ struggle between an individual (or small group) and powerful bodies like corporations,161 governments, NGOs or academic institutes. Regardless of the specific theory being espoused, many critics attribute particular significance to this concept of railing against authority.162 Fassin notes that “people who... consider that they have been... dominated or discriminated against, are particularly prone to conspiracy theories”163 and many critics have found links between racial minorities and these beliefs. There are various reasons extended for this: Goertzel cites “conspiracies... directed specifically at blacks,”164 Hofstadter references those who are “shut out of the political process,”165 Simmons & Parsons demonstrate African-American acceptance of conspiracy theories is inversely correlated with perceptions of the community’s political power166 and Orr & Husting suggest that derogatory terminology can have racial undertones, silencing minorities through undermining their legitimacy.167 By excoriating ‘conspiracy theorists’ en masse, powerful in-groups weaponise the term to maintain existing structures of authority, including those delineated by race. The idea that public debate is unfairly biased against conspiracy theories and their proponents is therefore “not entirely unfounded.”168

Conspiracy theories are not, however, just the domain of the disenfranchised. Perhaps the most high-profile contemporary exponent of subversive thought is also one of the most influential people in the world: Donald Trump. Through his speeches, policies and (in)famous tweets, the 45th President of America has expressed extreme scepticism about global warming (at one stage suggesting it was invented by the Chinese to undermine US manufacturing)169 and supported “a range of seemingly unrelated matters [which] could all be boiled down to one singular overarching conspiracy narrative: political elites sold out the interests of regular Americans to foreign interests.”170 In fact, Trump falls into many demographics commonly associated with conspiracy belief and science denialism: he is white, male and demonstrably right-wing.171

155 Hofstadter (1964), p.77
156 Dentith (2018), p.202
157 Orr & Husting (2018), p.85
158 Ibid, p.82
159 Ibid, p.83
160 Ibid
161 Weigmann (2018), p.3
162 Thresher-Andrews (2013), p.6; Neville-Shepard (2018), p.122
163 Fassin (2011), p.46
164 Goertzel (1994), p.736
165 Hofstadter (1964), p.86
166 Simmons & Parsons (2005), p.594
167 Orr & Husting (2018), p.82 & 90
168 Dentith (2018), p.203
169

Aaronovitch talks about conspiracy theories as “history for losers”172 and it is this association with powerlessness and failed epistemologies that, perhaps unfairly, taints the entire subject. There is certainly a technical problem in talking about ‘successful’ conspiracy theories: if a theory is proved true, then it is no longer a ‘theory’ so is removed from the field of speculation and appropriated by those who administer actual knowledge. Essentially, the realm of conspiracy theory can only contain artefacts which are (as yet) unverified and unresolved, giving the whole subject the semblance of failure. However, there are many examples of conspiracy theories that have been proven to be true (see cover-up activities around the JFK assassination,173 Operation Northwoods and MK-ULTRA174) and, whilst this element of rightness precludes them from remaining ‘conspiracy theories’ per se, it indicates that some theories are definitely worthwhile.

There is a suggestion that some official narratives, where an authority has stated something as true which has later been proved to be false, are another example of conspiracy theory in action. Apter, in her review of literature, succinctly describes how governments espouse narratives that ask people to believe in some conspiracy theories (that Iraq has ‘weapons of mass destruction,’ that al-Qaeda is a coherent international network of militant jihadists, etc) whilst deterring you from making unofficial connections between the conflict in Iraq and oil supplies, or political motivations behind the fear-mongering ‘war on terror.’175

Whilst some conspiracy theories are proved true, the vast majority are probably not. In saying “when conspiracy theorists are right, it is by chance,”176 Uscinski intimates a kind of infinite monkeys/works of Shakespeare dynamic; when innumerable theories are fired out, a few will hit the target. It is this proliferation of false theories that contributes to the poor reputation of the entire genre. Some of the ideas being disseminated are unreasonably outlandish (“powerful leaders of this planet are members of a race of lizards”177), contradict millennia-old facts (‘flat earthers’) or involve such a vast conspiracy that they are nigh-on impossible (global warming is a hoax). Taking a purposive view, it is likely that many conspiracy theorists are promoting subversive concepts to follow commercial, political or psychological personal agendas.

This variation underscores a key problem in defining conspiracy theories. Many are untrue, unlikely or otherwise unconscionable. A few have been proved correct, some contain elements of truth and there is a whole range of others from the probable to the impossible. This creates a wide spectrum of veracity. Whereas some have epistemological value, others “toward the fringe”178 are aberrant from reality, making any overarching definition based on truth difficult to impose.

170 Uscinski (2018), p.3
171 Hansson (2017), p.39
172 Aaronovitch (2009)
173 Hagen (2018), p.28
174 Hunt (2008)
175 Apter (2006), p.369
176 Uscinski (2018c), p.109
177 Franks et al (2013), p.2

So, what can be said about conspiracy theories? According to Dentith, “most scholars… [think] there is something suspicious”179 about them, and this sentiment is certainly observable in the work of Fassin,180 Hofstadter,181 Wood182 and Brotherton.183 The use of ‘suspicious’ is compelling, because conspiracy theories appear to fulfil both definitions of the word: they are outwardly suspicious of mainstream narratives whilst themselves being intrinsically suspicious (or questionable) in nature. Whilst Dentith lightly condemns the widespread use of ‘suspicion’ to describe conspiracy theories,184 he also advocates “[assessing] such beliefs on a case-by-case basis,”185 a strategy that involves taking an investigative (or ‘suspicious’) look at each theory before passing judgment. Although this technique admits the possibility that a given theory could be ‘beyond suspicion,’ Dentith’s general position might feasibly be described as treating conspiracy theories as ‘suspicious until proven otherwise.’

Despite the alleged rejection of ‘suspicion,’ Dentith’s particularist186 definition of conspiracy theory is a good starting point; it primarily denies any inherent association with falsehood and encourages the appraisal of each theory according to its “individual merits.”187 This study into representations on YouTube contains a number of very different conspiracy theories spanning the spectrums of veracity and popularity; each one is therefore individually evaluated to ensure the results are contextualised and reflect the variation possible within this continuum of beliefs. The definition used will also follow Dentith’s assertion “that conspiracy theories are theories about conspiracies,”188 whilst applying Uscinski’s exclusions of “strictly paranormal or supernatural phenomena… for example, Bigfoot, Loch Ness and Chupacabra.”189

It is worth remembering that conspiracy theorists are contrarians; they posit ideas which challenge conventional knowledge. Their views are often speculative and can be irrational, “inconsistent and implausible, not to say absurd.”190 Even Dentith makes it clear that conspiracy theories are not “prima facie rational.”191 They visibly champion values of subversion, even where the basis for their opposition is built on flimsy foundations. The definition of ‘conspiracy theory’ used therefore indicates an awareness of the propensity towards conflict (and fallibility towards fallaciousness) that characterises many examples of the genre. Brotherton’s description is a useful reminder of the potential imperfections of conspiracy theories: “I define conspiracy theory as an unverified claim of conspiracy which is not the most plausible account of an event or situation.”192

178 Uscinski (2018), p.15
179 Dentith (2018b), p.97
180 Fassin (2011), p.40
181 Hofstadter (1964), p.77
182 Wood (2013), p.32
183 Brotherton (2013), p.9
184 Dentith (2018), p.197; Dentith (2018b), p.94
185 Dentith (2018b), p.104
186 Dentith (2018), p.197
187 Ibid
188 Dentith (2018b), p.94
189 Uscinski (2018b), p.49
190 Fassin (2011), p.46
191 Dentith (2018), p.104

Finally, any definition of conspiracy theory should indicate that such theories are increasingly important to society and self-perpetuating in nature. They can undermine a person’s social impulses,193 constitute a “public health issue”194 and influence presidential elections or national referenda.195 Furthermore, multiple studies have proven that people exposed to conspiracy theories are more likely to believe other conspiracy theories.196 This “slippery slope”197 towards accepting many unproven theories is compounded by the fact that many of them are “interwoven”198 by common threads and themes, with one conspiracy acting as a “gateway”199 to others. Uscinski also describes how conspiracy theories multiply “like tribbles,”200 with politicians fending off accusations of conspiracy by advancing alternative conspiracy theories until the truth is completely buried underneath unsubstantiated supposition and conjecture.201

In summary, conspiracy theories are found everywhere and can be disseminated by everyone from the “tinfoil hat crowd”202 to ‘official sources’ like presidents. Marginalised groups have used them to question governments and powerful actors have employed them to maintain their privileged positions. They exist on a spectrum of veracity, from theories which have proven true to obdurately irrational and unverifiable fantasies. It is therefore necessary to evaluate each conspiracy theory on its particular strengths and weaknesses, never assuming intrinsic falsehood. One must, however, keep in mind that these ideologies are tools of contradiction which may be primarily intended to challenge authority, or represent anti-establishment opinion, rather than prove a specific theory is correct. Regardless of their provenance or intention, contemporary conspiracy theories can have a serious impact on public behaviour, which makes them an especially salient subject for study.

2.2.3 What are the effects of conspiracy theories?

There is much interest in how conspiracy theories affect both society and individuals. With theories proliferating ever quicker, it is imperative that we understand their potential benefits and detriments. Whilst they are considered simple, innocuous ‘talking points’ by some, “[representing] a typical and healthy by-product of a thriving and open democratic society,”203 there are many indications that conspiracy theorising could cause serious, if not catastrophic, harm to the planet and its inhabitants.

192 Emphasis added. Brotherton (2013), p.9
193 Van der Linden (2015), p.173
194 Glick & Booth (2014), p.798
195 Uscinski (2018), p.10
196 Goertzel (1994), p.731; Lewandowsky et al (2013), p.8; Lantian (2013), p.19; Lewandowsky (2018), p.152; Glick & Booth (2014), p.799; Van der Linden (2015), p.171
197 Van der Linden (2015), p.171
198 Mersereau (2018)
199 Ibid
200 Uscinski (2018), p.5
201 Ibid, p.5-8
202 Wolfson (2018)
203 Thresher-Andrews (2013), p.7


Firstly, let us address the notable positive effects of conspiracy theories. Some have been proven correct, vindicating the ‘believers’ and forcing conspiring actors to admit collusion.204 They allow ordinary people to question official discourses, demanding accountability from governments and commercial entities. Dentith highlights the problem with conflating ‘official,’ establishment-proffered theories with “epistemic authority.”205 This idea is extended by Hagen, who believes that academia’s tendency towards attributing “a special level of moral purity… to presidents and other high officials… seems at least inappropriate, if not bizarre.”206 Conspiracy theory can form the framework allowing individuals to construct social critiques before introducing them to the popular consciousness. In turn, this element of ‘public investigation’ can encourage politicians and leaders towards “transparency and good behaviour,”207 because people have the legitimate means to ‘weed out’ corruption. Even if a conspiracy theory is incorrect, its focus can provide an indication of public concerns, “[expressing] social imaginaries and political anxieties that [might otherwise] remain... unheard.”208 Through giving an individual the voice to address external (societal, political, moral, etc.) concerns, conspiracy theories also invite personal reflection, education and the exercise of skills related to critical thinking and reasoning.209

Perhaps the most significant benefit that most people derive from conspiracy theories is also probably the most frivolous: they are entertaining. Tales of UFOs landing at Roswell, faked moon landings and lizard people infiltrating the upper echelons of society are fun. Considering the abundance of conspiracy theory videos on YouTube, it is clear that they are proliferating because of their power to entertain: when a video by Vsauce called “Is Earth Actually Flat?” describing various ‘flat-earth’ theories (but not comprehensively expressing an opinion on the matter) is viewed 26,432,647 times,210 one can assume there are large numbers of people watching recreationally. In addition to providing simple diversion, these peculiar tales can have further positive effects: Denver airport has embraced ‘serious’ conspiracy theories, making them into a valuable advertising gimmick.211 Unfortunately, even where a conspiracy theory has been employed for the purposes of entertainment it can have unintended detrimental effects.

Conspiracy theories are a contemporary concern precisely because they can sway public opinion, influence government policy and lead to actions which cause direct harm to individuals, specific groups or even “the long-term sustainability of human civilization.”212 As discussed earlier, they appear to self-perpetuate, in that “[exposure] to conspiracy narratives increases the belief in various conspiracy theories,”213 meaning even outwardly anodyne speculations can accumulate to influence people’s thinking and behaviour. Fasce & Picó assert that “unwarranted beliefs are not personally or socially innocuous”214 and one can argue that lending credence to some ‘harmless’ yet irrational beliefs can make it easier for other, harmful, irrational ideas to circulate. Beyond this blanket criticism, there are various ways that conspiracy theories are considered to have potentially injurious consequences; how injurious depends on the type of theory being expounded, its position on the spectrum of veracity and the intentions of the people involved in its dissemination.

204 Uscinski (2018b), p.49
205 Dentith (2018b), p.101
206 Hagen (2018), p.34
207 Uscinski (2018), p.20
208 Fassin (2011), p.41
209 Břízová et al (2018), p.1
210 Vsauce (2014)
211 Wolfson (2018)
212 Hansson (2017), p.39
213 Lantian (2013), p.19
214

Some conspiracy theories are designed to undermine conventional knowledge to maintain, or improve, business interests. Tobacco lobbies or pesticide makers might produce literature or fund studies which counter conventional criticism of the public safety of their products. Weart asserts that, in these cases, it is not necessary (or even the intention) to prove that the scientific consensus is wrong, but rather to sow seeds of doubt: “[raising] enough questions to convince the public that there was no consensus.”215 This technique is apparent in the output of climate change deniers, who frequently conceive of global warming as a conspiracy whilst supplying cherry-picked evidence designed to complicate the layperson’s understanding of the problem.216 This ‘muddying of the water’ makes it more difficult for the general public to learn about issues which could have a serious effect on their lives. Despite substantial scientific evidence which proves that (amongst other things) fluoridating water supplies is a good method for preventing tooth decay,217 climate change is being caused by human behaviour,218 genetically-modified foods are safe219 and there is no link between the MMR vaccination and autism,220 misinformation on these subjects is still rife. Even by addressing just these examples, we reveal a number of damaging effects on humankind: respectively, public bodies who reject water fluoridation expose their communities to increased risk of dental caries,221 urgent action is not being taken to arrest climate change, third-world farmers are unable to use disease-resistant crops (thus protecting their harvests and the people who rely on them) and children are dying from diseases which can be prevented by vaccination. In all these cases, conspiracy theories which question the scientific consensus and spread misinformation are responsible for causing damage directly to society and its inhabitants.

An indirect consequence of either malicious or reckless misinformation is time-wasting. In its most innocent form, this accusation could be levelled at most YouTube content; when someone watches a conspiracy theory video about UFOs it might inhibit their thesis-writing but does not adversely affect anyone else. The problem arises when misleading or fallacious beliefs which contradict established fact gain in popularity. Weart’s example of climate scientists who “found a large part of their time had to be spent not doing research, as they would have preferred, but responding to attacks and denial”222 highlights how experts are distracted from developing their areas of expertise by a constant need to defend and justify their work. In many disciplines, this is becoming a permanent bureaucratic requirement, with the scientific community forced to spend valuable time considering how to best present factual information to the public to avoid (unjustified) rejection.223 When this wasted time could “otherwise have been devoted to research,”224 then human progress is being perceptibly retarded.

215 Weart (2011), p.45
216 Hansson (2017), p.41
217 Horowitz (2003), p.6
218 Weigmann (2018), p.1
219 Ibid, p.4
220 NHS (2019)
221 Uscinski (2018), p.11
222 Weart (2011), p.48
223

Sadly, the actions of conspiracy theorists can impact more than just an individual’s use of their time. There are several accounts of scientists being targeted for their participation in studies which some theories reject. This can involve abusive language, attempts to discredit their work, exhortations to violence and death threats.225 Jay Cullen, a marine chemist, gives a personal account of insults, threats and harassment suffered at the hands of conspiracy theorists who accused him of fabricating his research on the potential fall-out from the Fukushima nuclear disaster.226 Similarly, Lewandowsky, a psychologist studying ‘climate scepticism,’ was subjected to racial abuse and his academic work targeted by a campaign to ‘silence’ him.227 This sustained attack succeeded in disrupting the publishing of articles pertinent to climate change denial and wasted a huge amount of time, effort and money.228

Conspiracy theories can also cause societal problems. They constitute a challenge to extant authorities and create uncertainty in who, or what, can be believed. Whilst there are specific cases where such theorising has been justified (through the uncovering of genuine cover-ups), in the vast majority of cases this serves to destabilise functioning mechanisms of spreading information: “establishment institutions are designed to ensure stability; conspiracy theories, on the other hand, are instruments of disruption.”229 Traditionally-trusted groups lose discursive power, leading to “the erosion of scientific authority”230 and a movement towards “hundreds of relative truths,”231 each problematised by the existence of the others. This leads to divisive, polarised and partisan worldviews which, in turn, instigate conflict between proponents of different viewpoints.232 Belief can therefore become inextricably entwined with political affiliation, with a comparative ‘truth’ depending on the ‘side’ you take. A salient example of this situation is the observation by different critics that conservatives, republicans and other persons on the political right are more likely to deny anthropogenic climate change.233

Overall, whilst the capacity for motivating people to action and independent, critical thought could be laudable, there appear to be significantly more instances of detrimental effects than there are of positive ones. Uscinski correlates “recent displays of populism, nationalism, xenophobia, and racism”234 with conspiracy thinking, and the language of conspiracy theory is often attacking, destructive and focussed on conflict. There can be adverse results for individuals on both sides of the divide, with some following misleading advice and others having their legitimate work undermined. Society can suffer when the concept of ‘truth’ becomes a matter of opinion and when contentious

224 Lewandowsky (2018), p.168
225 Mersereau (2018)
226 Cullen (2018), p.135-145
227 Lewandowsky (2018), p.149-172
228 Ibid, p.170
229 Uscinski (2018), p.19
230 Harambam & Aupers (2015), p.447
231 Jacobs (2017), p.336
232 Weart (2011), p.47
233 Van der Linden (2015), p.173; Weart (2011), p.45; Uscinski & Olivella (2017), p.1
234

