Expressing hate: How overt and covert hate speech operates online

Academic year: 2022


Expressing Hate

How overt and covert hate speech operates online

Master’s thesis 45 credits Author’s name: Tove Fäldt

Name of supervisor: Matti Eklund Name of examiner: Andreas Stokke Semester: Spring 2021

Department of Philosophy, Uppsala University


Abstract

This thesis highlights the complex ways in which hate speech operates online, which ties into more general debates on online hate speech as something special. One way of elucidating this complexity is by dividing online hate speech into overt and covert. In doing so, we can gain a better understanding both of the motivations for hate speech and of how to prevent it. While overt hate speech is widely discussed, there is not much discussion of covert hate speech, especially in online contexts. The questions this thesis raises are how hate speech operates online, and how we can understand this in terms of hate speech being overt or covert. By introducing two different ways of understanding overt and covert, via slurs and dog-whistles respectively, this thesis shows that covert hate speech also has harmful consequences. If ambiguous terms laced with negative attitudes as their communicative content seep into the mainstream, there is a risk that these negative attitudes become normalised. The very ambiguity of such terms and statements makes it difficult to take proactive measures. With these results, I conclude that covert online hate speech is a vital part of understanding the mechanisms of hate speech overall.


Table of contents

Abstract
1. Introduction
   1.1 Background
       1.1.1 Disclaimer on foul language
       1.1.2 What is hate speech?
2. Brown and Online hate speech
3. Nunberg and Slurs
4. Saul and Dog-whistles
5. Overt online hate speech
   5.1 What groups?
6. Covert online hate speech
   6.1 Covert online hate speech as dog-whistles
   6.2 Aspects of offence
   6.3 Aspects of community
   6.4 A potential worry
7. Consequences with covert online hate speech
   7.1 Normalisation
       7.1.1 How dog-whistles normalises negative attitudes
       7.1.2 Other ways of normalisation
   7.2 From dog-whistles to slurs
8. Concluding remarks
9. Bibliography


1. Introduction

Expressions of hate are unfortunately an almost everyday part of everyone’s life. While some utterances occur out of frustration or anger, others are made on the basis of a person’s social identity. These expressions are often considered to be hate speech.

In this thesis, I will look at how different instances of hate speech operate online. I will argue that this happens not only overtly but also covertly, and that covert hate speech is just as important to consider as overt hate speech. Given that many online sites have strict policies against hateful conduct, one would think that hate speech is kept to a minimum online. Yet hateful groups seem to be thriving online, despite these limitations on expression. My research questions are thus: (1) how does hate speech operate in online settings? And (2) how can we understand this through two different lenses, overt and covert?

While overt hate speech is difficult to miss, covert hate speech is more subtle. Covert hate speech exploits dubious terms or ambiguous utterances to conceal the negative communicative content. One typical way of concealing the communicative content is through the use of dog-whistles. Moreover, due to the ambiguous nature of the terms used in covert hate speech, the possibility of negative attitudes being normalised increases. Such a normalisation can have serious consequences for targeted groups. Overt hate speech is easy to condemn simply because it is often effortlessly recognised as being offensive. The same cannot be said for covert hate speech.

The purpose of this thesis is to advance the understanding of the ways in which hate speech can operate. The aim is to achieve this by looking at different instances of expressions of hate and diagnosing some problems that come with the intersection of online hate speech and overt and covert uses of speech. I will consider how overt and covert hate speech operates online by looking at three different theories. One mostly concerns my first research question, on hate speech in online contexts. The other two will be useful as a means to understand the overt versus covert nature of the communication of negative attitudes.

The structure of this thesis is as follows. In the remainder of this section (1.1), I will discuss two important preliminaries: the first is a disclaimer, and the second a clarification of what hate speech is. In sections 2 through 4, I will introduce the different theories that form the backbone of this thesis. Section 2 introduces Alexander Brown’s theory on online speech. Section 3 focuses on Geoffrey Nunberg’s theory of slurs. Lastly, section 4 discusses Jennifer Saul’s theory of dog-whistles. This concludes the first half of the thesis. In sections 5 and 6, I will divide online hate speech into overt and covert and describe how each operates online. In section 7, I will look into the implications of this division and examine the consequences of covert online hate speech.

Section 8 will conclude this thesis with some final remarks.


1.1 Background

Before getting into the thesis proper, I want to make two things clear. First, I want to give a disclaimer that this thesis will contain some explicit language. Second, I want to give a proper introduction to hate speech overall.

1.1.1 Disclaimer on foul language

It is worth mentioning that, because of the topics this thesis touches on, some foul and offensive language will appear. I will do my best to avoid any unnecessary mentions of slurs or other pejoratives. But as one might suspect, it will sometimes be necessary to mention such expressions in order for the examples to make sense. That being said, I have tried to avoid the more explicit terms and arch-slurs. It is not my intention to be offensive; rather, my intent is to be as pedagogical as possible, which is not possible without some leeway. Nevertheless, I wanted to include this small subsection as a cautionary one. In short, reader discretion is advised.

1.1.2 What is hate speech?

There are different ways one can talk about hate speech. For example, there is a judicial sense of hate speech that is stricter than the more colloquial sense. In judicial instances, a lot of supplementary information is necessary to decide whether an utterance was a case of hate speech and not simply an expression of a personal opinion, or slander. Such a definition requires further explication when it comes to speaker intent, context, and perhaps even reactions to the utterance. None of these things admit of easy, unambiguous answers. For these reasons, the judicial sense is too strict for the purposes of this thesis. Moreover, I opt out of this route to achieve some sort of universality: the judicial definition of hate speech in Sweden greatly differs from the one in the UK, for example, and some countries do not even have legislation against hate speech. I will therefore adopt a slightly more common-sensical definition of hate speech. For the purposes of this thesis, I will take hate speech to mean, in the words of the United Nations:

“The term hate speech is understood as any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor” (UN 2019, 2).

That is to say, hate speech is any form of utterance that expresses contempt solely because of a person’s or group’s social identity, i.e. their nationality, race, ethnicity, sexuality, gender and so on.

In addition to utterances, the definition also includes behaviour as a form of communication.


Furthermore, I will take it that whether an utterance counts as hate speech is not decided by intent, as intent is difficult to establish. This means that a speaker cannot say something like “homosexuals are vermin” and then go on to claim that it was not their intention to be offensive.

What matters is that the speaker’s utterance can, on the most reasonable interpretation, be considered hate speech. Palle Leth (2019) has argued that the most reasonable interpretation, when it comes to charges of racism, can be understood at three levels (Leth 2019, 139). I believe that this can be extended to other relevant charges within the realm of hate speech, like the homophobia in my example above. The three levels are attitude, intention and meaning. A speaker may genuinely not intend to cause offence, but on the most reasonable interpretation made by the hearer at the level of meaning, the utterance can still be met with the charge of homophobia. That is to say, the utterance itself is what carries the homophobic meaning according to the hearer’s interpretation. This avoids the tricky business of determining speaker intentions, over which the speaker has first-person authority, as well as speaker attitudes. Instead, it employs a more interactional approach that also incorporates the hearer’s interpretation of the utterance.

However, one thing that can make all the difference here is who the hearer is, or in the case of an audience, who the hearers are. For example, what counts as the most reasonable interpretation may vary greatly from one hearer to another. Do we take a majority view into account?

Do we focus on the social identity of the hearer, as having some sort of interpretive prerogative? These are serious challenges for hearer interpretations, especially when the matter at hand is something as grave as hate speech. They are perhaps even more serious if we want to make legislation in accordance with hearer interpretation that imposes strict liability. Recall that the purpose of this section is to give some understanding of which occurrences count as hate speech. The implications of who the hearer is and how they react carry more weight when there is a risk of punishment. It is not my intention to discuss the best way of interpreting hate speech, nor to analyse exactly what makes an utterance hate speech.

It is rather to give some background to what the rest of this thesis will be about.

For the purposes of this thesis, then, hate speech is any form of communication mentioned in the quote, with the addition that the communication can, on the most reasonable interpretation at the level of meaning, be met with a charge of racism, homophobia, sexism, etc.

In the next section, I will contextualise hate speech on the internet. The internet introduces some interesting characteristics that may affect how hate speech occurs. If nothing else, the internet appears to be a suitable environment for instances of hate speech and hateful ideas. The next section will describe the tensions that the online environment introduces into hate speech.


2. Brown and Online hate speech

There are analogue channels for communicating hate speech, such as posters, pamphlets, or newspaper and magazine articles. Hate speech can also be communicated through televised media or radio. Then there are more interactional forms of hate speech: a perpetrator slurring openly on a bus, a hateful demonstration on the town square, people in a coffee shop expressing hateful ideas in conversation, and so on. A lot of hate speech happens offline; let us call these occurrences offline hate speech. But much hate speech also happens online, which naturally leads us to the question: are there important differences between offline and online hate speech? One reason for believing that there are is the attitudes towards it. In most cases, online hate speech is not considered to be as serious as offline instances. If anything, this points towards the difference lying in the pragmatics of hate speech rather than in semantic content.

In this section, I will give some reasons for suspecting that online hate speech is different from offline hate speech. I will present an overview of Alex Brown’s (2018) theory of online hate speech. Brown argues that no single peculiarity of online speech makes online hate speech different or special compared to offline hate speech; rather, it is a combination of various aspects which, taken together, constitute the difference between online and offline hate speech.

Brown first notes that one thing that often comes to mind in regard to online speech overall is the aspect of anonymity (Brown 2018, 298-299). A contributing factor for a hate speaker could be that they feel as if they lack accountability for their actions, since nothing connects the utterance to themselves (more specifically, to their offline self). If the same thing were done offline, it would perhaps be more difficult to escape this accountability.

They could, for example, get caught red-handed shouting slurs at someone. However, as the internet has evolved since the introduction of social media, anonymity is not as certain as it once was. A person can be linked to many accounts online. It is not too rare to have your Facebook, Twitter, Instagram and YouTube profiles all linked together, creating an online identity that can be further linked to your offline identity. This identity can include personas on other online forums, such as accounts linked by email to news sites, shopping sites, Skype or Discord. All of these can be traced back to your offline self if necessary. The online identity is no longer as detached from the offline identity (Brown 2018, 299).

Moreover, even granting that a feeling of anonymity is part of the explanation, it cannot on its own account for how online hate speech could be different. As Brown notes, a stranger shouting slurs on a train may just as well be anonymous to the target of their hate, in the sense that the target does not know who the perpetrator is. In the turmoil, it could even be difficult for the target to remember the perpetrator’s face. Yet in this scenario, the perpetrator cannot escape accountability if someone protests the hate speech taking place. That is to say, even if the perpetrator is and remains anonymous throughout the event, they can still be held accountable for their actions. Anonymity does not prevent backlash; it is a false sense of security that is carried over online. The aspect of anonymity is potentially just as important offline as it is online.

Another aspect Brown considers is that the perpetrator may feel invisible (Brown 2018, 300). Perhaps this is closer to what the anonymity aspect was getting at. Even someone who has many of their accounts linked online can create an illusion of being invisible. There is invisibility for the perpetrator in the sense that the target cannot see them or their face. There is also invisibility in the sense that the perpetrator cannot see the target, and could thus be desensitised to writing something offensive to them. As previously mentioned, it is sometimes said that what happens online is not really real, essentially suggesting that all utterances of hate speech online are just harmless flaming. However, this form of invisibility is not unique to online hate speech, according to Brown. The same sort of distance can be found in articles, pamphlets, posters and ads: the hateful message can be conveyed without the creator being physically present (Brown 2018, 300).

There is also an aspect of community online. Brown argues that most, if not all, online platforms consist of and encourage community building (Brown 2018, 302). This, of course, includes communities surrounding hate and hate speech. Having a community may encourage individuals to engage in hate speech who would not otherwise have done so offline, where they are not tethered to a community in the same way. It gives a sense of belonging with like-minded people, making it easier to find strength through sheer numbers and the support of other members. Community building online has been, and still is, a strategy often employed by far-right movements (Adams & Roscigno 2005, 759-760; Bliuc et al. 2020). Offline hate groups can amplify their reach by going online and introducing new members – members who may otherwise not have engaged in hate speech or hateful ideologies because of their isolation. But perhaps, Brown suggests, this is more of a difference in methods of community building and not a contributor to a difference in online hate speech per se (Brown 2018, 302).

Brown argues that the aforementioned aspects of online speech cannot on their own account for a difference when it comes to online hate speech. As mentioned, many of these aspects exist outside of online contexts as well. Brown continues that there is a missing element of online hate speech that revolves around the speed and spontaneity that the internet appears to offer and encourage. Brown calls this feature instantaneousness (Brown 2018, 304). Because the internet provides almost instant publishing capabilities to massive audiences and a close to non-existent time-delay between thought and expression, Brown believes that the internet encourages this form of spontaneous and instant hate speech. He argues that people tend not to take time to reflect over what they post; posts are often unconsidered remarks, first thoughts or gut-reactions (Brown 2018, 304-305). The spontaneity and speed might be what drives online hate speech. But what is specific to online hate speech is how these platforms encourage speed and spontaneity.

Again, instantaneousness is not on its own sufficient to explain the difference from offline hate speech. The fact that the internet provides almost instant publishing capabilities is not unique to the internet. Speaking, in a sense, also provides instant “publishing” capabilities. Usually this is not to a big audience, but it can be: consider a speech at a protest or demonstration, some informal event with a lot of people present, or a televised debate. Surely, it would not reach as many as it could on the internet, but it would still be a significant audience. The same applies to spontaneity, which is not unique to the internet either. If anything, it appears easier to blurt something out while speaking than in writing. It is not even that uncommon to say something that you have not completely thought through before saying it. Even if someone were to record themselves saying something spontaneous, there is still some time to reflect before posting. And, even further, after having posted it you can still delete it; something you cannot do when speaking offline.

Recall that the point Brown stressed was that instantaneousness encourages online hate speech. This does not rule out the existence of carefully considered online hate speech, nor does it deny the existence of spontaneous offline hate speech. This suggests that the difference in online hate speech is not due to a single aspect of the internet as a platform. To Brown, it is only natural that online hate speech reflects the qualities that make online communication overall different from offline communication (Brown 2018, 306).

Brown is never specific about exactly in what sense online hate speech is different or special. Is the difference a difference in content? That is, does hate speech mean something different online? As I understand it, the difference mostly concerns pragmatics, due to the aspects of the internet that Brown mentions. What we can conclude is that if there is a difference between online and offline hate speech, it is due to a collection of reasons, combined with the fact that the internet encourages a specific sort of speech. However, the question of harm remains: does the difference in context make for a difference in harm?

Brown argues that there are both quantitative and qualitative aspects of the harm of online hate speech that need to be investigated (Brown 2018, 306). The quantitative aspect is simply whether online hate speech occurs more often. The qualitative aspect concerns whether online hate speech differs in its effects: is it more or less harmful, or the same as offline hate speech? These questions are difficult to answer without empirical evidence. For now, a theoretical discussion will simply have to rely on assumptions about what is reasonable to assume, which Brown believes would be inconclusive (Brown 2018, 307). There is also a subjective element to hate speech, considering that it can be offensive: some might think that online hate speech is more offensive, while others do not take it seriously. For my purposes, I will set aside the question of whether online hate speech is special in the sense of being more or less harmful. It appears that the main functions of hate speech remain when it is moved online.

The only important difference pertains to the pragmatics of hate speech.

Having introduced online hate speech as something separate from offline hate speech, let me now take a step back in the next section. When it comes to hate speech, online just as offline, there is a myriad of ways to express it. As we will see later in this thesis, this can be done both explicitly and implicitly. Let me start with perhaps the most evident candidate for hate speech, namely slurs. Although a statement does not automatically become hate speech by containing a slur, slurs are interesting to look at in relation to hate speech in virtue of their peculiar properties as a particularly pernicious way of expressing hate.


3. Nunberg and Slurs

Using a slur is an effective way of expressing contempt towards a target group. In this sense, slurs seem to have some peculiar properties compared to other derogatory terms like “idiot” or “fucker”. Often enough, the mere utterance of a slur is offensive in its own right – almost no matter the context – and some slurs appear to be more offensive than others: compare “kike” with “honkey”. Yet other slurs, like “queer”, have started what seems to be a journey of appropriation and reclamation. Considering this, two questions need to be answered to understand slurs: What do slurs convey? And how do slurs harm?

There are at least two popular routes to take when answering these questions. Let us call the first the semantic approach. The semantic approach can be roughly summarised as attaching semantic content to slurs by means of offensive stereotyping. Robin Jeshion (2013) summarises these accounts as typically “tak[ing] (non-appropriated) uses of slurring term to semantically encode and express or conventionally implicate stereotypes of the group that is referenced by the slur’s neutral counterpart” (Jeshion 2013, 314). The strengths of the semantic approach mainly lie in fitting our intuitions about slurs: both that the connection between slurs and stereotypes comes naturally, and that there is something special about slurs compared to other pejoratives.

However, focusing on the semantic content of slurs risks missing some important elements in the pragmatics of slurs. The semanticist cannot explain cases in which a slur is combined with a positive statement. If the semantic content of a slur encodes a (negative) stereotype, the positive statement would make the sentence contradictory. Consider a sentence like

(1) “I think the chinks are greatly misunderstood, a great many of them are good at driving”

Offensive as it may be, it does not sound like an unlikely sentence to be uttered. What this sentence is intended to express is that, contrary to “popular” belief, a few individuals belonging to the target group are not fully included in the stereotype that the slur itself expresses. However, if the property of being bad at driving were an inherent feature of the semantics of “chinks”, as the semantic approach suggests, then this would be contradictory. It would perhaps be better to suggest that, instead of being in the semantics of a slur, such content is carried by the conventional implicature of the slur. With such implicatures, making the additional information explicit is not contradictory but helpful for the conversation. This is not something the semanticist can account for. Perhaps, then, a more desirable approach would be one that does not focus on the semantic content of a slur. One plausible step to take here is to argue that the negative stereotype lies in the conventional implicature of the slur, as was mentioned in the quote by Jeshion. By taking this route, the conventional implicature can rightfully indicate an exception to the rule without infringing on the semantic content. But given the nature of conventional implicatures, similar problems can occur here. For example, consider what happens if, instead of adding a positive trait in combination with a slur, emphasis is added to the negative trait characteristic of the slur, so that someone instead utters

(2) “The chinks are horrible drivers”

Since we are dealing with a conventional implicature here, the suggestion is that the conventional linguistic meaning of a slur is something like a stereotype, and this is what is implicated when the slur is uttered. But that would make (2) an unnecessary redundancy, or even a tautology (Nunberg 2018, 249). Still, it seems as if some sort of implicature is at play in slurs; let us look at what such an implicature could be.

An alternative route is to drop the focus on semantics and take a non-semantic approach. In this section, I will discuss the non-semantic approach proposed by Geoffrey Nunberg (2018) to answer the two questions. Nunberg is a proponent of a non-semantic approach that focuses on the socio-linguistic aspects of slurs. It suffices to keep the semantics of slurs at a minimum, Nunberg argues; what matters in understanding slurs is that they have deep roots in speaker attitudes (Nunberg 2018, 245, 252). But this alone does not take us very far. Derogatory words like “idiot” or “fucker” are also ways of expressing a speaker’s negative attitudes towards a target. Slurs, in particular, are more marked than this; we notice when someone is using a slur, as a slur.

Slurs are best understood in consideration of how they are used, Nunberg argues. It is not always the case that slurs are used only to offend (Nunberg 2018, 253). Limiting the function of slurs to expressing contempt would be a mistake. Nunberg argues that slurs can also be used to create solidarity between users by marking themselves as an in-group in direct contrast to an out-group (Nunberg 2018, 253). Moreover, some speakers may use slurs for the simple reason that they believe it is humorous or entertaining to use inappropriate language (Nunberg 2018, 254). Lastly, Nunberg adds, slurs can be used to emphasise normative values pertaining to the in- and out-groups (Nunberg 2018, 253). These additional uses allow us to paint a more intricate picture of slurs. Keeping this in mind, let us look closer at what slurs convey.

Whether or not slurs are used with the intent to harm, they are still terms that mark a conversational transgression when uttered. Nunberg suggests that using a slur transgresses a maxim similar to the Gricean maxim of Manner. Although Grice’s maxim of Manner said “Be perspicuous”, Nunberg suggests that what is breached is a closely related submaxim like “using appropriate language” (Nunberg 2018, 244). Not only is it essential to use a language that both conversationalists are familiar with – say, using English to speak to English speakers – it is also important to use language appropriate to the context. For example, it is not appropriate for me to cuss in a conversation with the King of Sweden, but it may be appropriate to cuss when I am conversing with my grandfather, who is of the same age. For instance, to the King I might inappropriately utter how “goddamn fun” it is to meet him, instead of the more appropriate “incredibly joyous”. The latter seems the more suitable alternative in that context. However, if I said the latter to my grandfather, he would probably raise one or two eyebrows – especially if our jargon is more like the former utterance. Terms that are more or less appropriate given the context in which the conversation takes place can be said to be marked.

Slurs appear to work in a similar way. Let us look closer at how the markedness of terms can be relevant for understanding slurs.

Nunberg characterises the use of marked terms as a form of ventriloquistic implicature. A ventriloquistic implicature arises when a speaker uses a marked term to also express something extra-linguistic. Nunberg argues that this can be done to express an association with a group that one is not otherwise directly associated with, for example by using slang or an idiolect (Nunberg 2018, 266-267). That is, if the speaker had chosen a default term, this association would not have taken place. A default term is, in short, a term that one assumes the least responsibility in using (Nunberg 2018, 273). Let me briefly stop and consider what this means. Say that someone you know to be from Gothenburg utters an idiolect from there. Is the idiolect uttered here a ventriloquistic implicature? It would seem not, since the speaker is already associated with the group that uses such marked terms. A ventriloquistic implicature expresses extra-linguistic information about association with a group one is not otherwise taken to be part of; if this association is already clear, the idiolect would not be a ventriloquistic implicature. I take it that what Nunberg means is that, even if the speaker is directly associated with the group who uses such a marked term, this association is further made explicit by the utterance. The known Gothenburg dweller reinforces their association with other people from Gothenburg by using this idiolect. Perhaps you are well aware that the speaker is from Gothenburg, but the utterance of the idiolect reinforces the connection with other Gothenburg inhabitants.

There are, Nunberg argues, some features that ventriloquistic speech-acts have in common with slurs. And, as we shall see, marked terms will be further relevant for my purposes in the coming sections.

First, ventriloquistic speech-acts are difficult to cancel. Second, the implication of using one term instead of another encodes certain attitudes in the speaker. Third, for this implicature to work, there needs to be a default counterpart term. In short, a speaker invests their utterance with attitude through a specific choice of words and communicates this attitude to a listener with the implication of affiliating with a group. For example, imagine a conference where people from multiple academic disciplines meet. One way of showing that your discipline is, say, gender studies, is to use terminology that expresses your status as someone invested in that discipline. Just like any other discipline, gender studies has expressions that are unique to the field, or that at least have a more technical meaning in the field. Further, some colloquial terms are more common in gender studies than in other areas; there is a certain style of writing and speaking about the topics of the discipline. Telling an interlocutor after a presentation you both watched something like “the presentation really lacked a queer perspective, I believe that there could have been a lot to unpack from that” could signal this form of affiliation with your own field of research without explicitly stating it as being gender studies. Not many who are unfamiliar with the topics of gender studies would use “queer perspective” in such a context. “Unpack” is also a common colloquial term in feminist circles.

Nunberg argues that ventriloquistic speech-acts can further be a case of affiliatory speech-acts, where speakers choose terms specifically to affiliate themselves with a certain group (Nunberg 2018, 273). More precisely, they are affiliating themselves with the conventions of language use in that group. When it comes to slurs, these groups need a distinct disparaging term to express contempt towards a target group, whereas, for non-members, there is no need for distinct hateful terms to express contempt (Nunberg 2018, 268, 278). In using a marked term, a speaker is conforming to the conventions and attitudes held by that group. Slurs are created and developed by and for these groups. Their meanings and functions are therefore not entirely shaped by abstract societal forces, Nunberg argues, but are moderated “(…) by the interests and self-conceptions of the specific communities that coin and own them” (Nunberg 2018, 279).

This is where the offensiveness of slurs comes in. It is not due to the meaning itself, in any conventional sense, that a slur is offensive. Slurs gain their impact through associations with the groups who use them. The offensive nature of slurs is directly connected not only to the voice of the speaker, but to the voices of a history of speakers from hateful groups who have used the slurs before. The target groups of slurs have often been subjugated, marginalised, hurt or discriminated against by the users of these slurs (and in general). What is recalled and regarded as offensive is thus not only the utterance and the attitude of the speaker, but also previous heinous acts against the target group. It is a combination of a history of harm and a threat of potential harm that drives the impact of slurs.

However, even a mere mention of a slur can be offensive to some. It might even evoke feelings of complicity in hearers. This can be explained by the fact that the person who utters the slur has mistaken the context for an appropriate one, which could make listeners feel as if they might have done something to make the speaker comfortable enough to use a slur. Being wrongly associated with a hateful group will no doubt make anyone feel uncomfortable.

Lastly, this leaves Nunberg’s account with two consequences. First, Nunberg’s approach can account for the evaluativeness of slurs. For example, “honkey” as a slur for white people is not as offensive as a slur like the n-word for Black people, since the discrimination and subjugation of Black people are incomparably more severe. Furthermore, different slurs for Black people, such as the arch-slur the n-word or “coloured”, have different evaluative meanings. The n-word has a long and dark history of being used in explicitly vulgar contexts. “Coloured”, on the other hand, is still an offensive slur, but its connotations are not as severe. For example, it is still used in the name of the National Association for the Advancement of Colored People (NAACP).

Let me summarise what has been said in this section. Nunberg has argued for a socio-linguistic approach to slurs. This includes treating slurs not only as a means to offend, but also, among other things, recognising them as an important vehicle of group affiliation. The group affiliation is approached through the notion of ventriloquistic speech-acts: a speaker uses a marked term to show an affiliation with a specific group, despite there being a default term that would not associate the speaker with this group. The offence comes from the term being associated with that group and the negative attitudes that they hold. It is a combination of the history of harm, the oppression and subjugation of the target group, and a threat of harm that determines the evaluative aspects of slurs.

Nunberg summarises slurs as “(…) A special case of the way speakers exploits socio-linguistic variation to create self-representation and invest their utterance with attitude” (Nunberg 2018, 290). For the purposes of this thesis, I will assume that Nunberg’s theory of slurs is correct. The reason for doing this will become clearer in the coming sections. For now, it suffices to say that a non-semantic approach to slurs as a means of overt hate speech is desirable for reasons of parsimony. As will become clear in my next section, a semantic approach may not be successful in all aspects of language use, and an overall pragmatic approach makes for an easier understanding of the elements of online hate speech.

Having discussed slurs as an explicit way of expressing contempt, although they may not always be used as such, let me now move on to a more hidden way of expressing negative attitudes. Sometimes subtlety is preferred when it comes to which attitudes are expressed, and to whom these attitudes are communicated. One such way of communicating hidden content is through dog-whistles, which will be the focus of my next section.


4. Saul and Dog-whistles

Recent years have seen an increase in what is now called “dog-whistle politics”: roughly, politicians using certain terms with the intent of manipulating their audience. This is especially the case in the United States, but the manipulation tactic is slowly gaining ground in other countries as well.1 Dog-whistles make for a useful tool of manipulation; they can mark that a speaker is part of a group that they might not typically be considered to belong to, without raising any suspicion. For instance, using gamer slang can signal to other gamers in the vicinity that you play video games. Non-gamers may not think much of it, but the people who share your interests will pick up on the information you are conveying. Or, to use a familiar example, using dialect expressions to communicate with people from the same area as yourself.

In this section, I will discuss different kinds of dog-whistles as identified by Jennifer Saul. Given the purpose of this thesis, I will primarily focus on the kinds of dog-whistles that are more threatening and potentially harmful. These dog-whistles are therefore better compared with slurs than with slang. Saul introduces two distinctions to help understand dog-whistles: the first between intentional and unintentional, the second between overt and covert. These distinctions yield four different categories: overt intentional, overt unintentional, covert intentional and covert unintentional. I will mostly be focusing on intentional dog-whistles, because these are what will be most relevant later on. However, I will briefly mention what overt and covert unintentional dog-whistles are and why they form a separate category. Let me begin with the intentional dog-whistles, starting with the overt intentional kind.

Overt intentional dog-whistles are perhaps the most easily spotted and most common form of dog-whistles. An overt intentional dog-whistle seeks to convey a hidden message to a subset of the audience, where this subset can decode the message conveyed (Saul 2018, 363). The overt intentional dog-whistle can typically be found in speeches by politicians. By using terms often used by, or easily noticed by, specific groups, a politician can mark that they are on those voters’ side, while non-members do not recognise these terms as special and therefore fail to recognise the dubious message being conveyed. This happens without the repercussions the speaker could otherwise have faced had they explicitly aligned themselves with the beliefs of that group. This can be a helpful tool in many circumstances. Other than in politics, it can be used in marketing, as it was in a Subaru campaign during the 1990s. Subaru used hidden messages to communicate to the LGBTQ audience without raising suspicion from the more conservative parts of their audience.

1 For example, Australian political strategist Lynton Crosby has implemented dog-whistle politics in both Australian and British politics. See https://theconversation.com/fattened-pigs-dog-whistles-and-dead-cats-the-menagerie-of-a-lynton-crosby-campaign-60695. See also Filimon (2020) on dog-whistle politics in the Nordic countries. For dog-whistle politics in the U.S., see Anderson (2015), Aziz (2019), Drakulich, Wozniak, Hagan and Johnson (2020), as well as Whitley (2014) and Haney-Lopez (2013).


The license plates on the pictured cars read “XENA LVR” and “P-TOWN”2, both referencing things that the LGBTQ community could decode. In the same campaign, messages with dubious meanings were used to emphasise this point: “Get out. And stay out.” could be read as connected to coming out, or as a reference to Subaru’s focus on the adventurous and outdoorsy customer (Mayyasi 2016). In the context of the somewhat more homophobic U.S. of the 1990s, explicitly aligning themselves as supporters of the LGBTQ community could potentially have cost Subaru a good portion of their client base.

The other form of intentional dog-whistle is the covert kind. This is slightly more complex, since it does not serve to communicate specifically to one subset of the audience; rather, it appeals to an entire audience’s pre-existing attitudes (Saul 2018, 365-366). As such, it can be used as a powerful political manipulation tool. Saul argues that covert intentional dog-whistles can be understood as a species of perlocutionary speech-acts. More precisely, Saul calls this class covert perlocutionary acts, which could also include deception, manipulation and lies. Saul argues that this sort of perlocutionary act only succeeds if the intended perlocutionary effect is never recognised as being intended (Saul 2018, 377), just as lying is only successful if the lie remains undiscovered. A perlocutionary act is one of three classes of speech-acts; in addition, there are locutionary and illocutionary speech-acts. A locutionary speech-act is, simply put, the utterance of a meaningful statement. An illocutionary speech-act covers a family of speech-acts that can be boiled down to utterances carrying a certain force or intent from the speaker, like promising, requesting, marrying or firing. A perlocutionary act, finally, is a speech-act that affects the listener in some way, like inspiring them, changing their course of action or deterring them.

Covert intentional dog-whistles function through appeal to existing attitudes. These attitudes need to have been there before the utterance; they are not instilled in the listener upon hearing the dog-whistle. A racially charged covert dog-whistle does not simply cause the listener to have racist attitudes. However, if the listener already has a certain bias, then the dog-whistle serves to act on that bias. For example, it has been shown that these dog-whistles have little to no effect on individuals at the more racially liberal end of the spectrum, as opposed to individuals at the racially resentful end, where the effect seems to be significant (Saul 2018, 366-368). In one study of the effects of coded messages, Rachel Wetts and Robb Willer found that “implicit racial appeals increased the effect of racial resentment” (Wetts and Willer 2019, 12) when it came to people supporting policies on, among other things, gun control. The implicit racial appeal (i.e. the dog-whistle), however, was most effective if people were already high in racial resentment (Wetts and Willer 2019, 15). An analogy can be made to actual dog-whistles. You cannot hear a dog-whistle unless you have canine hearing; you cannot “hear” or act on a covert intentional dog-whistle unless the attitudes are already there, consciously or not. Of course, it is still possible to recognise a dog-whistle without having these attitudes, but then the perlocutionary effects are unsuccessful. You can still hear a lie, but you do not have to be deceived by it.

2 “XENA LVR” is a reference to the 1990s television series “Xena: Warrior Princess”, in which the main character and her sidekick have a presumed romantic relationship. This was never made official, but it is open for interpretation; as such, Xena became a symbol for the queer community. “P-TOWN” is a reference to Provincetown, a town in Massachusetts that is frequented by members of the queer community as a vacation spot. See Mayyasi (2016).

The focus of a covert dog-whistle is its intended effect, which is to recall the pre-existing attitudes. Covert intentional dog-whistles rely on the listener being unaware of the intentions behind the effects. Once revealed, the covert dog-whistle loses its intended effects, just as with deceit and manipulation.

Lastly, let me briefly mention unintentional dog-whistles, both overt and covert. As the name reveals, an unintentional dog-whistle occurs when a speaker is not aware that the term they utter is a dog-whistle. For example, a colleague might be talking about something they overheard on their commute. They tell you

(3) “So apparently, if the guy on the train is anything to go by, pit bulls are the most violent dog breed. Did you know that they only make up about 13% of the dog-population but they are responsible for 52% of all attacks on other dogs? That’s insane, how come we let people have pit bulls?”

Unbeknownst to your colleague, this was never a discussion of violent dog breeds. The person on the train was actually using a dog-whistle to communicate negative attitudes about Black people to their friend. The 13%/52% ratio is a common dog-whistle3, and so is comparing race to different dog breeds. Reiterating the dog-whistle continues the spreading of the implicit message conveyed, regardless of whether your colleague caught the hidden message. This would be an overt unintentional dog-whistle. Another example might be the re-airing of a political ad in which a covert intentional dog-whistle is used. Using a dog-whistle unintentionally can be said to amplify its hidden message (Saul 2018, 368). This is part of the trickier side of dog-whistles: since the term is innocuous on the surface, it is only natural that it will be picked up by people outside of the target group, or by people in general. This extends the reach of the original utterance beyond the original audience, acting as a form of amplifier. When it comes to covert unintentional dog-whistles, it could be argued that this amplification is intentional on the originator’s side. Saul argues that the covert unintentional dog-whistle can be particularly problematic since “(…) people are made into mouthpieces for an ideology that they reject” (Saul 2018, 378).

3 13% refers to the claim that the Black population in the U.S. makes up 13% of the total population but commits about 52% of all crimes. See ADL https://www.adl.org/education/references/hate-symbols/1352-1390


In sum, dog-whistles are a special form of hidden communication. They can be divided into overt, which have a target audience, and covert, which operate implicitly on audience attitudes. I have settled on focusing on intentional dog-whistles, but I have briefly mentioned instances where dog-whistles are unintentional.

In my previous section on slurs, I ended by saying that a non-semantic approach is preferable for understanding the coming discussions of online hate speech, primarily for reasons of parsimony. Unlike slurs, dog-whistles cannot plausibly be explained in terms of semantics alone. If they could, the effects that especially covert dog-whistles have would be null and void. In any case, the focus with dog-whistles is not on semantic content but on pragmatic effects. The main reason for my preference for a pragmatic approach to both slurs and dog-whistles is therefore to retain theoretical parsimony.

In my next section, I will return to explicit ways of expressing hate, this time focusing on online contexts.


5. Overt online hate speech

In this section, I will argue that one issue that comes with the move online is connected with the strict policies that most online platforms have. This, however, will not affect the account of slurs that I have introduced. I will argue that slurs can be described as an instance of overt online hate speech, where the speaker is explicitly hateful towards a target group.

An alternative way of formulating the explicitness of slurs is to say that slurs have offensive autonomy: the terms are offensive in themselves, regardless of intent or context (Bolinger 2017). This partly explains why slurs are difficult to cancel. Uttering a slur cannot be salvaged by adding “I’m not a racist but…” or “No offence, though” before or after the slur. The slur itself marks a hostile attitude, because of the conventions of the communities who use it, in a distinct way that cannot easily be reconciled. This markedness is what gives slurs their offensive autonomy. One exception can be when one is talking about slurs, for example in academic contexts or when there has been controversy due to a mention of a slur. But even in such circumstances, the markedness does not simply disappear. There is often a lingering sense of discomfort for many people when they encounter a slur.

I will call these explicit occurrences online, like slurs, overt online hate speech. When a speaker uses a slur online, just as offline, they are explicitly aligning themselves with the hostile attitudes associated with the term. That is to say, it is an overt way of expressing hate. In this sense, overt online hate speech can also include non-slurring acts of hate speech. For instance, a speaker can replace the slurring term with a neutral counterpart while still expressing contempt for a target group, as in “Asians are the scum of the earth”. What is special about overt online hate speech is that it is explicitly stated as such. As mentioned in section 1.1.2, hate speech does not only include slurs, but is rather any utterance that expresses contempt towards a target group.

Slurs, however, have a special place in overt online hate speech since they are easy to spot and stop online. Yet even if overt online hate speech is easy to recognise as such, there is something that complicates using this form of hate speech online: it is prohibited on most platforms, which have strict policies surrounding the use of hate speech. The policies may vary in degree, but the gist is the same: hate speech is not welcome. The consequences of violating these policies depend on numerous factors, such as the track record of the speaker as well as the severity of the utterance (Twitter 2021). Some violations might result in the speaker being urged to delete the relevant post, or in the post being automatically deleted. Speakers can receive a timed writing or content ban, or, if the offence is recurring and particularly injurious, they can be permanently banned from the platform. Due to the threat of repercussions, some users may choose to migrate to platforms whose policies are less strict, or that have no policies at all.

Even if overt online hate speech is easy to spot and condemn, some platforms need to implement further methods of prevention. For example, some websites include special filters or bots that automatically detect and block posts and users. Slurs, given their offensive autonomy, are easy targets for this sort of strategy. If the purpose of these filters is to prevent offence, it is natural that the terms with offensive autonomy are the first to get blocked, since they are offensive in almost any context. But the fact that slurs are the kind of hate speech most susceptible to being policed affects their use without also affecting their import. A lot of people still use slurs online, and in the ways that Nunberg suggested: to show some sort of status through using prohibited terms, as something similar to slang, or to reinforce that the utterer is not part of the group targeted by the slur.

Many speakers still use slurs online, but there is always a tangible risk of being banned or censored for doing so. However, there is no difference in content per se; the difference lies in the environment or context. Namely, the difference between slurs online and slurs offline is equal to that of speech overall online and offline, similar to what Brown argued. The slurs are still understood relative to the conventions of the communities who own them, and these communities exist both offline and online. Whatever differences there may be in these conventions due to being online, they would not be sufficient to drastically change the import of slurs.

One such potential difference, however, is that the restrictions implemented by hate speech policies are only evident when it comes to published forms of offline media. For example, TV companies, magazines or newspapers can choose not to publish an article or report because it does not follow their policies on hate speech. Such policies in offline media can thus regulate what sort of content is published. However, this does not extend to cases of offline hate speech that are not connected to some form of media or publishing capability: think, for example, of a quick and hateful exchange between strangers, or a menacing speech at a demonstration. Although hate speech is illegal in some countries, far from every occurrence is judicially recognised as such, and it is ultimately not always reprimanded.

The Internet reflects the above distinction as well. On the one hand, you have sites that are more akin to offline media, like news sites, blogs etc. On the other, you have sites that are more socially relaxed, like Facebook or Twitter. Social media sites like Facebook and Twitter are perhaps more comparable to, say, a virtual town square. The things that are posted are not first channelled through editors as they are on news sites. However, it is easier for social media sites to recognise occurrences of hate speech in the way their policies have formulated it. The difference in this aspect of online and offline hate speech is that online it is more likely that an individual can be reprimanded for breaking a hate speech policy: both because it is easier to keep some sort of track record and proof (in the form of posts or comments), and because the policies are often formulated so that the definition of hate speech covers many different instances. The consequences of breaching these policies, as mentioned earlier, can be anything ranging from a warning to a permanent ban. Although it is interesting to consider whether social media sites ought to have this sort of power over what speech is tolerable, my point is simply that the platforms do have these forms of restrictions. It is important to understand how more “casual” forms of online hate speech are effectively different from the same forms of offline hate speech.

I have now distinguished overt online hate speech as speech that is explicitly found to be hate speech, often by the platforms’ policy standards. I argued that this makes “casual” forms of online hate speech encounter a different set of problems than similar offline utterances. Before that, I also argued that there is no important difference in how slurs are used online, due to their offensive autonomy, which translates well into online contexts.

5.1 What groups?

Before moving on to what I call covert online hate speech, I want to further limit my discussion by distinguishing the individuals and groups who are most likely to use covert hate speech. To make my case, consider a formerly debated subject in Sweden. For some time there was, and to some extent still is, a heated debate over the proper term for the Swedish no-bake pastry now called “chokladboll”, or chocolate ball. Prior to the introduction of “chokladboll”, the pastry was called “negerboll”, roughly “negro ball”.4 Many people felt very passionate about being able to call their no-bake pastry by what is now perceived as a racial slur. They defended their right to use the term since that was what the pastry had been called while they were growing up. That some people found the word offensive was not their problem, and they would continue to use the racial slur to denote a no-bake pastry.

I think there is a difference in the defences of the term here. Both defend the use of a racial slur, but one comes in the form of adherence to nostalgia, the other in the form of adherence to negative traditions. A defence from nostalgia might still be problematic, but these individuals might genuinely not see the issue of using words that carry problematic sentiments. For them, a term is only offensive if you think of it as offensive; conversely, if you do not think of it as offensive, then it is acceptable to use. They are oblivious to their own, and others’, racial bias.

Other defenders might use the same line of argument, but also hold negative attitudes about marginalised groups in general. These defenders are fully aware of their racial biases but will use the same defence in order to be able to say racist things. It is the latter sort of group that I am interested in talking about here. This group needs to be more careful about its presence on mainstream online platforms, since explicitness is what gets its members banned and deprives them of a potential audience. To put it in Nunberg’s terms, the groups I am primarily concerned with from now on are essentially the groups who coin and “own” slurs; the groups whose conventions slurs derive their impact from. These groups are the ones using slurs in their full significance and force.

4 The sexual connotation of “ball” here is simply unfortunate and probably unintentional. It primarily refers to the shape of the pastry, and not necessarily to genitalia.

However, given the restrictions of most platforms that I previously discussed, it is difficult for these groups to express their negative attitudes in an efficient way, or even at all. The groups have some possible alternatives for dealing with this situation. One is to create a platform on which anyone can say anything, including slurring terms and hate speech. This has been done many times; some of these platforms are still up and running, while others have been shut down by their server owners.5 Some members of these groups are satisfied with this solution, and it is further plausible that the defender of nostalgia joins such sites. A second alternative is to turn overt online hate speech into covert online hate speech. This allows members of the groups to remain on mainstream platforms and still express their negative attitudes, albeit covertly. Up until this point, one might argue that the way overt online hate speech operates could also be understood by taking a semantic approach to slurs. As we shall see, covert online hate speech complicates such an approach. In my next section, I will present how dog-whistles can help these hate groups remain hidden through means of covert online hate speech.

5 For example, in January of 2021, Amazon decided to boot Parler from their servers due to posts containing incitement to violence. See https://www.npr.org/2021/01/21/956486352/judge-refuses-to-reinstate-parler-after-amazon-shut-it-down


6. Covert online hate speech

Having discussed overt online hate speech, I will now turn to a form of covert online hate speech: dog-whistles. For some people, the negative attitudes they hold against marginalised groups motivate them to form a defined group that is founded on hate. These groups can range anywhere from loosely formed collections of people to established political organisations. One example of a loosely formed group is the alt-right movement, which has a presence on social platforms like Twitter, Reddit and 4chan. As briefly discussed in the above section, explicit expressions of negative attitudes against others are constrained online. To remain hidden from the platforms’ policies, the alternative ways of communication need to be covert, and the mechanisms behind dog-whistles make for a potentially perfect tool for communicating these negative attitudes. However, just as overt online hate speech does not only include slurs, covert online hate speech does not only include dog-whistles in any strict sense of the word. I will leave the possibilities of other forms of covert online hate speech to the side; for now, my focus will be on dog-whistles.

In this section, I will first narrow down a broader understanding of dog-whistles, focusing on the kinds of dog-whistles that communicate strictly negative attitudes. I will then discuss the workings of overt and covert dog-whistles respectively. I will argue that the overt dog-whistle can be compared to, and understood as a form of, the ventriloquistic speech-acts that I discussed in section 3. The covert dog-whistle, however, is not marked in the same sense, given that it is not directed towards a specific subset of the audience. I will also briefly discuss how overt and covert unintentional dog-whistles seem to operate online. In section 6.1, I will consider the connection between dog-whistles and covert online hate speech. In the following section, 6.2, I will compare dog-whistles to slurs to see how dog-whistles can be offensive. I will argue that the offence that dog-whistles cause is not actual offence, but rather warranted or rational offence. In section 6.3, I will further connect the workings of dog-whistles to slurs by considering the aspects of community when it comes to online dog-whistles. I will argue that the communal aspect is crucial for overt dog-whistles, considering that they depend on the tensions between in- and out-groups. Finally, section 6.4 will raise a potential worry about the connection between covert hate speech and hate speech as I characterised it in the introduction.

6.1 Covert online hate speech as dog-whistles

One important feature of dog-whistles is that they convey information through innocuous terms that bring certain attitudes to the fore in the hearer. These attitudes do not have to be negative. They could also be attitudes that the speaker would prefer to keep hidden from parts of the audience. For instance, dog-whistles in children’s cartoons can come in the form of innuendos. In an episode of Hey Arnold!, Arnold wonders why his grandfather never finished high school. His grandfather explains that he lost too many braincells at Woodstock to try to get a high school diploma now. Likewise, in a Looney Tunes episode, Bugs Bunny can be seen reading a book on how to multiply, and his flustered reaction at being found out reveals that the book’s topic was not mathematics. Similarly, dog-whistles can convey negative attitudes without being a form of hate speech. For example, a group of friends might convey negative attitudes about an evil ex-partner through overt dog-whistle terms. From now on, however, when I talk about dog-whistles I am referring to those that carry negative attitudes towards a target who is a member of a marginalised group, unless stated otherwise. More specifically, this concerns negative attitudes held by a speaker that can be categorised as racist, sexist, xenophobic, homophobic, transphobic, etc. That is to say, the negative attitudes are held in virtue of the target of these attitudes being a member of a marginalised group. In other words, I will focus on dog-whistles whose communicative content would, once uncovered, count as hate speech.

Overt intentional dog-whistles are, in a way, like the ventriloquistic speech acts discussed in section 3. They operate through being marked for a subset of the audience while going unmarked for the rest. Overt intentional dog-whistles could be said to have neutral counterparts, and this matters for how the term is marked for the subset audience. It is marked in the sense that the speaker intentionally chooses it over some other term so that only the subset audience will recognise it. The subset audience can recognise this intention of using one term over another. This is different from how slurs are marked, seeing as slurs are marked for practically all hearers. The markedness of an overt intentional dog-whistle is not intended to be recognised by anyone outside the subset of the audience it is directed at. It is marked enough to be picked up by the intended interpreters, but not enough to force any strict accountability upon uttering it. If someone recognises a dog-whistle that is not intended for them, an overt intentional dog-whistle has deniability. Consider the cases in politics again. The reason some politicians in the U.S. use terms like “inner city kids” or “welfare” is their racial connotations, together with the speaker’s ability to deny such connotations when confronted.6 When confronted, the speaker can deny that any connotations were intentional and instead deflect the connection with race onto the hearer. The same tactic can be applied in online cases. It is difficult for a platform to judge what is hate speech unless it is explicit or rhetorically similar to hate speech. A speaker can deny intending to express contempt towards a target group by using a dog-whistle. But it could still offend, and thus be said to be hate speech, if it is rhetorically similar. This leaves enough room for overt intentional dog-whistles to slip through the policies.

6 The connotations (in the U.S.) of “inner city kids” mostly concern criminality in city centres with considerable populations of Black youths. For “welfare”, in the U.S. context, the connotations often run along the lines of “Black people are lazy”. For a more in-depth discussion of the racial connotations of “inner city (kids)” and “welfare” and how they can be used as a form of propaganda, see Stanley (2015) pp. 160-163.

However, it is not impossible that some dog-whistles can become overt enough for them to be treated similarly to overt online hate speech. I will return to this in section 7.2.

Given that overt intentional dog-whistles are only marked for the intended audience, it is only natural that the terms can be picked up by parts of the audience that are unaware of their being dog-whistles. Overt unintentional dog-whistles can get a boost from features local to online platforms. For example, one can amplify a dog-whistle simply by liking or sharing a post containing one. This is especially so on Facebook, where posts you have interacted with can be recommended in your friends’ feeds.

The internet also appears to work in favour of covert intentional dog-whistles. As with offline instances, covert intentional dog-whistles operate by bringing certain attitudes to the front. Covert dog-whistles do not have a specific audience in mind that they appeal to, but rather appeal to the attitudes of people in general. These dog-whistles do not have an intended audience and are therefore not marked in the same way. Recall further that these dog-whistles only work on attitudes that already exist in a listener; the dog-whistles themselves do not cause people to have these attitudes. What the internet provides covert dog-whistles with is a very broad audience. Covert dog-whistles can be used to influence people’s attitudes online by appealing to attitudes they were not aware of having. Recall that this was a strategy used by politicians offline: covert intentional dog-whistles can be used as propaganda to influence voters. There is something about the internet that could make these attitudes more likely to be brought to the front consciously. The combined aspects of anonymity, invisibility, community, and instantaneousness give more room for negative attitudes to be expressed and reflected on. Some groups might use these forms of covert intentional dog-whistles as propaganda, which gives rise to the last form of dog-whistle.

Covert unintentional dog-whistles could normalise having and expressing negative attitudes, and not only in implicit ways. The inauguration of new members could start with covert intentional dog-whistles passing into the mainstream to influence people’s attitudes, and from there taking on a life of their own as covert unintentional dog-whistles. But seeing as it is fairly easy to uncover dog-whistles, this does not always equate to more active members in hateful groups; it is just one form of propaganda used by these hate groups. I will return to this discussion in section 7.1.

It appears, then, that the features of the dog-whistle are precisely what is needed to express negative attitudes without getting caught by the hate speech policies set forth by the sites. Both the overt and the covert forms of dog-whistle are thriving online. The overt dog-whistle thrives through being able to communicate with other members, and the covert does so by implicitly working to normalise certain attitudes. As we can see, covert online hate speech depends on non-semantic features like perlocutionary effects. There is nothing inherent in the term “welfare” that makes it an effective dog-whistle; rather, it is the effects it has on certain attitudes in the hearer. As such, it is a bit more complex. Here it is important to underline the difference in
