The Atlas of Online Harassment

=== Introduction ===
 
The Atlas of Online Harassment is a living and growing collection of discussions and resources about a challenging issue facing users of the internet: the confrontation between the right to speech and freedom of expression online, and the right to be free from violence and harassment. A growing body of evidence shows that many instances of online speech, from humour to intellectual disagreement, are experienced as harassment by others. This harassment includes the use of abusive language, rape and death threats, sexist comments, and threats to release personal information online as an incitement for others to carry out those rape and death threats. The recipients of this harassment are usually women, people of colour and others who are marginalised by race, ethnicity, religion and other markers, and who tend to be articulate and vocal online. The harassers are often, but not always, men. In its simplest form, the challenge lies in how to think through calls to regulate harassing speech online: is it possible to establish standards of acceptable and unacceptable speech online? And should we?
  
Gamergate is a loose, faceless and shifting umbrella moniker for an online community that aggressively asserts anyone's right to say pretty much anything they want to say online. Recently, its members have been known to attack women (primarily, but also many others) who call for restraint and care in using language that may be hurtful or harmful. Gamergate holds that everything is fair game on the internet, that it is impossible, even illegal, to police speech, and that people should be allowed to say anything they want. The feminist argument is that violence, or incitement to violence, cannot be considered 'free speech'. Yet the lines are muddied, and in many cases related to art, literature, humour and political comment, saying something that someone else takes offence at is often treated as a reason to shut that comment down. In many instances, humour about race, class, gender, sexuality or ethnicity is a source of confrontation and discussion precisely because humour is thought to be effective in its power to offend, an act which is often seen as a challenge to power and a longstanding tactic of activism. Where is the line when humour goes too far? It is impossible to say.
  
We believe it is neither possible, legal nor useful to call for the establishment of standards of acceptable speech online or offline. We believe in freedom of speech and expression. However, there is clearly a problem with blanket notions of freedom of speech and expression: no such absolute freedom exists, and the regulation of speech is addressed by laws around the world. In many countries, 'hate speech' is clearly identified in the law, but there is no universal idea of 'hate speech' either. Speech is entirely contextual, and regulations on speech have evolved from specific contexts. So 'hate speech' in the South African constitution is discussed very differently from how it is in Germany. Similarly, there has to be a way to contextualise online speech that is harassing: speech that is an incitement to violence, or that directs violent words or language at individuals. An example from [http://www.demos.co.uk/blog/misogyny-online/ recent research] about online misogyny by the British research organisation Demos indicates why this is important.

Demos used its Natural Language Processing algorithm to collect and sort through 1.5m tweets containing the words 'slut' and 'whore'. After removing tweets that were advertising pornography (54% of them), they were left with 65,000 non-pornographic tweets.
 
"Two types of language was classified: ‘aggressive’ and ‘self-identification’. 33% of the tweets analysed were considered aggressive, whereby the tweets contained further obscenities in addition to ‘slut’ or ‘whore’, included commands (e.g. ‘shut up’), or used ‘you’ to target a specific user. In contrast, just 9% were considered self-identifying, whereby users appeared to reclaim these words to talk about themselves (e.g. “I’m a slut for beautiful sunsets”) or in a jovial manner when directed towards others (e.g. “happy birthday little slut I guess I love you”). The tweets that were not considered aggressive or self-identifying were classified as ‘Other’, usually focussing on slut walks, slut-shaming or discussing the use and connotations of these words."
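To make the kind of classification Demos describes more concrete, the following is a minimal, hypothetical sketch of a rule-based classifier along similar lines. It is not Demos's actual Natural Language Processing algorithm; the keyword lists, patterns and category names are illustrative assumptions only.

<syntaxhighlight lang="python">
import re

# Illustrative word lists -- assumptions, not Demos's real lexicons.
TARGET_WORDS = {"slut", "whore"}
EXTRA_OBSCENITIES = {"bitch", "stupid", "ugly"}          # placeholder examples
COMMANDS = {"shut up", "go away", "kill yourself"}        # placeholder examples
SELF_PATTERNS = [r"\bi'?m a (slut|whore)\b", r"\bsuch a slut for\b"]

def classify_tweet(text: str) -> str:
    """Rough classification of tweets containing the tracked words into
    'aggressive', 'self-identifying' or 'other', loosely following the
    categories in the Demos study quoted above; anything without the
    tracked words is returned as 'not-relevant'."""
    t = text.lower()

    # Only classify tweets that actually contain the tracked words.
    if not any(w in t for w in TARGET_WORDS):
        return "not-relevant"

    # Self-identification: the author applies the word to themselves.
    if any(re.search(p, t) for p in SELF_PATTERNS):
        return "self-identifying"

    # Aggression heuristics: extra obscenities, commands, or a
    # second-person target ("you") alongside the tracked word.
    if (any(w in t for w in EXTRA_OBSCENITIES)
            or any(c in t for c in COMMANDS)
            or re.search(r"\byou\b", t)):
        return "aggressive"

    return "other"

if __name__ == "__main__":
    for tweet in ["you are such a slut, shut up",
                  "I'm a slut for beautiful sunsets",
                  "interesting thread on slut-shaming and language"]:
        print(classify_tweet(tweet), "|", tweet)
</syntaxhighlight>

Even a toy version like this shows why context is hard to automate: a match on 'you' or an extra obscenity cannot by itself distinguish a threat from a joke between friends, which is exactly the nuance problem discussed next.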
 
In everyday, face-to-face conversation, or when watching videos or television, speech is both verbal and non-verbal, so it is far easier to understand nuance as well as context. Online, context and nuance are very difficult to evaluate. Yet evaluating them may be an essential part of dealing with the challenge to speech, and to the right to be safe online. Context, however, matters less when it comes to specifically violent and personalised threats. This distinction is important in moving forward, we believe, because there is a qualitative difference between the more subtle aspects of language and words that are explicitly threatening.

Through the Atlas we want to think through how to equip activists who are, variously, users of the internet, privacy and free speech activists, and feminists, who believe in and work for both the right to be free from violence online and the right to speak online, to address these topics confidently and convincingly. How can taking on this issue allow for an evolution of the idea of freedom of speech that is feminist, and that respects privacy, speech and expression? What are the new ideas we want to advance across all these communities towards informing policy on platform regulation, inspiring more creative and confident online activism, and expanding ideas of speech and expression online?

We refer to this body of work as an atlas because we want it to be a resource for assessing 'power geographies', for way-finding and orientation, and for promoting and inspiring the representation of local landscapes.
 
==== Between online and offline ====
One outcome of aggregating evidence of the forms of online harassment experienced by women, people of colour, people who identify as LGBTQ, and others who are marginalised by their societies, is that the online is seen as linked to the offline. Evidence shows that harassment online tends to happen more to those who are already marginalised offline, and the patterns of abuse, intimidation and discrimination online are not that different from what happens offline. So women tend to continue to be betrayed by their most intimate partners, as instances of non-consensual image sharing (formerly known as 'revenge porn') show. Women journalists and activists are shamed or aggressively challenged for their opinions, facing everything from insults and abusive language to rape threats and death threats.
 
These patterns have been documented in research by the APC Women's [https://www.apc.org/en/projects/erotics-exploratory-research-project-sexuality-and-0 Program], by [https://securityinabox.org/en/women-hrds Tactical Tech], and in many journalistic and practitioner articles and research. The gender and sexuality power differential that exists in offline relationships, culture and institutions is replicated in the supposedly equal space of the internet; this is often referred to as 'mirroring', or as offline and online being different points on the same continuum of experience.
 
This link has made it more possible for online harassment to be acknowledged as a violation that occurs on a significant scale across the internet, rather than as a harmless anomaly affecting only a few people. A global conversation about online harassment has, in turn, made offline, deeply entrenched social disparities more visible too, as activists continue to use the digital to raise awareness about offline violence and discrimination. Doing so, in turn, activates online harassment yet again. One example of this comes from Nighat Dad, the Pakistani digital rights activist. Qandeel Baloch was a Pakistani social media sensation who came to prominence for her outspokenness, racy videos and online persona. In July 2016 she was allegedly murdered by her brother, who felt that she had brought dishonour on the family. Her death brought a number of Pakistani activists out into the streets and onto social media, protesting the alleged crime and the issue of violence against women in Pakistani society more broadly. Nighat reports facing abusive language for supporting Qandeel. Nighat's experience is neither unique nor incidental.
 
[[File:Nighat Daad- Tweet1.png|400px]]
 
[[File:Nighat Dad - Tweet2.png|400px]]
 
The response to online harassment tends to focus on equipping the people vulnerable to it to secure their online spaces and maintain their privacy, by learning digital skills, or through resourceful new ways to inhabit online spaces, create networks of support, or engage in tactics of creative resistance such as counter-speech.
 
However, there is an issue arising from this online-offline connection that makes claims for freedom of speech and expression online tenuous and difficult for activists to address. The online-offline link neither necessarily reduces the incidence of harassment nor allows for a resolution of cases; it merely allows the harassment to be recognised. Offline harms and harassment sometimes have a material and bodily component as well, which can be, and are, regulated by law, and which in rare cases allows the harassment to be prosecuted as a crime. Online, the harassment and harms are acts of speech: abusive language, threats, taunts, hate. While it is possible to capture instances of harmful speech online, speech itself is always contextual and open to interpretation; in that sense it is not material.
 
Additionally, speech is a specifically protected, legally defined and maintained right and practice. Its offline form and its online form are different, from its production through to its forms, management and policing. At the same time, online harassment is not necessarily “hate speech”, which has a long history of definition and regulation in very different ways around the world. Yet online harassment is still, also, an act of speech, one that is adequately represented neither by online regulation (which swings between total blocking and regulation, or none at all) nor by offline laws around speech, although it is this idea of speech that is enshrined in the idea of the internet. Nor can it always be considered 'hate speech'.
 
In naming acts of violence, and in claiming protection and freedom from online harassment, women are claiming the rights they are granted as citizens of countries and as members of a group or community. However, the internet both is and is not regulated by local legal jurisdictions. While data privacy and protection issues are being debated and challenged in courts in the EU and US, issues of speech and its regulation are entirely local. At the same time, transnational corporations such as Google and Facebook do get involved in taking down and regulating speech at the behest of local governments. Platforms like Twitter are increasingly being held to account for claiming that they enable freedom of speech while doing little to protect marginalised people from infringements of it.
 
At the heart of this matter is a real conundrum for digital activism today: how is it possible to reconcile the kinds of power for digital action that the internet enables with the seeming restrictions on the kind of agency that activism is supposed to enable?
 
In their 2015 book, Being Digital Citizens, Engin Isin and Evelyn Ruppert frame this problem in terms of the idea of digital citizenship. They say that “studies of the Internet and empirical analyses of specific digital platforms are proliferating, yet we lack concepts for framing and interpreting what these mean for being digital citizens.”<ref>http://www.rowmaninternational.com/wp-content/uploads/2015/04/Being-Digital-Citizens-chapter-1.pdf</ref> While words like online, offline, virtual and real have currency, they say that these are “placeholders in search of concepts.” This is potentially true of online harassment as a kind of unregulatable speech. Isin and Ruppert go on to explore what it means to be a 'digital citizen' and how the shaping of political life, actions and subjectivity through the digital is also changing the meanings of these words and ideas. They say:

“If the Internet—or, more precisely, how we are increasingly acting through the Internet—is changing our political subjectivity, what do we think about the way in which we understand ourselves as political subjects, subjects who have rights to speech, access, and privacy, rights that constitute us as political, as beings with responsibilities and obligations?” (p. 1) They propose a reframing of what it means to have identities, actions, social relations and claims to rights enacted through the digital. Their work goes on to address how “digital lives are configured, regulated, and organized by dispersed arrangements of numerous people and things such as corporations and states but also software and devices as well as people such as programmers and regulators. This question concerns not only by now well-known activists who are mostly male and Euro-American but also the innumerable and often anonymous subjects whose everyday acts through the Internet make claims to its workings and rules.” (p. 4)
 
The challenge put forward in this book is one that we believe the community of feminist activists working on preventing and addressing online harassment needs to engage with. Contemporary online actions, and claims made to rights, are being enacted and addressed with laws and notions of rights that originated in different times and spaces, and that do not smoothly apply in the online world. We cannot only build skills to deal with online harassment; we also have to use our growing body of evidence and the creativity of our responses to think through the oppositional challenges to the right to speech. Through the Atlas, we look forward to hosting evidence of online harassment, as well as to inviting users to engage with these debates and help us think through what it means to claim the right and freedom to speak.
 
==== About the Atlas ====

The Atlas is divided into four sections. The first, 'Tales of Online Misogyny', highlights case studies and stories of women who have faced online harassment and misogyny; in a table, the reader will find 35 case studies with details about each story and the impact and effects of the harassment. The second section, 'The Atlas of Speech', introduces the various existing definitions of terminology in the field, in an attempt to expose the complex legal and social implications of these definitions. 'Reclaiming the Online Space' focuses on initiatives in the field, and the final section, 'And Now What?', frames the significant issues in the field, posing some questions for the reader to consider.
  
 
=== Tales of Online Misogyny ===
 
 
There are very few documented cases of sextortion in the media, although a study conducted by the Center for Technology Innovation at Brookings, based on a number of cases it reviewed, suggests it is quite common<ref>http://www.brookings.edu/~/media/Research/Files/Reports/2016/05/sextortion/sextortion1.pdf?la=en</ref>. Their definition of sextortion is the following: it “is old-fashioned extortion or blackmail, carried out over a computer network, involving some threat—generally but not always a threat to release sexually-explicit images of the victim—if the victim does not engage in some form of further sexual activity.”
  
Sextortion, along with revenge porn, can be an extremely destructive tactic against women, particularly in areas where sexualizing a person is taboo.
 
   
 
   
 
'''Swatting'''
 
 
Over the years many projects, campaigns and tools have emerged to help women combat online harassment and misogyny. We wanted to dedicate a section to the different initiatives from across the globe and highlight some projects that were started by women who had faced harassment. This section offers a starting point for us to further explore the field: what has been done, what is missing, and what impact these initiatives have on other women.
  
At the moment the initiatives featured include projects, campaigns, tools, research projects and guides, which are provided in the table available via the link below.
  
 
[[:File:The Atlas of Online Harassment-Reclaiming the Online Space.ods]]
 
  
=== And Now What? ===
 
 
The vast field of online harassment and hate speech has been pushed to the forefront of media coverage in recent years. Today, hardly a day goes by without an article written about women facing harassment online, or about an online platform adjusting its features to combat hate speech and harassment. As we spend more and more of our time online and our online and offline worlds become increasingly intertwined, we are just beginning to understand how offline misogyny has permeated the online world. Our attempts to highlight particular stories and initiatives, and to define the field, only scratch the surface of developing an understanding of tech-based violence and harassment (in its broad definition) against women and its effects on freedom of speech.
  
What we see from the evidence presented above is that online misogyny and harassment is definitely a gendered problem. This is not a new revelation: multiple studies from the Pew Research Center, Demos and many others have reported that women face significantly more harassment online, particularly vocal women such as journalists or activists. In many of the stories we present, women have chosen to withdraw from the online world or shut down their online presence<ref>http://www.forbes.com/sites/daniellecitron/2014/04/27/the-changing-attitudes-towards-cyber-gender-harassment-anonymous-as-a-guide/#4b68f59554b1</ref>. Prominent activists and writers such as Kathy Sierra, Caroline Criado-Perez and Lola Aronovich decided to go offline<ref>https://www.theguardian.com/technology/2016/apr/11/women-online-abuse-threat-racist</ref> as a result of the harassment and abuse they suffered. Going offline and choosing to disengage from the online public sphere is an obvious restriction and limitation of freedom of speech. It becomes clear to us that online harassment in its many forms leads to a violation of free speech, and one that is gendered.
  
  
 
''Definitions of online harassment and hate speech''
 
  
Harassment, stalking, bullying and misogyny are all terms used by the media to describe very similar things. Hate speech and online harassment are also used quite interchangeably, in both media and academic circles. At the same time, we have witnessed terms like revenge porn or trolls shift in meaning over the years. The lack of clear-cut definitions creates legal complexities, where specific laws focus on certain aspects of online harassment, such as stalking or revenge porn, and ignore other serious abuses. Adding to these complexities, the absence of legal definitions leaves harassment to the interpretation of the individuals moderating flagged content on social network and media platforms<ref>http://www.wired.com/2014/10/content-moderation/</ref>.
 
It is crucial for anti-harassment initiatives and women's rights defenders to build more universal and agreed-upon definitions of online harassment, stalking and misogyny that can be used as the basis of legal and policy recommendations, but also to raise awareness of gendered online harassment in mainstream media.
 
In our search for a more universal definition of online harassment, we started to explore the issue through different lenses. With Sarah Jeong's taxonomy of online harassment in mind (see The Atlas of Speech), we acknowledge that online harassment is not limited to content or mere threats; it is also behavioural, and can be seen as a violation of a woman's consent, privacy, security and emotional well-being. Doxing and revenge porn are both direct examples of those violations: when a woman's private information or nude images are exposed online for the public to see without her consent, we come to view this as a privacy and consent issue as well. To Jennifer Lawrence, the leak of her nude images constituted a 'sex crime'<ref>https://www.theguardian.com/film/2014/oct/07/jennifer-lawrence-nude-photo-hack-sex-crime</ref>; it was no longer about receiving threats or being stalked online, but about consent and privacy. The question then becomes: how do we combat harassment when it is about a violation of consent and privacy? How can we ensure that women's privacy is upheld in the data society that we live in?
 
  
 
Another lens worth exploring is the question raised back in 1993 by Julian Dibbell when he recounted the events that happened in LambdaMOO. Does rape in cyberspace, where no physical bodies touch, constitute rape in the physical world? With this question we would also like to raise the following: are online harassment and digital violence an extension of the physical world? Is the Cyber real? There is no simple answer, and we will not attempt to answer it here, but we put it to our readers in an attempt to provoke a conversation and explore its significance to the framing of online harassment in relation to the physical world.
 
''Media Coverage''
 
  
We view the media as a significant force that has helped shape the conversation around online harassment, and find it integral to our understanding of the field. Reports of celebrities, journalists and politicians being harassed have dominated the narrative in mainstream media, which has seen an increase in coverage since the harassment of Caroline Criado-Perez in 2013 and the breaking of Gamergate in 2014. It is notable that we can find little coverage of women from the Global South, particularly activists. Nevertheless, the media frame public opinion on this issue, and as a result it is worth running an in-depth analysis of the media's discourse on the matter to better understand how this topic is being framed for the general public.
  
  
 
''The role of online platforms''
 
 
   
 
   
A key issue that has dominated part of the conversation on online harassment and hate speech is the role of platforms in tackling these issues. Platforms have resorted to blocking accounts and content, or to flagging and reporting abusive content that would ultimately lead to the content being removed.
  
In 2013, during the harassment of Caroline Criado-Perez, Twitter implemented its first anti-harassment feature with its anti-abuse report button<ref>http://www.theverge.com/2014/11/12/7188549/does-twitter-have-a-secret-weapon-for-silencing-trolls</ref>. Over the years these features evolved to allow users to block accounts and add them to a list of blocked users<ref>http://bits.blogs.nytimes.com/2014/12/02/twitter-improves-tools-for-users-to-report-harassment/?_r=0</ref>, but it wasn't until the summer of 2016 that Twitter resorted to banning users completely<ref>https://www.theguardian.com/technology/2016/jul/20/milo-yiannopoulos-nero-permanently-banned-twitter</ref>.
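The report-and-review mechanisms described above can be thought of as a simple workflow over flagged content. The sketch below is a hypothetical illustration of that flow only; the thresholds, names and actions are assumptions for illustration and not the actual policy or implementation of Twitter, Facebook or any other platform.

<syntaxhighlight lang="python">
from dataclasses import dataclass, field

# Illustrative thresholds -- assumptions, not any platform's real policy.
REVIEW_THRESHOLD = 3    # reports needed before an item is queued for human review
SUSPEND_THRESHOLD = 5   # upheld violations before an account is suspended

@dataclass
class Post:
    author: str
    text: str
    reports: int = 0
    hidden: bool = False

@dataclass
class ModerationQueue:
    pending: list = field(default_factory=list)      # posts awaiting human review
    violations: dict = field(default_factory=dict)   # author -> upheld violations
    suspended: set = field(default_factory=set)      # suspended authors

    def report(self, post: Post) -> None:
        """A user flags a post; enough flags queue it for human review."""
        post.reports += 1
        if post.reports >= REVIEW_THRESHOLD and post not in self.pending:
            self.pending.append(post)

    def review(self, post: Post, violates_policy: bool) -> None:
        """A moderator decides; upheld violations hide the post, and
        repeat offenders end up suspended."""
        if post in self.pending:
            self.pending.remove(post)
        if violates_policy:
            post.hidden = True
            self.violations[post.author] = self.violations.get(post.author, 0) + 1
            if self.violations[post.author] >= SUSPEND_THRESHOLD:
                self.suspended.add(post.author)

# Example: three reports push a post into the review queue; one upheld review hides it.
queue = ModerationQueue()
post = Post(author="example_account", text="an abusive reply")
for _ in range(3):
    queue.report(post)
queue.review(post, violates_policy=True)
print(post.hidden, queue.violations)    # True {'example_account': 1}
</syntaxhighlight>

The design choice the sketch illustrates is that human judgement is only triggered once enough users complain, which may be one reason reporting systems can feel slow or inconsistent to the people they are meant to protect.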
  
Facebook, like Twitter, developed intricate reporting and flagging systems to combat hate speech and harassment on its platform, including allowing administrators to block certain content based on blacklisted words<ref>https://www.facebook.com/help/131671940241729</ref>. Facebook also recently developed a feature that sends account holders a notification if it suspects that they are being impersonated online<ref>http://www.ibtimes.co.uk/facebook-testing-tool-fight-impersonation-1551379</ref>. Facebook is the first platform to develop such a feature, even though online impersonation has long plagued women, especially when dealing with abusive former lovers.
 
 +
 
 +
 
 +
Reddit is a social, entertainment and news platform that takes pride in relying on community-driven moderation through the up- and down-voting of content. However, after Ellen Pao took over as interim CEO of Reddit, and subsequent to her departure<ref>https://www.theguardian.com/technology/2015/jul/10/ellen-pao-reddit-interim-ceo-resigns</ref>, the platform started to undertake stricter measures to fight online harassment. In the spring of 2016, Reddit took a major step by developing a new tool allowing the blocking of abusive users<ref>http://www.nytimes.com/2016/04/07/technology/reddit-steps-up-anti-harassment-measures-with-new-blocking-tool.html</ref>.
 
Twitter, Facebook and Reddit are but a few of the companies that are combating harassment predominantly by restricting content and users, an action that has been welcomed by many. On the other hand, Jillian C. York argues that blocking content or users does not solve the underlying problems of harassment<ref>https://medium.com/@jilliancyork/harassment-hurts-us-all-so-does-censorship-6e1babd61a9b#.tosss89pm</ref>. York sees blocking as a form of censorship that can lead to overreach, where “efforts to censor hate speech, or obscenity, or pornography, are far too often overreaching, creating a chilling effect on other, more innocuous speech.” Her solution to the problem of harassment and hate speech is more speech. With this argument in mind, it would be worth having platforms explore ways in which they can empower users to fight harassers rather than simply blocking them and their content.
  
  
 
''Solutions to combatting harassment online''
 
  
Reading through some of the solutions and initiatives to combat online harassment and misogyny, we came across a number of methods and tactics that have been put in place. They include:

* ''Blocking:'' There have been a number of methods and platform features that have resorted to blocking both content and abusive users. This is one of the most commonly used solutions to combat harassment.
* ''Reporting and Filtering:'' The majority of platforms over the past few years have enabled reporting and filtering features for people to report online harassment and hate speech. The existing reporting and flagging systems vary from platform to platform, and there are continuous attempts to make it easier for users to flag content. Ultimately, however, the process is not without flaws, and in many situations flagged content is not removed<ref>https://arxiv.org/abs/1505.03359</ref>.
* ''Bots:'' Zero Tolerance is one prominent example of using online bots to 'troll the trolls' on Twitter. The program detects Twitter accounts that use hateful and sexist words; that information is then passed on to 150 bots that tweet at the account, sending the user life-coaching messages<ref>https://motherboard.vice.com/read/trolling-the-trolls-with-sexism-hunting-twitter-bots</ref>. Others have developed bots to help women block unwanted users from contacting them. Ultimately, bots offer more of an immediate solution than a long-lasting one.
* ''Moderation:'' A number of mechanisms can be used to moderate content or users online<ref>http://james.grimmelmann.net/files/articles/virtues-of-moderation.pdf</ref>: exclusion, pricing, organizing and norm-setting. These mechanisms include both automated moderation and manual moderation, such as distributed moderation. Distributed, or voting, moderation is a feature used on Reddit and more recently on Periscope<ref>http://www.wsj.com/articles/to-fight-trolls-periscope-puts-users-in-flash-juries-1464711622</ref> (a minimal sketch of this approach follows this list). There are, however, known risks to distributed moderation, particularly when people organize around votes (i.e. trolls) or when there are not enough votes<ref>https://dl.acm.org/citation.cfm?id=2441866</ref><ref>http://www.cpeterson.org/2013/07/22/a-brief-guide-to-user-generated-censorship/</ref>.
* ''Projects and Initiatives:'' As detailed in Reclaiming the Online Space, there are a number of initiatives and projects that were started in reaction to women being harassed. These initiatives have emerged to raise awareness about the issues in this field, from a legal perspective, or by developing guides and documenting the harassment. The more of these initiatives are established, the more awareness is raised to help navigate the field.
* ''Community and support networks:'' There is very little written about how the community can help support women who are facing online abuse and misogyny, although in talking to activists this may be one of the most important tactics that support women to fight back and not disengage from cyberspace<ref>https://www.youtube.com/watch?time_continue=654&v=LiGP44mvNcs</ref>.
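As a rough illustration of the distributed, voting-based moderation mentioned in the list above, here is a minimal, hypothetical sketch: each vote adjusts a score, and content is hidden once the score falls below a threshold. The threshold and scoring are assumptions for illustration; real systems such as Reddit's are considerably more sophisticated.

<syntaxhighlight lang="python">
from dataclasses import dataclass

# Illustrative threshold -- an assumption, not Reddit's or Periscope's rule.
HIDE_THRESHOLD = -5

@dataclass
class Comment:
    text: str
    score: int = 0

    def upvote(self) -> None:
        self.score += 1

    def downvote(self) -> None:
        self.score -= 1

    @property
    def hidden(self) -> bool:
        """Community-hidden once enough downvotes accumulate."""
        return self.score <= HIDE_THRESHOLD

comment = Comment("an abusive reply")
for _ in range(6):
    comment.downvote()
print(comment.hidden)   # True: hidden by accumulated downvotes, not by a central moderator
</syntaxhighlight>

The same mechanics also expose the risks noted above: a brigade of coordinated accounts can push legitimate speech below the threshold just as easily as it can hide abuse.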
  
The solutions presented are merely a selection of the mechanisms and tactics that women have been able to use to fight harassers. The recent trend has focused on blocking or reporting mechanisms to combat harassment and abusive content; however, there are drawbacks to these mechanisms, as they do not resolve the issue at its root and might in fact aggravate harassers into escalating their tactics.
 
  
Notably missing from most discussions of anti-harassment tactics is the community support factor, even though establishing communities or networks of activists for women who have been harassed is a significant way to support them and to fight back against the abuse and harassment women face online. While some initiatives offer support (legal and informational), little is done to establish networks of activists that can offer a range of support to anyone who faces the wrath of trolls.
 
  
Tactical Tech has, over the past three years, worked on establishing networks of support within local communities. Through the Gender and Tech Institute (Berlin and Ecuador), over 100 women were trained to become digital privacy and security champions, returning to relay their knowledge and skills to their local communities. As a result of both GTIs and follow-up events, over 1,000 women are now more aware of their digital presence and privacy, which provides them with the ability to combat any harassment they may face. By training, engaging and developing resources with women's rights defenders and activists from across the globe, Tactical Tech is building both global and local support networks of digital privacy champions that can combat the phenomenon of online harassment.
  
This resource is an initial attempt to navigate the field and help us and our community make sense of what is out there and what solutions are available to combat harassment online. In the coming months, we plan to continue building on the content, add more stories from the Global South and work towards building stronger networks of support. Some of the questions and issues we raised above are ones that we hope to find answers and solutions for.
 
  
 
===Annex: Recommended Reads===
 
 
* Marlisse Silver Sweeney, “[http://www.theatlantic.com/technology/archive/2014/11/what-the-law-can-and-cant-do-about-online-harassment/382638/ What the Law Can (and Can't) Do About Online Harassment]” (The Atlantic)   
 
 
* Mattathias Schwartz: “[http://www.nytimes.com/2008/08/03/magazine/03trolls-t.html?_r=0 The Trolls Among Us]” (New York Times)
* Melissa Gira Grant: "[https://psmag.com/in-an-unsafe-space-70824a82d369#.evudwp958 In An Unsafe Space]" (Pacific Standard)
 
* Nathan J. Matias et al.: “[[metawikimedia:Research:Online_harassment_resource_guide|Online Harassment: A Resource Guide]]” (Wiki)
 
* Soraya Chemaly, “[http://wmcspeechproject.com/2016/02/11/10-must-read-books-about-online-harassment-and-free-speech/ Ten must read books about online harassment and free speech]” (Women's Media Center)
 
  
 
=== Links ===
 
[[Category:How_To]]
 

Latest revision as of 09:55, 15 November 2016

Introduction

The Atlas of Online Harassment is a living and growing collection of discussions and resources about a challenging issue facing users of the internet: the confrontation between the right to speech and freedom of expression online, and the right to be free from violence and harassment. A growing body of evidence shows that many instances of online speech, from humour to intellectual disagreements, is experienced as harassment by others. The harassment includes the use of abusive language, rape and death threats, sexist comments, and threats to release personal information online as an incitement to carry out rape/death threats. The recipients of this harassment are usually women, people of colour and others who are marginalised by race, ethnicity, religion etc., and tend to be articulate and vocal online. The harassers are often men, but not always. In its simplest form the challenge lies in how to think through calls to regulate harassing speech online: is it possible to establish standards of acceptable and unacceptable speech online? And should you?

Gamergate is a loose, faceless, and shifting umbrella moniker for an online community that aggressively polices anyone's right to say pretty much anything they want to say online. Recently, they have been known to attack women (primarily, but also many others) who call for restraint and care in using language that may be hurtful or harmful. Gamergate says that everything is fair game on the internet, and that it is impossible – even illegal – to police speech, that people should be allowed to say anything they want to. The feminist argument is that violence or incitement to violence cannot be considered 'free speech'. Yet, the lines are muddied, and in many cases related to art, literature, humour, and political comment, saying something that someone else takes offense at is often a reason to shut that comment down. In many instances, humour about race, class, gender, sexuality or ethnicity is a source of confrontation and discussion precisely because humour is thought to be effective in its power to offend, an act which is often seen as a challenge to power, and a longstanding tactic of activism. What is the line when humour goes too far? It's impossible to say.

We believe it is not possible, legal nor useful to call for the establishment of standards of acceptable speech online or offline. We believe in the freedom of speech and expression. However, there is clearly a problem with blanket notions of freedom of speech and expression; there are none, and the regulation of speech is addressed by laws around the world. In many countries, 'hate speech' is clearly identified in the law, but there is no universal idea of 'hate speech' either. Speech is entirely contextual and regulations on speech have evolved from specific contexts. So, 'hate speech' in the South African constitution is discussed very differently from how it is in Germany. Similarly, there has to be a way to contextualise speech online that is harassing; something that is an incitement to violence, or that addresses violent words or language to individuals. An example from recent research about online misogyny by the British research organisation Demos indicates why this is important.

Demos was using its Natural Language Processing algorithm to collect and sort through 1.5m tweets using the words 'slut'. After removing tweets that were advertising pornography (54% of them), they were left with 65,000 non-pornographic tweets.

"Two types of language was classified: ‘aggressive’ and ‘self-identification’. 33% of the tweets analysed were considered aggressive, whereby the tweets contained further obscenities in addition to ‘slut’ or ‘whore’, included commands (e.g. ‘shut up’), or used ‘you’ to target a specific user. In contrast, just 9% were considered self-identifying, whereby users appeared to reclaim these words to talk about themselves (e.g. “I’m a slut for beautiful sunsets”) or in a jovial manner when directed towards others (e.g. “happy birthday little slut I guess I love you”). The tweets that were not considered aggressive or self-identifying were classified as ‘Other’, usually focussing on slut walks, slut-shaming or discussing the use and connotations of these words."

In everyday, face to face conversation, or when watching videos or television, speech is both verbal and non-verbal, so it is far easier to understand the nuances of speech, as well as context. Online, context and nuance are very difficult to evaluate. Yet, it may be an essential part of dealing with the challenge to speech, and to the right to be safe online. Context, however, is less important when it comes to specifically violent and personalised threats. This distinction is important in moving forward, we believe, because there is a qualitative difference between more subtle aspects of language, and words that are explicitly threatening.

Through the Atlas we want to think through how to equip activists who are, variously: users of the internet, privacy and free speech activists, feminists, who believe and work for both the right to be free from violence online, and the right to speak online, to address these topics confidently and convincingly. How can taking on this issue allow for an evolution of the idea freedom of speech that is feminist, and respects privacy, speech and expression? What are the new ideas we want to advance across all these communities towards informing policy on platform regulation, inspiring more creative and confident online activism and expanding on ideas of speech and expression online?

We refer to this body of work as an atlas, because we want it to be a resource assessing 'power geographies', for way-finding and orientation, and to promote and inspire the representation of local landscape.

Between online and offline

One of the outcomes of the aggregation of evidence of forms of online harassment experienced by women, people of colour, people who identify as LGBTQ, and others who are marginalised by their societies, is that the online is seen as linked to the offline. Evidence shows that harassment that occurs online tends to occur more to those who are already marginalised offline, and the patterns of abuse, intimidation and discrimination online are not that different from what happens offline. So, women tend to continue to be betrayed by their most intimate partners, as instances of non-consensual image sharing (formerly known as 'revenge porn') show. Women journalists and activists are shamed or aggressively challenged for their opinions, facing everything from insults and abusive language to rape threats and death threats.


These patterns have been documented in research by APC Women's Program, Tactical Tech , and in many journalistic and practitioner articles and research. The gender and sexuality power differential that exists in offline relationships, culture and institutions, is replicated into the supposedly equal space of the internet, and is often referred to as 'mirroring', or of offline and online being different points on the same continuum of experience.


This link has made it more possible for online harassment to be acknowledged as a violation that occurs on a significant scale across the internet, rather than a harmless anomaly affecting only a few people. A global conversation about online harassment has, in turn, made offline, deeply entrenched social disparities more visible too, as activists continue to use the digital to raise awareness about offline violence and discrimination. Doing so, in turn, activates online harassment yet again. One example of this is from Nighat Daad, the Pakistani digital rights activist. Qandeel Baloch was a Pakistani social media sensation who came to some prominence for her outspokenness, racy videos and online persona. In July 2016 she was allegedly murdered by her brother who felt that she had brought dishonour on the family. Her death brought a number of Pakistani activists out into the streets and on social media protesting the alleged crime, and the issue of violence against women in Pakistani society more broadly. Nighat reports facing abusive language for supporting Qandeel. Nighat's experience is neither unique nor casual.

Nighat Daad- Tweet1.png

Nighat Dad - Tweet2.png


The response to online harassment tends to focus on equipping people vulnerable to it to secure their online spaces and maintain their privacy, whether by learning digital skills, through resourceful new ways to inhabit online spaces, by creating networks of support, or by engaging in tactics of creative resistance like counter-speech.


However, there is an issue arising from this online-offline connection that makes claims for freedom of speech and expression online tenuous and difficult for activists to address. The online-offline link neither necessarily reduces the incidence of harassment, nor allows for a resolution of cases; it merely allows the harassment to be recognised. Offline harms and harassment sometimes have a material and bodily component as well, which can be, and is, regulated by law, and which in rare cases allows the harassment to be prosecuted as a crime. Online, the harassment and harms are acts of speech: abusive language, threats, taunts, hate. While it is possible to capture instances of harmful speech online, speech itself is always contextual, open to interpretation; in that sense it is not material.


Additionally, speech is a specifically protected, legally defined and maintained right and practice. Its offline form and its online form are different, from its production through to its forms, management and policing. At the same time, online harassment is not necessarily “hate speech”, which has a long history of definition and regulation in very different ways around the world. Yet online harassment is still, also, an act of speech, one that is adequately represented neither by online regulation (which swings between either total blocking and regulation, or none at all) nor by offline laws around speech, although it is this idea of speech that is enshrined in the idea of the internet. Nor can it always be considered 'hate speech'.


In naming acts of violence, and claiming protection and freedom from online harassment, women are claiming the rights they are granted as citizens of countries, and as members of a group or community. However, the internet is - and is not, at the same time - regulated by local legal jurisdictions. While data privacy and protection issues are being debated and challenged in courts in the EU and US, issues of speech and its regulation are entirely local. However, transnational corporations such as Google and Facebook do get involved in taking down and regulating speech at the behest of local governments. Platforms like Twitter are increasingly being held to account: they take credit for enabling freedom of speech, yet do little to protect marginalised people from infringements of it.

At the heart of this matter is a real conundrum for digital activism today: how is it possible to reconcile the kinds of power for digital actions enabled by the internet, with the seeming restrictions on the kind of agency that activism is supposed to enable?

In their 2015 book, Being Digital Citizens, Engin Isin and Evelyn Ruppert frame this problem in terms of the idea of digital citizenship. They say that “studies of the Internet and empirical analyses of specific digital platforms are proliferating, yet we lack concepts for framing and interpreting what these mean for being digital citizens.”[1] While words like online, offline, virtual and real have currency, they say that these are “placeholders in search of concepts.” This is potentially true of the topic of online harassment as a kind of unregulate-able speech. Isin and Ruppert go on to explore what it means to be a 'digital citizen' and how the shaping of political life, actions and subjectivity through the digital is also changing the meanings of these words and ideas. They say:

“If the Internet—or, more precisely, how we are increasingly acting through the Internet—is changing our political subjectivity, what do we think about the way in which we understand ourselves as political subjects, subjects who have rights to speech, access, and privacy, rights that constitute us as political, as beings with responsibilities and obligations?” (p. 1) They propose a reframing of what it means to have identities, actions, social relations and claims to rights enacted through the digital. This work goes on to address how “digital lives are configured, regulated, and organized by dispersed arrangements of numerous people and things such as corporations and states but also software and devices as well as people such as programmers and regulators. This question concerns not only by now well-known activists who are mostly male and Euro-American but also the innumerable and often anonymous subjects whose everyday acts through the Internet make claims to its workings and rules.” (p. 4)

The challenge put forward in this book is one that we believe the community of feminist activists working on preventing and addressing online harassment needs to engage with. Contemporary online actions, and claims made to rights, are being enacted and addressed with laws and notions of rights that originated in different times and spaces, and these don't smoothly apply in the online world. We cannot only build skills to deal with online harassment; we also have to use our growing body of evidence and the creativity of our responses to think through the oppositional challenges to the right to speech. Through the Atlas, we look forward to hosting evidence of online harassment, as well as to inviting users to engage with these debates and help us think through what it means to claim the right and freedom to speak.

About the Atlas

The Atlas is divided into four sections. The first section highlights case studies and stories of women who have faced online harassment and misogyny; in a table, the reader will find 35 case studies with details about the stories.

Tales of Online Misogyny

In 1993, the online world LambdaMOO was thrown into disarray after one of the participants was accused of committing rape there. The incident was made famous through Julian Dibbell's article “A Rape in Cyberspace”[2]. Dibbell walked us through the harrowing details of the incident, in which a user with the handle Mr. Bungle entered the online space and used a mechanism to take control over the movements of other users. Mr. Bungle then proceeded to force the users he controlled into committing sexual acts against their will. Although no one was physically assaulted, the incident left a mark on the online world and among its participants, and it brought forward the complex question of whether rape in cyberspace constituted rape in the physical world. This may be one of the first cases of online harassment reported in the media, at a time when about 4% of the world's population was online and far fewer took part in LambdaMOO.

As more cases come to light, the debate is growing and we are recognizing different actors, circumstances and tactics used to harass and intimidate women. In this section we highlight 35 stories of women who have been harassed or confronted with online misogyny and hate speech. These cases detail the women harassed, their harassers, the tactics used and the outcomes, in an attempt to gain a better understanding of the diversity of threats against women online. Starting from 2007, we describe and outline the stories based on relevant time periods, tactics, outcomes and different geographic contexts. Through mapping these stories we get a clearer picture of the field: of the 35 stories, 13 occurred in the Global South. 2007 marks a pivotal point in the field, with stories about harassment starting to filter into mainstream media and reaching prominence with the case of writer and blogger Kathy Sierra.

The selection of the case studies does contain bias, as it is based on our analysis of news reports of the stories. This also means that the women who were harassed were able to report on their own stories, and many tend to be journalists, activists, or public figures. We are using these case studies as a starting point to explore the field, and we would like to expand the collection to include further case studies from the Global South and from women activists.

File:The Atlas of Online Harassment - Tales of Online Misogyny.ods

During our research a number of observations stood out to us with regard to the outcomes of many of the stories. We list our observations below:

Silencing of women online: One of the reactions we've heard from women who suffered online harassment and stalking is that they went offline, for at least some period of time if not indefinitely. Kathy Sierra canceled her speaking engagement at the time of her harassment, Caroline Criado-Perez deactivated her Twitter account, and so did Leslie Jones; the list of names continues. However, many of these stories have also inspired women to reclaim the online space and establish initiatives to counter harassment online and help build networks that support other women.

Counter speech and reclaiming the online space: Regardless of the harassment and intimidation they face, many of the women featured here have responded to online harassment through counter speech or by building helpful resources. Ten out of the thirty-four cases we identified led to the launch of a positive initiative (a campaign, a website, etc.) for victims to reclaim their online territories.

The hashtags #NaoSaoEllas and #MiPrimerAsedio (a trending topic on Twitter) in Brazil were launched after the harassment that feminist blogger Lola Aronovich and 13-year-old Valentina Schulz underwent. Danish journalist and activist Emma Holten started the project Consent after being subjected to revenge porn by a former partner. Consent aims to reclaim the image of her body and to raise awareness of the importance of consent.

Legal practices: Although in some of the case studies we list the women decided to take legal action, the legal framework around online harassment remains unclear. In the following section, The Atlas of Speech, readers will be able to read through some of the legal and judicial definitions surrounding both online harassment and hate speech. It is immediately clear that there is no universal legal definition of cyber harassment or cyber stalking, and that there is a diversity of legal landscapes around these issues. Moreover, the judicial system does not seem to offer a serious and efficient recourse for the harassed women. Judging from the case studies, the traditional way of handling abuse and harassment offline is not applicable online. So far, some cases were heard only after the women subjected to these abuses spoke out publicly or organized their own form of justice, such as Holly Jacobs with the 'End Revenge Porn' project, which gathers resources and legal information for victims of revenge porn.

Although there have been attempts to introduce new laws to combat online harassment and stalking, particularly in Western countries, there are still issues around how authorities and platforms react to cases of online harassment, and legally there is still much to be done.

Platform policies: Over the past four years online platforms have actively taken action to combat hate speech and online harassment. This moved to the forefront with the prominent case of Caroline Criado-Perez. As Criado-Perez was bombarded with harassment on Twitter, the platform decided to take action and enable different features allowing users to block and filter accounts. Twitter is not the only platform to opt for blocking or filtering content: Facebook has also taken very recent steps to combat revenge porn and impersonation[3]; on the other hand, its real name policy could cause serious harm to LGBT communities online[4].

The question worth asking here is whether blocking and filtering content on these platforms is an effective solution for combating harassment. Studies of the impact and effects of content removal and filtering, along with the effects of counter speech, are definitely needed in this field.

The Atlas of Speech

Much has been written, academically and in the media, about hate speech and particularly online harassment in recent years. This vast body of literature, together with the legal shifts that have happened over the years, has produced a number of definitions that try to frame the field. It is therefore important to dedicate a section to highlighting these definitions, in an attempt to explore the legal, online and, where they exist, universal definitions in this field.

This section divides the definitions into two lists, one that focuses on hate speech and the other on online harassment. This is in no way an exhaustive list, but rather reflects the many terms linked to forms of online speech.

Hate Speech

Hate speech[5] has direct ties to free speech and has a long history from a legal perspective. The issue can be traced back to the Universal Declaration of Human Rights in 1948: while not directly addressed there, hate speech, in the form of incitement to violence and discrimination, made its way into international law. The following years saw a number of international conventions address hate speech alongside freedom of expression.

We have selected a few definitions from a legal perspective to be presented here:

International Covenant on Civil and Political Rights (ICCPR) – Article 20

“1. Any propaganda for war shall be prohibited by law.

2. Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.”

International Covenant on the Elimination of All forms of Racial Discrimination (ICERD) – Article 4 (a)

States Parties condemn all propaganda and all organizations which are based on ideas or theories of superiority of one race or group of persons of one colour or ethnic origin, or which attempt to justify or promote racial hatred and discrimination in any form, and undertake to adopt immediate and positive measures designed to eradicate all incitement to, or acts of, such discrimination and, to this end, with due regard to the principles embodied in the Universal Declaration of Human Rights and the rights expressly set forth in article 5 of this Convention, inter alia:

(a) Shall declare an offense punishable by law all dissemination of ideas based on racial superiority or hatred, incitement to racial discrimination, as well as all acts of violence or incitement to such acts against any race or group of persons of another colour or ethnic origin, and also the provision of any assistance to racist activities, including the financing thereof;”

European Convention on Human Rights – Article 10 (2)

The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary.”

American Convention on Human Rights – Article 13 (5)

Any propaganda for war and any advocacy of national, racial, or religious hatred that constitute incitements to lawless violence or to any other similar action against any person or group of persons on any grounds including those of race, color, religion, language, or national origin shall be considered as offenses punishable by law.” 

Indian Penal Code: 153A.

Promoting enmity between different groups on grounds of religion, race, place of birth, residence, language, etc., and doing acts prejudicial to maintenance of harmony.—

(1) Whoever—

(a) by words, either spoken or written, or by signs or by visible representations or otherwise, promotes or attempts to promote, on grounds of religion, race, place of birth, residence, language, caste or community or any other ground whatsoever, disharmony or feelings of enmity, hatred or ill-will between different religious, racial, language or regional groups or castes or communities, or

(b) commits any act which is prejudicial to the maintenance of harmony between different religious, racial, language or regional groups or castes or communities, and which disturbs or is likely to disturb the public tranquillity,

What Do Platforms Say about Hate Speech?

Although international law and conventions explicitly address hate speech, there is no mention of misogyny. Online platforms such as Google, Facebook and Twitter, on the other hand, include both gender and sexual orientation within their definitions of hate speech. This presents an alternative and gendered definition of hate speech, particularly online, where women face harassment on a daily basis. It is for that reason, and because of the platforms' active work on policies to tackle hate speech and harassment online, that we present their definitions here. Recently the platforms have also gone as far as to sign on to the EU's 'code of conduct' on hate speech[6].

Google

Hate Speech: Our products are platforms for free expression. But we don't support content that promotes or condones violence against individuals or groups based on race or ethnic origin, religion, disability, gender, age, nationality, veteran status, or sexual orientation/gender identity, or whose primary purpose is inciting hatred on the basis of these core characteristics. This can be a delicate balancing act, but if the primary purpose is to attack a protected group, the content crosses the line.

YouTube (owned by Google)

We encourage free speech and try to defend your right to express unpopular points of view, but we don't permit hate speech.

Hate speech refers to content that promotes violence or hatred against individuals or groups based on certain attributes, such as:

  • race or ethnic origin
  • religion
  • disability
  • gender
  • age
  • veteran status
  • sexual orientation/gender identity

There is a fine line between what is and what is not considered to be hate speech. For instance, it is generally okay to criticize a nation-state, but not okay to post malicious hateful comments about a group of people solely based on their race.”

Facebook: Community Standards

Facebook removes hate speech, which includes content that directly attacks people based on their:

  • race,
  • ethnicity,
  • national origin,
  • religious affiliation,
  • sexual orientation,
  • sex, gender or gender identity, or
  • serious disabilities or diseases.

Facebook also indicates that, beyond removing content containing hate speech, they provide 'tools' for people to avoid offensive content and promote counter-speech; see the following:

While we work hard to remove hate speech, we also give you tools to avoid distasteful or offensive content. Learn more about the tools we offer to control what you see. You can also use Facebook to speak up and educate the community around you. Counter-speech in the form of accurate information and alternative viewpoints can help create a safer and more respectful environment.”

Twitter: According to The Twitter Rules

Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.”

Reddit:

Do not post violent content: Do not post content that incites harm against people or groups of people. Additionally, if you're going to post something violent in nature, think about including a NSFW tag. Even mild violence can be difficult for someone to explain to their boss if they open it unexpectedly.”

Blasphemy

The Oxford dictionary defines blasphemy as “an act of offense of speaking sacrilegiously about God or sacred things; profane talk.” Although the definition may seem straightforward, there have been notable cases where the accusation of blasphemy was used to censor certain types of speech, particularly in countries with strict blasphemy laws that impose detention or the death penalty. Pakistan is an example of a state using blasphemy laws to censor speech, as in the case of Junaid Hafeez, an English lecturer at Bahauddin Zakariya University known for his liberal views, who is currently in prison accused of blasphemy. While it is not clear whether Hafeez posted anything blasphemous, and some believe the evidence was doctored, he has remained in jail awaiting trial since 2013. His case and many others like it were reported[7] by the Digital Rights Foundation in their report “Blasphemy in the Digital Age”.

Counter Speech

In its simplest form, counter speech is speech that responds to hate speech or extremism, offering a different narrative and attempting to counter it. Counter speech comes in different forms: in some cases it is a simple response to the speech online, such as a tweet or Facebook comment; in other cases it takes the form of a Facebook group or blog post. Parody has also become a common tactic to counter hate speech.

Dangerous Speech

Dangerous speech, as defined by Professor Susan Benesch, is a subset of hate speech which has a reasonable chance of catalyzing or amplifying violence by one group against another. Prof. Benesch developed a framework that identifies dangerous speech and established five variables:

“The most dangerous speech act, or ideal type of dangerous speech, would be one for which all five variables are maximized:

  1. a powerful speaker with a high degree of influence over the audience.
  2. the audience has grievances and fear that the speaker can cultivate.
  3. a speech act that is clearly understood as a call to violence,
  4. a social or historical context that is propitious for violence, for any of a variety of reasons, including longstanding competition between groups for resources, lack of efforts to solve grievances, or previous episodes of violence.
  5. a means of dissemination that is influential in itself, for example because it is the sole or primary source of news for the relevant audience.”[8]

Discrimination

Discrimination is the unjust treatment of, or prejudice against, people based on their sex, race or age. That list can expand to include sexual orientation and religion.

Defamation[9]

The Electronic Frontier Foundation defines defamation as: “a false and unprivileged statement of fact that is harmful to someone's reputation, and published "with fault," meaning as a result of negligence or malice. State laws often define defamation in specific ways. Libel is a written defamation; slander is a spoken defamation.”

Freedom of Speech

Perhaps the most famous articulations of freedom of speech and expression are Article 19 of the Universal Declaration of Human Rights and the First Amendment to the US Constitution.

Universal Declaration of Human Rights (Article 19):

“Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”

US Constitution (First Amendment):

“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.”

Hate Crime

Not to be confused with hate speech, a hate crime is, legally, a violent action against another person or community because of their race, ethnicity, national origin, religion, sexual orientation, or disability.

It is notable that international law and conventions do not directly address gender or sexual orientation in the body of text. However, states along with online platforms have taken steps to prohibit discrimination and hate speech based on these issues.

Although hate speech is addressed extensively in many international and state laws, online harassment is not addressed in international law, and states have only recently started to draft laws on the matter; we are starting to see laws addressing cyberstalking, cyber bullying and, in some places, revenge porn. The following list attempts to set out the terminology used to describe harassment online.

Online Harassment

There is no universally agreed upon definition of online harassment, and it has been a difficult concept to define. A number of scholars have explored the concept and attempted to define it, especially in the absence of a legal definition. We present a few of those definitions below:

From a scholarly perspective, in some of the earlier writings on online harassment, Sarah Jameson, in her paper Cyberharassment: Striking a Balance Between Free Speech and Privacy[10], defined cyber harassment as follows:

Although cyber harassment has no universal definition, it typically occurs when an individual or group with no legitimate purpose uses a form of electronic communication as a means to cause great emotional distress to a person. In addition to e-mail and blogs, conduits of “new media” available to the cyber harasser include chat rooms, instant messaging services, electronic bulletin boards, and social networking sites. The Internet provides cyber harassers with an easy channel to “incite others against their victims.” Not only can the cyber harasser harass his victim, but he can also impersonate the victim and post defamatory messages on bulletin boards, cyberbully his victim, or send vulgar e-mails to the victim’s employer. The victim suffers as a result of actions committed by the cyber harasser.”

Danielle Citron, on the other hand, views cyber harassment as follows: “Although definition of these terms vary, cyber harassment is often understood to involve the intentional infliction of substantial emotional distress accomplished by online speech that is persistent enough to amount to a “course of conduct” rather than an isolated incident”[11]

To Citron, harassment includes: “real threats, privacy invasions, cruelty amounting to intentional infliction about purely private matters, nude images and reputational damage. People lose their jobs, they can’t get new jobs, because Google searches turn up destructive information. They often have to move because they’re afraid of confrontation by strangers. They lose investments and professional opportunities. These are coercive acts.”[12]

Sarah Jeong, author of the book The Internet of Garbage, goes as far as to develop a taxonomy of online harassment, dividing it into two spectrums: one defined by behavior and the other by content. She writes:

Harassment exists on two spectrums at once— one that is defined by behavior and one that is defined by content. The former is the best way to understand harassment, but the latter has received the most attention in both popular discourse and academic treatments.

When looking at harassment as content, we ultimately fixate on “death threats” as one end of the spectrum, and “annoying messages” at the other end. Thus the debate ends up revolving around civil rights versus free speech— where is the line between mean comments and imminent danger? Between jokes and threats?

Behavior is a better, more useful lens through which to look at harassment. On one end of the behavioral spectrum of online harassment, we see the fact that a drive-by vitriolic message has been thrown out in the night; on the other end, we see the leaking of Social Security numbers, the publication of private photographs, the sending of SWAT teams to physical addresses, and physical assault.”[13]   

The Pew Research Center, in their report entitled Online Harassment, offer a slightly different taxonomy, in which:

online harassment falls into two distinct yet frequently overlapping categories. Name-calling and embarrassment constitute the first class and occur widely across a range of online platforms. The second category includes less frequent—but also more intense and emotionally damaging—experiences like sexual harassment, physical threats, sustained harassment, and stalking.”[14]

What online platforms have to say about online harassment:

As is the case with hate speech, online platforms have established their own specific language on harassment and bullying:

Google

Harassment, Bullying, and Threats: Do not engage in harassing, bullying, or threatening behavior, and do not incite others to engage in these activities. Anyone using our Services to single someone out for malicious abuse, to threaten someone with serious harm, to sexualize a person in an unwanted way, or to harass in other ways may have the offending content removed or be permanently banned from using the Services. In emergency situations, we may escalate imminent threats of serious harm to law enforcement. Keep in mind that online harassment is also illegal in many places and can have serious offline consequences for both the harasser and the victim.”

YouTube

We want you to use YouTube without fear of being subjected to malicious harassment. In cases where harassment crosses the line into a malicious attack it can be reported and will be removed. In other cases, users may be mildly annoying or petty and should simply be ignored.

Harassment may include:

  • Abusive videos, comments, messages
  • Revealing someone’s personal information
  • Maliciously recording someone without their consent
  • Deliberately posting content in order to humiliate someone
  • Making hurtful and negative comments/videos about another person
  • Unwanted sexualization, which encompasses sexual harassment or sexual bullying in any form
  • Incitement to harass other users or creators


Facebook

We don’t tolerate bullying or harassment. We allow you to speak freely on matters and people of public interest, but remove content that appears to purposefully target private individuals with the intention of degrading or shaming them. This content includes, but is not limited to:

  • Pages that identify and shame private individuals,
  • images altered to degrade private individuals,
  • photos or videos of physical bullying posted to shame the victim,
  • sharing personal information to blackmail or harass people and
  • repeatedly targeting other people with unwanted friend requests or messages.

We define private individuals as people who have neither gained news attention nor the interest of the public, by way of their actions or public profession.

Twitter: According to The Twitter Rules

Harassment: You may not incite or engage in the targeted abuse or harassment of others. Some of the factors that we may consider when evaluating abusive behavior include:

  • if a primary purpose of the reported account is to harass or send abusive messages to others;
  • if the reported behavior is one-sided or includes threats;
  • if the reported account is inciting others to harass another account; and
  • if the reported account is sending harassing messages to an account from multiple accounts.

Reddit[15]:

Unwelcome content

While Reddit generally provides a lot of leeway in what content is acceptable, here are some guidelines for content that is not. Please keep in mind the spirit in which these were written, and know that looking for loopholes is a waste of time.

Content is prohibited if it:

  • Is illegal
  • Is involuntary pornography
  • Encourages or incites violence
  • Threatens, harasses, or bullies or encourages others to do so
  • Is personal and confidential information
  • Impersonates someone in a misleading or deceptive manner
  • Is spam

Cyber Bullying

There is general agreement over the definition of bullying, particularly in academic circles, where it stands for: “an aggressive act with three hallmark characteristics: a) it is intentional; b) it involves a power imbalance between an aggressor (individual or group) and a victim; c) it is repetitive in nature and occurs over time.”[16]

Cyber Stalking

It is commonly agreed that cyber stalking encompasses stalking, harassment and bullying through electronic means. It also includes acts of online threats, intimidation and impersonation. Notably, cyberstalking carries an all-encompassing definition and has been used interchangeably in the media to indicate bullying, harassment and, at times, even hate speech. Stalking itself is a gendered crime, with studies showing that around 90% of stalkers are men and 80% of those stalked are women[17].

Dox or Doxx

Dox is an abbreviation of the phrase 'dropping dox or docs'; it is a tactic used to reveal the personal details of a person online. It was used by hackers as a revenge tactic in the 1990s, although the word itself has only existed since 2001. Tech security expert Bruce Schneier describes doxing as "a tool to harass and intimidate people, primarily women, on the internet. Someone would threaten a woman with physical harm, or try to incite others to harm her, and publish her personal information as a way of saying "I know a lot about you—like where you live and work."'[18]

The personal information revealed can range from a person's address to their social security or credit card details. In recent years, doxing has also come to include revealing a person's identity online.

Revenge Porn

Revenge porn is the distribution of sexually graphic content, obtained either from a former lover or via hacking, posted without the consent of the person depicted. The Cyber Civil Rights Initiative describes it as 'nonconsensual pornography', as many of the perpetrators are not motivated by revenge.[19]


Sextortion

There are very few documented cases of sextortion in the media, although according to a study conducted by the Center for Technology Innovation at Brookings it is quite common, based on the cases they reviewed[20]. Their definition of sextortion is the following: it “is old-fashioned extortion or blackmail, carried out over a computer network, involving some threat—generally but not always a threat to release sexually-explicit images of the victim—if the victim does not engage in some form of further sexual activity.”

Sextortion, along with revenge porn, can be an extremely destructive tactic against women, particularly in places where the sexualisation of a person is taboo.

Swatting

A tactic or 'internet prank' used to trick emergency services, particularly SWAT teams, into going to a person's house under the false pretense that there is an emergency. This is used to harass and intimidate the target; similar to doxing, it is a direct way of extending online harassment into a person's physical space.


Troll

Internet trolls are people who try to instigate reactions by posting offensive, provocative content online. In previous years trolling was also referred to as flaming[21], and in the early years flaming was focused on disrupting online communities. In recent years the term troll has taken on a more negative connotation and includes trolls that resort to hate speech and harassment for the 'lulz'[22].

Kathy Sierra, who faced the wrath of trolls in 2007, describes them as hater trolls; to her, trolls do not make a distinction: “In hater troll framing, there’s no difference between a single tweet and a DDoS of your employer’s website. There’s no difference between a “you’re a histrionic charlatan” and “here’s a headless corpse and you are next and here’s your address.” It’s all just trollin’ and mean words and not real life.”[23]

The word troll has shifted in meaning over the years; 'hater trolls' (who have also been referred to as griefers by Julian Dibbell[24]) are now part of the predominant narrative when writing or reading about internet trolls. As the understanding of the word troll shifts, it is worth revisiting the term and exploring other words to refer to online harassers or hater trolls. Scholar Gabriella Coleman, in her extensive writings about online trolls, hackers and the group Anonymous, has highlighted another aspect of trolls: the political troll, who resorts to trolling for a common good[25]. It is because of the different natures and goals of trolls, and the ever-shifting meaning of the term, that it is time to find an alternative word for those who seek to harm others through harassment, intimidation and more.

Reclaiming Online Spaces

Over the years many projects, campaigns and tools have emerged to help women combat online harassment and misogyny. We wanted to dedicate a section to the different initiatives from across the globe and highlight some projects that were started by women who had faced harassment. This section offers a starting point for us to further explore the field: what has been done, what is missing, and what impact these initiatives have had on other women.

At the moment the initiatives featured include projects, campaigns, tools, research projects and guides, provided in the table available in the link below.

File:The Atlas of Online Harassment-Reclaiming the Online Space.ods

And Now What?

The vast field of online harassment and hate speech has been pushed to the forefront of media coverage over recent years. Today, hardly a day goes by without an article about women facing harassment online or an online platform adjusting its features to combat hate speech and harassment. As we spend more and more of our time online and our online and offline worlds have become interchangeable, we are just beginning to understand how offline misogyny has permeated the online world. Our attempt to highlight particular stories and initiatives and to define the field only scratches the surface of developing an understanding of tech-based violence and harassment (used in its broad definition) against women and its effects on freedom of speech.


What we see from the evidence presented above is that online misogyny and harassment is definitely a gendered problem. This isn't a new revelation, as multiple studies from the Pew Research Center, Demos and many others have reported that women face significantly more harassment online, particularly vocal women such as journalists or activists. In many of the stories we present, we see that women have chosen to withdraw from the online world or shut down their online presence[26]. Prominent activists and writers such as Kathy Sierra, Caroline Criado-Perez and Lola Aronovich decided to go offline[27] as a result of the harassment and abuse they suffered. Going offline and choosing to disengage from the online public sphere is an obvious restriction and limitation of freedom of speech. It becomes clear to us that online harassment in its many forms leads to a violation of free speech, and one that is gendered.


Definitions of online harassment and hate speech

Harassment, stalking, bullying, misogyny are all terms used by the media to describe very similar things. Hate speech and online harassment are also used quite interchangeably, in both media and academic circles. On the other hand we've witnessed terms like revenge porn or trolls shift in meaning over the years. The lack of clear cut definitions creates legal complexities where specific laws focus on certain aspects of online harassment, such as stalking or revenge porn and ignore other serious abuses. Adding to these complexities, the absence of legal definitions leaves harassment to the interpretation of the individuals moderating flagged content on social network and media platforms[28].

It is crucial for anti-harassment initiatives and women's rights defenders to build more universal and agreed upon definitions of online harassment, stalking and misogyny that can be used as the basis of legal and policy recommendations, but also to raise awareness of gendered online harassment in mainstream media.

In our search for a more universal definition of online harassment we started to explore the issue through different lenses. With Sarah Jeong's taxonomy of online harassment in mind (see The Atlas of Speech), we acknowledge that online harassment is not just a matter of content or threats but also of behaviour, one that can be seen as a violation of a woman's consent, privacy, security and emotional well-being. Doxing and revenge porn are both direct examples of those violations: when a woman's private information or nude images are exposed online for the public to see without her consent, we come to view this as a privacy and consent issue as well. To Jennifer Lawrence, her leaked nude images constituted a 'sex crime'[29]; it was no longer about receiving threats or being stalked online, but about consent and privacy. The question then becomes: how do we combat harassment when it is about a violation of consent and privacy? How can we ensure that women's privacy is upheld in the data society that we live in?


Another lens worth exploring is the question raised back in 1993 by Julian Dibbell when he recalled the events that happened in LambdaMOO. Does rape in cyberspace, where no physical bodies touched, constitute rape in the physical world? With this question we would also like to raise the following: are online harassment and digital violence an extension of the physical world? Is the cyber real? There is no simple answer, and we will not attempt to provide one here, but we put the question to our readers in an attempt to provoke a conversation and explore its significance for the framing of online harassment in relation to the physical world.


Media Coverage

We view the media as a significant force that has helped shape the conversation around online harassment, and we find it integral to our understanding of the field. Reports of celebrities, journalists and politicians being harassed have dominated the narrative in mainstream media, which has seen an increase in coverage since the harassment of Caroline Criado-Perez in 2013 and the breaking of Gamergate in 2014. It is notable that we can find little coverage of women from the Global South, particularly activists. The media are nevertheless framing public opinion on this issue, and as a result it is worth running an in-depth analysis of the media's discourse on the matter to better understand how this topic is being framed for the general public.


The role of online platforms

A key issue that has dominated part of the conversation on online harassment and hate speech is the role of platforms in tackling these issues. Platforms have resorted to blocking accounts and content, or to flagging and reporting systems for abusive content that ultimately lead to the content being removed.

In 2013, during the harassment of Caroline Criado-Perez, Twitter implemented its first anti-harassment feature with its anti-abuse report button[30]. Over the years these features evolved to allow users to block accounts and add them to a list of blocked users[31], but it wasn't until the summer of 2016 that Twitter resorted to banning users completely[32].


Facebook, like Twitter, developed intricate reporting and flagging systems to combat hate speech and harassment on its platform, including allowing administrators to block certain content based on blacklisted words[33] (a minimal illustration of this kind of keyword filtering follows below). Facebook also recently developed a feature that sends an account holder a notification if it suspects that the person is being impersonated online[34]. Facebook is the first platform to develop such a feature, even though online impersonation has long plagued women, especially those dealing with abusive former partners.
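
To make the mechanism concrete, the sketch below shows what keyword-based filtering of comments might look like in Python. It is purely illustrative: the blocklist, the function name and the matching logic are our own assumptions for demonstration, not the actual implementation used by Facebook or any other platform.

  import re

  # Hypothetical, administrator-defined list of blocked words (placeholder values).
  BLOCKLIST = {"slurone", "slurtwo"}

  def should_hide(comment, blocklist=BLOCKLIST):
      """Return True if the comment contains any blocklisted word."""
      words = re.findall(r"[a-z']+", comment.lower())
      return any(word in blocklist for word in words)

  # Comments caught by this check would be held back or hidden rather than published.
  print(should_hide("this comment contains slurone"))   # True
  print(should_hide("a perfectly civil comment"))       # False

A real moderation pipeline is of course far more complex (context, misspellings, images, appeals), which is part of why blanket keyword blocking is criticised as both over- and under-inclusive.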


Reddit is a social, entertainment and news platform that takes pride in relying on community-driven moderation through the up- and down-voting of content. However, during Ellen Pao's time as interim CEO and subsequent to her departure[35], the platform started to undertake stricter measures to fight online harassment. In the spring of 2016 Reddit took a major step by developing a new tool allowing users to block abusive users[36].


Twitter, Facebook and Reddit are but a few of the companies combating harassment predominantly by restricting content and users, an approach that has been welcomed by many. On the other hand, Jillian C. York argues that blocking content or users does not solve the underlying problems of harassment[37]. She refers to blocking as censorship that can lead to overreach, where “efforts to censor hate speech, or obscenity, or pornography, are far too often overreaching, creating a chilling effect on other, more innocuous speech.” Her solution to the problem of harassment and hate speech is more speech. With this argument in mind, it would be worth having platforms explore ways to empower users to fight harassers rather than simply blocking them and their content.


Solutions to combatting harassment online

Reading through some of the solutions and initiatives to combat online harassment and misogyny, we came across a number of methods and tactics that have been put in place. They include:


  • Blocking: A number of methods and platform features resort to blocking both content and abusive users. This is one of the most commonly used solutions to combat harassment.
  • Reporting and Filtering: Over the past few years the majority of platforms have enabled reporting and filtering features for people to report online harassment and hate speech. The existing reporting and flagging systems vary from platform to platform, and there are continuous attempts to ease the process of flagging content for the user. Ultimately, however, the process is not without flaws, and flagged content is not always removed[38].
  • Bots: Zero Trollerance is one prominent example of using online bots to 'troll the trolls' on Twitter. The program detects Twitter accounts that use hateful and sexist words; that information is then passed on to 150 bots that tweet life-coaching messages at the account holder[39]. Others have developed bots to help women block unwanted users from contacting them. Ultimately, bots offer an immediate rather than a long-lasting solution.
  • Moderation: There are a number of mechanisms used to moderate content or users online[40]: exclusion, pricing, organizing, and norm-setting. These mechanisms include both automated moderation and manual moderation, such as distributed moderation. Distributed, or voting, moderation is a feature used on Reddit and more recently on Periscope[41] (see the sketch after this list). There are, however, known risks to distributed moderation, particularly when people organize around votes (i.e. trolls) or when there are not enough votes[42][43].
  • Projects and Initiatives: As detailed in Reclaiming the Online Space, a number of initiatives and projects were started in reaction to women being harassed. These initiatives have emerged to raise awareness about the issues in this field, from a legal perspective, or by developing guides and documenting the harassment. The more of these initiatives are established, the more awareness is raised to help navigate the field.
  • Community and support networks: There is very little written about how the community can help support women who are facing online abuse and misogyny, although, talking to activists, this may be one of the most important tactics supporting women to fight back and not disengage from cyberspace[44].
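
As a rough illustration of the distributed (voting) moderation idea mentioned above, the sketch below shows how a vote-based decision on reported content might work. The threshold values, names and the fallback to manual review are assumptions chosen for demonstration only; they are not the actual rules used by Reddit, Periscope or any other platform.

  def moderation_decision(upvotes, downvotes, min_votes=10, hide_ratio=0.7):
      """Decide what to do with reported content based on community votes."""
      total = upvotes + downvotes
      if total < min_votes:
          # Too few votes to trust the crowd: fall back to manual review,
          # one of the known weaknesses of purely distributed moderation.
          return "needs manual review"
      if downvotes / total >= hide_ratio:
          return "hide"
      return "keep"

  print(moderation_decision(upvotes=2, downvotes=3))    # needs manual review
  print(moderation_decision(upvotes=1, downvotes=14))   # hide
  print(moderation_decision(upvotes=20, downvotes=5))   # keep

The quorum check is the interesting design choice: without it, a handful of coordinated accounts (i.e. trolls organising around votes) could silence content on their own.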


The solutions presented are merely a selection of the mechanisms and tactics that women have been able to use to fight harassers. The recent trend has focused on blocking or reporting mechanisms for combating harassment and abusive content; however, there are drawbacks to these mechanisms, as they do not resolve the issue at its root and may in fact provoke harassers into escalating their tactics.


Notably missing from most discussions of anti-harassment tactics is the community support factor, although establishing communities or networks of activists for women who have been harassed is a significant way to support and fight back against the abuse and harassment women face online. While some initiatives offer support (legal and informational), little is done to establish networks of activists that can offer a range of support to anyone who faces the wrath of trolls.


Tactical Tech has over the past three years worked on establishing networks of support within local communities. Through the Gender and Tech Institutes (Berlin and Ecuador), over 100 women were trained to become digital privacy and security champions, returning to relay their knowledge and skills to their local communities. As a result of the GTIs and follow-up events, over 1,000 women are now more aware of their digital presence and privacy, which gives them a better ability to combat any harassment they may face. By training, engaging and developing resources with women's rights defenders and activists from across the globe, Tactical Tech is building both global and local support networks of digital privacy champions that can combat the phenomenon of online harassment.


This resource is an initial attempt to navigate the field and help us and our community make sense of what is out there and what solutions are available to combat harassment online. In the coming months, we plan to continue building on the content, add more stories from the Global South and work towards building stronger networks of support. Some of the questions and issues we raised above are ones that we hope to find answers and solutions for.


Annex: Recommended Reads

A sample of readings that we recommend:


Credits

The Atlas of Online Harassment was developed by the Tactical Technology Collective.

Funding

This resource was developed thanks to funding support from the Swedish International Development Cooperation Agency (Sida). Note that Sida cannot be regarded as having contributed to, or as vouching for, the content.

Links

  1. http://www.rowmaninternational.com/wp-content/uploads/2015/04/Being-Digital-Citizens-chapter-1.pdf
  2. http://www.juliandibbell.com/texts/bungle_vv.html
  3. http://www.bustle.com/articles/149636-facebooks-new-tools-to-reduce-online-harassment-target-impersonating-profiles-revenge-porn-and-theyre-much-needed
  4. https://www.eff.org/deeplinks/2014/09/facebooks-real-name-policy-can-cause-real-world-harm-lgbtq-community
  5. We see extremist speech as a separate issue from hate speech, especially from a legal perspective. As a result, we will not address extremist speech in this resource, however we acknowledge that in certain circumstances hate speech and extremist speech are seen to be closely interlinked.
  6. https://www.theguardian.com/technology/2016/may/31/facebook-youtube-twitter-microsoft-eu-hate-speech-code
  7. http://digitalrightsfoundation.pk/wp-content/uploads/2015/12/Blasphemy-In-The-Digital-Age.pdf
  8. http://dangerousspeech.org/guidelines
  9. https://www.eff.org/issues/bloggers/legal/liability/defamation
  10. http://commlaw.cua.edu/articles/v17/17.1/Jameson.pdf
  11. Citron, Danielle, “Hate Crimes in Cyberspace”, Harvard University Press, Cambridge: US, 2014
  12. https://www.salon.com/2014/09/02/hate_crimes_in_cyberspace_author_everyone_is_at_risk_from_the_most_powerful_celebrity_to_the_ordinary_person/
  13. Jeong, Sarah, “The Internet of Garbage”, Forbes, US, 2015
  14. http://www.pewinternet.org/2014/10/22/online-harassment/
  15. http://www.redditblog.com/2015/05/promote-ideas-protect-people.html
  16. http://ssrn.com/abstract=2146877
  17. http://europe.newsweek.com/how-law-standing-cyberstalking-264251?rm=eu
  18. https://www.schneier.com/essays/archives/2015/10/the_rise_of_politica.html
  19. http://www.cybercivilrights.org/faqs/
  20. http://www.brookings.edu/~/media/Research/Files/Reports/2016/05/sextortion/sextortion1.pdf?la=en
  21. http://techterms.com
  22. A derivation of 'lols', or laugh out loud; refers to laughing at the expense of others.
  23. http://www.wired.com/2014/10/trolls-will-always-win/
  24. http://www.wired.com/2008/01/mf-goons/
  25. http://gabriellacoleman.org/wp-content/uploads/2012/08/Coleman-Phreaks-Hackers-Trolls.pdf
  26. http://www.forbes.com/sites/daniellecitron/2014/04/27/the-changing-attitudes-towards-cyber-gender-harassment-anonymous-as-a-guide/#4b68f59554b1
  27. https://www.theguardian.com/technology/2016/apr/11/women-online-abuse-threat-racist
  28. http://www.wired.com/2014/10/content-moderation/
  29. https://www.theguardian.com/film/2014/oct/07/jennifer-lawrence-nude-photo-hack-sex-crime
  30. http://www.theverge.com/2014/11/12/7188549/does-twitter-have-a-secret-weapon-for-silencing-trolls
  31. http://bits.blogs.nytimes.com/2014/12/02/twitter-improves-tools-for-users-to-report-harassment/?_r=0
  32. https://www.theguardian.com/technology/2016/jul/20/milo-yiannopoulos-nero-permanently-banned-twitter
  33. https://www.facebook.com/help/131671940241729
  34. http://www.ibtimes.co.uk/facebook-testing-tool-fight-impersonation-1551379
  35. https://www.theguardian.com/technology/2015/jul/10/ellen-pao-reddit-interim-ceo-resigns
  36. http://www.nytimes.com/2016/04/07/technology/reddit-steps-up-anti-harassment-measures-with-new-blocking-tool.html
  37. https://medium.com/@jilliancyork/harassment-hurts-us-all-so-does-censorship-6e1babd61a9b#.tosss89pm
  38. https://arxiv.org/abs/1505.03359
  39. https://motherboard.vice.com/read/trolling-the-trolls-with-sexism-hunting-twitter-bots
  40. http://james.grimmelmann.net/files/articles/virtues-of-moderation.pdf
  41. http://www.wsj.com/articles/to-fight-trolls-periscope-puts-users-in-flash-juries-1464711622
  42. https://dl.acm.org/citation.cfm?id=2441866
  43. http://www.cpeterson.org/2013/07/22/a-brief-guide-to-user-generated-censorship/
  44. https://www.youtube.com/watch?time_continue=654&v=LiGP44mvNcs