The Atlas of Online Harassment

From Gender and Tech Resources

And Now What?
 
The vast field of online harassment and hate speech has been pushed to the forefront of media coverage in recent years. Today, hardly a day goes by without an article about women facing harassment online, or about an online platform adjusting its features to combat hate speech and harassment. As we spend more and more of our time online and our online and offline worlds become interchangeable, we are only beginning to understand how offline misogyny has permeated the online world. Our attempts to highlight particular stories and initiatives, and to define the field, only scratch the surface of developing an understanding of tech-based violence and harassment (used in its broad definition) against women and its effects on freedom of speech.
What we see from the evidence presented above is that online misogyny and harassment is a gendered problem. This isn't a new revelation: multiple studies from the Pew Research Center, Demos and many others have reported that women face significantly more harassment online, particularly vocal women such as journalists or activists. In many of the stories we present, women have chosen to withdraw from the online world or shut down their online presence<ref>http://www.forbes.com/sites/daniellecitron/2014/04/27/the-changing-attitudes-towards-cyber-gender-harassment-anonymous-as-a-guide/#4b68f59554b1</ref>. Prominent activists and writers such as Kathy Sierra, Carolina Perez and Lola Aronovich decided to go offline<ref>https://www.theguardian.com/technology/2016/apr/11/women-online-abuse-threat-racist</ref> as a result of the harassment and abuse they suffered. Going offline and choosing to disengage from the online public sphere is an obvious restriction and limitation of freedom of speech. It becomes clear to us that online harassment in its many forms leads to a violation of free speech, and one that is gendered.
 
''Definitions of online harassment and hate speech''
Harassment, stalking, bullying and misogyny are all terms used by the media to describe very similar things. Hate speech and online harassment are also used quite interchangeably, in both media and academic circles. On the other hand, we've witnessed terms like revenge porn or trolls shift in meaning over the years. The lack of clear-cut definitions creates legal complexities, where specific laws focus on certain aspects of online harassment, such as stalking or revenge porn, and ignore other serious abuses. Adding to these complexities, the absence of legal definitions leaves harassment to the interpretation of the individuals moderating flagged content on social network and media platforms<ref>http://www.wired.com/2014/10/content-moderation/</ref>.
 
It is crucial for anti-harassment initiatives and women's rights defenders to build more universal and agreed-upon definitions of online harassment, stalking and misogyny that can be used as the basis of any legal and policy recommendations, but also to raise awareness of gendered online harassment in mainstream media.
In our search for a more universal definition of online harassment we started to explore the issue through different lenses. With Sarah Jeong's taxonomy of online harassment in mind (see The Atlas of Speech), we acknowledge that online harassment is not limited to content or mere threats, but is also behavioural, and can be seen as a violation of a woman's consent and privacy. Doxing and revenge porn are both direct examples of those violations: when a woman's private information or nude images are exposed online for the public to see without her consent, we come to view this as a privacy and consent issue as well. To Jennifer Lawrence, her leaked nude images constituted a 'sex crime'<ref>https://www.theguardian.com/film/2014/oct/07/jennifer-lawrence-nude-photo-hack-sex-crime</ref>; it was no longer about receiving threats or being stalked online, but about consent and privacy. The question then becomes: how do we combat harassment when it is a violation of consent and privacy? How can we ensure that women's privacy is upheld in the data society that we live in?
Another lens worth exploring is the question raised back in 1993 by Julian Dibbell when he recalled the events that happened in LambdaMOO. Does rape in cyberspace, where no physical bodies touch, constitute rape in the physical world? With this question we would also like to raise the following: are online harassment and digital violence an extension of the physical world? Is the cyber real? There is no simple answer to this question and we will not attempt to answer it here, but we put it forward to our readers in an attempt to provoke a conversation and explore its significance to the framing of online harassment in relation to the physical world.
 
''The role of online platforms''
 
''The role of online platforms''
 
   
 
   
A key issue that has dominated part of the conversation on online harassment and hate speech is the role of platforms in tackling these issues. Platforms have resorted to blocking accounts and content, or to flagging and reporting abusive content that ultimately leads to its removal.
In 2013, during the harassment of Carolina Perez, Twitter implemented its first anti-harassment feature with its anti-abuse report button<ref>http://www.theverge.com/2014/11/12/7188549/does-twitter-have-a-secret-weapon-for-silencing-trolls</ref>. Over the years these features evolved to allow users to block accounts and add them to a list of blocked users<ref>http://bits.blogs.nytimes.com/2014/12/02/twitter-improves-tools-for-users-to-report-harassment/?_r=0</ref>; it wasn't until the summer of 2016 that Twitter resorted to banning users completely<ref>https://www.theguardian.com/technology/2016/jul/20/milo-yiannopoulos-nero-permanently-banned-twitter</ref>.
Facebook, like Twitter, developed intricate reporting and flagging systems to combat hate speech and harassment on its platform, including allowing administrators to block certain content based on blacklisted words<ref>https://www.facebook.com/help/131671940241729</ref>. Facebook also recently developed a feature that sends an account holder notifications if it suspects the person is being impersonated online<ref>http://www.ibtimes.co.uk/facebook-testing-tool-fight-impersonation-1551379</ref>. Facebook is the first platform to develop such a feature, even though online impersonation has long plagued women, especially those dealing with abusive former lovers.
Reddit is a social, entertainment and news platform that takes pride in relying on community-driven moderation through up- and down-voting of content. However, during Ellen Pao's tenure as CEO and subsequent to her departure<ref>https://www.theguardian.com/technology/2015/jul/10/ellen-pao-reddit-interim-ceo-resigns</ref>, the platform started to take strict measures to fight online harassment. In the spring of 2016, Reddit took a major step by developing a new tool allowing the blocking of abusive users<ref>http://www.nytimes.com/2016/04/07/technology/reddit-steps-up-anti-harassment-measures-with-new-blocking-tool.html</ref>.
Twitter, Facebook and Reddit are but a few of the companies combatting harassment predominantly by restricting content and users, an approach that has been welcomed by many. On the other hand, Jillian C. York argues that blocking content or users does not solve the underlying problems of harassment<ref>https://medium.com/@jilliancyork/harassment-hurts-us-all-so-does-censorship-6e1babd61a9b#.tosss89pm</ref>. York describes blocking as censorship that can lead to overreach, where “efforts to censor hate speech, or obscenity, or pornography, are far too often overreaching, creating a chilling effect on other, more innocuous speech.” Her solution to the problem of harassment and hate speech is more speech. With this argument in mind, it would be worth having platforms explore ways to empower users to fight harassers rather than simply blocking them and their content.
 
   
 
   
 
* ''Blocking:'' There have been a number of methods and platform features that have resorted to blocking both content and abusive users. This is one of the most commonly used solutions to combat harassment.
* ''Reporting and Filtering:'' The majority of platforms have, over the past few years, enabled reporting and filtering features for people to report online harassment and hate speech. The existing reporting and flagging systems vary from platform to platform, and there are continuous attempts to ease the process of flagging content for the user. Ultimately, however, the process is not without flaws, and in many situations flagged content is not removed<ref>https://arxiv.org/abs/1505.03359</ref>.
* ''Bots:'' Zero Tolerance is one prominent example of using online bots to 'troll the trolls' on Twitter. The program detects Twitter accounts that use hateful and sexist words; that information is then passed on to 150 bots that tweet at the account holder, sending them life-coaching messages<ref>https://motherboard.vice.com/read/trolling-the-trolls-with-sexism-hunting-twitter-bots</ref>. Others have developed bots to help women block unwanted users from contacting them. Ultimately, bots offer more of an immediate solution than a long-lasting one.
* ''Moderation:'' A number of mechanisms are used to moderate content or users online<ref>http://james.grimmelmann.net/files/articles/virtues-of-moderation.pdf</ref>: exclusion, pricing, organizing and norm-setting. These include both automated and manual moderation, such as distributed moderation. Distributed, or voting, moderation is a feature used on Reddit and more recently on Periscope<ref>http://www.wsj.com/articles/to-fight-trolls-periscope-puts-users-in-flash-juries-1464711622</ref>. There are, however, known risks to distributed moderation, particularly when people organize around votes (i.e. trolls) or when there are not enough votes<ref>https://dl.acm.org/citation.cfm?id=2441866</ref><ref>http://www.cpeterson.org/2013/07/22/a-brief-guide-to-user-generated-censorship/</ref>.
* ''Projects and Initiatives:'' As detailed in Reclaiming the Online Space, a number of initiatives and projects were started in reaction to women being harassed. These initiatives have emerged to raise awareness of the issues in this field, from a legal perspective, or by developing guides and documenting the harassment. The more of these initiatives are established, the more awareness is raised to help navigate the field.
* ''Community and support networks:'' There is very little written about how the community can help support women who are facing online abuse and misogyny, although talking to activists suggests this may be one of the most important tactics in supporting women to fight back and not disengage from cyberspace<ref>https://www.youtube.com/watch?time_continue=654&v=LiGP44mvNcs</ref>.
  
The solutions presented are merely a selection of the mechanisms and tactics that women have been able to use to fight harassers. The recent trend has focused on blocking or reporting mechanisms to combat harassment and abusive content; however, there are drawbacks to these mechanisms, as they do not resolve the issue at its root and in fact might aggravate harassers into escalating their tactics.
Notably missing from most discussions of anti-harassment tactics is the community support factor, even though establishing communities or networks of activists for women who have been harassed is a significant way to support them and fight back against the abuse and harassment women face online. While some initiatives offer support (legal and informational), little is done to establish networks of activists that can offer a range of support to anyone who faces the wrath of trolls.
  
Tactical Tech has over the past three years worked on establishing networks of support within local communities. Through the Gender and Tech Institutes (Berlin and Ecuador), over 100 women were trained to become digital privacy and security champions, returning to relay their knowledge and skills to their local communities. As a result of both institutes and follow-up events, over 1,000 women are now more aware of their digital presence and privacy, which gives them the ability to combat any harassment they may face. By training, engaging and developing resources with women's rights defenders and activists from across the globe, Tactical Tech is building both global and local support networks of digital privacy champions that can combat the phenomenon of online harassment.
  
This resource is an initial attempt to navigate the field and help us and our community make sense of what is out there and what solutions are available to combat harassment online. In the coming months, we plan to continue building on the content, add more stories from the Global South and work towards building stronger networks of support. Some of the questions and issues raised above are ones we hope to find answers and solutions for.
  
 
 
 
 
 
 
 
===Annex: Recommended Reads===
A sample of readings that we recommend:

Revision as of 21:45, 31 July 2016

Introduction

Tales of Online Misogyny

In 1993, the online world LambdaMOO was thrown into disarray after one of the participants was accused of committing rape there. The incident was made famous through Julian Dibbell's article “A Rape in Cyberspace”[1]. Dibbell walks us through the harrowing details of the incident, in which a user with the handle Mr. Bungle entered the online space and used a mechanism to take control of the movements of other users. Mr. Bungle then proceeded to force the users he controlled into committing sexual acts against their will. Although no one was assaulted physically, the incident left a mark on the online world and its participants, and it brought forward the complex question of whether rape in cyberspace constitutes rape in the physical world. This may be one of the first cases of online harassment reported in the media, at a time when only about 4% of the world's population was online and far fewer took part in LambdaMOO.

As more cases come to light, the debate is growing and we are recognizing different actors, circumstances and tactics used to harass and intimidate women. In this section we highlight 35 stories of women who have been harassed or confronted with online misogyny and hate speech. These cases detail the different women harassed, their harassers, the tactics used and the outcomes, in an attempt to gain a better understanding of the diversity of threats against women online. Starting from 2007, we describe and outline the stories based on relevant time periods, tactics, outcomes and different geographic contexts. Through mapping these stories we get a clearer picture of the field: of the 35 stories, 13 occurred in the Global South. 2007 marks a pivotal point in the field, with stories about harassment starting to filter into mainstream media, reaching prominence with the case of writer and blogger Kathy Sierra.

The selection of the case studies does contain bias, as it is based on our analysis of news reports about the stories. This also means that the women who were harassed were able to report their own stories, and many tend to be journalists, activists or public figures. We are using these case studies as a starting point to explore the field; we would like to expand to include further case studies from the Global South and from women activists.

File:The Atlas of Online Harassment - Tales of Online Misogyny.ods

During our research, a number of observations stood out to us with regard to the outcomes of many of the stories; we list them below:

Silencing of women online: One of the reactions we've heard from women who suffered online harassment and stalking is that they have gone offline for at least some period of time, if not indefinitely. Kathy Sierra cancelled her speaking engagements at the time of her harassment, Carolina Perez deactivated her Twitter account, as did Leslie Jones, and the list of names continues. However, many of these stories have also inspired women to reclaim the online space and establish initiatives to counter harassment online and help build networks that support other women.

Counter-speech and reclaiming the online space: Regardless of the harassment and intimidation they face, many of the women featured here have internalized the need to counter online harassment through counter-speech or by building helpful resources. Ten out of the thirty-four cases we identified led to the launch of a positive initiative (a campaign, a website, etc.) for victims to reclaim their online territories.

The hashtags #NaoSaoEllas and #MiPrimerAsedio (trending topics on Twitter) were launched in Brazil after the harassment that feminist blogger Lola Aronovich and 13-year-old Valentina Schulz underwent. Danish journalist and activist Emma Holten started the project Consent after being subjected to revenge porn by her former partner. Consent aims to reclaim the image of her body and to raise awareness of the importance of consent.

Legal Practices: Although in some of the case studies we list the women decided to take legal action, the legal framework around online harassment remains unclear. In the following section, The Atlas of Words, readers will be able to read through some of the legal and judicial definitions surrounding both online harassment and hate speech. It is immediately clear that there is no universal legal definition of cyber harassment or cyber stalking, and that there is a diversity of legal landscapes around these issues. Moreover, the judicial system does not seem a serious and efficient recourse for harassed women. Judging from the case studies, it appears that the traditional way of handling abuse and harassment offline is not applicable online. So far, some cases were heard only after the women subjected to these abuses spoke out publicly or organized their own form of justice, such as Holly Jacobs with the 'End Revenge Porn' project, which gathers resources and legal information for victims of revenge porn.

Although there have been attempts to introduce new laws to combat online harassment and stalking, particularly in Western countries, issues remain with how authorities and platforms react to cases of online harassment, and legally there is still much to be done.

Platform Policies: Over the past four years online platforms have actively taken action to combat hate speech and online harassment. The issue made it to the forefront with the prominent case of Carolina Perez: as Carolina was bombarded with harassment on Twitter, the platform decided to take action and enable different features allowing users to block and filter accounts. Twitter is not the only platform to opt for blocking or filtering content; Facebook has also taken very recent steps to combat revenge porn and impersonation[2]. On the other hand, its real-name policy may have caused serious harm to LGBT communities online[3].

The question worth asking here is whether blocking and filtering content on these platforms is an effective solution for combating harassment. Studying the impact and effects of content removal and filtering is definitely needed in this field, along with the effects of counter speech.

The Atlas of Speech

Much has been written, academically and in the media, about hate speech and particularly online harassment in recent years. This vast body of literature, along with the legal shifts that have happened over the years, has produced a number of definitions that attempt to frame the field. For that reason it is important to dedicate a section to highlighting these definitions, in an attempt to explore the legal, online and universal definitions that exist in this field, where they exist at all.

This section divides the definitions into two lists, one focused on hate speech and the other on online harassment. This is in no way an exhaustive list but rather one reflective of the many terms linked to forms of online speech.

Hate Speech

Hate speech[4] has direct ties to free speech and has a long history from a legal perspective. The issue can be traced back to the Universal Declaration of Human Rights in 1948: while hate speech was not directly addressed, incitement to violence and discrimination made its way into international law. The following years saw a number of international conventions address hate speech alongside freedom of expression.

We have selected a few definitions from a legal perspective to be presented here:

International Covenant on Civil and Political Rights (ICCPR) – Article 20 (2)

“1. Any propaganda for war shall be prohibited by law.

2. Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.”

International Covenant on the Elimination of All forms of Racial Discrimination (ICERD) – Article 4 (a)

“States Parties condemn all propaganda and all organizations which are based on ideas or theories of superiority of one race or group of persons of one colour or ethnic origin, or which attempt to justify or promote racial hatred and discrimination in any form, and undertake to adopt immediate and positive measures designed to eradicate all incitement to, or acts of, such discrimination and, to this end, with due regard to the principles embodied in the Universal Declaration of Human Rights and the rights expressly set forth in article 5 of this Convention, inter alia:

(a) Shall declare an offense punishable by law all dissemination of ideas based on racial superiority or hatred, incitement to racial discrimination, as well as all acts of violence or incitement to such acts against any race or group of persons of another colour or ethnic origin, and also the provision of any assistance to racist activities, including the financing thereof;”

European Convention on Human Rights – Article 10 (2)

“The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary.”

American Convention on Human Rights – Article 13 (5)

“Any propaganda for war and any advocacy of national, racial, or religious hatred that constitute incitements to lawless violence or to any other similar action against any person or group of persons on any grounds including those of race, color, religion, language, or national origin shall be considered as offenses punishable by law.”

Indian Penal Code: 153A.

Promoting enmity between different groups on grounds of religion, race, place of birth, residence, language, etc., and doing acts prejudicial to maintenance of harmony.—

(1) Whoever—

(a) by words, either spoken or written, or by signs or by visible representations or otherwise, promotes or attempts to promote, on grounds of religion, race, place of birth, residence, language, caste or community or any other ground whatsoever, disharmony or feelings of enmity, hatred or ill-will between different religious, racial, language or regional groups or castes or communities, or

(b) commits any act which is prejudicial to the maintenance of harmony between different religious, racial, language or regional groups or castes or communities, and which disturbs or is likely to disturb the public tranquillity,

What Do Platforms Say about Hate Speech?

Although international law and conventions explicitly address hate speech, there is no mention of misogyny. Online platforms such as Google, Facebook and Twitter, on the other hand, include both gender and sexual orientation within their definitions of hate speech. This presents an alternative and gendered definition of hate speech, particularly online, where women face harassment on a daily basis, and it is linked to the platforms' active work on policies to tackle hate speech and harassment online. Recently the platforms have also gone as far as to sign on to the EU's 'code of conduct' on hate speech[5].

Google

Hate Speech: Our products are platforms for free expression. But we don't support content that promotes or condones violence against individuals or groups based on race or ethnic origin, religion, disability, gender, age, nationality, veteran status, or sexual orientation/gender identity, or whose primary purpose is inciting hatred on the basis of these core characteristics. This can be a delicate balancing act, but if the primary purpose is to attack a protected group, the content crosses the line.

YouTube (owned by Google)

“We encourage free speech and try to defend your right to express unpopular points of view, but we don't permit hate speech.

Hate speech refers to content that promotes violence or hatred against individuals or groups based on certain attributes, such as:

  • race or ethnic origin
  • religion
  • disability
  • gender
  • age
  • veteran status
  • sexual orientation/gender identity

There is a fine line between what is and what is not considered to be hate speech. For instance, it is generally okay to criticize a nation-state, but not okay to post malicious hateful comments about a group of people solely based on their race.”

Facebook: Community Standards

Facebook removes hate speech, which includes content that directly attacks people based on their:

  • race,
  • ethnicity,
  • national origin,
  • religious affiliation,
  • sexual orientation,
  • sex, gender or gender identity, or
  • serious disabilities or diseases.

Facebook also indicates that beyond removing content containing hate speech they also provide 'tools' for people to avoid offensive content and also promote counter-speech, see the following:

“While we work hard to remove hate speech, we also give you tools to avoid distasteful or offensive content. Learn more about the tools we offer to control what you see. You can also use Facebook to speak up and educate the community around you. Counter-speech in the form of accurate information and alternative viewpoints can help create a safer and more respectful environment.”

Twitter: According to The Twitter Rules

“Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.”

Reddit:

“Do not post violent content: Do not post content that incites harm against people or groups of people. Additionally, if you're going to post something violent in nature, think about including a NSFW tag. Even mild violence can be difficult for someone to explain to their boss if they open it unexpectedly.”

Blasphemy

The Oxford dictionary defines blasphemy as “the act or offense of speaking sacrilegiously about God or sacred things; profane talk.” Although the definition may seem straightforward, there have been notable cases where the accusation of blasphemy was used to censor certain types of speech, particularly in countries with strict blasphemy laws that impose detention or the death penalty. Pakistan is an example of a state using blasphemy laws to censor speech: Professor Junaid Hafeez, an English lecturer at Bahauddin Zakariya University known for his liberal views, is currently in prison accused of blasphemy. While it is not clear whether Hafeez posted anything blasphemous, and some believe the evidence was doctored, he has remained in jail awaiting trial since 2013. His case and many others like it were reported[6] by the Digital Rights Foundation in their report “Blasphemy in the Digital Age”.

Counter Speech

In its simplest form, counter speech is speech that responds to hate speech or extremism, offering a different narrative and attempting to counter it. Counter speech comes in different forms: in some cases it is a simple response online, e.g. a tweet or a Facebook comment; in other cases it means creating a Facebook group or writing a blog post. Parody has also become a common tactic to counter hate speech.

Dangerous Speech

As defined by Professor Susan Benesch, dangerous speech is a subset of hate speech that has a reasonable chance of catalyzing or amplifying violence by one group against another. Prof. Benesch developed a framework for identifying dangerous speech and established five variables:

“The most dangerous speech act, or ideal type of dangerous speech, would be one for which all five variables are maximized:

  1. a powerful speaker with a high degree of influence over the audience.
  2. the audience has grievances and fear that the speaker can cultivate.
  3. a speech act that is clearly understood as a call to violence,
  4. a social or historical context that is propitious for violence, for any of a variety of reasons, including longstanding competition between groups for resources, lack of efforts to solve grievances, or previous episodes of violence.
  5. a means of dissemination that is influential in itself, for example because it is the sole or primary source of news for the relevant audience.”[7]

Discrimination

Discrimination is the unjust or prejudicial treatment of people based on their sex, race or age; the list can expand to include sexual orientation and religion.

Defamation[8]

According to the Electronic Frontier Foundation, defamation is: “a false and unprivileged statement of fact that is harmful to someone's reputation, and published "with fault," meaning as a result of negligence or malice. State laws often define defamation in specific ways. Libel is a written defamation; slander is a spoken defamation.”

Freedom of Speech

Perhaps the most famous articles on freedom of speech and expression are found in the Universal Declaration of Human Rights and the First Amendment to the US Constitution.

Universal Declaration of Human Rights (Article 19):

“Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”

US Constitution (First Amendment):

“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.”

Hate Crime

Not to be confused with hate speech, a hate crime is generally, in legal terms, a violent action against another person or community because of their race, ethnicity, national origin, religion, sexual orientation, or disability.

It is notable that international law and conventions do not directly address gender or sexual orientation in their texts. However, states along with online platforms have taken steps to prohibit discrimination and hate speech on these grounds.

Although hate speech is addressed extensively in many international and state laws, online harassment is not addressed in international law. States have only recently started to draft laws on the matter, and we are starting to see laws addressing cyberstalking, cyberbullying and, in some places, revenge porn. The following list attempts to set out the terminology used to describe harassment online.

Online Harassment

There is no universally agreed-upon definition of online harassment, and the concept has proven difficult to pin down. A number of scholars have explored it and attempted to define it, especially in the absence of a legal definition. We present a few of those definitions below:

From a scholarly perspective, in some of the earlier writings on online harassment, Sarah Jameson, in her paper Cyberharassment: Striking a Balance Between Free Speech and Privacy[9], defined cyber harassment as follows:

“Although cyber harassment has no universal definition, it typically occurs when an individual or group with no legitimate purpose uses a form of electronic communication as a means to cause great emotional distress to a person. In addition to e-mail and blogs, conduits of “new media” available to the cyber harasser include chat rooms, instant messaging services, electronic bulletin boards, and social networking sites. The Internet provides cyber harassers with an easy channel to “incite others against their victims.” Not only can the cyber harasser harass his victim, but he can also impersonate the victim and post defamatory messages on bulletin boards, cyberbully his victim, or send vulgar e-mails to the victim’s employer. The victim suffers as a result of actions committed by the cyber harasser.”

On the other hand, Danielle Citron views cyber harassment thus: “Although definitions of these terms vary, cyber harassment is often understood to involve the intentional infliction of substantial emotional distress accomplished by online speech that is persistent enough to amount to a “course of conduct” rather than an isolated incident”[10]

To Citron, harassment includes: “real threats, privacy invasions, cruelty amounting to intentional infliction about purely private matters, nude images and reputational damage. People lose their jobs, they can’t get new jobs, because Google searches turn up destructive information. They often have to move because they’re afraid of confrontation by strangers. They lose investments and professional opportunities. These are coercive acts.”[11]

Sarah Jeong, author of the book The Internet of Garbage, goes as far as to develop a taxonomy of online harassment, dividing it into two spectrums: one defined by behavior and the other by content. She writes:

“Harassment exists on two spectrums at once— one that is defined by behavior and one that is defined by content. The former is the best way to understand harassment, but the latter has received the most attention in both popular discourse and academic treatments.

When looking at harassment as content, we ultimately fixate on “death threats” as one end of the spectrum, and “annoying messages” at the other end. Thus the debate ends up revolving around civil rights versus free speech— where is the line between mean comments and imminent danger? Between jokes and threats?

Behavior is a better, more useful lens through which to look at harassment. On one end of the behavioral spectrum of online harassment, we see the fact that a drive-by vitriolic message has been thrown out in the night; on the other end, we see the leaking of Social Security numbers, the publication of private photographs, the sending of SWAT teams to physical addresses, and physical assault.”[12]   

As for the Pew Research Center, in their report entitled Online Harassment, their taxonomy of online harassment varies slightly:

“online harassment falls into two distinct yet frequently overlapping categories. Name-calling and embarrassment constitute the first class and occur widely across a range of online platforms. The second category includes less frequent—but also more intense and emotionally damaging—experiences like sexual harassment, physical threats, sustained harassment, and stalking.”[13]

What online platforms have to say about online harassment:

As with hate speech, online platforms have established their own specific language on harassment and bullying:

Google

“Harassment, Bullying, and Threats: Do not engage in harassing, bullying, or threatening behavior, and do not incite others to engage in these activities. Anyone using our Services to single someone out for malicious abuse, to threaten someone with serious harm, to sexualize a person in an unwanted way, or to harass in other ways may have the offending content removed or be permanently banned from using the Services. In emergency situations, we may escalate imminent threats of serious harm to law enforcement. Keep in mind that online harassment is also illegal in many places and can have serious offline consequences for both the harasser and the victim.”

YouTube

We want you to use YouTube without fear of being subjected to malicious harassment. In cases where harassment crosses the line into a malicious attack it can be reported and will be removed. In other cases, users may be mildly annoying or petty and should simply be ignored.

Harassment may include:

  • Abusive videos, comments, messages
  • Revealing someone’s personal information
  • Maliciously recording someone without their consent
  • Deliberately posting content in order to humiliate someone
  • Making hurtful and negative comments/videos about another person
  • Unwanted sexualization, which encompasses sexual harassment or sexual bullying in any form
  • Incitement to harass other users or creators


Facebook

We don’t tolerate bullying or harassment. We allow you to speak freely on matters and people of public interest, but remove content that appears to purposefully target private individuals with the intention of degrading or shaming them. This content includes, but is not limited to:

  • Pages that identify and shame private individuals,
  • images altered to degrade private individuals,
  • photos or videos of physical bullying posted to shame the victim,
  • sharing personal information to blackmail or harass people and
  • repeatedly targeting other people with unwanted friend requests or messages.

We define private individuals as people who have neither gained news attention nor the interest of the public, by way of their actions or public profession.

Twitter: According to The Twitter Rules

Harassment: You may not incite or engage in the targeted abuse or harassment of others. Some of the factors that we may consider when evaluating abusive behavior include:

  • if a primary purpose of the reported account is to harass or send abusive messages to others;
  • if the reported behavior is one-sided or includes threats;
  • if the reported account is inciting others to harass another account; and
  • if the reported account is sending harassing messages to an account from multiple accounts.

Reddit[14]:

Unwelcome content

While Reddit generally provides a lot of leeway in what content is acceptable, here are some guidelines for content that is not. Please keep in mind the spirit in which these were written, and know that looking for loopholes is a waste of time.

Content is prohibited if it:

  • Is illegal
  • Is involuntary pornography
  • Encourages or incites violence
  • Threatens, harasses, or bullies or encourages others to do so
  • Is personal and confidential information
  • Impersonates someone in a misleading or deceptive manner
  • Is spam

Cyber Bullying

There is general agreement over the definition of bullying, particularly in academic circles; it stands for: “an aggressive act with three hallmark characteristics: a) it is intentional; b) it involves a power imbalance between an aggressor (individual or group) and a victim; c) it is repetitive in nature and occurs over time.”[15]

Cyber Stalking

It is commonly agreed that cyberstalking includes stalking, harassment and bullying through electronic means. It also includes acts of online threats, intimidation and impersonation. Notably, cyberstalking carries an all-encompassing definition that has also been used interchangeably in the media to indicate bullying, harassment and, at times, even hate speech. Stalking itself is a gendered crime, with studies showing that around 90% of stalkers are men and 80% of those stalked are women[16].

Dox or Doxx

Dox is an abbreviation derived from the phrase 'dropping dox' (or docs), a tactic used to reveal a person's personal details online. It was a common revenge tactic among hackers in the 1990s, though the word itself has existed since 2001. As tech security expert Bruce Schneier describes it, doxing is “a tool to harass and intimidate people, primarily women, on the internet. Someone would threaten a woman with physical harm, or try to incite others to harm her, and publish her personal information as a way of saying "I know a lot about you—like where you live and work."”[17]

Personal information can range from revealing a person's address, social security or credit card information. In recent years, it has also translated into revealing a person's identity online.

Revenge Porn

Revenge porn is the distribution of sexually graphic content, obtained either from a former lover or via hacking, posted without the consent of the person depicted. The Cyber Civil Rights Initiative describes it as 'nonconsensual pornography', as many of the perpetrators are not motivated by revenge.[18]


Sextortion

There are very few documented cases of sextortion in the media, although according to a study conducted by the Center for Technology Innovation at Brookings it is quite common, judging by the number of cases they reviewed[19]. They define sextortion as “old-fashioned extortion or blackmail, carried out over a computer network, involving some threat—generally but not always a threat to release sexually-explicit images of the victim—if the victim does not engage in some form of further sexual activity.”

Sextortion, along with revenge porn, can be an extremely destructive tactic against women, particularly in countries where sexualizing a person is taboo.

Swatting

Swatting is a tactic, or 'internet prank', used to trick emergency services, particularly SWAT teams, into going to a person's house under the false pretense that there is an emergency. Like doxing, it is a direct way to harass and intimidate the target.


Troll

Internet trolls are people who try to instigate reactions by posting offensive, provocative content online. In previous years trolling was also referred to as flaming[20], and in the early years flaming was focused on disrupting online communities. In recent years the term troll has taken on a more negative connotation and has come to include trolls that resort to hate speech and harassment for the 'Lulz'[21].

Kathy Sierra, who faced the wrath of trolls in 2007, describes them as hater trolls; to her, these trolls make no distinction: “In hater troll framing, there’s no difference between a single tweet and a DDoS of your employer’s website. There’s no difference between a “you’re a histrionic charlatan” and “here’s a headless corpse and you are next and here’s your address.” It’s all just trollin’ and mean words and not real life.”[22]

The word troll has shifted in meaning throughout the years; 'hater trolls' (who have also been referred to as griefers by Julian Dibbell[23]) are now part of the predominant narrative when writing or reading about internet trolls. As the understanding of the word shifts, it is worth revisiting the term and exploring other words to refer to online harassers or hater trolls. Scholar Gabriella Coleman, in her extensive writings about online trolls, hackers and the group Anonymous, has highlighted another kind of troll: the political troll, who resorts to trolling for a common good[24]. Given the variety of these definitions and the ever-shifting meaning of the term, it may be time to find an alternative word for those who seek to harm others through harassment, intimidation and more.

Reclaiming Online Spaces

Over the years many projects, campaigns and tools have emerged to help women combat online harassment and misogyny. We wanted to dedicate a section to the different initiatives from across the globe and highlight some projects that were started by women who had themselves faced harassment. This section offers a starting point from which to further explore the field: what has been done, what is missing, and what impact these initiatives have on other women.

At the moment the featured initiatives include projects, campaigns, tools, research projects and guides.

File:The Atlas of Online Harassment-Reclaiming the Online Space.ods

And Now What?

The vast field of online harassment and hate speech has been pushed to the forefront of media coverage in recent years. Today, not a day goes by without an article about women facing harassment online or an online platform adjusting its features to combat hate speech and harassment. As we spend more and more of our time online and our online and offline worlds become interchangeable, we are just beginning to understand how offline misogyny has permeated the online world. Our attempts to highlight particular stories and initiatives and to define the field scratch the surface of developing an understanding of tech-based violence and harassment (used in its broad definition) against women and its effects on freedom of speech.

What we see from the evidence presented above is that online misogyny and harassment is definitely a gendered problem. This isn't a new revelation, as multiple studies from the Pew Research Center, Demos and many others have reported that women face significantly more harassment online, particularly vocal women such as journalists or activists. In many of the stories we present, women have chosen to withdraw from the online world or shut down their online presence[25]. Prominent activists and writers such as Kathy Sierra, Carolina Perez and Lola Aronovich decided to go offline[26] as a result of the harassment and abuse they suffered. Going offline and choosing to disengage from the online public sphere is an obvious restriction and limitation of freedom of speech. It becomes clear to us that online harassment in its many forms leads to a violation of free speech, and one that is gendered.


Definitions of online harassment and hate speech

Harassment, stalking, bullying and misogyny are all terms used by the media to describe very similar things. Hate speech and online harassment are also used quite interchangeably, in both media and academic circles. Meanwhile, we have witnessed terms like revenge porn and troll shift in meaning over the years. The lack of clear-cut definitions creates legal complexities, where specific laws focus on certain aspects of online harassment, such as stalking or revenge porn, and ignore other serious abuses. Adding to these complexities, the absence of legal definitions leaves harassment to the interpretation of the individuals moderating flagged content on social network and media platforms[27].

It is crucial for anti-harassment initiatives and women's rights defenders to build more universal and agreed-upon definitions of online harassment, stalking and misogyny that can serve as the basis of legal and policy recommendations, but also be used to raise awareness of gendered online harassment in mainstream media.

In our search for a more universal definition of online harassment we started to explore the issue through different lenses. With Sarah Jeong's taxonomy of online harassment in mind (see The Atlas of Speech), we acknowledge that online harassment is not limited to content or mere threats; it is also behavioral, and can be seen as a violation of a woman's consent and privacy. Doxing and revenge porn are both direct examples of those violations: when a woman's private information or nude images are exposed online for the public to see without her consent, we come to view this as a privacy and consent issue as well. To Jennifer Lawrence, her leaked nude images constituted a 'sex crime'[28]; the matter was no longer about receiving threats or being stalked online but evolved into issues of consent and privacy. The question then becomes: how do we combat harassment when it is a violation of consent and privacy? How can we ensure that women's privacy is upheld in the data society we live in?

Another lens worth exploring is the question raised back in 1993 by Julian Dibbell when he recalled the events that happened in LambdaMOO: does rape in cyberspace, where no physical bodies touched, constitute rape in the physical world? With this question we would also like to raise the following: are online harassment and digital violence an extension of the physical world? Is the cyber real? There is no simple answer, and we will not attempt one here, but we put these questions to our readers in an attempt to provoke a conversation and explore their significance for the framing of online harassment in relation to the physical world.


Media Coverage

We view media as a significant force that has helped shape the conversation around online harassment and find it integral to our understanding of the field. Reports of celebrities, journalists and politicians being harassed have dominated the narrative in mainstream media, which has seen an increase in coverage since the harassment of Carolina Perez in 2013 and the breaking of Gamergate in 2014. Notably, we can find little coverage of women from the Global South, particularly activists. Nevertheless, the media frames public opinion on this issue, and it is worth running an in-depth analysis of the media's discourse to better understand how this topic is being framed for the general public.


The role of online platforms

A key issue that has dominated part of the conversation on online harassment and hate speech is the role of platforms in tackling these issues. Platforms have resorted to blocking accounts and content, or to flagging and reporting abusive content, which ultimately leads to the content being removed.

In 2013, during the harassment of Caroline Criado-Perez, Twitter implemented its first anti-harassment feature with its anti-abuse report button[29]. Over the years these features evolved to allow users to block accounts and add them to a list of blocked users[30], but it wasn't until the summer of 2016 that Twitter resorted to banning users completely[31].

Facebook, like Twitter, developed intricate reporting and flagging systems to combat hate speech and harassment on its platform, including allowing administrators to block certain content based on blacklisted words[32]. Facebook also recently developed a feature that sends an account holder a notification if it suspects that the person is being impersonated online[2]. Facebook is the first platform to develop such a feature, even though online impersonation has long plagued women, especially those dealing with abusive former partners.
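The administrator-configured word blacklist mentioned above can be illustrated with a minimal sketch. The word list, function name, and matching rule here are hypothetical examples, not Facebook's actual implementation, which is far more sophisticated and context-aware:

```python
import re

# Hypothetical placeholder blacklist an administrator might configure;
# real platforms maintain much larger, curated lists.
BLACKLIST = {"badword", "hateterm"}

def is_blocked(comment: str) -> bool:
    """Return True if the comment contains a blacklisted word.

    Tokenizes on word boundaries rather than doing a raw substring
    search, so 'badwording' does not match 'badword' -- a naive
    substring check would over-block (the 'Scunthorpe problem').
    """
    words = set(re.findall(r"[a-z']+", comment.lower()))
    return not BLACKLIST.isdisjoint(words)
```

A limitation worth noting: keyword lists miss deliberate misspellings, coded language and context, which is part of why critics argue that blocking mechanisms alone cannot solve harassment.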

Reddit is a social, entertainment and news platform that takes pride in relying on community-driven moderation through the up- and down-voting of content. However, during Ellen Pao's time as CEO and following her departure[33], the platform began to take stricter measures to fight online harassment. In the spring of 2016 Reddit took a major step by releasing a new tool that allows the blocking of abusive users[34].

Twitter, Facebook and Reddit are but a few of the companies combatting harassment predominantly by restricting content and users, an approach that has been welcomed by many. On the other hand, Jillian C. York argues that blocking content or users does not solve the underlying problems of harassment[35]. She regards blocking as censorship that can overreach: “efforts to censor hate speech, or obscenity, or pornography, are far too often overreaching, creating a chilling effect on other, more innocuous speech.” Her solution to the problem of harassment and hate speech is more speech. With this argument in mind, it would be worth having platforms explore ways to empower users to fight harassers rather than simply blocking them and their content.


Solutions to combatting harassment online

Reading through some of the solutions and initiatives to combat online harassment and misogyny, we came across a number of methods and tactics that have been put in place. They include:

  • Blocking: A number of methods and platform features rely on blocking both content and abusive users. This is one of the most commonly used solutions to combat harassment.
  • Reporting and Filtering: Over the past few years the majority of platforms have added reporting and filtering features for people to report online harassment and hate speech. The existing reporting and flagging systems vary from platform to platform, and there are continuous attempts to make flagging content easier for users. Ultimately, however, the process is not without flaws, and flagged content is not always removed[36].
  • Bots: Zero Tolerance is one prominent example of using online bots to 'troll the trolls' on Twitter. The program detects Twitter accounts that use hateful and sexist words; that information is then passed on to 150 bots that tweet life-coaching messages at the account holder[37]. Others have developed bots to help women block unwanted users from contacting them. Ultimately, bots offer an immediate solution rather than a long-lasting one.
  • Moderation: A number of mechanisms can be used to moderate content or users online: exclusion, pricing, organizing, and norm-setting[38]. These can be implemented through both automated and manual moderation, including distributed moderation. Distributed, or voting, moderation is a feature used on Reddit and more recently on Periscope[39]. There are, however, known risks to distributed moderation, particularly when people organize around votes (i.e. trolls) or when there are not enough votes[40][41].
  • Projects and Initiatives: As detailed in Reclaiming the Online Space, a number of initiatives and projects were started in reaction to women being harassed. These initiatives have emerged to raise awareness of the issues in this field, whether from a legal perspective or by developing guides and documenting the harassment. The more such initiatives are established, the more awareness is raised to help navigate the field.
  • Community and support networks: Very little has been written about how the community can help support women who are facing online abuse and misogyny, although conversations with activists suggest this may be one of the most important tactics enabling women to fight back and not disengage from cyberspace[42].
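The distributed (voting) moderation mechanism described in the list above can be sketched as a simple threshold rule. The thresholds, names and defaults below are illustrative assumptions, not any platform's actual ranking algorithm:

```python
from dataclasses import dataclass

@dataclass
class Post:
    """A piece of user-generated content under community moderation."""
    text: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def score(self) -> int:
        # Net community judgement: approvals minus disapprovals.
        return self.upvotes - self.downvotes

def visible(post: Post, hide_below: int = -5, min_votes: int = 3) -> bool:
    """Hide content the community has voted down past a threshold.

    Requiring a minimum number of total votes guards against the
    'too few votes' failure mode noted above; it does not prevent
    coordinated vote brigading, which needs additional defenses.
    """
    total = post.upvotes + post.downvotes
    if total < min_votes:
        return True  # not enough signal yet; show by default
    return post.score > hide_below
```

The design choice here mirrors the known risks mentioned in the list: the `min_votes` floor addresses sparse voting, while the `hide_below` threshold is exactly the lever that organized trolls can game by voting in concert.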

The solutions presented are merely a selection of the mechanisms and tactics that women have been able to use to fight harassers. The recent trend has focused on blocking or reporting mechanisms for combatting harassment and abusive content; however, these mechanisms have drawbacks, as they do not resolve the issue at its root and may in fact provoke harassers to escalate their tactics.

Notably missing from most discussions of anti-harassment tactics is the community support factor, even though establishing communities or networks of activists for women who have been harassed is a significant way to support them and fight back against the abuse and harassment women face online. While some initiatives offer support (legal and informational), little is done to establish networks of activists that can offer a range of support to anyone who faces the wrath of trolls.

Tactical Tech has, over the past three years, worked on establishing networks of support within local communities. Through the Gender and Tech Institutes (Berlin and Ecuador), over 100 women were trained to become digital privacy and security champions, returning to relay their knowledge and skills to their local communities. As a result of both GTIs and follow-up events, over 1,000 women are now more aware of their digital presence and privacy, which gives them the ability to combat any harassment they may face. By training, engaging and developing resources with women's rights defenders and activists from across the globe, Tactical Tech is building global and local support networks of digital privacy champions that can combat the phenomenon of online harassment.

This resource is an initial attempt to navigate the field and help us and our community make sense of what is out there and what solutions are available to combat harassment online. In the coming months, we plan to continue building on the content, add more stories from the Global South and work towards building stronger networks of support. Some of the questions and issues we raised above are ones that we hope to find answers and solutions for.

Annex: Recommended Reads

A sample of readings that we recommend:


Credits

The Atlas of Online Harassment was developed by the Tactical Technology Collective.

Funding

This resource was developed thanks to funding support from the Swedish International Development Cooperation Agency (Sida). Sida cannot be regarded as having contributed to or vouching for the content.

Links

  1. http://www.juliandibbell.com/texts/bungle_vv.html
  2. http://www.bustle.com/articles/149636-facebooks-new-tools-to-reduce-online-harassment-target-impersonating-profiles-revenge-porn-and-theyre-much-needed
  3. https://www.eff.org/deeplinks/2014/09/facebooks-real-name-policy-can-cause-real-world-harm-lgbtq-community
  4. We see extremist speech as a separate issue from hate speech, especially from a legal perspective. As a result, we will not address extremist speech in this resource, however we acknowledge that in certain circumstances hate speech and extremist speech are seen to be closely interlinked.
  5. https://www.theguardian.com/technology/2016/may/31/facebook-youtube-twitter-microsoft-eu-hate-speech-code
  6. http://digitalrightsfoundation.pk/wp-content/uploads/2015/12/Blasphemy-In-The-Digital-Age.pdf
  7. http://dangerousspeech.org/guidelines
  8. https://www.eff.org/issues/bloggers/legal/liability/defamation
  9. http://commlaw.cua.edu/articles/v17/17.1/Jameson.pdf
  10. Citron, Danielle, “Hate Crimes in Cyberspace”, Harvard University Press, Cambridge: US, 2014
  11. https://www.salon.com/2014/09/02/hate_crimes_in_cyberspace_author_everyone_is_at_risk_from_the_most_powerful_celebrity_to_the_ordinary_person/
  12. Jeong, Sarah, “The Internet of Garbage”, Forbes, US, 2015
  13. http://www.pewinternet.org/2014/10/22/online-harassment/
  14. http://www.redditblog.com/2015/05/promote-ideas-protect-people.html
  15. http://ssrn.com/abstract=2146877
  16. http://europe.newsweek.com/how-law-standing-cyberstalking-264251?rm=eu
  17. https://www.schneier.com/essays/archives/2015/10/the_rise_of_politica.html
  18. http://www.cybercivilrights.org/faqs/
  19. http://www.brookings.edu/~/media/Research/Files/Reports/2016/05/sextortion/sextortion1.pdf?la=en
  20. http://techterms.com
  21. A derivation of 'lol', or laugh out loud; refers to laughing at the expense of others.
  22. http://www.wired.com/2014/10/trolls-will-always-win/
  23. http://www.wired.com/2008/01/mf-goons/
  24. http://gabriellacoleman.org/wp-content/uploads/2012/08/Coleman-Phreaks-Hackers-Trolls.pdf
  25. http://www.forbes.com/sites/daniellecitron/2014/04/27/the-changing-attitudes-towards-cyber-gender-harassment-anonymous-as-a-guide/#4b68f59554b1
  26. https://www.theguardian.com/technology/2016/apr/11/women-online-abuse-threat-racist
  27. http://www.wired.com/2014/10/content-moderation/
  28. https://www.theguardian.com/film/2014/oct/07/jennifer-lawrence-nude-photo-hack-sex-crime
  29. http://www.theverge.com/2014/11/12/7188549/does-twitter-have-a-secret-weapon-for-silencing-trolls
  30. http://bits.blogs.nytimes.com/2014/12/02/twitter-improves-tools-for-users-to-report-harassment/?_r=0
  31. https://www.theguardian.com/technology/2016/jul/20/milo-yiannopoulos-nero-permanently-banned-twitter
  32. https://www.facebook.com/help/131671940241729
  33. https://www.theguardian.com/technology/2015/jul/10/ellen-pao-reddit-interim-ceo-resigns
  34. http://www.nytimes.com/2016/04/07/technology/reddit-steps-up-anti-harassment-measures-with-new-blocking-tool.html
  35. https://medium.com/@jilliancyork/harassment-hurts-us-all-so-does-censorship-6e1babd61a9b#.tosss89pm
  36. https://arxiv.org/abs/1505.03359
  37. https://motherboard.vice.com/read/trolling-the-trolls-with-sexism-hunting-twitter-bots
  38. http://james.grimmelmann.net/files/articles/virtues-of-moderation.pdf
  39. http://www.wsj.com/articles/to-fight-trolls-periscope-puts-users-in-flash-juries-1464711622
  40. https://dl.acm.org/citation.cfm?id=2441866
  41. http://www.cpeterson.org/2013/07/22/a-brief-guide-to-user-generated-censorship/
  42. https://www.youtube.com/watch?time_continue=654&v=LiGP44mvNcs