The Atlas of Online Harassment

From Gender and Tech Resources
Revision as of 19:55, 31 July 2016

Introduction

Tales of Online Misogyny

In 1993, the online world LambdaMOO was thrown into disarray after one of its participants was accused of committing rape there. The incident was made famous through Julian Dibbell's article “A Rape in Cyberspace”[1]. Dibbell walked us through the harrowing details of the incident, in which a user with the handle Mr. Bungle entered the online space and used a mechanism to take control over the movements of other users. Mr. Bungle then proceeded to force the users he controlled into committing sexual acts against their will. Although no one was physically assaulted, the incident left a mark on the online world and its participants, and it brought forward the complex question of whether rape in cyberspace constituted rape in the physical world. This may be one of the first cases of online harassment reported in the media, at a time when about 4% of the world's population was online and far fewer took part in LambdaMOO.

As more cases come to light, the debate is growing and we are recognizing different actors, circumstances and tactics used to harass and intimidate women. In this section we highlight 35 stories of women who have been harassed or confronted with online misogyny or hate speech. These cases detail the women harassed, their harassers, the tactics used and the outcomes, in an attempt to gain a better understanding of the diversity of threats against women online. Starting from 2007, we describe and outline the stories based on relevant time periods, tactics, outcomes and different geographic contexts. Mapping these stories gives us a clearer picture of the field: of the 35 stories, 13 occurred in the Global South. 2007 marks a pivotal point, with stories about harassment starting to filter into mainstream media and reaching prominence with the case of writer and blogger Kathy Sierra.

The selection of the case studies does contain bias, as it is based on our analysis of news reports on the stories. This also means that the women who were harassed were able to report on their own stories, and many tend to be journalists, activists, or public figures. We are using these case studies as a starting point to explore the field, and we would like to expand the collection to include further case studies from the Global South and from women activists.

File:Table_Finalized_21.07.16.ods

During our research a number of observations stood out to us with regard to the outcomes of many of the stories. We list them below:

Silencing of women online: One of the reactions we've heard from women who suffered from online harassment and stalking is that they have gone offline, for at least some period of time if not indefinitely. Kathy Sierra canceled her speaking engagements at the time of her harassment, Caroline Criado-Perez deactivated her Twitter account and so did Leslie Jones, and the list of names continues. However, many of these stories have also inspired women to reclaim the online space and establish initiatives to counter harassment online and help build networks that support other women.

Counter speech and reclaiming the online space: Regardless of the harassment and intimidation the women face, many of those featured here have taken up the need to counter online harassment through counter speech or by building helpful resources. Ten of the thirty-four cases we identified led to the launch of a positive initiative (a campaign, a website, etc.) through which victims reclaimed their online territories.

The hashtags #NaoSaoEllas and #MiPrimerAsedio (trending topics on Twitter) were launched in Brazil after the harassment that feminist blogger Lola Aronovich and 13-year-old Valentina Schulz underwent. Danish journalist and activist Emma Holten started the project Consent after being subjected to revenge porn by her former partner. Consent aims to reclaim the image of her body and to raise awareness around the importance of consent.

Legal Practices: Although some of the women in the case studies we list decided to take legal action, the legal framework around online harassment remains unclear. In the following section, The Atlas of Speech, readers will be able to read through some of the legal and judicial definitions surrounding both online harassment and hate speech. It is immediately clear that there is no universal legal definition of cyber harassment or cyberstalking, and that the legal landscapes around these issues are diverse. Moreover, the judicial system does not seem to offer a serious and efficient recourse for harassed women. Judging from the case studies, the traditional way of handling abuse and harassment offline is not applicable online. So far, some cases were heard only after the women subjected to these abuses spoke out publicly or organized their own form of justice, such as Holly Jacobs with the 'End Revenge Porn' project, which gathers resources and legal information for victims of revenge porn.

Although there have been attempts to introduce new laws to combat online harassment and stalking, particularly in Western countries, issues remain around how authorities and platforms respond to cases of online harassment, and legally there is still much to be done.


Platform Policies: Over the past four years online platforms have actively taken action to combat hate speech and online harassment. The issue moved to the forefront with the prominent case of Caroline Criado-Perez. As Criado-Perez was bombarded with harassment on Twitter, the platform decided to take action and enabled features allowing users to block and filter accounts. Twitter is not the only platform to opt for blocking or filtering content: Facebook has also taken recent steps to combat revenge porn and impersonation[2]; on the other hand, its real-name policy could have caused serious harm to LGBT communities online[3].

The question worth asking here is whether blocking and filtering content on these platforms is an effective solution for combating harassment. Studies of the impact and effects of content removal and filtering are needed in this field, along with studies of the effects of counter speech.

The Atlas of Speech

Much has been written academically and in the media about hate speech, and particularly online harassment, in recent years. This vast body of literature, together with the legal shifts that have happened over the years, has produced a number of definitions that try to frame the field. It is therefore important to dedicate a section to highlighting these definitions, in an attempt to explore the legal, online and universal definitions that exist in this field, where they exist at all.

This section divides the definitions into two lists, one that focuses on Hate Speech and the other on Online Harassment. This is in no way an exhaustive list but rather reflects the many terms linked to forms of online speech.

Hate Speech[4]

Hate speech has direct ties to free speech and has a long history from a legal perspective. The issue can be traced back to the Universal Declaration of Human Rights in 1948: while not directly addressed there, hate speech in the form of incitement to violence and discrimination made its way into international law. The following years saw a number of international conventions address hate speech alongside freedom of expression.

We have selected a few definitions from a legal perspective to be presented here:

International Covenant on Civil and Political Rights (ICCPR) – Article 20:

“1. Any propaganda for war shall be prohibited by law.

2. Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.”

International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) – Article 4 (a):

“States Parties condemn all propaganda and all organizations which are based on ideas or theories of superiority of one race or group of persons of one colour or ethnic origin, or which attempt to justify or promote racial hatred and discrimination in any form, and undertake to adopt immediate and positive measures designed to eradicate all incitement to, or acts of, such discrimination and, to this end, with due regard to the principles embodied in the Universal Declaration of Human Rights and the rights expressly set forth in article 5 of this Convention, inter alia:

(a) Shall declare an offense punishable by law all dissemination of ideas based on racial superiority or hatred, incitement to racial discrimination, as well as all acts of violence or incitement to such acts against any race or group of persons of another colour or ethnic origin, and also the provision of any assistance to racist activities, including the financing thereof;”

European Convention on Human Rights – Article 10 (2):

“The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary.”

American Convention on Human Rights – Article 13 (5):

“Any propaganda for war and any advocacy of national, racial, or religious hatred that constitute incitements to lawless violence or to any other similar action against any person or group of persons on any grounds including those of race, color, religion, language, or national origin shall be considered as offenses punishable by law.”

Indian Penal Code: 153A.

Promoting enmity between different groups on grounds of religion, race, place of birth, residence, language, etc., and doing acts prejudicial to maintenance of harmony.—

(1) Whoever—

(a) by words, either spoken or written, or by signs or by visible representations or otherwise, promotes or attempts to promote, on grounds of religion, race, place of birth, residence, language, caste or community or any other ground whatsoever, disharmony or feelings of enmity, hatred or ill-will between different religious, racial, language or regional groups or castes or communities, or

(b) commits any act which is prejudicial to the maintenance of harmony between different religious, racial, language or regional groups or castes or communities, and which disturbs or is likely to disturb the public tranquillity,

What Do Platforms Say about Hate Speech?

Although international law and conventions explicitly address hate speech, there is no mention of misogyny. Online platforms such as Google, Facebook and Twitter, on the other hand, include both gender and sexual orientation within their definitions of hate speech. This presents an alternative and gendered definition of hate speech, particularly online, where women face harassment on a daily basis, and it is linked to the platforms' active work on policies to tackle hate speech and harassment online. Recently the platforms have gone as far as signing on to the EU's 'code of conduct' on hate speech[5].

Google:

Hate Speech: Our products are platforms for free expression. But we don't support content that promotes or condones violence against individuals or groups based on race or ethnic origin, religion, disability, gender, age, nationality, veteran status, or sexual orientation/gender identity, or whose primary purpose is inciting hatred on the basis of these core characteristics. This can be a delicate balancing act, but if the primary purpose is to attack a protected group, the content crosses the line.

YouTube (owned by Google):

“We encourage free speech and try to defend your right to express unpopular points of view, but we don't permit hate speech.

Hate speech refers to content that promotes violence or hatred against individuals or groups based on certain attributes, such as:

  • race or ethnic origin
  • religion
  • disability
  • gender
  • age
  • veteran status
  • sexual orientation/gender identity

There is a fine line between what is and what is not considered to be hate speech. For instance, it is generally okay to criticize a nation-state, but not okay to post malicious hateful comments about a group of people solely based on their race.”

Facebook: Community Standards

Facebook removes hate speech, which includes content that directly attacks people based on their:

  • race,
  • ethnicity,
  • national origin,
  • religious affiliation,
  • sexual orientation,
  • sex, gender or gender identity, or
  • serious disabilities or diseases.

Facebook also indicates that beyond removing content containing hate speech they also provide 'tools' for people to avoid offensive content and also promote counter-speech, see the following:

“While we work hard to remove hate speech, we also give you tools to avoid distasteful or offensive content. Learn more about the tools we offer to control what you see. You can also use Facebook to speak up and educate the community around you. Counter-speech in the form of accurate information and alternative viewpoints can help create a safer and more respectful environment.”

Twitter- According to The Twitter Rules:

Hateful conduct: “You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.”

Reddit:

Do not post violent content: “Do not post content that incites harm against people or groups of people. Additionally, if you're going to post something violent in nature, think about including a NSFW tag. Even mild violence can be difficult for someone to explain to their boss if they open it unexpectedly.”

Blasphemy: The Oxford dictionary defines blasphemy as “the act or offense of speaking sacrilegiously about God or sacred things; profane talk.” Although the definition may seem straightforward, there have been notable cases where the accusation of blasphemy was used to censor certain types of speech, particularly in countries with strict blasphemy laws that enforce detention or the death penalty against people who commit blasphemy. Pakistan is an example of a state using blasphemy laws to censor speech: Professor Junaid Hafeez, an English lecturer at Bahauddin Zakariya University known for his liberal views, is currently in prison accused of blasphemy. While it is not clear whether Hafeez posted anything blasphemous, and some believe the evidence was doctored, he has remained in jail awaiting trial since 2013. His case and many others like it were reported[6] by the Digital Rights Foundation in their report “Blasphemy in the Digital Age”.

Counter Speech: In its simplest form, counter speech is speech that responds to hate speech or extremism, offering a different narrative in an attempt to counter it. Counter speech comes in different forms: in some cases it is a simple response to the speech online, i.e. a tweet or a Facebook comment; in other cases it involves creating a Facebook group or blog post. Parody has also become a common tactic to counter hate speech.

Dangerous Speech: As defined by Professor Susan Benesch, dangerous speech is a subset of hate speech which has a reasonable chance of catalyzing or amplifying violence by one group against another. Prof. Benesch developed a framework that identifies dangerous speech based on five variables:

“The most dangerous speech act, or ideal type of dangerous speech, would be one for which all five variables are maximized:

  1. a powerful speaker with a high degree of influence over the audience.
  2. the audience has grievances and fear that the speaker can cultivate.
  3. a speech act that is clearly understood as a call to violence,
  4. a social or historical context that is propitious for violence, for any of a variety of reasons, including longstanding competition between groups for resources, lack of efforts to solve grievances, or previous episodes of violence.
  5. a means of dissemination that is influential in itself, for example because it is the sole or primary source of news for the relevant audience.”[7]

Discrimination: The unjust treatment of or prejudice against people based on their sex, race, or age. The list can expand to include sexual orientation and religion.

Defamation[8]: According to the Electronic Frontier Foundation, defamation is: “a false and unprivileged statement of fact that is harmful to someone's reputation, and published "with fault," meaning as a result of negligence or malice. State laws often define defamation in specific ways. Libel is a written defamation; slander is a spoken defamation.”

Freedom of Speech: Perhaps the most famous articles on freedom of speech and expression are Article 19 of the Universal Declaration of Human Rights and the First Amendment to the US Constitution.

Universal Declaration of Human Rights (Article 19):

“Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”

US Constitution (First Amendment):

“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.”

Hate Crime: Not to be confused with hate speech; legally, a hate crime is generally a violent action against another person or community because of their race, ethnicity, national origin, religion, sexual orientation, or disability.

It is notable that international law and conventions do not directly address gender or sexual orientation in the body of text. However, states along with online platforms have taken steps to prohibit discrimination and hate speech based on these issues.

Although hate speech is addressed extensively in many international and state laws, online harassment is not addressed in international law, and states have only recently started to draft laws on the matter; we are starting to see laws addressing cyberstalking, cyberbullying and, in some places, revenge porn. The following list attempts to set out the terminology used to describe harassment online.

Online Harassment

There is no universally agreed-upon definition of online harassment; the concept has proven difficult to define. A number of scholars have explored it and attempted definitions, especially in the absence of a legal one. We present a few of those definitions below:

From a scholarly perspective, in some of the earlier writings on online harassment, Sarah Jameson in her paper Cyberharassment: Striking a Balance Between Free Speech and Privacy[9] defined cyber harassment as follows:

“Although cyber harassment has no universal definition, it typically occurs when an individual or group with no legitimate purpose uses a form of electronic communication as a means to cause great emotional distress to a person. In addition to e-mail and blogs, conduits of “new media” available to the cyber harasser include chat rooms, instant messaging services, electronic bulletin boards, and social networking sites. The Internet provides cyber harassers with an easy channel to “incite others against their victims.” Not only can the cyber harasser harass his victim, but he can also impersonate the victim and post defamatory messages on bulletin boards, cyberbully his victim, or send vulgar e-mails to the victim’s employer. The victim suffers as a result of actions committed by the cyber harasser.”

On the other hand, Danielle Citron views cyber harassment as follows: “Although definitions of these terms vary, cyber harassment is often understood to involve the intentional infliction of substantial emotional distress accomplished by online speech that is persistent enough to amount to a “course of conduct” rather than an isolated incident.”[10]

To Citron, harassment includes: “real threats, privacy invasions, cruelty amounting to intentional infliction about purely private matters, nude images and reputational damage. People lose their jobs, they can’t get new jobs, because Google searches turn up destructive information. They often have to move because they’re afraid of confrontation by strangers. They lose investments and professional opportunities. These are coercive acts.”[11]

The author of the book The Internet of Garbage, Sarah Jeong, goes as far as to develop a taxonomy of online harassment, dividing it along two spectrums, one defined by behavior and the other by content. She writes:

“Harassment exists on two spectrums at once— one that is defined by behavior and one that is defined by content. The former is the best way to understand harassment, but the latter has received the most attention in both popular discourse and academic treatments.

When looking at harassment as content, we ultimately fixate on “death threats” as one end of the spectrum, and “annoying messages” at the other end. Thus the debate ends up revolving around civil rights versus free speech— where is the line between mean comments and imminent danger? Between jokes and threats?

Behavior is a better, more useful lens through which to look at harassment. On one end of the behavioral spectrum of online harassment, we see the fact that a drive-by vitriolic message has been thrown out in the night; on the other end, we see the leaking of Social Security numbers, the publication of private photographs, the sending of SWAT teams to physical addresses, and physical assault.”[12]   

As for the Pew Research Center, in their report entitled Online Harassment, their taxonomy of online harassment varies slightly:

“Online harassment falls into two distinct yet frequently overlapping categories. Name-calling and embarrassment constitute the first class and occur widely across a range of online platforms. The second category includes less frequent—but also more intense and emotionally damaging—experiences like sexual harassment, physical threats, sustained harassment, and stalking.”[13]

Reclaiming Online Spaces

Over the years many projects, campaigns and tools have emerged to help women combat online harassment and misogyny. We wanted to dedicate a section to the different initiatives from across the globe and highlight some projects that were started by women who had faced harassment. This section offers a starting point for us to further explore the field: what has been done, what is missing and what impact these initiatives have on other women.

At the moment the list features projects, campaigns, tools, research projects and guides.

INSERT TABLE HERE    

And Now What?

Annex: Recommended Reads

A sample of readings that we recommend:

  • Amanda Hess: “Why Does Hate Thrive Online?” (Slate Magazine)
  • Association for Progressive Communications (APC): “Three key issues for a feminist internet: Access, agency and movements”
  • Kathy Sierra: “Why Trolls Will Always Win” (Wired Magazine)
  • Laura Thompson: “Is Online Misogyny a Threat to Free Speech?” (Columbia Journalism Review)
  • Marlisse Silver Sweeney: “What the Law Can (and Can't) Do About Online Harassment” (The Atlantic)
  • Mattathias Schwartz: “The Trolls Among Us” (New York Times)
  • Nathan J. Matias et al.: “Online Harassment: A Resource Guide” (Wiki)
  • Soraya Chemaly: “Ten must read books about online harassment and free speech” (Women's Media Center)
  • Women, Action and Media (WAM): “Examples of Gender-Based Hate Speech on Facebook”
  • Women, Action and Media (WAM): “Twitter’s Abuse Problem: Now With Actual Solutions And Science”


Credits

The Atlas of Online Harassment was developed by the Tactical Technology Collective in collaboration with:

Reviewers

Maya Indira Ganesh

Special Thanks to

Alex Hache, Vanessa Rizk, Inti Maria Tidball.

Funding

This resource was developed thanks to funding support from the Swedish International Development Cooperation Agency (Sida). Note that Sida cannot be regarded as having contributed to or vouching for the content.
  1. http://www.juliandibbell.com/texts/bungle_vv.html
  2. http://www.bustle.com/articles/149636-facebooks-new-tools-to-reduce-online-harassment-target-impersonating-profiles-revenge-porn-and-theyre-much-needed
  3. https://www.eff.org/deeplinks/2014/09/facebooks-real-name-policy-can-cause-real-world-harm-lgbtq-community
  4. We see extremist speech as a separate issue from hate speech, especially from a legal perspective. As a result, we will not address extremist speech in this resource, however we acknowledge that in certain circumstances hate speech and extremist speech are seen to be closely interlinked.
  5. https://www.theguardian.com/technology/2016/may/31/facebook-youtube-twitter-microsoft-eu-hate-speech-code
  6. http://digitalrightsfoundation.pk/wp-content/uploads/2015/12/Blasphemy-In-The-Digital-Age.pdf
  7. http://dangerousspeech.org/guidelines
  8. https://www.eff.org/issues/bloggers/legal/liability/defamation
  9. http://commlaw.cua.edu/articles/v17/17.1/Jameson.pdf
  10. Citron, Danielle, “Hate Crimes in Cyberspace”, Harvard University Press, Cambridge: US, 2014
  11. https://www.salon.com/2014/09/02/hate_crimes_in_cyberspace_author_everyone_is_at_risk_from_the_most_powerful_celebrity_to_the_ordinary_person/
  12. Jeong, Sarah, “The Internet of Garbage”, Forbes, US, 2015
  13. http://www.pewinternet.org/2014/10/22/online-harassment/