Must read: Internet violence and how social media turned against women…

In December 2012, an Icelandic woman named Thorlaug Agustsdottir discovered a Facebook group called “Men are better than women.” One image she found there, Thorlaug wrote to us this summer in an email, “was of a young woman naked chained to pipes or an oven in what looked like a concrete basement, all bruised and bloody. She looked with a horrible broken look at whoever was taking the pic of her curled up naked.” Thorlaug wrote an outraged post about it on her own Facebook page.

Before long, a user at “Men are better than women” posted an image of Thorlaug’s face, altered to appear bloody and bruised. Under the image, someone commented, “Women are like grass, they need to be beaten/cut regularly.” Another wrote: “You just need to be raped.” Thorlaug reported the image and comments to Facebook and requested that the site remove them.

“We reviewed the photo you reported,” came Facebook’s auto reply, “but found it does not violate Facebook’s Community Standards on hate speech, which includes posts or photos that attack a person based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability, or medical condition.”

Instead, the Facebook screeners labeled the content “Controversial Humor.” Thorlaug saw nothing funny about it. She worried the threats were real.

Some 50 other users sent their own requests on her behalf. All received the same reply. Eventually, on New Year’s Eve, Thorlaug called the local press, and the story spread from there. Only then was the image removed.

In January 2013, Wired published a critical account of Facebook’s response to these complaints. A company spokesman contacted the publication immediately to explain that Facebook screeners had mishandled the case, conceding that Thorlaug’s photo “should have been taken down when it was reported to us.” According to the spokesman, the company tries to address complaints about images on a case-by-case basis within 72 hours, but with millions of reports to review every day, “it’s not easy to keep up with requests.” The spokesman, anonymous to Wired readers, added, “We apologize for the mistake.”

If, as the communications philosopher Marshall McLuhan famously said, television brought the brutality of war into people’s living rooms, the Internet today is bringing violence against women out of it. Once largely hidden from view, this brutality is now being exposed in unprecedented ways. In the words of Anne Collier, co-director of ConnectSafely.org and co-chair of the Obama administration’s Online Safety and Technology Working Group, “We are in the middle of a global free speech experiment.” On the one hand, these online images and words are bringing awareness to a longstanding problem. On the other hand, the amplification of these ideas over social media networks is validating and spreading pathology.

We, the authors, have experienced both sides of the experiment firsthand. In 2012, Soraya, who had been reporting on gender and women’s rights, noticed that more and more of her readers were contacting her to ask for media attention and help with online threats. Many sent graphic images, and some included detailed police reports that had gone nowhere. A few sent videos of rapes in progress. When Soraya wrote about these topics, she received threats online. Catherine, meanwhile, received warnings to back off while reporting on the cover-up of a sexual assault.

All of this raised a series of troubling questions: Who’s proliferating this violent content? Who’s controlling its dissemination? Should someone be? In theory, social media companies are neutral platforms where users generate content and report content as equals. But, as in the physical world, some users are more equal than others. In other words, social media is more symptom than disease: A 2013 report from the World Health Organization called violence against women “a global health problem of epidemic proportions,” from domestic abuse, stalking, and street harassment to sex trafficking, rape, and murder. This epidemic is thriving in the petri dish of social media.

While some of the aggression against women online occurs between people who know one another, and is unquestionably illegal, most of it happens between strangers. Earlier this year, Pacific Standard published a long story by Amanda Hess about an online stalker who set up a Twitter account specifically to send her death threats.

Across websites and social media platforms, everyday sexist comments exist along a spectrum that also includes illicit sexual surveillance, “creepshots,” extortion, doxxing, stalking, malicious impersonation, threats, and rape videos and photographs. The explosive use of the Internet to conduct human trafficking also has a place on this spectrum, given that three-quarters of trafficked people are girls and women.

A report, “Misogyny on Twitter,” released by the research and policy organization Demos this June, found more than 6 million instances of the word “slut” or “whore” used in English on Twitter between December 26, 2013, and February 9, 2014. (The words “bitch” and “cunt” were not measured.) An estimated 20 percent of the tweets in the study appeared, to researchers, to be threatening. An example: “@XXX @XXX You stupid ugly fucking slut I’ll go to your flat and cut your fucking head off you inbred whore.”

A second Demos study showed that while male celebrities, female journalists, and male politicians face the highest likelihood of online hostility, women are significantly more likely to be targeted specifically because of their gender, and men are overwhelmingly those doing the harassing. For women of color, or members of the LGBT community, the harassment is amplified. “In my five years on Twitter, I’ve been called ‘nigger’ so many times that it barely registers as an insult anymore,” explains attorney and legal analyst Imani Gandy. “Let’s just say that my ‘nigger cunt’ cup runneth over.”

At this summer’s VidCon, an annual online-video convention held in Southern California, women vloggers shared an astonishing number of examples. The violent threats posted beneath YouTube videos, they observed, are pushing women off of this and other platforms in disproportionate numbers. When Anita Sarkeesian launched a Kickstarter to help fund a feminist video series called Tropes vs. Women, she became the focus of a massive and violently misogynistic cybermob. Among the many forms of harassment she endured was a game in which thousands of players “won” by virtually bludgeoning her face. In late August, after receiving a series of serious violent online threats, she contacted the police and had to leave her home.

Danielle Keats Citron, law professor at the University of Maryland and author of the recently released book Hate Crimes in Cyberspace, explained, “Time and time again, these women have no idea often who it is attacking them. A cybermob jumps on board, and one can imagine that the only thing the attackers know about the victim is that she’s female.” Looking at 1,606 cases of “revenge porn,” where explicit photographs are distributed without consent, Citron found that 90 percent of targets were women. Another study she cited found that 70 percent of female gamers chose to play as male characters rather than contend with sexual harassment.

This type of harassment also fills the comment sections of popular websites. In August, employees of the largely female-staffed website Jezebel published an open letter to the site’s parent company, Gawker, detailing the professional, physical, and emotional costs of having to look at the pornographic GIFs maliciously populating the site’s comment sections every day. “It’s like playing whack-a-mole with a sociopathic Hydra,” they wrote, insisting that Gawker develop tools for blocking and tracking IP addresses. They added, “It’s impacting our ability to do our jobs.”

For some, the costs are higher. In 2010, 12-year-old Amanda Todd bared her chest while chatting online with a person who’d assured her that he was a boy, but was in fact a grown man with a history of pedophilia. For the next two years, Amanda and her mother, Carol Todd, were unable to stop anonymous users from posting that image on sexually explicit pages. A Facebook page, labeled “Controversial Humor,” used Amanda’s name and image—and the names and images of other girls—without consent. In October 2012, Amanda committed suicide, posting a YouTube video that explained her harassment and her decision. In April 2014, Dutch officials announced that they had arrested a 35-year-old man suspected to have used the Internet to extort dozens of girls, including Amanda, in Canada, the United Kingdom, and the United States. The suspect now faces charges of child pornography, extortion, criminal harassment, and Internet luring.

Almost immediately after Amanda shared her original image, altered versions appeared on pages, and videos proliferated. One of the pages was filled with pictures of naked pre-pubescent girls, encouraging them to drink bleach and die. While she appreciates the many online tributes honoring her daughter, Carol Todd is haunted by “suicide humor” and pornographic content now forever linked to her daughter’s image. There are web pages dedicated to what is now called “Todding.” One of them features a photograph of a young woman hanging.

Meanwhile, extortion of other victims continues. In an increasing number of countries, rapists are now filming their rapes on cell phones so they can blackmail victims out of reporting the crimes. In August, after a 16-year-old Indian girl was gang-raped, she explained, “I was afraid. While I was being raped, another man pointed a gun and recorded me with his cellphone camera. He said he will upload the film on the Net if I tell my family or the police.”

In Pakistan, the group Bytes for All—an organization that previously sued the government for censoring YouTube videos—released a study showing that social media and mobile tech are causing real harm to women in the country. Gul Bukhari, the report’s author, told Reuters, “These technologies are helping to increase violence against women, not just mirroring it.”

In June 2014, a 16-year-old girl named Jada was drugged and raped at a party in Texas. Partygoers posted a photo of her lying unconscious, one leg bent back. Soon, other Internet users had turned it into a meme, mocking her pose and using the hashtag #jadapose. Kasari Govender, executive director of the Vancouver-based West Coast Legal Education and Action Fund (LEAF), calls this kind of behavior “cybermisogyny.” “Cyberbullying,” she says, “has become this term that’s often thrown around with little understanding. We think it’s important to name the forces that are motivating this in order to figure out how to address it.”

In an unusually bold act, Jada responded by speaking publicly about her rape and the online abuse that followed. Supporters soon took to the Internet in her defense. “There’s no point in hiding,” she told a television reporter. “Everybody has already seen my face and my body, but that’s not what I am and who I am. I’m just angry.”

After Facebook removed Thorlaug’s altered image and the rape threats, she felt relieved, but she was angry too. “These errors are going to manifest again,” she told Wired, “if there isn’t clear enough policy.”

Yet, at the time of Thorlaug’s report, Facebook did have a clear policy. Its detailed Community Standards for speech, often considered the industry’s gold standard, were bolstered by reporting tools that allowed users to report offensive content, and Thorlaug had used these tools as instructed. But serious errors were still manifesting regularly.

Not long after Thorlaug’s struggle to remove her image, a Facebook user posted a video documenting the gang rape of a woman by the side of a road in Malaysia. The six minutes of graphic footage were live for more than three weeks, during which Facebook moderators declined repeated requests for removal. It had been viewed hundreds of times before a reader of Soraya’s forwarded the video to her with a request for help. We notified a contact on Facebook’s Safety Advisory Board, and only then was the video taken offline.

Around the same time, another Icelandic woman, Hildur Lilliendahl Viggósdóttir, decided to draw attention to similar problems by creating a page called “Men who hate women,” where she reposted examples of misogyny she found elsewhere on Facebook. Her page was suspended four times—not because of its offensive content, but because she was reposting images without written permission. Meanwhile, the original postings—graphically depicting rape and glorifying the physical abuse of women—remained on Facebook. As activists had been noting for years, pages like these were allowed by Facebook to remain under the category of “humor.” Other “humor” pages live at the time had names like “I kill bitches like you,” “Domestic Violence: Don’t Make Me Tell You Twice,” “I Love the Rape Van,” and “Raping Babies Because You’re Fucking Fearless.”

Jillian C. York, director for international freedom of expression at the Electronic Frontier Foundation, is one of many civil libertarians who believe Facebook and other social media platforms should not screen this, or any, content at all. “It of course must be noted that the company—like any company—is well within its rights to regulate speech as it sees fit,” she wrote in a May 2013 piece in Slate in response to growing activism. “The question is not can Facebook censor speech, but rather, should it?” She argues that censoring any content “sets a dangerous precedent for special interest groups looking to bring their pet issue to the attention of Facebook’s censors.”

When the problem involves half the world’s population, it’s difficult to classify it as a “pet issue.” What’s more, there are free speech issues on both sides of the regulated content equation. “We have the expressive interests of the harassers to threaten, to post photos, to spread defamation, rape threats, lies on the one hand,” explains Citron. “And on the other hand you have the free speech interests, among others, of the victims, who are silenced and are driven offline.”

These loss-of-speech issues tend to draw less attention and sympathy than free speech rights. However, as Citron points out, sexual hostility has already been identified as a source of real harm: Title VII demands that employers regulate such hostility in the workplace. These policies exist, Citron says, because sexual hostility “is understood as conduct interfering with life opportunities.”

For online harassers, this is often an overt goal: to silence female community members, whether through sexual slurs or outright threats. It’s little surprise that the Internet has become a powerful tool in intimate partner violence: A 2012 survey conducted by the National Network to End Domestic Violence (NNEDV) found that 89 percent of local domestic violence programs reported victims who were experiencing technology-enabled abuse, often across multiple platforms.

For their part, social media companies often express commitment to user safety, but downplay their influence on the broader culture. Administrators repeatedly explain that their companies, while very concerned with protecting users, are not in the business of policing free speech. As Twitter co-founder Biz Stone phrased it in a post titled “The Tweets Must Flow,” “We strive not to remove Tweets on the basis of their content.” The company’s guidelines encourage readers to unfollow the offensive party and “express your feelings [to a trusted friend] so you can move on.”

None of this was of much help to Caroline Criado-Perez, a British journalist and feminist who helped get a picture of Jane Austen on the £10 banknote. The day the Bank of England made the announcement, Criado-Perez began receiving more than 50 violent threats per hour on Twitter. “The immediate impact was that I couldn’t eat or sleep,” she told The Guardian in 2013. She asked Twitter to find some way to stop the threats, but at the time the company offered no mechanism for reporting abuse. Since then, the company has released a reporting button, but its usefulness is extremely limited: It requires that every tweet be reported separately, a cumbersome process that gives the user no way of explaining that she is a target of ongoing harassment. (The system currently provides no field for comments.)

And yet companies like Facebook, Twitter, and YouTube do moderate content and make quasi-governmental decisions regarding speech. Some content moderation is related to legal obligations, as in the case of child pornography, but a great deal more is a matter of cultural interpretation. Companies have disclosed that governments rely on them to implement censorship requests—earlier this year, for example, Twitter blocked tweets and accounts deemed “blasphemous” by the Pakistani government. (In response to these government incursions, a coalition of academics, legal scholars, corporations, non-profit organizations, and schools came together in 2008 to form the Global Network Initiative, a non-governmental organization dedicated to privacy and free expression.)

When it comes to copyright and intellectual property interests, companies are highly responsive, as Hildur’s “Men who hate women” experience highlighted. But, says Jan Moolman, who coordinates the Association for Progressive Communications’ women’s rights division, “‘garden variety’ violence against women—clearly human rights violations—frequently get a lukewarm response until it becomes an issue of bad press.”

For that reason, when social media companies fail to respond to complaints and requests, victims of online harassment frequently turn to individuals who can publicize their cases. Trista Hendren, an Oregon-based blogger, became an advocate for other women after readers from Iceland, Egypt, Australia, India, Lebanon, and the UK began asking her to write about their experiences. “I was overwhelmed,” she told us. In December 2012, Hendren and several collaborators created a Facebook page called RapeBook where users could flag and report offensive content that the company had refused to take down.

By April 2013, people were using RapeBook to post pictures of women and pre-pubescent girls being raped or beaten. Some days, Hendren received more than 500 anonymous, explicitly violent comments—“I will skull-fuck your children,” for instance. Facebook users tracked down and posted her address, her children’s names, and her phone number and started to call her.

By that time, Hendren had abandoned any hope that using Facebook’s reporting mechanisms could help her. She was able, however, to work directly with a Facebook moderator to address the threats and criminal content. She found that the company sincerely wanted to help. Its representatives discussed the posts with her on a case-by-case basis, but more violent and threatening posts kept coming, and much of the content she considered graphic and abusive was allowed to remain.

Eventually, Hendren told us, she and Facebook became locked in disagreement over what constituted “safety” and “hate” on the site. Facebook’s people, she said, told her they didn’t consider the threats to her and her family credible or legitimate. Hendren, however, was concerned enough to contact the police and the FBI. The FBI started an investigation; meanwhile Hendren, physically and emotionally spent, suspended her Facebook account. “I was the sickest I have ever been,” she said. “It was really disgusting work. We just began to think, ‘Why are we devoting all our efforts on a volunteer basis to do work that Facebook—with billions of dollars—should be taking care of?’”

Hendren contacted Soraya, who continued to press Facebook directly. At the same time, Soraya and Laura Bates, founder of the Everyday Sexism Project, also began comparing notes on what readers were sending them. Bates was struck by surprising ad placements. At the time, a photo captioned “The bitch didn’t know when to shut up” appeared alongside ads for Dove and iTunes. “Domestic Violence: Don’t Make Me Tell You Twice”—a page filled with photos of women beaten, bruised, and bleeding—was populated by ads for Facebook COO Sheryl Sandberg’s new bestselling book, Lean In: Women, Work, and the Will to Lead.

In early May, Bates decided to tweet at one of these companies. “Hi @Finnair here’s your ad on another domestic violence page—will you stop advertising with Facebook?” Finnair responded immediately: “It is totally against our values and policies. Thanks @r2ph! @everydaysexism Could you send us the URL please so that we can take action?”

Soraya, Bates, and Jaclyn Friedman, the executive director of Women, Action & the Media, a media justice advocacy group, joined forces and launched a social media campaign designed to attract advertisers’ attention. The ultimate goal was to press Facebook to recognize explicit violence against women as a violation of its own prohibitions against hate speech, graphic violence, and harassment. Within a day of beginning the campaign, 160 organizations and corporations had co-signed a public letter, and in less than a week, more than 60,000 tweets were shared using the campaign’s #FBrape hashtag. Nissan was the first company to pull its advertising dollars from Facebook altogether. More than 15 others soon followed. The letter emphasized that Facebook’s refusal to take down content that glorified and trivialized graphic rape and domestic violence was actually hampering free expression—it was “marginaliz[ing] girls and women, sidelin[ing] our experiences and concerns, and contribut[ing] to violence against us.”

On May 28, Facebook issued a public response:

In recent days, it has become clear that our systems to identify and remove hate speech have failed to work as effectively as we would like … We have been working over the past several months to improve our systems to respond to reports of violations, but the guidelines used by these systems have failed to capture all the content that violates our standards. We need to do better—and we will.

Source: theatlantic.com
