Twitter to Review Toothless Policies on Cyberharassment
The Internet's lowlifes scuttled out of their dark lairs following the death of Robin Williams this week, forcing daughter Zelda to abandon her social media accounts. Although the vast majority of Twitter and Instagram users offered sympathy and support, it was not enough to outweigh the perverse attacks of the Web's merciless haters, who've been allowed the freedom to strike at will.
Aug 15, 2014 6:33 AM PT
A deluge of hateful tweets after the suicide of actor Robin Williams earlier this week forced his daughter Zelda to publicly quit Twitter and Instagram.
"We will not tolerate abuse of this nature on Twitter," Del Harvey, the company's vice president of trust and safety, said in a prepared message sent to TechNewsWorld in response to our request for comment.
Twitter has suspended "a number of accounts related to this issue for violating our rules" and is evaluating how it can further improve its policies to better handle tragic situations like this one, Harvey continued. "This includes expanding our policies regarding self-harm and private information, and improving support for family members of deceased users."
Grow Some Fangs, Please!
Twitter's response to reports of abuse can charitably be described as weak.
Take, for example, the case of Imani Gandy, senior legal analyst at RH Reality Check, who has been targeted by a persistent harasser.
She has blocked "at least a thousand" of his accounts and has "done all the things you're supposed to do when dealing with [assh*les] on the Internet."
Gandy began tweeting screenshots of the harassing messages to Twitter CEO Dick Costolo and to Twitter support. Costolo's eventual response? "We are on this in a couple different ways. I'd prefer to leave it at that here. Thanks very much."
Click Here and Watch Nothing Happen
Twitter has a page on which users can report abuse, and last year introduced a "Report Abuse" button that lets users flag abusive tweets directly. The button was introduced after British feminist Caroline Criado-Perez began receiving rape and death threats.
"The only thing the ... button does is to remove one step from Twitter's standard process for reporting abuse," Gandy wrote. The "report abusive user" form is about as effective as your average YouTube commenter at a spelling bee."
Are You Listening, Costolo?
Twitter users are angry, as Costolo found out when a hashtag meant to promote his interview with CNBC last month drew a barrage of comments about online harassment, according to Thinkprogress.org.
About one-third of the tweets were related to user safety, privacy and abuse, according to CNBC.
"Twitter says ask friends & family for support when I'm being harassed," tweeted gurkeyrith. "Why doesn't Twitter do anything?"
Shivam Manghnani, who reported "a real threat of physical harm," was told the links provided "do not lead to user tweets, and the user may have already deleted the reported tweets." Should the problem persist, Manghnani was told to respond with additional links to the new tweets.
Just Another Day of Being a Girl
Online harassment of women appears almost to be accepted as the norm.
The staff of Jezebel have published an open letter to their parent company, Gawker Media, complaining about violent pornographic GIFs being posted in the discussion sections of stories on the site.
"None of us are paid enough to deal with this on a daily basis," the staff wrote, adding that higher-ups at Gawker are "well aware of the problem" and, in essence, don't plan to address it.
"Eventually, we'll fully wrap our heads around the fact that behavior we wouldn't allow in public in person shouldn't be allowed on social networks either," sighed Rob Enderle, principal analyst at the Enderle Group. "I'd hoped it would have happened sooner."
Where Does the Buck Stop?
Twitter contends it doesn't want to censor free speech, although that argument is undermined by its record of handing over user information to governments on request and complying with demands from the Russian, Turkish and Pakistani governments to block or remove content.
"I think we draw the line when the person receiving messages feels threatened," Enderle told TechNewsWorld. "The provider of the service has the responsibility to ensure its customers are safe."