Social Networking

Twitter’s Timid Anti-Trolling Tweaks

Twitter recently amended its rules in yet another attempt to crack down on the abuses perpetrated by online trolls, but the changes may do little to protect victims. Its latest move was an extension of its ban on threats of violence against others or the promotion of violence against others. The company decided to let its support team lock down abusive accounts for specific periods of time.

Further, it began testing a new, unspecified feature that helps it identify suspected abusive tweets in an effort to limit their reach.

Twitter planned to monitor the effects of its changes in a continuing effort to evaluate and update its approach to combating abuse.

The announcement of the latest changes followed last month’s public admission by Twitter General Counsel Vijaya Gadde that the company had not responded adequately to abusive behavior, and that even when it did, its responses were slow and insufficient.

Twitter’s New Rubber Teeth

Some of the new things Twitter may do to respond more strenuously:

  • Twitter support may ask users to delete certain tweets before unlocking their accounts.
  • Users may be asked to verify their phone numbers.
  • Twitter may terminate users’ accounts without further notice if they violate its rules or terms of service.

Scrambling to Avoid Trouble

Twitter’s efforts to combat abuse over the years through repeatedly updating its rules can — at best — be described as ineffective.

“Their problem right now is that people are leaving Twitter because of the abuse they face, so they feel the need to address that issue,” observed Rob Enderle, principal analyst at the Enderle Group.

Twitter “needs to figure out a solution before attorneys figure out how to sue them successfully if somebody gets harmed as a result of the threats,” he told TechNewsWorld. “There’s a whole lot of ways to come at them, including class action and shared liability.”

More Oomph, Please

Twitter’s latest updates may not make the cut when it comes to protecting potential victims. The penalties might not be strong enough.

Further, it seems the new tool Twitter is testing will prevent people from seeing derogatory tweets about themselves, but it will not shut down such posts. That raises the question of whether users are indeed safe if they can’t see how they’re being vilified on the service.

“A strong stance by Twitter is to truly penalize the offenders — ban them from the use of their service [and] deanonymize them for the victim, so she knows whom to report to law enforcement,” said attorney Carrie Goldberg of C.A. Goldberg, who is also a board member of the Cyber Civil Rights Initiative.

Social media companies “need to be part of the solution here and see themselves as yet another weapon in the hands of abusers — and not just a weapon, but a home for abuse,” she told TechNewsWorld.

Identifying Bad Guys

That unnamed tool Twitter is testing could be a natural language processing application.

Text data is a “multifaceted, very complicated thing to analyze,” particularly when various groups and cultures are involved, and words and symbols such as emoticons can mean multiple things, said James Purchase, VP of product management at Attensity.

“NLP helps connect relationships between entities,” he told TechNewsWorld.
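Twitter has not described how its experimental tool works, but the idea of scoring tweet text against known abusive language can be sketched simply. The snippet below is purely illustrative, not Twitter's method: it uses a hypothetical sample lexicon and a made-up threshold to flag tweets for limited reach.

```python
# Illustrative sketch only -- Twitter has not disclosed its approach.
# A naive bag-of-words abuse scorer with a hypothetical lexicon.

ABUSIVE_TERMS = {"idiot", "loser", "scum"}  # hypothetical sample lexicon


def abuse_score(tweet: str) -> float:
    """Return the fraction of tokens that match the abusive-term lexicon."""
    tokens = [t.strip(".,!?").lower() for t in tweet.split()]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in ABUSIVE_TERMS)
    return hits / len(tokens)


def should_limit_reach(tweet: str, threshold: float = 0.2) -> bool:
    """Flag a tweet for limited reach when its score crosses the threshold."""
    return abuse_score(tweet) >= threshold
```

A production system would of course go far beyond keyword matching — handling context, sarcasm, emoticons, and the cross-cultural ambiguity Purchase describes — which is precisely why full NLP models, rather than word lists, are needed.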

Issues Twitter Faces

“Twitter is a private company and, unlike the government, it can put as many restrictions on speech as it wants,” attorney Goldberg pointed out. “Free speech relates to rights we have vis-à-vis the government.”

Granted, but Twitter likely will face a backlash if it clamps down too hard on what people post, a point General Counsel Gadde addressed.

On the other hand, it’s already facing a backlash over abuse, so it’s damned if it does and damned if it doesn’t.

Twitter “can police their forums — people do this all the time — but doing it on Twitter’s scale brings the problem of making sure you’re not just punishing one person at the expense of another, or that you’re not being racist or targeting particular groups,” Enderle remarked.

“Doing this even-handedly is always going to be a problem for Twitter,” he said. “It’s a nightmare to do it right, but it’s a nightmare not to.”

Richard Adhikari

Richard Adhikari has written about high-tech for leading industry publications since the 1990s and wonders where it's all leading to. Will implanted RFID chips in humans be the Mark of the Beast? Will nanotech solve our coming food crisis? Does Sturgeon's Law still hold true? You can connect with Richard on Google+.
