Google’s Jigsaw Launches Troll-Thwarting API

A new tool is available to curb the persistent harassment inflicted by online trolls. Google’s Jigsaw think tank last week launched Perspective, an early-stage technology that uses machine learning to help neutralize trolls.

Perspective reviews comments and scores them based on their similarity to comments people have labeled as toxic, that is, comments likely to drive someone out of a conversation.

Publishers can decide what to do with the information Perspective provides; their options include the following (a brief usage sketch follows the list):

  • Flagging comments for their own moderators to review;
  • Providing tools to help users understand the potential toxicity of comments as they write them; and
  • Letting readers sort comments based on their likely toxicity.
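
In practice, the scoring is exposed as a simple Web API. The Python sketch below illustrates the first option, flagging comments for review; the endpoint and field names follow Google's publicly documented Comment Analyzer schema, while the API key and the 0.8 threshold are placeholders a publisher would supply and tune, not Perspective recommendations.

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder; issued through the Google API Console
    URL = ("https://commentanalyzer.googleapis.com/"
           "v1alpha1/comments:analyze?key=" + API_KEY)

    def toxicity_score(text):
        """Ask Perspective how similar a comment is to known-toxic ones (0 to 1)."""
        body = {
            "comment": {"text": text},
            "languages": ["en"],
            "requestedAttributes": {"TOXICITY": {}},
        }
        response = requests.post(URL, json=body)
        response.raise_for_status()
        return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

    # Option 1 from the list above: flag high-scoring comments for human review.
    if toxicity_score("You are a total idiot!") > 0.8:  # threshold is the publisher's call
        print("Flagged for moderator review")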

Forty-seven percent of the 3,000 Americans aged 15 or older polled in a survey Data & Society conducted last year reported experiencing online harassment or abuse. More than 70 percent said they had witnessed such harassment or abuse.

Perspective was trained on hundreds of thousands of comments labeled by human reviewers, who were asked to rate online comments on a scale from “very toxic” to “very healthy.”
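
Jigsaw has not published Perspective's model architecture, so purely as a loose illustration, here is what learning from human-labeled comments looks like with an ordinary bag-of-words classifier in scikit-learn; the two-comment training set is a stand-in for the hundreds of thousands of labeled examples described above.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Stand-in for reviewer-labeled data: 1 = rated toxic, 0 = rated healthy
    comments = ["you are a complete moron", "great point, thanks for sharing"]
    labels = [1, 0]

    # Vectorize the text and fit a linear classifier on the human labels
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(comments, labels)

    # predict_proba yields a 0-to-1 score loosely analogous to a toxicity rating
    print(model.predict_proba(["what a moron"])[0][1])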

Like all machine learning applications, Perspective improves as it’s used.

Partners and Future Plans

A number of partners have signed on to work with Jigsaw in this endeavor:

  • The Wikimedia Foundation is researching ways to detect personal attacks against volunteer editors on Wikipedia;
  • The New York Times is building an open source moderation tool to expand community discussion;
  • The Economist is reworking its comments platform; and
  • The Guardian is researching how best to moderate comment forums and host online discussions between readers and journalists.

Jigsaw has been testing a version of this technology with The New York Times, which has a team sifting through and moderating 11,000 comments daily before they are posted.

Jigsaw is working to train models that let moderators sort through comments more quickly.
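
Jigsaw has not described those models publicly, but the sorting workflow itself is straightforward. The sketch below ranks a moderation queue by score so the likeliest problems surface first; the score() function here is a toy keyword heuristic standing in for a real Perspective call (see the earlier sketch).

    # Toy stand-in for a Perspective toxicity call
    def score(text):
        flagged = {"loser", "idiot", "moron"}
        words = [w.strip(".,!?").lower() for w in text.split()]
        return sum(w in flagged for w in words) / max(len(words), 1)

    queue = [
        "Thanks, this was an interesting read.",
        "Go back to your cave, loser!",
        "I disagree, but you raise a fair point.",
    ]

    # Moderators review the likeliest-toxic comments first
    for comment in sorted(queue, key=score, reverse=True):
        print(f"{score(comment):.2f}  {comment}")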

The company is looking for more partners. It wants to deliver models that work in languages other than English, as well as models that can identify other characteristics, such as when comments are insubstantial or off-topic.

Some Perspective on Perspective

“Perspective is one of those things that’s both fascinating and scary,” said Rob Enderle, principal analyst at the Enderle Group.

Its potential to be used as a tool for censorship is concerning, he suggested.

“We already know that the Left and Right are getting very different news feeds, [and Perspective] could further exacerbate this problem because people often view other world views as hostile, false and toxic,” Enderle told TechNewsWorld. “As this technology matures, it could effectively ensure that you only see what you agree with.”

Further, “getting around systems like this with creative spelling isn’t that difficult,” he maintained.
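
A toy check makes that point concrete: any filter keyed too closely to surface spelling misses the same insult written with a digit swapped in. (This illustrates the evasion tactic in general, not how Perspective itself behaves.)

    # The same insult, once spelled normally and once "creatively"
    flagged = {"loser", "idiot"}
    for text in ("you are a loser", "you are a l0ser"):
        caught = any(word in flagged for word in text.split())
        print(text, "->", "flagged" if caught else "missed")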

Perspective “really doesn’t address the core problem, which is that trolls are largely unpunished and seem to gain status,” Enderle said.

Perspective likely is not very sophisticated when it comes to context or sensitivity, said Michael Jude, a program manager at Stratecast/Frost & Sullivan.

“What one person might consider to be toxic, another might not — and this is contextual,” he told TechNewsWorld.

Further, “an AI system lacks common sense, so there’s significant potential for unexpected consequences,” Jude said, and implicit bias “is a significant danger.”

Perspective’s Utility for Social Media

Twitter has been hit particularly hard by online trolls. Perspective might prove helpful in its battles against online abusers “if it actually works,” said Jude.

However, Twitter would have to “admit that their service isn’t a bastion of free speech,” he added.

Clamping down on comments viewed subjectively as online abuse is a form of censorship, said Jude, and it raises the question of what “public forum” really means.

“If a social media site isn’t truly an open public forum, then it shouldn’t pretend to be,” he argued.

Sites that restrict comments should warn that all posts will be reviewed to ensure they meet community standards, said Jude. If they don’t wish to undertake such a review, they should post a warning on the landing page: “Beware all ye who enter here.”

Richard Adhikari

Richard Adhikari has been an ECT News Network reporter since 2008. His areas of focus include cybersecurity, mobile technologies, CRM, databases, software development, mainframe and mid-range computing, and application development. He has written and edited for numerous publications, including Information Week and Computerworld. He is the author of two books on client/server technology.
