
Twitter Steps Up Counterterrorism Efforts

Twitter last week announced it had suspended 235,000 accounts since February for promoting terrorism, bringing to 360,000 the total number of suspensions since mid-2015.

Daily suspensions have increased more than 80 percent since last year, spiking immediately after terrorist attacks. Twitter’s response time for suspending reported accounts, the length of time offending accounts are active on its platform, and the number of followers they draw all have decreased dramatically, the company said.

Twitter also has made progress in preventing those who have been suspended from getting back on its platform quickly.

Tools and Tactics

The number of teams reviewing reports around the clock has increased, and reviewers now have more tools and language capabilities.

Twitter uses technology such as proprietary spam-fighting tools to supplement reports from users. Over the past six months, those tools helped identify more than one third of the 235,000 accounts suspended.

Global efforts to silence #Daesh online are bearing fruit. #UnitedAgainstDaesh pic.twitter.com/InNXnYUmEj

— Sawab Center (@sawabcenter) July 13, 2016

Twitter’s global public policy team has expanded partnerships with organizations working to counter violent extremism online, including True Islam in the United States; Parle-moi d’Islam in France; Imams Online in the UK; the Wahid Foundation in Indonesia; and the Sawab Center in the UAE.

Twitter executives have attended government-convened summits on countering violent extremism hosted by the French Interior Ministry and the Indonesian National Counterterrorism Agency.

A Fine Balance

Twitter has been largely reactive rather than proactive, and that’s “been hit and miss, but from [its] standpoint, that’s probably the best they can do without being too draconian,” said Chenxi Wang, chief strategy officer at Twistlock.

“You could, perhaps, consider creating a statistical analysis model that will be predictive in nature,” she told TechNewsWorld, “but then you are venturing into territories that may violate privacy and freedom of speech.”

Further, doing so “is not in Twitter’s best interest,” Wang suggested, as a social network’s aim is for people “to participate rather than be regulated.”

Gauging Effectiveness

It’s not easy to judge Twitter’s success in combating terrorism online.

“How often does Twitter actually influence people who might be violent?” wondered Michael Jude, a program manager at Stratecast/Frost & Sullivan. “How likely is it that truly crazy people will use Twitter as a means to incite violence? And how likely is it that Twitter will be able to apply objective standards to making a determination that something is likely to encourage terrorism?”

The answers to the first two questions are uncertain, he told TechNewsWorld.

The last question raises “highly problematic” issues, Jude said. “What if Twitter’s algorithms are set such that supporters of Trump or Hillary are deemed terroristic? Is that an application of censorship to spirited discourse?”

There Oughta Be a Law…

Meanwhile, pressure on the Obama administration to come up with a plan to fight terrorism online is growing.

The U.S. House of Representatives last year passed the bipartisan H.R. 3654, the "Combat Terrorist Use of Social Media Act of 2015," which calls on the president to provide a report on U.S. strategy to combat terrorists' and terrorist organizations' use of social media.

The Senate Homeland Security and Governmental Affairs Committee earlier this year approved a Senate version of the bill, which has yet to be voted on in the full chamber.

“It’s probably a good idea for the president to have a plan, but it would need to conform to the Constitution,” Jude remarked.

“Policies haven’t yet caught up … . It’s not out of the question that government policies may one day govern social media activities,” Twistlock’s Wang suggested. “Exactly how and when remains to be seen.”

Automatic Counterterrorism

YouTube and Facebook this summer began implementing automated systems to block or remove extremist content from their pages, according to reports.

The technology, originally developed to identify and remove copyrighted videos, computes a hash of each video, matches it against the hashes of content previously removed as unacceptable, and then takes appropriate action.
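In outline, that matching step is a lookup against a blocklist of fingerprints. The following is a minimal sketch of the idea, assuming a simple exact-match hash set; the function names and the sample entry are hypothetical, and production systems rely on perceptual hashes that survive re-encoding rather than cryptographic ones:

```python
import hashlib

# Hypothetical blocklist: hashes of content previously removed as unacceptable.
# (This entry is the SHA-256 of b"test", used purely for the demo below.)
BANNED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def content_hash(data: bytes) -> str:
    """Fingerprint an uploaded file. SHA-256 keeps the sketch simple;
    real systems use perceptual hashes so that re-encoded or lightly
    edited copies of a video still match."""
    return hashlib.sha256(data).hexdigest()

def review_upload(data: bytes) -> str:
    """Match new content against the blocklist and take action."""
    if content_hash(data) in BANNED_HASHES:
        return "blocked"       # matches previously removed content
    return "published"         # no match; post goes up normally

print(review_upload(b"test"))      # -> blocked
print(review_upload(b"new clip"))  # -> published
```

The weakness of exact matching is visible here: changing a single byte produces a completely different digest, which is why this approach only catches content already in the database.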

That approach is problematic, however.

Such automatic blocking of content “goes against the concepts of freedom of speech and the Internet,” said Jim McGregor, a principal analyst at Tirias Research.

“On the other hand, you have to consider the threat posed by these organizations,” he told TechNewsWorld. “Is giving them an open platform for promotion and communication any different than putting a gun in their hands?”

“The pros of automatically blocking terrorist content online are it’s fast and it’s consistent,” observed Rob Enderle, principal analyst at the Enderle Group.

“The cons are, automatic systems can be easy to figure out and circumvent, and you may end up casting too wide a net — like Reddit did with the Orlando shooting,” he told TechNewsWorld.

“I’m all for free speech and freedom of the Internet,” McGregor said, but organizations posting extremist content “are responsible for crimes against humanity and pose a threat to millions of innocent people and should be stopped. However, you have to be selective on the content to find that fine line between combating extremism and censorship.”

There is a danger that content will be misidentified as extremist, and that the people who uploaded it will mistakenly be put on a watch list. There have been widespread reports of errors in placing individuals on the United States government’s no-fly list, for example, and getting off that list is difficult.

“I have one friend who’s flagged just because of her married name,” McGregor said. “There needs to be a system in place to re-evaluate those decisions to make sure people aren’t wrongly accused.”

Fighting Today’s Battles

The automated blocking reportedly being implemented by YouTube and Facebook works only on content previously banned or blocked. It can’t deal with freshly posted content that has not yet been hashed.

There might be a solution to that problem, however. The Counter Extremism Project, a private nonprofit organization, recently announced a hashing algorithm that would take a proactive approach to flagging extremist content on Internet and social media platforms.

Its algorithm works on images, videos and audio clips.

The CEP has proposed the establishment of a National Office for Reporting Extremism, which would house a comprehensive database of extremist content. Its tool would be able to identify matching content online immediately and flag it for removal by any company using the hashing algorithm.
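CEP has not published the algorithm’s internals, but a toy perceptual hash conveys the general idea: derive a compact fingerprint that changes little when media is re-encoded or lightly edited, then flag anything within a small Hamming distance of a known-bad fingerprint. Everything below (the toy average hash, the sample pixel data, the distance threshold) is an illustrative assumption, not CEP’s actual method:

```python
def average_hash(pixels: list[int]) -> int:
    """Toy perceptual hash: one bit per pixel, set if the pixel is
    brighter than the image's mean. Small edits flip few bits."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical shared database of fingerprints for known extremist media.
FLAGGED = {average_hash([10, 200, 30, 180, 90, 220, 15, 170])}

def should_flag(pixels: list[int], threshold: int = 2) -> bool:
    """Flag content whose fingerprint is near any known-bad one."""
    h = average_hash(pixels)
    return any(hamming(h, bad) <= threshold for bad in FLAGGED)

# A lightly altered copy (one pixel brightened) still matches.
print(should_flag([10, 205, 30, 180, 90, 220, 15, 170]))  # -> True
```

The threshold trades recall for false positives: too tight and trivially altered copies slip through; too loose and legitimate content gets flagged, which is exactly the misidentification risk discussed above.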

Microsoft’s Contribution

Microsoft provided funding and technical support to Hany Farid, a professor at Dartmouth College, to support his work on the CEP algorithm.

Farid previously had helped develop PhotoDNA, a tool that scans for and eliminates child pornography images online, which Microsoft distributed freely.

Among other actions, Microsoft has amended its terms of use to specifically prohibit the posting of terrorist content on its hosted consumer services.

That includes any material that encourages violent action or endorses terrorist organizations included on the Consolidated United Nations Security Council Sanctions List.

Recommendations for Social Media Firms

The CEP has proposed five steps social media companies can take to combat extremism online:

  • Grant trusted reporting status to governments and groups like CEP to swiftly identify and ensure the removal of extremist online content;
  • Streamline the process for users to report suspected extremist activity;
  • Adopt a clear public policy on extremism;
  • Disclose detailed information, including the names of the most egregious posters of extremist content; and
  • Monitor and remove content proactively as soon as it appears online.

