
Evidence of War Crimes Vanishing From Social Media

The Columbia Journalism Review last week published a report on Google-owned YouTube’s practice of taking down videos with graphically violent content, some of which may also be evidence of recent war crimes.

YouTube’s policy, like those of other social media platforms, has been to remove content that contains hate speech, disinformation or disturbing images.

In the past, Facebook and other services have been called out for failing to do enough to stop the live streaming of criminal and violent acts. As a result, many user-generated content services have stepped up efforts to remove problematic videos as quickly as possible.

The issue is far from straightforward, though, as the CJR report notes. Among the videos removed from YouTube were some showing Syrian government attacks in the country’s civil war. Some showed the use of “barrel bombs,” which Human Rights Watch and other groups consider a war crime.

Critics argue that taking down such video is akin to erasing history and, more important, to deleting evidence of war crimes.

YouTube is not the only platform that has removed such content. Facebook and Twitter reportedly have pulled similar videos for violating community standards that prohibit graphic content.

A Math Problem

One of the reasons such videos are taken down is that the social media platforms all rely on artificial intelligence-based algorithms to detect and remove content suspected of violating their terms of service. Facebook and other services have been under pressure to remove graphically violent content, including videos posted by terrorist groups and radical organizations.

Facebook has upped its game to remove content automatically, and rival services have been working hard to delete such content almost as quickly as it is posted. The problem is that the filters can’t differentiate between terrorist propaganda and potential evidence of war crimes.
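
To make that gap concrete, here is a minimal sketch of a threshold-based removal filter, written in Python with hypothetical names and a made-up threshold rather than anything YouTube has disclosed. The classifier’s score captures only what a video depicts, not why it was posted, so witness documentation and propaganda can be scored identically:

```python
from dataclasses import dataclass

REMOVAL_THRESHOLD = 0.85  # hypothetical policy threshold, not a real value


@dataclass
class Video:
    video_id: str
    description: str
    violence_score: float  # assumed output of an upstream classifier


def moderate(video: Video) -> str:
    """Remove anything the classifier scores as graphically violent.

    The score measures what is depicted, not why it was posted, so
    documentation of an atrocity and propaganda celebrating one are
    indistinguishable at this stage.
    """
    return "remove" if video.violence_score >= REMOVAL_THRESHOLD else "keep"


# Opposite intent, identical signal, identical decision:
evidence = Video("a1", "Barrel bomb strike filmed by a witness", 0.93)
propaganda = Video("b2", "Clip glorifying the same attack", 0.93)
print(moderate(evidence), moderate(propaganda))  # -> remove remove
```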

After content is deleted, posters have little recourse. They can appeal to the service, arguing that the content was removed in error, but that process can be slow and often ineffective. In most cases, the content remains offline even after the appeal.

Social Reaction

In many cases, the content in question does violate YouTube’s terms, including its community standards, even though it might serve a serious and important purpose.

“The question isn’t whether YouTube and other social media sites should police content — of course, they should,” said James R. Bailey, professor of leadership at the George Washington University School of Business.

The question is how those companies go about it.

“What constitutes ‘inflammatory’ or ‘emancipating’ depends entirely on perspective,” Bailey told TechNewsWorld.

“The First Amendment includes social media posts, but free speech rights are rightly limited,” he added.

The Constitution doesn’t protect slander, obscenity, threats, or intellectual property violations, for example.

“Speech that incites violence is also prohibited,” Bailey noted. “That obligates sites like YouTube to scrutinize posts with gruesome content that could stir resentment or revenge. Given social media’s tenuous status, caution — not controversy — is called for.”

Is It Censorship?

A case could be made that YouTube’s removal of such content, even content that violates community standards, amounts to a media company censoring the news.

“These are the symptoms of uncontrolled, unverified, and collaborated stories; the illness is consequence and censorship,” warned Lon Safko, social media consultant and coauthor of The Social Media Bible.

“All of this gets back to ‘no consequences for dishonest journalism’ and censorship,” he told TechNewsWorld.

Who exactly is the audience for such content? Is this the sort of video that is appropriate for the masses to view without proper context or background? Does such content become akin to a “snuff film”? Might such crimes even be diminished because of the way these videos are presented?

“Do we all want to watch war crimes as they take place, and is this appropriate content for YouTube?” pondered Safko.

“This is tough for social media sites like YouTube. First, they are not news organizations — they are media organizations,” said Dustin York, director of Undergraduate and Graduate Communication at Maryville University.

“Reporting on the news is not their mission,” he told TechNewsWorld.

“YouTube is also in a tough situation, as they must use AI to support their censorship. No one would argue that YouTube shouldn’t censor anything,” added York. “It would be impossible for them to hire enough humans to manually censor all content — therefore, AI is used. AI can make mistakes on what to censor and what not to censor — this is where some of these human rights videos are seeing the issue.”

Social News

Other social media services have run into the same issues, but Facebook has taken a different approach to solving the problem.

“Facebook is rolling out their new ‘News Tab,’ which they hope will help with many issues, this being one,” noted York.

If Facebook’s News Tab is successful, YouTube could follow suit and treat such videos as actual “news” rather than user-generated content.

“Human rights videos presented by vetted media will never be censored,” said York.

“An option others have mentioned is for removed videos to be flagged by the uploader as a human rights video,” he said.

Flagged content could prompt a human to review the context of the video and decide whether to remove it.

“However, this may cause the same issue that social sites already face — not enough humans,” cautioned York. “If the majority of propaganda videos also flag their videos, YouTube is in the same situation as they began. At the end of the day, AI is needed, but AI is not perfect.”
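
York’s caveat can be illustrated with a rough sketch of that flag-then-review idea, with an invented capacity limit standing in for a finite pool of reviewers; this is a hypothetical design, not any platform’s real system:

```python
from collections import deque

HUMAN_REVIEW_CAPACITY = 2        # assumed reviewer throughput, for illustration
review_queue: deque = deque()    # videos awaiting human judgment


def on_automated_removal(video_id: str, flagged_human_rights: bool) -> str:
    """Route flagged removals to humans until reviewer capacity runs out."""
    if flagged_human_rights and len(review_queue) < HUMAN_REVIEW_CAPACITY:
        review_queue.append(video_id)
        return "queued for human review"
    # Queue full (e.g., propaganda uploaders set the flag too), so the
    # decision falls back to the automated verdict, as York warns.
    return "auto-removed"


print(on_automated_removal("doc-1", True))   # queued for human review
print(on_automated_removal("doc-2", True))   # queued for human review
print(on_automated_removal("doc-3", True))   # auto-removed: capacity exhausted
print(on_automated_removal("cat-4", False))  # auto-removed
```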

Can We Trust It?

Then there is the issue of whether such content can be trusted at all, especially in the era of so-called “deepfake” videos, which use artificial intelligence to fabricate convincing but misleading footage. There is also the question of authenticity: whether the videos we see accurately depict the events they claim to show.

Earlier this month, ABC News was called out for airing videos it claimed were from Syria, but which actually were from an annual machine gun shoot in Kentucky.

More importantly, there are many cases throughout history of governments doctoring photos as part of a misinformation or propaganda campaign. Should videos purporting to document war crimes be taken at face value or be vetted by actual reporters?

“Videos capturing war crimes are like any other videos, and they can be manufactured or fabricated,” warned George Washington University’s Bailey.

Is YouTube even qualified to verify whether a video is real?

“Are they real — or fake war crimes like the ones Russia staged to support attacking Ukraine?” asked Safko.

“If not, what are the consequences?” he wondered.

“This is a tricky situation indeed, and social media platforms should be held to the same standards as broadcast media when it comes to validating the authenticity of the video that is shared on the respective platforms,” said Josh Crandall, principal analyst at Netpop Research.

The answer may lie in having reputable news organizations work with YouTube in some capacity to vet the videos.

“Let the press, elevated by the First Amendment, do their job,” said Bailey.

Sharing the Story

Then there is the matter of how important such content can be, especially in parts of the world where the free press has been silenced.

Social media often has been the only means of communication in war zones — notably during the Arab Spring uprisings in Tunisia, Libya, and Egypt, as well as remote parts of Syria — and it could be the best way to get out important information about war crimes.

Social media could serve as a conduit, passing information along so that news organizations can vet it.

“Social media platforms can be used to investigate war crimes by allowing users to upload videos but not enabling other users to see the video until the proper channels — non-governmental organizations (NGOs), United Nations, and the police — are notified of the potential atrocity,” Crandall told TechNewsWorld.

“In other words, social media platforms should not be in the business of breaking the news via user contributions,” he added. “Rather, they need to be aware of how bad actors can exploit the platforms for their own ends.”
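
A minimal sketch of the hold-and-notify flow Crandall describes might look like the following; the function names, states, and toy notification step are all hypothetical, since no platform has published such an API:

```python
from enum import Enum


class State(Enum):
    PRIVATE_HOLD = "private_hold"   # stored, but invisible to other users
    PUBLIC = "public"               # released only after verification


NOTIFY_LIST = ["NGOs", "United Nations", "police"]  # channels Crandall names


def notify(org: str, video_id: str) -> None:
    print(f"notified {org} about {video_id}")


def upload_potential_evidence(video_id: str) -> State:
    """Accept the upload, hide it, and alert the proper channels."""
    for org in NOTIFY_LIST:
        notify(org, video_id)
    return State.PRIVATE_HOLD


def release(state: State, verified: bool) -> State:
    """Only vetted material ever reaches the public feed."""
    return State.PUBLIC if verified else state


state = upload_potential_evidence("clip-042")
print(release(state, verified=True))  # State.PUBLIC
```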

Peter Suciu

Peter Suciu has been an ECT News Network reporter since 2012. His areas of focus include cybersecurity, mobile phones, displays, streaming media, pay TV, and autonomous vehicles. He has written and edited for numerous publications and websites, including Newsweek, Wired, and FoxNews.com.
