Facebook recently promised that it would increase efforts to remove so-called “deepfake” videos, including content with “misleading manipulated media.”
In addition to fears that deepfakes — altered videos that appear to be authentic — could impact the upcoming 2020 general election in the United States, there are growing concerns that they could ruin reputations and impact businesses.
A manipulated video that looks real could convince viewers to believe that the subjects in the video said things they didn’t say, or did things they didn’t do.
Deepfakes have become more sophisticated and easier to produce, thanks to artificial intelligence and machine learning, which can be applied to existing videos quickly and easily, achieving results that once took professional special effects teams and digital artists hours or days.
“Deepfake technology is being weaponized for political misinformation and cybercrime,” said Robert Prigge, CEO of Jumio.
In one high-profile case, criminals last year used AI-based software to impersonate a chief executive’s voice and demand a fraudulent transfer of US$243,000, Prigge told TechNewsWorld.
“Unfortunately, deepfakes can also be used to bypass many biometric-based identity verification systems, which have rapidly grown in popularity in response to impersonation attacks, identity theft and social engineering,” he added.
Facing Off Against Deepfakes
Given the potential for damage, both Facebook and Twitter have banned such content, though it’s not entirely clear what the bans cover. For its part, Facebook will utilize third-party fact-checkers, reportedly including more than 50 partners working worldwide in more than 40 languages.
“They’re banning videos created through machine learning that are intended to be deceptive,” explained Paul Bischoff, privacy advocate at Comparitech.
“The videos must have both audio and video that the average person wouldn’t reasonably assume is fake,” he told TechNewsWorld.
“A deepfake superimposes existing video footage of a face onto a source head and body using advanced neural network-powered AI to create increasingly realistic doctored videos,” noted Prigge. “In other words, a deepfake looks to be a real person’s recorded face and voice, but the words they appear to be speaking were never really uttered by them.”
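Prigge’s description maps onto the shared-encoder, per-identity-decoder architecture commonly used in face-swap tools. The toy sketch below is purely illustrative — the weights are random and untrained, and a real system would train convolutional networks on thousands of frames — but it shows the data flow he describes: a frame of person A is encoded into a shared latent “face” representation, then reconstructed with person B’s decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; a real system operates on full image frames.
FACE_DIM, LATENT_DIM = 64, 16

# One shared encoder learns a common face representation...
W_enc = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.1
# ...while each identity gets its own decoder.
W_dec_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.1
W_dec_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.1

def encode(face):
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    return W_dec @ latent

# The "swap": encode a frame of person A, but reconstruct it with
# person B's decoder, so A's pose and expression are rendered as B's face.
frame_of_a = rng.standard_normal(FACE_DIM)
swapped = decode(encode(frame_of_a), W_dec_b)

print(swapped.shape)  # (64,)
```

Because both decoders read from the same latent space, whatever pose the encoder captures from A can be rendered with B’s appearance — which is why training data for both faces is all an attacker needs.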
One troubling issue with deepfakes is simply determining what is a deepfake and what is just edited video. In many cases deepfakes are built by utilizing the latest technology to edit or manipulate video. News outlets regularly edit interviews, press conferences and other events when crafting news stories, as a way to highlight certain elements and get juicy sound bites.
Of course, there have been plenty of criticisms of mainstream news media for manipulating video footage to change the context without AI or machine learning, simply using the tools of the editing suite.
Deepfakes generally are viewed as far more dangerous because it isn’t just context that is altered.
“At its heart, a deepfake is when someone uses sophisticated technology — artificial intelligence — to blend multiple images or audio together in order to change its original meaning and convey something that is not true or valid,” said Chris Olson, CEO of The Media Trust.
“From manipulating audio to creating misleading images, deepfakes foster the spread of disinformation as the end user typically doesn’t know that the content or message is not real,” he told TechNewsWorld.
“To varying degrees, social platforms have issued policies prohibiting the posting of highly manipulated videos that are not clearly labeled or readily apparent to consumers as fake,” added Olson.
Still, “while these policies are a step in the right direction, they do not explicitly ban manipulated video or audio,” he pointed out. “Having your account blocked isn’t much of a deterrent.”
Manipulation Without Malice
Facebook’s ban and other efforts to ban or otherwise curb deepfakes do not apply to political speech or parodies.
Consent may be another issue that needs to be addressed.
“This is a great point — fake videos and images can be defined broadly — for example, anything that is manipulated,” said Shuman Ghosemajumder, CTO of Shape Security and former fraud czar at Google.
“But most media created is, to some extent, manipulated,” he told TechNewsWorld.
Manipulations include automatic digital enhancements to photos taken using modern cameras — those equipped with HDR settings or other AI-based enhancement — as well as filters, and aesthetic editing and retouching, noted Ghosemajumder.
“If most media is thus automatically marked on a platform as ‘synthetic’ or ‘manipulated,’ this will reduce the benefit of such a tag,” he remarked.
The next step will be to figure out objective criteria to exclude that type of editing and focus on “maliciously manipulated” media, which could be an inherently subjective standard.
However, “it can’t be a question of individuals consenting to be in videos, because no such consent is generally required of public figures or of videos and images that are taken in public places,” observed Ghosemajumder, “and public figures are the ones that are most likely to be targeted by malicious users of these technologies.”
AI Tools Singled Out
Facebook’s deepfakes ban singles out videos that use AI technology or machine learning to manipulate the content.
“This is an incomplete approach, since most fake content, including misleading videos posted today, is not created with such technology,” said Ghosemajumder.
The now famous Nancy Pelosi video “could have been created with technology from 40-plus years ago, since it was just simple video editing,” he added.
More importantly, “maliciousness cannot be defined based only on the technology used,” said Ghosemajumder, “since much of the same technology used to create a malicious deepfake is already being used to create legitimate works of art, such as the de-aging technology used in The Irishman.”
As the Facebook policy stands, satire and parody would be exempt, but what falls into those categories isn’t always clear. Viewer reactions don’t always align with what the content maker may have had in mind. A joke video that falls flat might not be viewed as satire.
“The standards for judging satire or fan films are also subjective — it may be possible to determine what is or is not intended as satire in a court of law to society’s satisfaction in an individual instance, but it is much more difficult to make such determinations automatically for millions of pieces of content in a social media platform,” warned Ghosemajumder.
In addition, even in cases when a video is created with obviously satirical intent, that intent can get lost if the video is shortened, taken out of context, or even just reposted by someone who didn’t understand the original intent.
“There are many examples of satirical content fooling people who didn’t understand the humor,” said Ghosemajumder.
“It’s more about how the audience perceives it. Satire doesn’t fall into the ban. Nor does parody, and if the video is clearly labeled as fiction, it should be fine,” countered Bischoff.
“It is our understanding that Facebook and Twitter are not banning satire or parody — intention is the key differentiator,” added Alexey Khitrov, CEO of ID R&D.
“Satire by definition is the use of humor, exaggeration or irony, whereas the intention of a deepfake is to pass off altered or synthetic video or speech as authentic,” he told TechNewsWorld. “Deepfakes are used to trick a viewer and spread misinformation. While a deepfake aims to deceive the average user, satire is apparent.”
There have been legal efforts to stop the proliferation of deepfakes, but the government might not be the best entity to tackle this high-tech problem.
“Over the past two years, several U.S. states introduced legislation to govern deepfakes, with the Malicious Deep Fake Prohibition Act and DEEPFAKES Accountability Act introduced to the U.S. Senate and House of Representatives respectively,” said The Media Trust’s Olson.
Both bills stalled with lawmakers, and neither proposed much change beyond introducing penalties. Even if laws were passed, it is unlikely a legislative approach can keep up with technological advancements.
“It’s very difficult to effectively legislate against a moving target like emerging technology,” warned Olson.
“Until there is perfect recognition that content is a deepfake, platforms and media outlets need to disclose to consumers the source of the content,” he suggested.
“Deepfake videos cannot be stopped, just like photoshopped photos cannot be stopped,” said Josh Bohls, CEO of Inkscreen.
“Libel laws can be expanded to include altered videos that might misrepresent a public figure in a harmful way, providing the subject some kind of recourse,” he told TechNewsWorld. “It would also be prudent to pass laws requiring the labeling of certain categories of videos — political ads, for example — so that the viewer is aware that the content has been altered.”
Tech to Fight Tech
Instead of the government creating new laws, the tech industry could solve the deepfakes problem, even if defining them remains fuzzy. Providing access to technology to determine if a video has been manipulated could be a good first step.
“Several social platforms have taken steps to detect and remove deepfake videos with limited success, as detection lags behind the speed at which new technology emerges to create better, more realistic deepfakes in an ever-diminishing period of time,” said Olson.
“The challenge remains the difficult process of identifying and removing deepfakes before they spread to the general public,” he said.
Social media is where these videos are spreading and where removal iscrucial. These platforms are in a good position to roll out new technology.
“Overall, Twitter and Facebook announcing plans to take action against malicious fake content is an excellent first step, and will increase scrutiny and skepticism of media uploaded to the Internet, especially by anonymous or unknown sources,” noted Ghosemajumder.
However, this is absolutely not a ‘silver bullet’ solution to this problem, for many reasons, warned Ghosemajumder.
“The detection of fake media is a cat-and-mouse game. If manipulated content is immediately flagged, then malicious actors will experiment against the system with variations of their content until they can pass undetected,” he explained.
“On the other hand, if manipulated content is not immediately flagged or removed, it can be spread — and will quickly morph — causing damage in whatever period exists between creation and uploading and flagging, in a way that may not be possible to easily contain at that point,” Ghosemajumder suggested.
“Finally, the use of fake accounts and automated fraud and abuse is the primary mechanism that malicious actors use to spread disinformation,” he said. “This is one of the key areas social networks need to address with the most sophisticated technology available, not just home-built solutions.”
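The cat-and-mouse dynamic Ghosemajumder describes can be simulated in a few lines. Everything below is hypothetical — a static detector reduced to a single “artifact score” and an attacker who simply re-renders the fake to lower it — but it shows why a fixed detection threshold invites iterative evasion.

```python
# Toy units: the detector flags anything scoring >= 50.
THRESHOLD = 50

def detector_score(artifact_level: int) -> int:
    # Hypothetical static detector: its score just tracks how strong
    # the visible manipulation artifacts are.
    return artifact_level

def evade(artifact_level: int, step: int = 10):
    """Attacker re-renders the fake each round (better blending,
    added noise, etc.), testing against the detector until it
    no longer flags the content."""
    rounds = 0
    while detector_score(artifact_level) >= THRESHOLD:
        artifact_level -= step
        rounds += 1
    return rounds, artifact_level

print(evade(90))  # (5, 40): five re-render rounds to slip under the bar
```

As long as the detector is fixed and queryable, the loop always terminates in the attacker’s favor — which is why real platforms rotate models and limit feedback rather than publish a stable pass/fail signal.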
Several companies are exploring methods of combating deepfakes. Facebook, Microsoft and AWS launched the Deepfake Detection Challenge to encourage development of open source detection tools. [*Correction – Jan. 22, 2020]
“Without consistently flagging suspect digital content and labeling the source of the doctored video or audio, technology will have little impact on the issue,” said Olson.
“Providing this context will help the consumer better understand the veracity of the message. Was it sent to me from an unknown third party, or did I find it on a brand website during product research? This attribution information is what’s needed to counter deepfakes,” he said.
However, with a lot of manipulated content, it is often about misdirection — and in this case, too much focus on the video itself could be problematic.
“It’s not just the visual stream that’s vulnerable to deepfakes, but also the voice stream. Both can be altered or manipulated as part of a deepfake,” said ID R&D’s Khitrov.
There is technology to help detect a deepfake’s manipulated audio, he noted.
“Liveness detection capabilities can identify artifacts that aren’t audible to the human ear but are present in synthesized, recorded or computer-altered voice,” explained Khitrov. “We can detect over 99 percent of audio deepfakes, but only where that technology is deployed.”
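Khitrov doesn’t detail ID R&D’s methods, but one family of low-level cues such systems can examine is the shape of the audio spectrum. The sketch below computes spectral flatness — a standard signal-processing feature, not ID R&D’s algorithm — with NumPy: a pure tone concentrates its energy in one frequency bin and scores near zero, while broadband noise spreads energy across bins and scores much higher. A real liveness detector would feed many such features, or raw audio, into a trained model.

```python
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of the power spectrum:
    ~0 for tonal signals, higher for noise-like ones."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # epsilon avoids log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000, endpoint=False)  # 1 second at 16 kHz

tone = np.sin(2 * np.pi * 440 * t)   # pure 440 Hz tone
noise = rng.standard_normal(16000)   # broadband noise

print(spectral_flatness(tone))   # close to 0
print(spectral_flatness(noise))  # much higher than the tone's
```

The point is not that flatness alone catches deepfakes, but that synthesis and playback leave statistical fingerprints in the spectrum that simple, inaudible-to-humans measurements can expose.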
The Deep View
Those with a pessimistic take believe the technology to create convincing deepfakes simply will outpace the technology to stop it.
“Our analogy is that there are viruses and there is antivirus technology, and staying ahead of the bad guys requires constant iteration, and the same holds true for deepfake detection,” said Khitrov.
However, “the AI-based technologies that the bad guys are using are very similar to the technologies the good guys are using. So the breakthroughs that are available to the bad guys are also being used by the good guys to stop them,” he added.
A bigger threat is that “deepfake software is already freely available on the Web, although it’s not that great yet,” said Comparitech’s Bischoff. “We will have to learn to be vigilant and skeptical.”
*ECT News Network editor’s note – Jan. 22, 2020: Our original published version of this story incorrectly stated that the Deepfake Detection Challenge was a project launched by The Media Trust. In fact, it is a joint effort of Facebook, Microsoft and AWS. We regret the error.