
ACLU Blasts Clearview’s Facial Recognition Accuracy Claims

The American Civil Liberties Union earlier this week criticized facial recognition tool developer Clearview for making misleading claims about the accuracy of its product.

Clearview apparently has been telling law enforcement agencies that its technology underwent accuracy testing modeled on the ACLU’s 2018 test of Amazon’s Rekognition facial recognition tool.

For that test, the ACLU simulated the way law enforcement used Rekognition in the field, matching photos of all 535 members of the United States Congress against a database it built of 25,000 publicly available mugshots of arrestees.

Rekognition incorrectly matched 28 lawmakers with arrestees’ photos. The false matches disproportionately featured lawmakers of color.
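To illustrate how an evaluation of this kind is typically wired together, here is a minimal sketch in Python using Amazon Rekognition's boto3 API, which the ACLU's test was built on. The directory paths, collection name and threshold below are illustrative assumptions, not the ACLU's actual code or data.

# Hypothetical sketch of an ACLU-style face-matching test with Amazon
# Rekognition via boto3. Paths, collection name and threshold are assumptions.
import os
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

COLLECTION_ID = "mugshot-test-collection"  # assumed collection name
MUGSHOT_DIR = "mugshots"                   # placeholder: ~25,000 arrest photos
LAWMAKER_DIR = "lawmakers"                 # placeholder: 535 congressional headshots

# 1. Build the search collection from the mugshot set.
rekognition.create_collection(CollectionId=COLLECTION_ID)
for filename in os.listdir(MUGSHOT_DIR):
    with open(os.path.join(MUGSHOT_DIR, filename), "rb") as f:
        rekognition.index_faces(
            CollectionId=COLLECTION_ID,
            Image={"Bytes": f.read()},
            ExternalImageId=os.path.splitext(filename)[0],
        )

# 2. Search each lawmaker headshot against the collection. Any returned match
#    is by construction a false match, since no lawmaker is in the mugshot set.
false_matches = []
for filename in os.listdir(LAWMAKER_DIR):
    with open(os.path.join(LAWMAKER_DIR, filename), "rb") as f:
        response = rekognition.search_faces_by_image(
            CollectionId=COLLECTION_ID,
            Image={"Bytes": f.read()},
            FaceMatchThreshold=80,  # the default confidence level reportedly used in the 2018 test
            MaxFaces=1,
        )
    if response["FaceMatches"]:
        false_matches.append((filename, response["FaceMatches"][0]["Similarity"]))

print(f"{len(false_matches)} of 535 headshots incorrectly matched a mugshot")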

Clearview’s Test Examined

Clearview’s accuracy test reportedly compared headshots from all 535 members of Congress, all 119 members of the California State Legislature, and 180 members of the Texas State Legislature against its database.

Clearview has a database of about 2.8 billion faces scraped from various social media and other public Internet websites.

Clearview’s test “couldn’t be more different from the ACLU’s work, and leaves crucial questions unanswered,” wrote Jacob Snow, a technology and civil liberties attorney at the ACLU. “Rather than searching for lawmakers against a database of arrest photos, Clearview apparently searched its own shadily assembled database of photos.”

The company’s accuracy test report, dated October 2019, was signed by three people: Jonathan Lippman, chief judge of the New York Court of Appeals from 2009 to 2015; Nicholas Cassimatis, formerly head of the Intelligence Innovation Lab at Samsung Research America; and Aaron Renn, an urban policy analyst who formerly was a partner at Accenture.

The three rated Clearview 100 percent accurate. However, none of them has expertise in facial recognition.

There is no indication that Clearview submitted its product to rigorous testing, the ACLU’s Snow maintained. An algorithm’s accuracy is likely to be reduced in the real world because of photo quality, lighting, user bias and other factors.

“Imitation may be the sincerest form of flattery, but this is flattery we can do without,” Snow wrote. “If Clearview is so desperate to begin salvaging its reputation, it should stop manufacturing endorsements and start deleting the billions of photos that make up its database, switch off its servers, and get out of the surveillance business altogether.”

Bad Buzz

Clearview has generated considerable controversy with its ad campaigns and claims:

  • It claims to have helped the New York Police Department arrest suspects in at least two cases, which the NYPD denies;
  • Clearview’s claims to have a thousand or so police forces using its application have not been substantiated;
  • Twitter, Google, Facebook, YouTube, Venmo and LinkedIn have written the company demanding it stop scraping photos from their platforms;
  • Concerns about Clearview’s technology led EPIC and other organizations to write the Privacy and Civil Liberties Oversight Board demanding the suspension of facial recognition systems pending further review;
  • The Electronic Frontier Foundation has called for comprehensive federal privacy legislation around data collection;
  • Sen. Ed Markey, D-Mass., has written Clearview demanding information about its marketing to law enforcement agencies considering its technology; and
  • A lawsuit seeking class action status has been filed in an Illinois court alleging Clearview’s product breaches the state’s Biometric Information Privacy Act (BIPA).

Breaching BIPA cost Facebook a US$550 million settlement, a record for any privacy lawsuit.

Clearview “appears to be extremely dishonest with their claims and, as a company, they are untrusted,” observed Rob Enderle, principal analyst at the Enderle Group.

“Given that, you can’t really trust how good the tool is,” he told TechNewsWorld. “So you’ll take a lot of heat deploying it and then, if it doesn’t work, you’ll look like an idiot. That could be a career ender for those involved.”

Appropriate Use of Tech

The U.S. Federal Bureau of Investigation has exempted its Next Generation Identification (NGI) system from one or more provisions of the Privacy Act of 1974.

That is one of the reasons facial recognition raises concerns. Another is that some schools have begun using facial recognition systems.

“Everybody’s concerned that the unrestricted use of facial recognition could be detrimental to personal freedom,” said Mike Jude, research director at IDC.

“Clearview is currently not regulated and, obviously, has a powerful incentive to promote the use of facial recognition,” he told TechNewsWorld.

False positives are always an issue, Jude noted. “Facial recognition is not foolproof. It’s simply a case of garbage photos in, garbage identifications out.”

However, the technology “can be a very valuable tool in various areas, including retail,” depending on how it is used, he said.

“It would be unfortunate if we tossed the baby out with the bath water,” said Jude. “There will probably be laws that seek to control the use of facial recognition.”

The European Union, which initially considered a five-year ban on facial recognition in public places, instead decided to let member states deal with the issue.

French and Swedish data protection authorities have ruled out using facial recognition software in schools.

In the UK, police already use the technology and have deployed it in London.

In China, registering for a mobile phone service requires a face scan. Meanwhile, India is preparing to install a nationwide facial recognition system.

In the U.S., Oregon, New Hampshire and California have banned the technology, but only in police body cameras. Several cities, including San Francisco and Oakland, have banned its use by city agencies altogether.

“Facial recognition is coming whether we like it or not,” Enderle predicted. “This means there needs to be more focus on improving the training sets, and putting in place rules about where and when these tools can be used.”

Richard Adhikari

Richard Adhikari has been an ECT News Network reporter since 2008. His areas of focus include cybersecurity, mobile technologies, CRM, databases, software development, mainframe and mid-range computing, and application development. He has written and edited for numerous publications, including Information Week and Computerworld. He is the author of two books on client/server technology. Email Richard.
