At a security conference in Munich, Germany, over the weekend, Google unveiled its game plan for fighting disinformation on its properties.
The 30-page document details Google’s current efforts to combat bad dope on its search, news, YouTube and advertising platforms.
“Providing useful and trusted information at the scale that the Internet has reached is enormously complex and an important responsibility,” noted Google Vice President for Trust and Safety Kristie Canegallo.
“Adding to that complexity, over the last several years we’ve seen organized campaigns use online platforms to deliberately spread false or misleading information,” she continued.
“We have twenty years of experience in these information challenges, and it’s what we strive to do better than anyone else,” added Canegallo. “So while we have more work to do, we’ve been working hard to combat this challenge for many years.”
Like other communication channels, the open Internet is vulnerable to the organized propagation of false or misleading information, Google explained in its white paper.
“Over the past several years, concerns that we have entered a ‘post-truth’ era have become a controversial subject of political and academic debate,” the paper states. “These concerns directly affect Google and our mission — to organize the world’s information and make it universally accessible and useful. When our services are used to propagate deceptive or misleading information, our mission is undermined.”
Google outlined three general strategies for attacking disinformation on its platforms: making quality count, counteracting malicious actors, and giving users context about what they’re seeing on a Web page.
Making Quality Count
Google makes quality count through algorithms whose usefulness is determined by user testing, not by the ideological bent of the people who build or audit the software, according to the paper.
“One big strength of Google is that they admit to the problem — not everybody does — and are looking to fix their ranking algorithms to deal with it,” James A. Lewis, director of the technology and public policy program at the Washington, D.C.-based Center for Strategic and International Studies, told TechNewsWorld.
While algorithms can be a blessing, they can be a curse, too.
“Google made it clear in its white paper that they aren’t going to introduce humans into the mix. Everything is going to be based on algorithms,” said Dan Kennedy, an associate professor in the school of journalism at Northeastern University in Boston.
“That’s key to their business plan,” he told TechNewsWorld. “The reason they’re so profitable is they employ very few people, but that guarantees there will be continued problems with disinformation.”
Hiding Behind Algorithms
Google may depend too much on its software, suggested Paul Bischoff, a privacy advocate at Comparitech, a reviews, advice and information website for consumer security products.
“I think Google leans perhaps a bit too heavily on its algorithms in some situations when common sense could tell you that a certain page contains false information,” he told TechNewsWorld.
“Google hides behind its algorithms to shrug off responsibility in those cases,” Bischoff added.
Algorithms can’t solve all problems, Google acknowledged in its paper. They can’t determine whether a piece of content on current events is true or false; nor can they assess the intent of its creator just by scanning the text on a page.
That’s where Google’s experience fighting spam and rank manipulators has come in handy. To counter those deceivers, Google has developed a set of policies to regulate certain behaviors on its platforms.
“This is relevant to tackling disinformation since many of those who engage in the creation or propagation of content for the purpose to deceive often deploy similar tactics in an effort to achieve more visibility,” the paper notes. “Over the course of the past two decades, we have invested in systems that can reduce ‘spammy’ behaviors at scale, and we complement those with human reviews.”
Adding context to items on a page is another way Google tries to counter disinformation.
For example, knowledge or information panels appear near search results to provide facts about the search subject.
In search and news, Google clearly labels content originating with fact-checkers.
In addition, it has "Breaking News" and "Top News" shelves, and "Developing News" information panels on YouTube, to expose users to authoritative sources when they look for information about ongoing news events.
YouTube also has information panels providing “Topical Context” and “Publisher Context,” so users can see contextual information from trusted sources and make better-informed choices about what they see on the platform.
Google added another layer of context during the 2018 U.S. midterm elections, when it required additional verification from anyone purchasing an election ad in the United States.
It also required advertisers to confirm they were U.S. citizens or lawful permanent residents. Further, every ad creative had to incorporate a clear disclosure of who was paying for the ad.
“Giving users more context to make their own decisions is a great step,” observed CSIS’s Lewis. “Compared to Facebook, Google looks good.”
Serious About Fake News
With the release of the white paper, “Google wants to demonstrate that they’re taking the problem of fake news seriously and they’re actively combating the issue,” noted Vincent Raynauld, an assistant professor in the department of Communication Studies at Emerson College in Boston.
That’s important as high-tech companies like Facebook and Google come under increased government scrutiny, he explained.
“The first battle for these companies is to make sure people understand what false information is,” Raynauld told TechNewsWorld. “It’s not about combating organizations or political parties,” he said. “It’s about combating online manifestations of misinformation and false information.”
That may not be easy for Google.
“Google’s business model incentivizes deceitful behavior to some degree,” said Comparitech’s Bischoff.
“Ads and search results that incite emotions regardless of truthfulness can be ranked as high or higher than more level-headed, informative, and unbiased links, due to how Google’s algorithms work,” he pointed out.
If a bad article has more links to it than a good article, the bad article could well be ranked higher, Bischoff explained.
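The dynamic Bischoff describes can be illustrated with a toy sketch: a ranking that rewards inbound links, with no regard for accuracy, will surface a heavily linked false story above a better-sourced one. This is a deliberate simplification for illustration only, with made-up page names; Google's actual ranking systems weigh many more signals.

```python
# Toy illustration: ranking pages purely by inbound link count.
# This is NOT Google's algorithm -- just a minimal sketch of the
# "more links wins" dynamic described above.

def rank_by_inbound_links(pages):
    """Sort pages by how many other pages link to them, most-linked first."""
    # Count inbound links for every page in the index.
    inbound = {url: 0 for url in pages}
    for url, outlinks in pages.items():
        for target in outlinks:
            if target in inbound:
                inbound[target] += 1
    return sorted(inbound, key=inbound.get, reverse=True)

# Hypothetical index: a sensational but false article attracts
# more inbound links than a careful, accurate one.
toy_index = {
    "sensational-false-story": [],
    "careful-accurate-story": [],
    "blog-a": ["sensational-false-story"],
    "blog-b": ["sensational-false-story"],
    "blog-c": ["careful-accurate-story"],
}

ranking = rank_by_inbound_links(toy_index)
# Link volume alone puts the false story ahead of the accurate one.
```

Under this naive scoring, the false story ranks first simply because two pages link to it versus one, which is the limitation Bischoff points to.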
“Google is stuck in a situation where its business model encourages disinformation, but its content moderation must do the exact opposite,” he said. “As a result, I think Google’s response to disinformation will always be somewhat limited.”