Germany on Monday began enforcing its new Network Enforcement Act, or NetzDG, which imposes a fine of up to 50 million euros (US$60 million) on social media networks that fail to remove hate-inciting posts promptly.
The controversial law, adopted in mid-2017, has now entered its formal enforcement phase. It is designed to help combat hate speech and fake news at a time when ISIS-linked terror and anti-immigrant extremism have been on the rise in Germany.
Further, Russian hackers have been accused of trying to sow nationalist tendencies through social media attacks in Western countries facing elections.
The law calls for fines of up to 50 million euros against major social media platforms such as Facebook, Twitter and YouTube.
Facebook last month published a post in Germany saying that it shared the German government’s goal to fight hate speech and that it had made great progress in that fight, according to a translation of the text provided to the E-Commerce Times by company spokesperson Sarah Pollack.
Facebook said it had trained its Germany-based employees to comply with the law, according to the translated post. Facebook added that starting in January of this year, users would be able to report content that was considered unlawful under the criminal code, and that the company would provide a special form for such circumstances. Those posts would be handled differently from Facebook’s Community Standards reporting.
Twitter spokesperson Ian Plunkett referred our questions about the new policy to eco, the Association of the Internet Industry, a Germany-based trade group that, with more than 1,000 members, is the largest of its kind in Europe. The organization opposed the German law, as well as a similar effort to block online content proposed by the European Union.
Twitter last month began enforcing updates to its safety policies, including a crackdown on posts that advocate violence or otherwise threaten people, and on accounts that are linked to groups that promote violence against civilians.
One of the first reported enforcement actions under the anti-hate speech law targeted Beatrix von Storch, a leader of the Alternative for Germany party.
Von Storch confirmed on her Twitter page that she had been blocked temporarily by Twitter and Facebook after she apparently blasted local police officials for issuing a New Year's message in Arabic.
As a matter of company policy, Twitter will not discuss individual actions for privacy and security reasons, Plunkett said, but he confirmed that the account we asked about was active on Tuesday.
Organizations like the Committee to Protect Journalists and the Global Network Initiative have raised concerns that the German law would privatize censorship, in effect, by forcing companies to overcorrect in order to avoid hefty financial penalties.
“It’s impossible to say what unintended consequences might arise from Germany’s new law, so it will be important for officials there to stay on top of the situation and make adjustments as they become necessary,” said Charles King, principal analyst at Pund-IT.
However, “it’s important to remember that Germany has a singular experience in the destructive potential of hate speech,” he told the E-Commerce Times, “as do its EU allies and other friends.”
Because of that history, the decision to take such action on hate speech must be respected, King said, adding that the social media sites directly affected by the new law have failed to take sufficient corrective action on their own.
The new policy and related fines are not likely to have a huge impact on social media companies, King added, noting that many of them specifically tailor their policies and procedures to comply with individual markets that have different rules about free expression.
“Laws that delegate to social media companies the responsibility of enforcing hate speech standards threaten freedom of expression,” contended Drew Mitnick, policy counsel at Access Now.
“Companies under threat of large fines and with short timelines will tend to broadly take down content, threatening the removal of lawful expression,” he told the E-Commerce Times.
Courts need to decide whether hate speech laws like this one “satisfy human rights requirements that content removals are necessary for a legitimate government aim,” Mitnick said, as well as whether they are proportionate to that aim, and whether they meet standards for judicial oversight and public transparency.