
Russians Pose as Americans to Steal Data on Social Media

By John P. Mello Jr.
Mar 8, 2018 5:00 AM PT

Americans were targeted on social media by Russian agents on a mission to harvest personal information, The Wall Street Journal reported Wednesday.

The agents pretended to work for organizations promoting African-American businesses as a ruse to obtain personal information from black business owners during the 2016 presidential election campaign, according to the report.

Using names like "BlackMattersUS" and "Black4Black," the agents set up hundreds of accounts on Facebook and Instagram, the WSJ said.

As part of its efforts to address the abuse of its platform during the election, Facebook introduced a tool that would enable its members to determine if they had contact with Russian propaganda during that period. The tool doesn't address the problem of Kremlin agents masquerading as Americans, however.

Facebook did not respond to our request to comment for this story.

Defeating America Without Bullets

The Journal story came on the heels of President Donald Trump's Tuesday announcement that his administration was doing a "very, very deep" study of election meddling and would make "very strong" recommendations about the 2018 elections.

However, Adm. Michael Rogers, chief of the U.S. Cyber Command and head of the National Security Agency, last week told the Senate Armed Services Committee that the White House had not directed him to take any actions to counter potential Russian meddling in the 2018 elections.

"The impact of social media is very real," said Ajay K. Gupta, program chair for computer networks and cybersecurity at the University of Maryland.

"The lack of real attribution for social media content means that elections are being impacted by people who we don't know who they are," he told TechNewsWorld.

"Russians have said since the beginning of the Cold War they would be able to defeat America without firing a single bullet," Gupta added. "They couldn't do that as the U.S.S.R., but social media has given them another opportunity to try that."

Target of Opportunity

The latest revelation about Russian activity on social media during the elections lends credence to the idea that the Kremlin's goal is not to swing elections one way or another, but to weaken America's form of government.

For example, one in four voters was considering staying away from the polls due to cybersecurity fears, according to a survey Carbon Black conducted last year. If accurate, that could put the number who would not vote for that reason in the neighborhood of 55 million.
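A quick back-of-envelope check shows how the article's 55 million figure follows from the survey result, assuming (this figure is not stated in the article) roughly 220 million eligible US voters:

```python
# Sketch of the arithmetic behind the "55 million" estimate.
# The eligible-voter count below is an assumption, not from the article.
ELIGIBLE_VOTERS = 220_000_000  # rough US voting-eligible population
deterred = ELIGIBLE_VOTERS // 4  # "one in four voters"
print(deterred)  # 55,000,000 -- in the article's "neighborhood of 55 million"
```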

"This blended campaign of human intelligence and signals intelligence is dangerous for democracy," said Tom Kellermann, chief cybersecurity officer at Carbon Black.

Russia is into the long game, noted Tellagraff CEO Mark Graff.

"Hillary Clinton was a target of opportunity for the Russians in the 2016 election," he told TechNewsWorld.

"Their strategic goal was not to elect Donald Trump. The strategic goal was to disrupt American society, undermine our feelings of unity, undermine our faith in democracy," Graff maintained. "They've been trying to do that for over 50 years -- and now what they can do, using social media, is do it from the comfort of government buildings inside Russia."

What's a Social Network to Do?

Both Twitter and Facebook have made efforts to counter nation-state-backed exploitation of their platforms, but the consensus is that more can be done.

"They must dynamically verify the identities of their users and filter illicit and inflammatory content," Carbon Black's Kellermann told TechNewsWorld.

"Facebook and Twitter are seemingly just learning how to combat this, and they both appear to be very late to the game," observed Brian Martin, director of vulnerability intelligence at Risk Based Security.

The social networks could deploy a number of measures, he told TechNewsWorld, ranging from monitoring the IP addresses of suspect accounts to refining their analyses of the language in posts, looking for key indicators of actors who don't speak English as their first language.
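The measures Martin describes could, in the simplest case, be combined into a crude screening pass. The sketch below is hypothetical, not any platform's actual method: it assumes a watchlist of suspect IP prefixes and uses the scarcity of English articles ("a", "an", "the") as one weak linguistic signal, since dropped articles are a commonly cited marker of some non-native English writing. Real systems would rely on far richer models and many more signals.

```python
# Hypothetical sketch of the screening measures described above.
# SUSPECT_IP_PREFIXES and the article-ratio threshold are illustrative
# assumptions, not values from the article or any real platform.
import re

SUSPECT_IP_PREFIXES = ("203.0.113.",)  # hypothetical watchlist (documentation range)


def article_ratio(text: str) -> float:
    """Fraction of words that are English articles -- a crude language proxy."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(1 for w in words if w in {"a", "an", "the"}) / len(words)


def flag_post(text: str, ip: str, threshold: float = 0.02) -> bool:
    """Flag a post if it originates from a watched IP range, or if its
    prose uses unusually few English articles for its length."""
    if any(ip.startswith(prefix) for prefix in SUSPECT_IP_PREFIXES):
        return True
    return article_ratio(text) < threshold
```

In practice such a heuristic alone would produce many false positives (short posts, non-prose text), which is why Martin frames these indicators as inputs to refinement and investigation rather than automatic takedowns.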

Users should have the option to flag suspected bots, so the social media companies could investigate and weed out bad actors, said Sherban Naum, senior vice president for corporate strategy and technology at Bromium.

Better Authentication

Credible news outlets should be given some kind of distinctive authentication, Naum also recommended.

Social media companies have certain "verified" users, but that appears to be inadequate. "Lots of bad guys are verified," he told TechNewsWorld.

"Twitter and Facebook could also publish trending information about bots and bad information so users can see what's trending that is legit and what's trending that is junk," Naum suggested.

What can consumers do to protect themselves?

Users should "approach social media with the same skepticism that they should be approaching email and scams," Risk Based Security's Martin advised.

"Someone offering you 100 million dollars is suspect, of course," he said.

"Someone that seems to have a 'magic bullet' showing a political figure is the next devil? Think about it more critically than you might otherwise," Martin cautioned. "Does the post have any evidence to back it up? Or is it just a compelling picture, that may have been doctored, and a catchy one-liner that invokes emotional responses?"

John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.
