A coordinated barrage of social media attacks suggests the involvement of foreign state actors. By NATASHA KORECKI
The cyber propaganda — which frequently picks at the rawest, most sensitive issues in public discourse — is being pushed across a variety of platforms and with a more insidious approach than in the 2016 presidential election, when online attacks designed to polarize and mislead voters first surfaced on a massive scale.
Recent posts that have received widespread dissemination include racially inflammatory memes and messaging involving Harris, O’Rourke and Warren. In Warren’s case, a false narrative surfaced alleging that a blackface doll appeared on a kitchen cabinet in the background of the senator’s New Year’s Eve Instagram live stream.
Not all of the activity is organized. Much of it appears to be organic, a reflection of the politically polarizing nature of some of the candidates. But there are clear signs of a coordinated effort of undetermined size that shares similar characteristics with the computational propaganda attacks launched by online trolls at Russia’s Internet Research Agency in the 2016 presidential campaign, which special counsel Robert Mueller accused of aiming to undermine the political process and elevate Donald Trump.
An analysis conducted for POLITICO by Guardians.ai found evidence that a relatively small cluster of accounts — and a broader group of accounts that amplify them — drove a disproportionate amount of the Twitter conversation about four candidates: Kamala Harris, Beto O’Rourke, Bernie Sanders and Elizabeth Warren, over a recent 30-day period.
Using proprietary tools that measured the discussion surrounding the candidates in the Democratic field, Guardians.ai identified a cohort of roughly 200 accounts — including both unwitting real accounts and other “suspicious” and automated accounts that coordinate to spread their messages — that pumped out negative or extreme themes designed to damage the candidates.
This is the same core group of accounts the company first identified last year in a study as anchoring a wide-scale influence campaign in the 2018 elections.
Since the beginning of the year, those accounts began specifically directing their output at Harris, O’Rourke, Sanders, and Warren, and were amplified by an even wider grouping of accounts. Over a recent 30-day period, between 2 percent and 15 percent of all Twitter mentions of the four candidates emanated in some way from within that cluster of accounts, according to the Guardians.ai findings. In that time frame, all four candidates collectively had 6.8 million mentions on Twitter.
“We can conclusively state that a large group of suspicious accounts that were active in one of the largest influence operations of the 2018 cycle is now engaged in a sustained and ongoing activity for the 2020 cycle,” said Brett Horvath, one of the founders of Guardians.ai.
Amarnath Gupta, a research scientist at the San Diego Supercomputer Center at the University of California, San Diego, who monitors social media activity, said he’s also seen a recent surge in Twitter activity negatively targeting three candidates — O’Rourke, Harris and Warren.
That increased activity includes a rise in the sheer volume of tweets, the rate at which they are being posted and the appearance of “cluster behavior” tied to the three candidates.
“I can say that from a very, very cursory look, a lot of the information is negatively biased with respect to sentiment analysis,” said Gupta, who partnered with Guardians.ai on a 2018 study.
According to the Guardians.ai analysis, Harris attracted the most overall Twitter activity among the 2020 candidates it looked at, with more than 2.5 million mentions over the 30-day period.
She was also among the most targeted. One widely seen tweet employed racist and sexist stereotypes in an attempt to sensationalize Harris’ relationship with former San Francisco Mayor Willie Brown. That tweet — and subsequent retweets and mentions tied to it — generated 8.6 million “potential impressions” online, according to Guardians.ai, an upper-limit estimate of the number of people who might have seen it, based on which accounts the cluster follows, who follows accounts within the cluster and who has engaged with the tweet.
Another racially charged tweet was directed at O’Rourke. The account that originated it was created in May 2018, according to its Twitter profile, but had authored just one tweet since then — in January, when it claimed to have breaking news that the former Texas congressman had left a message using racist language on an answering machine in the 1990s. That tweet garnered 1.3 million potential impressions on the platform, according to Guardians.ai.
A separate Guardians.ai study that looked at the 200-account group’s focus on voter fraud and false or misleading narratives about election integrity — published just before the midterm elections and co-authored by Horvath, Zach Verdin and Alicia Serrani — reported that the accounts generated or were mentioned in more than 140 million tweets over the prior year.
Horvath asserts that the activity surrounding the cluster represents an evolution of misinformation and amplification tactics that began in mid-to-late 2018. The initial phase, which began in 2016, was marked by the creation of thousands of accounts that were more easily detected as bots or as coordinated activity.
The new activity, however, centers on a refined group of core accounts — the very same accounts that surfaced in the group’s 2018 voter fraud study. Some of the accounts are believed to be highly sophisticated synthetic accounts operated by people attempting to influence conversations, while others are coordinated in some way by actors who have identified real individuals already tweeting out the desired message.
Tens of thousands of other accounts then work in concert to amplify the core group through mentions and retweets to drive what appears, on the surface, to be organic virality.
Operatives with digital firms, political campaigns, and other social media monitoring groups also report seeing a recent surge in false narratives or negative memes against 2020 candidates.
A recent analysis from the social media intelligence firm Storyful detected spikes in misinformation activity across social media platforms and online comment boards in the days after each of the 2020 candidates launched their presidential bids, beginning with Warren’s announcement on Dec. 31.
Fringe news websites and social media platforms, Storyful found, played a significant role in spreading anti-Warren sentiment in the days after her announcement. Using a variety of keyword searches for mentions of Warren, the firm reported evidence of “spam or bot-like” activity on Facebook and Twitter from some of the top posters.
Kelly Jones, a researcher with Storyful who tracked suspicious activity in the three days after the campaign announcements of Harris, Warren, Rep. Tulsi Gabbard (D-Hawaii), and Sen. Cory Booker (D-N.J.), said she’s seen a concerted push over separate online message boards to build false or derogatory narratives.
Among the fringe platforms Storyful identified were 4Chan and 8Chan, where messages appeared calling on commenters to quietly wreak havoc against Warren on social media or in the comments sections under news stories.
“Point out that she used to be Republican but switched sides and is a spy for them now. Use this quote out of context: ‘I was a Republican because I thought that those were the people who best supported markets,’” wrote one poster on the 4Chan message board.
“We’re seeing a lot of that rhetoric for nearly every candidate that comes out,” Jones said. “There is a call to action on these fringe sites. The field is going to be so crowded that they say ‘OK: Operation Divide the Left.’”
An official with the Harris campaign said they suspect bad actors pushing misinformation and false narratives about the California Democrat are trying to divide African Americans or to get the media to pay outsized attention to criticism designed to foster divisions among the Democratic primary electorate.
Teddy Goff, who served as Obama for America’s digital director, broadly described the ongoing organized efforts as the work of “a hodgepodge. It’s a bit of an unholy alliance.”
“There are state supporters and funders of this stuff. Russia. North Korea is believed to be one, Iran is another,” he said. “In certain cases, it appears coordinated, but whether coordinated or not, there are clearly actors attempting to influence the primary by exacerbating divisions within the party, painting more moderate candidates as unpalatable to progressives and more progressive candidates as unpalatable to more mainstream Dems.”
A high-ranking official in the Sanders campaign expressed “serious concerns” about the impact of misinformation on social media, calling it “a type of political cyber warfare that’s clearly having an impact on the democratic process.” The official said the Sanders campaign views the activity it’s already seeing as involving actors that are both foreign and domestic.
Both Twitter and Facebook, which owns Instagram, have reported taking substantial measures since 2016 to identify and block foreign actors and others who violate platform rules.
While Twitter would not specifically respond to questions about the Guardians.ai findings, last year the company reported challenging millions of suspect accounts every month, including those exhibiting “spammy and automated behavior.” After attempts to authenticate the accounts through email or by phone, Twitter suspended 75 percent of the accounts it challenged from January to June 2018.
In January 2019, Twitter published an accounting of its efforts to combat foreign interference in political conversations on the platform. Earlier efforts included releasing data sets of potential foreign information operations that appeared on Twitter: 3,841 accounts affiliated with the Russia-based Internet Research Agency and 770 other accounts that potentially originated in Iran.
“Our investigations are global and ongoing, but the data sets we recently released are ones we’re able to reliably attribute and are disclosing now,” a Twitter spokesperson said in a statement to POLITICO. “We’ll share more information if and when it’s available.”
Facebook says it has 30,000 people working on safety and security and that it is increasingly blocking and removing fake accounts. The company also says it has brought an unprecedented level of transparency to political advertising on its platform.
At this early stage, the campaigns themselves appear ill-equipped to handle the online onslaught. Their digital operations are geared toward fundraising and organizing, and their social media arms are designed to communicate positive messages and information. While some have employed monitoring practices, defensive measures typically take a back seat — especially since so much remains unknown about the sources and the scale of the attacks.
One high-level operative for a top-tier 2020 candidate noted the monumental challenges facing individual campaigns — even the ones with the most sophisticated digital teams. The problem already appears much larger than the resources available to any candidate at the moment, the official said.
Alex Kellner, managing director with Bully Pulpit Interactive, the top digital firm for Hillary Clinton’s 2016 campaign, warns that campaigns that don’t have a serious infrastructure set up to combat misinformation and dictate their own online messaging will be the most vulnerable to attack in 2020.
“I think this is going to be a serious part of any successful campaign: monitoring this and working with the platforms to shut down bad behavior,” Kellner said.
Kellner said that even though platforms like Twitter and Facebook have ramped up internal efforts to weed out bad actors, the flow of fake news and misinformation attacks against 2020 candidates is already strong.
“All the infrastructure we’ve seen in 2016 and 2018 is already in full force. And in 2020 it’s only going to get worse,” Kellner said, pointing to negative memes attacking Warren on her claims of Native American heritage and memes surrounding Harris’ relationship with Brown.
The proliferation of fake news, rapidly changing techniques by malicious actors and an underprepared field of Democratic candidates could make for a volatile primary election season.