How easy is it to change people’s votes in an election? The answer, a growing number of studies conclude, is that most forms of political persuasion seem to have little effect at all.
This conclusion may sound jarring, especially when people constantly fret over the effects of the false news articles that flooded Facebook and other online outlets during the 2016 election. Observers speculated that these so-called fake news articles swung the election to Donald J. Trump. Similar suggestions of large persuasion effects, supposedly pushing Mr. Trump to victory, have been made about online advertising from the firm Cambridge Analytica and content promoted by Russian bots.
Much remains to be learned about the effects of these types of online activities, but expectations about their persuasive power should be kept in check. Previous studies have found, for instance, that the effects of even television advertising (arguably a higher-impact medium) are very small. According to one credible estimate, the net effect of exposure to an additional ad shifts the partisan vote of approximately two people out of 10,000.
In fact, a recent meta-analysis of many forms of campaign persuasion, including in-person canvassing and mail, finds that their average effect in general elections is zero.
Field experiments testing the effects of online ads on political candidates and issues have also found null effects. These findings should not be surprising, because persuasion is very difficult. Voters are influenced most heavily by fundamental factors, such as which party they typically support and their views of the state of the economy. “Fake news” and bots are likely to have vastly smaller effects, especially given how polarized our politics have become.
Here’s what you should look for in evaluating claims about vast persuasion effects from dubious online content:
How many people actually saw the questionable material. Many alarming statistics have been produced since the election about how many times “fake news” was shared on Facebook or how many times Russian bots retweeted content on Twitter. These statistics obscure the fact that the content being shared may not reach many Americans (most people are not on Twitter and consume relatively little political news) or even many humans (many bot followers may themselves be bots).
Whether the people being exposed are persuadable. Dubious political content online is disproportionately likely to reach heavy news consumers who already have strong opinions. For instance, a study I conducted with Andrew Guess of Princeton and Jason Reifler of the University of Exeter in Britain showed that exposure to fake news websites before the 2016 election was heavily concentrated among the 10 percent of Americans with the most conservative information diets — not exactly swing voters.
The proportion of the news people saw that was bogus. The total number of shares or likes that fake news and bots attract can sound enormous until you consider how much information circulates online. Twitter, for instance, reported that Russian bots tweeted 2.1 million times before the election — certainly a worrisome number. But those tweets represented only 1 percent of all election-related tweets and 0.5 percent of views of election-related tweets.
Similarly, my study with Mr. Guess and Mr. Reifler found that the average number of articles on fake news websites visited by Trump supporters was 13.1, but only 40 percent of his supporters visited such websites at all, and those articles represented only about 6 percent of the news-related pages they visited.
None of these findings indicate that fake news and bots aren’t worrisome signs for American democracy. They can mislead and polarize citizens, undermine trust in the media, and distort the content of public debate. But those who want to combat online misinformation should take steps based on evidence and data, not hype or speculation.