Facebook’s Crisis Management Algorithm Runs on Outrage
One year after the Cambridge Analytica scandal, Mark Zuckerberg says the company really cares. Then why is there an endless cycle of fury and apology?
The post went up early in 2018, in white text on one of the playful pink and purple backgrounds that Facebook Inc. began offering in 2016 to encourage its users to share more with one another. The sentiment about killing Muslims got 30 likes before someone else found it troubling enough to click the “give feedback” button instead. The whistleblower selected the option for “hate speech,” one of nine possible categories for objectionable content on Facebook.
For years nonprofits in Sri Lanka had warned that Facebook posts were playing a role in escalating ethnic tensions between the Sinhalese Buddhist majority and the Muslim minority, but the company had ignored them. It took six days for Facebook to respond to the hate speech report. “Thanks for the feedback,” the company told the whistleblower, who posted the response to Twitter. The content, Facebook continued, “doesn’t go against one of our specific Community Standards.”
The post stayed online, part of a wave of calls for violence against Muslims that flooded the network last year. False rumors claiming that Muslims were putting sterilization pills in Buddhists’ food circulated widely on Facebook. In late February 2018 a mob attacked a Muslim restaurant owner in Ampara, a small town in eastern Sri Lanka. He survived, but riots broke out in the midsize city of Kandy the following week, leaving two people dead before the government stepped in, taking measures that included ordering Facebook offline for three days.

The shutdown got the company’s attention. It appointed Jessica Leinwand, a lawyer who served in the Obama White House, to figure out what had gone wrong. Her conclusion: Facebook needed to rethink its permissive attitude toward misinformation. Before the riots in Sri Lanka, the company had tolerated fake news and misinformation as a matter of policy. “There are real concerns with a private company determining truth or falsity,” Leinwand says, summing up the thinking.
But as she began looking into what had happened in Sri Lanka, Leinwand realized the policy needed a caveat. Starting that summer, Facebook would remove certain posts in some high-risk countries, including Sri Lanka, but only if they were reported by local nonprofits and would lead to “imminent violence.” When Facebook saw a similar string of sterilization rumors in June, the new process seemed to work. That, says Leinwand, was “personally gratifying”—a sign that Facebook was capable of policing its platform.
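Leinwand’s new process, as described, amounts to a three-part test: the country must be designated high-risk, the report must come from a trusted local partner, and reviewers must judge that the post could lead to imminent violence. A minimal sketch of that logic, with invented country codes, partner IDs, and field names (this is an illustration, not Facebook’s actual code), might look like this:

```python
# Hypothetical sketch of the escalation rule described above -- not Facebook's
# real implementation. Country codes, partner IDs, and fields are invented.

HIGH_RISK_COUNTRIES = {"LK", "MM"}              # e.g. Sri Lanka, Myanmar (illustrative)
TRUSTED_LOCAL_PARTNERS = {"partner_lk_01"}      # invented ID for a local nonprofit

def should_remove(report):
    """Remove a reported post only if all three conditions in the policy hold:
    high-risk country, trusted local reporter, and a reviewer's judgment
    that the post could lead to imminent violence."""
    return (
        report["country"] in HIGH_RISK_COUNTRIES
        and report["reporter_id"] in TRUSTED_LOCAL_PARTNERS
        and report["imminent_violence"]          # human judgment, not automated
    )

# A report like the June sterilization rumors would pass all three checks.
print(should_remove({
    "country": "LK",
    "reporter_id": "partner_lk_01",
    "imminent_violence": True,
}))  # True
```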
But is it? It’s been almost exactly a year since news broke that Facebook had allowed the personal data of tens of millions of users to be shared with Cambridge Analytica, a consulting company affiliated with Donald Trump’s 2016 presidential campaign. That revelation sparked an investigation by the U.S. Justice Department into the company’s data-sharing practices, which has broadened to include a grand jury. Privacy breaches are hardly as serious as ethnic violence, but the ordeal did mark a palpable shift in public awareness about Facebook’s immense influence. Plus, it followed a familiar pattern: Facebook knew about the slip-up, ignored it for years, and, when exposed, tried to downplay it with a handy phrase that Chief Executive Officer Mark Zuckerberg repeated ad nauseam in his April congressional hearings: “We are taking a broader view of our responsibility.” He struck a similar note with a 3,000-word blog post in early March that promised the company would focus on private communications, attempting to solve Facebook’s trust problem while acknowledging that the company’s apps still contain “terrible things like child exploitation, terrorism, and extortion.”
If Facebook wants to stop those things, it will have to get a better handle on its 2.7 billion users, whose content powers its wildly profitable advertising engine. The company’s business depends on sifting through that content and showing users posts they’re apt to like, which has often had the side effect of amplifying fake news and extremism. Facebook made Leinwand and other executives available for interviews with Bloomberg Businessweek to argue that it’s making progress.
Unfortunately, the reporting system they described, which relies on low-wage human moderators and software, remains slow and under-resourced. Facebook could afford to pay its moderators more, hire more of them, or impose much stricter rules on what users can post—but any of those steps would hurt the company’s revenue and profits. Instead, it has adopted a reactive posture, attempting to make rules after problems have appeared. The rules are helping, but critics say Facebook needs to be much more proactive.
“The whole concept that you’re going to find things and fix them after they’ve gone into the system is flawed—it’s mathematically impossible,” says Roger McNamee, one of Facebook’s early investors and, now, its loudest critic. McNamee, who recently published a book titled Zucked, argues that because the company’s ability to offer personalized advertising is dependent on collecting and processing huge quantities of user data, it has a strong disincentive to limit questionable content. “The way they’re looking at this, it’s just to avoid fixing problems inherent with the business model,” he says.
Today, content on Facebook is governed by a 27-page document called the Community Standards. Posted publicly for the first time in 2018, the rules specify, for instance, that instructions for making explosives aren’t allowed unless they’re for scientific or educational purposes. Images of “visible anuses” and “fully nude closeups of buttocks,” likewise, are forbidden, unless they’re superimposed onto a public figure, in which case they’re permitted as commentary.
The standards can seem comically absurd in their specificity. But, Facebook executives say, they’re an earnest effort to systematically address the worst of the site in a way that’s scalable. This means rules that are general enough to apply anywhere in the world—and are clear enough that a low-paid worker in one of Facebook’s content-scanning hubs in the Philippines, Ireland, and elsewhere can decide within seconds what to do with a flagged post. The working conditions for the 15,000 employees and contractors who do this for Facebook have attracted controversy. In February the Verge reported that U.S. moderators make only $28,800 a year while being asked regularly to view images and videos that contain graphic violence, porn, and hate speech. Some suffer from post-traumatic stress disorder. Facebook responded that it’s conducting an audit of its contract-work providers and that it will keep in closer contact with them to uphold higher standards and pay a living wage.
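The rule-plus-exception structure behind those examples can be pictured as a small decision table, which is part of why a moderator can act in seconds. The snippet below is purely illustrative, using invented rule and post objects rather than anything from Facebook’s real tooling:

```python
# Toy illustration of the rule-plus-exception structure described above.
# Everything here is invented for the sake of the example.

RULES = [
    {
        "category": "explosives_instructions",
        "forbidden": True,
        "exceptions": {"scientific_purpose", "educational_purpose"},
    },
    {
        "category": "nude_closeup",
        "forbidden": True,
        "exceptions": {"superimposed_on_public_figure"},  # treated as commentary
    },
]

def decide(post):
    """Return 'remove' or 'keep' for a flagged post -- the kind of binary
    call a moderator is expected to make in seconds."""
    for rule in RULES:
        if post["category"] == rule["category"] and rule["forbidden"]:
            if post.get("context") in rule["exceptions"]:
                return "keep"
            return "remove"
    return "keep"

print(decide({"category": "explosives_instructions", "context": "educational_purpose"}))  # keep
print(decide({"category": "nude_closeup"}))  # remove
```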
Zuckerberg has said that artificial intelligence algorithms, which the company already uses to identify nudity and terrorist content, will eventually handle most of this sorting. But at the moment, even the most sophisticated AI software struggles in categories in which context matters. “Hate speech is one of those areas,” says Monika Bickert, Facebook’s head of global policy management, in a June 2018 interview at company headquarters. “So are bullying and harassment.”

On the day of the interview, Bickert was managing Facebook’s response to the previous day’s mass shooting at the Capital Gazette in Annapolis, Md. While the massacre was happening, Bickert instructed content reviewers to look out for posts praising the gunman and to block opportunists creating fake profiles in the names of the shooter or of the victims, five of whom died. Later her team took down the shooter’s profile and turned victims’ pages into what the company calls “memorialized accounts,” which are identical to regular Facebook pages but place the word “Remembering” above the deceased person’s name.
Crises such as this happen weekly. “It’s not just shootings,” Bickert says. “It might be that a plane has crashed, and we’re waiting to find out who was on the plane and whether it was a terror attack. There may be a protest, and people are alleged to have been injured.”
In December, after months of discussion, Facebook added new rules. #MeToo accusations are OK, as long as they don’t encourage retaliation. Viral challenges are also fine, as long as they don’t encourage bodily harm, which would seem to put condom snorting in a gray area. “None of these issues are black and white,” Bickert says.
In congressional testimony and elsewhere, Facebook has deployed a practiced set of responses to criticism about its content decisions. If interrogated about something on the site that was already forbidden by the Community Standards, executives will reassure the public that such content is “not allowed” or that there is “no place” for it. If there’s no rule yet, Facebook will usually explain that it is trying to fix the problem, was “too slow” to recognize it, and is taking responsibility. The company has said dozens of times that it was “too slow” to recognize Russia’s manipulation of the 2016 U.S. presidential election, Myanmar’s genocide, and ethnic violence in Sri Lanka. But “too slow” could be fairly interpreted as a euphemism for deliberately ignoring a problem until someone important complains.
“They don’t want to be held liable for anything,” says Eileen Carey, a tech executive and activist. Since 2013 she’s kept records of drug dealers posting pictures of pills on the web, some of them captioned as OxyContin or Vicodin. Many of these posts include a phone number or an address where interested users can coordinate a handoff or delivery by mail. They are, in effect, classified ads for illegal opioids.

Carey’s obsession started while she worked for a consulting firm that was helping Purdue Pharma get counterfeit pills removed from the internet. Most tech companies—including Alibaba, Craigslist, and eBay—were quick to agree to take down these images when Carey alerted them. Facebook and Facebook-owned Instagram were the exceptions, she says.
Carey, who like Zuckerberg is from Dobbs Ferry, N.Y., and lives in the Bay Area, would sometimes end up at parties with Facebook executives, where she’d kill the mood by complaining about the issue. “I started sucking at parties once I started working on the whole getting-rid-of-fake-drugs-on-the-internet thing,” she says. Ever since, Carey has spent a few minutes most days searching for (and reporting) drugs for sale on Facebook and Instagram. Usually she got a dismissive automated response, she says. Sometimes she got no response at all. Meanwhile, technology companies were advocating at conferences and in research reports for a crackdown on drug sales on the anonymous dark web. In reality, Carey came to believe, most illicit purchases occur on the regular web, on social media and other online marketplaces. “People were literally dying, and Facebook didn’t care,” she says.
In 2018, Carey began tweeting her complaints at journalists and Facebook employees. In April, Guy Rosen, a Facebook vice president who was training the company’s AI software, sent her a message, asking for more examples of the kind of content she was talking about. “Do a search for #fentanyl as well as #oxys on IG [Instagram] and you’ll see lots of pics of pills, those accounts are usually drug dealers,” she wrote to Rosen. She sent over some Instagram posts of drugs for sale. “I reported these earlier and they are still there in the #opiates search—there are 43,000 results.”
“Yikes,” Rosen wrote back. “This is SUPER helpful.” Facebook finally removed the searchable hashtags from Instagram in April—a week after being criticized by Food and Drug Administration commissioner Scott Gottlieb and just a day before Zuckerberg testified before Congress.
Since then, Carey has kept her eye on news reports from Kentucky, Ohio, and West Virginia, where deaths from opioid overdoses have declined this year. Some articles speculate that the reason may be a rise in community treatment centers or mental health resources, but Carey has a different theory: “The only thing that really changed was the hashtags.”
Even so, Facebook’s drug problem remains. In September the Washington Post described Instagram as “a sizable open marketplace for advertising illegal drugs.” In response, Bickert published a blog post explaining that Facebook blocks hundreds of hashtags and drug-related posts and has been working on computer imaging technology to better detect posts about drug sales. She included a predictable line: “There is no place for this on our services.”
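The hashtag fix Carey pushed for is, at bottom, a blocklist applied to search. The snippet below is a generic illustration of that idea, with invented data; it does not reflect Instagram’s actual systems:

```python
# Generic illustration of hashtag blocking -- invented data, not Instagram's code.

BLOCKED_HASHTAGS = {"#fentanyl", "#oxys", "#opiates"}  # examples cited in the article

def searchable(hashtag, index):
    """Return posts for a hashtag search, or nothing if the tag is blocked."""
    if hashtag.lower() in BLOCKED_HASHTAGS:
        return []      # posts may still carry the tag, but search surfaces nothing
    return index.get(hashtag.lower(), [])

index = {"#opiates": ["post_1", "post_2"], "#hiking": ["post_3"]}
print(searchable("#opiates", index))  # []
print(searchable("#hiking", index))   # ['post_3']
```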
Zuckerberg, as CEO, chairman, founder, and controlling shareholder, has frequently faced questions about whether he deserves near-absolute power over the company’s products. He’s strongly resisted giving up any of that power, except in the realm of content moderation. “I’ve increasingly come to believe that Facebook should not make so many important decisions about free expression and safety on our own,” he wrote in November. The company, he detailed, would establish “an independent body” to make a final call on disputes over what should stay up on Facebook, in a way that will be “transparent and binding.” The proposed group will include 40 “experts in diverse disciplines,” Zuckerberg wrote. “Just as our board of directors is accountable to our shareholders, this body would be focused only on our community.”
Another way to cede responsibility is to encourage users to try the company’s new encrypted messaging services, which are designed so that not even Facebook can see what people are saying to one another. “I believe a privacy-focused communications platform will become even more important than today’s open platforms,” Zuckerberg wrote in his March 6 blog post. Some read it as a victory for Facebook’s critics after the Cambridge Analytica scandal, but even Zuckerberg acknowledges a trade-off: The shift could make it easier for terrorists, drug pushers, and propagandists to run wild. On Facebook’s WhatsApp service, which is already encrypted, misinformation in India last year led to panic in villages over suspected child abductors, prompting mobs to stone or lynch uninvited visitors. WhatsApp couldn’t remove the content; it could only reduce the number of people a message could be shared with. Since that move, there’s been another “WhatsApp lynching,” according to the BBC.
In the meantime, Facebook executives have been trying to reassure the world. Bickert traveled to Sri Lanka in September, meeting with 60 civil society groups and hearing their concerns about, among other things, fake accounts. There was plenty to talk about. “Their Community Standards are very specific and have, for example, things like how immediate a threat content is,” says Sanjana Hattotuwa, a senior researcher with the Centre for Policy Alternatives in Sri Lanka. But, he says, “the whole point about some of this content is to radicalize over a longer period of time.”
Recently, Hattotuwa’s group warned in a blog post that Facebook was too close to the country’s government, exchanging gifts and currying favor with officials who’ve also been accused of spreading misinformation for political purposes. The post cited an official’s tweet showing Ankhi Das, Facebook’s public policy director for the region, presenting a large painting by a local artist to former President Mahinda Rajapaksa, whose supporters have been blamed by some for orchestrating anti-Muslim riots. Facebook says the gift was “communal art” with no cash value and that it gave paintings to other Sri Lankan leaders.
The overarching concern from Hattotuwa, Carey, and critics around the world is that Facebook is more interested in fixing the perception of its problems than the problems themselves. “Alex,” who asked that his real name not be used out of fear of retribution, was recently offered a job at Facebook in London focusing on the company’s policy programs. He’d worked on political campaigns and at a big tech company, and in one of about a dozen meetings he spoke to Victoria Grand, Facebook’s global head of policy programs.
Alex recalls asking her about the true nature of the role, in light of Facebook’s scandals: “Do you need someone to effect lasting change or to change the channel?” He says Grand told him that no candidate had ever asked that question before. She paused before continuing.
“Look, I think everybody wants to be idealistic and promise the former,” he remembers her saying. “But no one is going to listen to all the good stuff we do if we’re just stuck responding to the negative.” Facebook says Grand “remembers a very different conversation.” Alex declined the job offer.