Could Facebook Be Tried for Human-Rights Abuses?

The legal path is murky.

It’s almost quaint to think that just five years ago, Mark Zuckerberg cheerfully took credit for major pro-democracy movements during Facebook’s IPO launch. Contradicting his previous dismissal of the connection between social media and the Arab Spring, Zuckerberg’s letter to investors spoke not just about the platform’s business potential but also about its capacity to increase “direct empowerment of people, more accountability for officials, and better solutions to some of the biggest problems of our time.” The Facebook of 2012 promised a rise of new leaders “who are pro-internet and fight for the rights of their people, including the right to share what they want and the right to access all information that people want to share with them.” Technically, that promise came true, though probably not in the way Zuckerberg imagined.

Today, the company has to reckon with its role in passively enabling human-rights abuses. While concerns about propaganda and misinformation on the platform reached a fever pitch in places like the United States in the past year, its presence in Myanmar has become the subject of global attention. During the past few months, the company was accused of censoring activists and journalists documenting and posting about what the State Department has called the ethnic cleansing of the country’s Rohingya minority. Because misinformation and propaganda against the Rohingya apparently avoided the community-standards scrutiny afforded activist speech, and because of the News Feed’s tendency to promote already-popular content, posts spreading misinformation intended to incite violence have easily gone viral. Experts describe Facebook as the de facto internet in the country, which gives its every action and inaction on content even greater influence over politics and public knowledge.

Platform entanglement in human-rights abuses isn’t unique to Facebook. Earlier this year, YouTube deployed a content-moderation algorithm intended to remove terrorist content, inadvertently deleting archives of footage that Syrian activists had been collecting as war-crimes evidence. Similar criticisms have been leveled at other American platforms operating abroad, including Twitter’s compliance with Turkish government censorship requests and comparable concessions by platforms operating in China.

The 2016 election has given platform regulation new traction in American policy circles, bringing debate in the United States closer to Europe’s. But beyond Western electoral politics, there remain hard legal questions with far more human lives at stake. While the violence in Myanmar predates Facebook’s presence in the country and absolutely can’t be fully laid at the company’s feet, the platform is a central source of online information there, and propaganda legitimizing crimes against humanity can have massive reach and influence. Dissidents posting poems about the crisis have been declared in violation of vaguely defined community standards (and, while those decisions have sometimes been reversed, that usually happens only after media attention, which not every incident is likely to receive). Hate speech on the platform in Myanmar has been covered and documented since 2014, which is to say Facebook has been aware of the issue for quite some time.

When asked what resources the company has allocated to address misinformation and hate speech, Facebook spokesperson Ruchika Budhraja responded via email that “we have steadily increased our investment over the years in the resources and teams that assist in ensuring our services are used by people in Myanmar to connect in a meaningful and safe way.” Projects like print copies of its community standards, translated into Burmese and illustrated as comics in 2015, partnerships with civil-society groups, and a Facebook safety page for Myanmar with PDFs of the illustrated community standards and safety guide are among the digital-literacy campaigns that, Budhraja wrote, “have reached millions of people, and we listen to local community groups so that we can continue to refine the way in which we implement and promote awareness of our policies to help keep our community safe.”

This is encouraging, though Facebook still has no full-time staff on the ground in Myanmar. And regardless of past and ongoing good-faith efforts from Facebook to address what’s happening now, the damage has been, and continues to be, done.

While advocates and journalists made compelling moral and ethical arguments for Facebook to take action this past fall, the question of legal liability remained conspicuously absent from the conversation. There’s no agreed-upon point at which a platform’s automated actions at scale rise to a potentially illegal level of complicity in crimes against humanity, and seemingly little agreement about what is to be done once that point is reached.

That’s partly because it’s extremely unlikely that Facebook (or any of the other big platforms) will ever be found legally liable for, or legally compelled to account for, playing a significant role in human-rights abuses around the world. Even posing the question of a platform’s legal culpability for human-rights abuses seems a quixotic pursuit, based on the reaction when I’ve brought it up with attorneys and human-rights advocates. The apparent absurdity of pursuing legal accountability seems to have less to do with the innocence or guilt of platforms and more to do with the realities of human-rights law, which has a poor track record with regard to corporations in general and which is uniquely challenged and complicated by tech platforms in particular.

To assume one might hold a company legally responsible for human-rights abuses also assumes there’s a jurisdiction that can hear the case. The International Criminal Court isn’t really set up to try companies—it prosecutes individuals, and generally only for crimes committed on the territory of, or by nationals of, countries that are parties to the Rome Statute (the treaty that established the ICC). The United States signed but never ratified the treaty and formally withdrew its signature in 2002. The ICC technically can take up crimes committed on a member state’s territory even when the accused’s country isn’t a member (as ICC prosecutor Fatou Bensouda argued this year when requesting to investigate alleged United States crimes committed in member state Afghanistan), but Myanmar isn’t a member.

Filing a case in the country where human-rights abuses take place is difficult, because an oppressive regime’s court system is unlikely to actually offer victims a fair trial. In the United States, noncitizens can file civil suits against American companies that have violated international treaties under the somewhat obscure 1789 Alien Tort Statute. But case law under the statute isn’t exactly in victims’ favor. Companies tend to have more success, particularly in light of the Supreme Court’s 2013 decision in Kiobel v. Royal Dutch Petroleum, which held that the statute generally doesn’t apply to conduct outside the United States (though, seeing as Facebook has no full-time employees in Myanmar and the News Feed is built by engineers in the United States, there might still be an argument here). Given the overhead and exhaustion of legal battles with well-lawyered companies, and given companies’ preference to make bad PR quietly disappear, at best victims might settle out of court.

Assuming one settles the thorny jurisdiction question, it’s difficult to identify what crime Facebook has committed. Lack of foresight isn’t actually a crime against humanity. Furthermore, “the persecution and discrimination against the Rohingya has been going on for quite some time, and it would be happening if Facebook was not there at all,” pointed out Cynthia Wong, a senior researcher on internet and human rights at Human Rights Watch.

Then again, Wong noted, “I do think in general Facebook has not fully grappled with the harms that its platform can contribute to” when it serves as a population’s de facto internet and, therefore, its central source of information. Acting as foundational communications infrastructure, of course, doesn’t make the company responsible for the content on the site, hateful as it may be. The operator of German printing presses wasn’t sentenced to execution at Nuremberg; the editor of Der Stuermer was. And Section 230 of the Communications Decency Act protects platforms from liability for content posted by their users.

But comparisons of Facebook to the printing press start to seem flimsy when considering the power and influence of the News Feed. “When they start taking that step of targeting information, I think there’s an argument to potentially be made that they’re no longer just like any other publishing outlet but that they’re actually actively participating in who sees what and with what degree of impact,” said Alexa Koenig, the director of UC Berkeley’s Human Rights Center.

Facebook doesn’t make the content and isn’t liable for it beyond its own community standards, but it does make, manage, and moderate the systems that move certain content to the top of a user’s News Feed, in the service of keeping more users engaged with content on the platform and thus placing more eyeballs on ads. Algorithmic content curation and targeting, this argument goes, is still an act of curation and targeting. In this case, Facebook’s curation (or lack thereof) gave a greater platform, and greater credibility, to misinformation and propaganda advocating ethnic cleansing. And in the case of graphic content documenting atrocities against the Rohingya that was taken down by moderators, the decisions about what not to publish, and how to interpret takedown policies, also go beyond passively hosting content.

But is algorithmic sorting or content moderation a violation of any particular right? Probably not. “You could say that they’re violating the right to truth or information, but those rights are really limited in scope,” explained Steven Ratner, a law professor at the University of Michigan. “There is no explicit right to accurate news in any [international human-rights] treaties.”

One term frequently invoked when platforms passively facilitate abuse or violence is Dictionary.com’s word of the year, “complicit”—which can be invoked in law, but not with the gossamer flourish of moral appeal. “Complicity” as a legal concept in international law tends to be used to hold accountable bureaucrats or underlings who were “just following orders” in a genocidal regime, and it is far harder to apply to companies. It requires proof that the accused knowingly aided and abetted (usually state, but possibly non-state) abuses, and in so doing either actively sought the abusive outcome or materially benefited from it.

This is part of the argument in recent lawsuits brought against platforms by victims of terrorist attacks and their surviving families, which argue that the platforms aren’t liable for the existence of terrorist content on their sites, but that they are liable for profiting from the promotion of that content. Protections against content liability, the argument goes, don’t apply to automated advertisements.

Material gain from rights abuses was also at stake in Xiaoning et al. v. Yahoo, the lawsuit filed in 2007 after Yahoo’s cooperation with the Chinese government’s requests for user data led to the imprisonment and abuse of two dissidents. In that case, the evidence of intent and benefit was much more cut-and-dried than trying to argue the subtleties of what is and isn’t covered by Section 230. The case resulted in a settlement and the creation of a fund to support Chinese activists—a fund that recently became the subject of a new lawsuit.

Even hard proof of complicity and material gain doesn’t always translate into a successful suit, depending on the time elapsed since the abuses and the jurisdiction of the lawsuit. To this day, IBM has never been successfully sued for its role in the Holocaust, partly due to jurisdictional disputes and statutes of limitations. During World War II, several of IBM's European subsidiaries supplied the Nazi regime with punch-card technology that was used to facilitate the Final Solution in all its bureaucratic horror, from tracking the trains used to transport Jews to concentration camps to providing the rudimentary foundation for the infamous numbered tattoos at Auschwitz. (IBM has never disavowed these facts; it only disputes the claim that its New York headquarters had full knowledge of its subsidiaries’ actions.)

There are other legal concepts that might better capture Facebook’s impact and potential liability. Koenig brought up possibilities in American civil tort law. “I could see lawyers arguing that there’s been negligence due to these companies creating new vulnerabilities that give rise to a heightened duty of care.” Duty of care is a concept more pedantic than its poetic phrasing suggests—it’s a term for a person or company’s legal obligation to act reasonably to prevent foreseeable harms. In addition to negligence, Koenig noted that “perhaps most interesting is the possible application of a recklessness standard—that the companies knew or should have known that they were creating dependencies that led to new vulnerabilities that allowed criminal activities and other harms to occur.” The legal questions here would include whether or how Facebook understands itself as a form of core infrastructure in nation-states, what responsibilities infrastructural companies have to prevent or foresee harms, whether it can be reasonably argued that the infrastructure itself facilitates or enables harms, and whether Facebook’s campaigns to promote digital literacy and raise awareness of its community standards amount to a sufficient effort to mitigate those harms.

Assuming the laws currently on the books aren’t being used, or can’t be used, to hold companies accountable, what potential laws could? Legislative approaches in the United States and Germany focus on platform dynamics specific to those countries—campaign-advertising disclosure in the former, heavy fines for failing to take down hate speech in the latter.

As the director of the Dangerous Speech Project, American University adjunct associate professor Susan Benesch understands the common reflex to use law to regulate online content—or to force platforms to do it—but she believes there are a lot of good reasons for caution. For one thing, Benesch argued, laws against specific kinds of speech tend to be used more often against the marginalized than the powerful. For another, enforcement of censorship laws tends to be overzealous, especially when censorship is outsourced by governments to private actors.

Benesch doubts that Facebook or any other company has software sophisticated enough to automate the kind of nuanced comprehension needed to understand the harms of specific kinds of speech in a variety of social contexts; a company compelled to automate moderation anyway would be likely to take down too much content. “Laws that compel people to do things exert enormous pressure toward overregulation. The software just isn’t ready to effectively discern at scale.”

Cynthia Wong at Human Rights Watch also expressed skepticism. “The challenge is that in many countries, governments themselves are part of the problem [of hate speech]. So the government is using Facebook to spread misinformation and hate speech against minorities. It’s not something you can lay completely at the feet of Facebook—it’s not their job to fix what governments are doing.”

Facebook’s actual job is serving its shareholders, which usually means getting more people on Facebook and selling more advertisements. Like most platforms, it prefers self-regulation and making promises to be a better actor over government regulation that might undermine that primary mission. But Benesch also pointed out that Facebook’s public-facing community standards for content moderation are pretty vague (“just like the Constitution,” she cheerfully noted) and its internal guidelines for content moderation are generally opaque (or were, until someone leaked them to ProPublica). It’s the vagueness of those policies that leads moderators to flag posts documenting Burmese military actions while misinformation continues to be posted by the ultranationalist Buddhist monk Ashin Wirathu, who was banned from publicly preaching in Myanmar because the vitriolic nature of his speech was fueling ethnic cleansing.

To be clear, it’s not as though anyone has publicly stated they want to sue Facebook over its actions or inactions in Myanmar—as reporting from this past fall indicated, most activists and Muslims in the country are more concerned with the state enacting or passively permitting violence than with the platform, though there’s definitely a desire to see Facebook do more and be more responsive and attentive. The reason to seek out a legal context for the company’s actions is that, at the end of the day, Facebook’s track record of actually being shamed into action by moral outcry is pretty weak, and in its idealized form law might be where such outcry carries regulatory or financial consequences that could actually force the company to act. The reason to follow these legal dead ends to their utmost conclusion is to recognize that the formal and informal tools available (and the ones being proposed) are pretty ill-equipped for the actual task at hand.

To leave the power to regulate speech in the hands of a self-regulating Facebook has proven untenable for preventing harm; to re-situate that power with government is an easy recipe for perpetuating it. In both cases, power and agency remain out of the hands of the citizens directly affected by these systems. And while Facebook promotes its work with civil society and governments to improve digital literacy and content moderation in countries around the world, these efforts are cold comfort to people who have already lost their homes, families, and livelihoods—losses that, again, are not directly Facebook’s doing but a reality the company is implicated in. Piecemeal additions of flagged terms and the hiring of more moderators to make dire judgment calls are better than nothing, but they are dwarfed by the scale and complexity of the problem.

Facebook acting as an unwitting propaganda engine for ethnic cleansing (or, at best, knowingly trying, but still failing, to avoid being one) is perhaps the worst-case scenario of a larger systemic problem: the ease with which platforms can be manipulated in the service of hate speech and misinformation. How many more tragedies like it can societies afford while platforms work toward the promise of nuanced content moderation operating at the same scale as content propagation?

The option of Facebook simply ceasing operations in areas where its platform is manipulated to enable human-rights abuses also seems an unsatisfying outcome. As Jonathan Zittrain of Harvard’s Berkman Klein Center for Internet & Society put it, “[Facebook] abdicating feels weird because quite often (and this is sometimes reflected in the law) if you’re in a position to do something, to alleviate a great harm—and you’re profiting, you’re not a bystander, you’re implicated or involved—we tend to think you have a responsibility to do something.” But no one can really agree about the “something” to be done. Zittrain compared it to the automotive industry: Instead of asking for a recall on defective cars that are known to cause crashes, “if a company could repair the car as it’s crashing—that’s what Facebook is being asked to do. Though tweaking speech on the fly, of course, has implications beyond improving product safety.”

To its credit, Facebook does seem to understand that it has a responsibility to address manipulation on the platform, and to its users in Myanmar, regardless of the fact that tensions between Rohingya Muslims and Buddhist nationalists existed long before the company entered the market. “We are humbled by the many ways we see people use Facebook in Myanmar. Maintaining a safe community for people to connect and share on Facebook is absolutely critical to us,” Budhraja responded when asked whether Facebook believes it has a responsibility to address manipulation of the platform in settings where state-supported human-rights abuses are taking place.

But when asked how the company frames that responsibility—as a moral, ethical, legal, or business concern—Facebook had no on-the-record response. The company’s public language of “open platforms” and “social infrastructure” to support sharing makes for great Zuckerberg chestnuts, but like many other major tech platforms, the company doesn’t seem capable of openly reckoning with or articulating what it means to be a powerful political actor in a major conflict—which might have something to do with the fact that the only real consequence it faces is bad PR, not loss of market share or legal liability.

The incoherence of platforms’ response to their very real public harms and the lack of imagination in accountability mechanisms might explain the doomed appeal of crafting a single coherent legal charge, something piercing and fixed like a diamond wrested from coal.

The pursuit of a legal frame speaks to a naïve hope (against all rational understanding of humanity’s irrational tendencies, against the absurdity of law itself) that if we can just construct a shared story of What Happened and Who’s Responsible, perhaps we will find words that give clarity to the pain of unspeakable things and perhaps we’ll stop those things from happening again. Of course, clarity isn’t necessarily justice either—the Kuala Lumpur War Crimes Tribunal may have convicted Tony Blair and George W. Bush of war crimes, but those convictions might be better understood as a critique of a human-rights law regime largely constructed by Western powers that rarely, if ever, are held to account for their own crimes. Which maybe points to the further absurdity of asking how to hold Facebook accountable for negligence or complicity in its foreign markets: In a time when the United States can barely come to terms with its own foundational legacy of genocides and crimes against humanity, how can anyone even begin to pursue accountability or reparations for the passive programmatic violence enabled by a single company operating abroad?

Yet to simply point at the unintentional automated wreckage or demand piecemeal repairs to the monstrous machinery of platforms feels like giving in to the fatigue and sorrow of a vacuous world in which everyone is sorry and no one is responsible. It is exhausting to live in a time when power is not only something companies abuse but a force inadvertently unleashed by well-intentioned engineering teams, as pure and innocent as Pandora faced with an irresistible box of technical puzzles or (depending on the particular hubris of the company in question) as benevolent as Pandora’s doomed brother-in-law Prometheus, simply trying to bring lowly mortals treasures that they were wholly unprepared to wield. What laws can punish the innovation—nay, the generosity—of endowing the world with such accursed gifts? So we throw up our hands and post our discontent, we mourn, we attend to the next crisis, and the next, and the next, and the next. We muddle through knowing that the gods of Silicon Valley are doing their best to tame monsters of their own conjuring, but can neither contain nor answer for continued collateral damage.

But at the end of the day, the corporate campuses of Menlo Park and Mountain View hold neither gods nor monsters. They hold merely, mostly, men—men who, for now, primarily answer to a court of public opinion for the violence and horror their platforms repeatedly enable. Men who will wax poetic about “community” and “social infrastructure” while essentially building a plausibly deniable advertising engine. Men who are, as they are in so many other sectors of public life right now, due for a real reckoning with real consequences.

Ingrid Burrington is a writer based in New York. Her work has appeared in Quartz, The Nation, and the San Francisco Art Quarterly.