Facebook and the Folly of Self-Regulation


My late colleague, Neil Postman, used to ask about any new proposal or technology, “What problem does it propose to solve?”

When it comes to Facebook, that problem was maintaining relationships over vast time and space. And the company has solved it, spectacularly. Along the way, as Postman would have predicted, it created many more problems.

Last week, Facebook revealed the leaders and first 20 members of its new review board. They are an august collection of some of the sharpest minds who have considered questions of free expression, human rights, and legal processes.

They represent a stratum of cosmopolitan intelligentsia quite well, while appearing to generate some semblance of global diversity. These distinguished scholars, lawyers, and activists are charged with generating high-minded deliberation about what is fit and proper for Facebook to host. It’s a good look for Facebook—as long as no one looks too closely.

What problems does the new Facebook review board propose to solve?

In an op-ed in The New York Times, the board’s new leadership declared: “The oversight board will focus on the most challenging content issues for Facebook, including in areas such as hate speech, harassment, and protecting people’s safety and privacy. It will make final and binding decisions on whether specific content should be allowed or removed from Facebook and Instagram (which Facebook owns).”

Only in the narrowest and most trivial of ways does this board have any such power. The new Facebook review board will have no influence over anything that really matters in the world.

It will hear only individual appeals about specific content that the company has removed from the service—and only a fraction of those appeals. The board can’t say anything about the toxic content that Facebook allows and promotes on the site. It will have no authority over advertising or the massive surveillance that makes Facebook ads so valuable. It won’t curb disinformation campaigns or dangerous conspiracies. It has no influence on the sorts of harassment that regularly occur on Facebook or (Facebook-owned) WhatsApp. It won’t dictate policy for Facebook Groups, where much of the most dangerous content thrives. And most importantly, the board will have no say over how the algorithms work and thus what gets amplified or muffled by the real power of Facebook.

This board has been hailed as a grand experiment in creative corporate governance. St. John’s University law professor Kate Klonick, the scholar most familiar with the process that generated this board, said, “This is the first time a private transnational company has voluntarily assigned a part of its policies to an external body like this.”

That’s not exactly the case. Industry groups have long practiced such self-regulation through outside bodies, with infamously mixed results. But there is no industry group to set standards and rules for Facebook. One-third of humanity uses the platform regularly. No other company has ever come close to having that level of power and influence. Facebook is an industry—and thus an industry group—unto itself. What is unprecedented, though, is that Facebook ultimately controls the board, not the other way around.

We have seen this movie before. In the 1930s the Motion Picture Association of America, under the leadership of former US postmaster general Will Hays, instituted a strict code that prohibited major Hollywood studios from showing, among other things, “dances which emphasize indecent movements.” The code also ensured that “the use of the [US] flag shall be consistently respected.” By the 1960s, American cultural mores had broadened, and directors demanded more freedom to display sex and violence. So the MPAA abandoned the Hays code and adopted the ratings system familiar to American moviegoers (G, PG, PG-13, R, NC-17).

One reason the MPAA moved from strict prohibitions to consumer warnings was that American courts had expanded First Amendment protection for films, limiting how local governments could censor them. But all along, the MPAA practiced an explicit form of self-regulation, using a cartel that represented the interests of the most powerful studios to police behavior and represent the industry as a whole to regulators and the public.

No one can look at the history of American film and seriously argue that either method of self-regulation really served the public. Standards have been sloppily and inconsistently enforced. Through both the Hays code and the rating system, the MPAA limited artistic expression and the representation of lesbian, gay, and transgender issues and stories. But it sure helped Hollywood by keeping regulators at bay.

Relevant to the Facebook comparison, the MPAA applies American standards of decency to set its ratings, while the motion picture industry is a transnational power. Studios are much more sensitive to the demands of the government of the People’s Republic of China than they are to the US Senate. The same can be said of Facebook: Using American diction about “free expression” and American ways of thinking to influence a global company is folly. It’s one of the core errors that Facebook made internally years ago.

Many industries and professional associations have used cartel power to self-regulate, or at least create the appearance of doing so. The American Bar Association grants accreditation to law schools and thus dictates the content and quality of legal education. It also establishes an ethical code for practicing lawyers. This is substantial power beyond the reach of the state.

But, as we have seen in the global mining and textile industries, declarations of labor safety and wage standards don’t mean much in practice. Self-regulation is an excellent way to appear to promote particular values and keep scrutiny and regulation to a minimum.

When self-regulation succeeds at improving conditions for consumers, citizens, or workers, it does so by establishing deliberative bodies that can act swiftly and firmly, and generate clear, enforceable codes of conduct. If one movie studio starts dodging the ratings process, the MPAA and its other members can pressure theaters and other delivery channels to stop showing that studio’s films. The MPAA can also expel a studio, depriving it of the political capital generated by the association’s decades of campaign contributions and lobbying.

The Facebook board has no such power. It can’t generate a general code of conduct on its own, or consider worst-case scenarios to advise the company on how to minimize the risk of harm. That would mean acting like a real advisory board. This one is neutered from the start because someone had the stupid idea that it should perform a quasi-judicial role, examining cases one by one.

We know the process will be slow and plodding. Faux-judicial processes might seem deliberative, but they are narrow by design. The core attribute of the common law is conservatism. Nothing can change quickly. Law is set by courts through adherence to previous decisions. Tradition and predictability are paramount values. So is stability for stability’s sake.

But on Facebook, as in global and ethnic conflict, the environment is tumultuous and changing all the time. Calls for mass violence spring up, seemingly out of nowhere. They take new forms as cultures and conditions shift. Facebook moves fast and breaks things like democracy. This review board is designed to move slowly and preserve things like Facebook.

This review board will provide a creaking, idealistic, simplistic solution to a trivial problem. The stuff that Facebook deletes creates an inconvenience to some people. Facebook makes a lot of mistakes, and dealing with the Facebook bureaucracy is next to impossible. But Facebook is not the whole Internet, let alone the whole information ecosystem. And Facebook is not the only way people communicate and learn things (yet).

The most notable anecdote that inspired the idea for this board involved the 1972 photograph of 9-year-old Kim Phúc running away from a US napalm attack in Vietnam. When, in 2016, the Norwegian newspaper Aftenposten included the image in a story, Facebook asked the newspaper to remove or pixelate the image because it violated the site’s general rule against nudity. After much uproar, Facebook restored the image. So, ultimately, the controversy did not matter. Problem solved. And even without Facebook, there are hundreds of sources for the same image and deep accounts of its historical significance. Since then, Facebook has tried to be both more aggressive in its content-removal practices and more thoughtful about the standards it uses. The review board is a high-profile extension of that effort.

The Boolean question of whether, say, a photograph that someone posted remains “on Facebook” is trivial. That question is a vestige of an 18th-century model of “free speech,” and it ignores differences of power and how speech works in the real world. It was a bad model for assessing the health of communication more than 200 years ago. It’s absurd now, in the age of opaque algorithms.

The initial review board includes no one with expertise in confronting the algorithmic amplification of propaganda, disinformation, or misinformation. It has no anthropologists or linguists. Of the 20 members, only one, Nicolas Suzor of Queensland University of Technology in Australia, is an internationally regarded scholarly expert on social media. In other words, it was established and appointed to favor one and only one value: free expression. As important as this value is, the duty of protecting both Facebook users and the company itself demands attention to competing values such as safety and dignity.

This board is also stacked with a disproportionate number of Americans who tend to view these issues through American legal history and conflicts. The original 20 includes five Americans, none of whom have any deep knowledge of how social media operate around the world.

In contrast, the board has only one member from India—the country with more Facebook users than any other. India is home to more than 22 major languages and 700 dialects. The majority-Hindu nation has more Muslim citizens than any other country except Indonesia, along with millions of Buddhists, Christians, Jews, and Baha’is. Facebook and WhatsApp have been deployed by violent Hindu nationalists (aligned closely with the ruling BJP of Prime Minister Narendra Modi, the most popular politician on Facebook) to terrorize Muslims, Christians, journalists, scholars, and anyone who criticizes the central government’s efforts to make India a brutal, nationalistic theocracy.

Is this board prepared to consider the breadth and depth of the problems that Facebook amplifies in India, let alone in Pakistan, Sri Lanka, Bangladesh, or Myanmar? The lone board member from India, Sudhir Krishnaswamy, is an esteemed legal scholar and civil rights advocate. But how many of those 22 languages does he know? Would he be able to parse the linguistic and cultural nuance of an ethnic slur expressed in Marathi, the language of 83 million people in the state of Maharashtra; or Sinhalese, the major language of 17 million in the Republic of Sri Lanka?

Given that there are almost 300 million regular Facebook users in a country with more than 1.3 billion people, how would Krishnaswamy guide the process of choosing among the thousands of complaints that are sure to come from this growing and agitated population? The very idea that the board could make the slightest bit of difference to any of the life-or-death conflicts that play out on Facebook every day is absurd.

Just ask yourself, “What about this board’s authority could save lives in Myanmar?” The answer is, nothing. “What about this board’s authority could minimize coordinated attacks on the workings of democracies around the world?” The answer is, nothing. “What about this board’s authority could limit coordinated harassment of activists, journalists, and scholars by major political parties?” The answer is, nothing. “What about this board’s authority could circumscribe Facebook’s ability to record and use every aspect of your movements and interests?” The answer is, of course, nothing.

Ultimately, this board will influence none of the things that make Facebook Facebook: global scale (2.5 billion users in more than 100 languages), targeted ads (enabled by surveillance), and algorithmic amplification of some content rather than other content. The problem with Facebook is not that a photograph came down that one time. The problem with Facebook is Facebook.

