Regulating Online Content Moderation
Citation: 106 Geo. L.J. (2018)
The Supreme Court held in 2017 that "the vast democratic forums of the Internet in general, and social media in particular," are "the most important places . . . for the exchange of views." Yet within these forums, speakers are subject to the closest and swiftest regime of censorship the world has ever known. This censorship comes not from the government, but from a small number of private corporations—Facebook, Twitter, Google—and a vast corps of human and algorithmic content moderators. The content moderators' work is indispensable; without it, social media users would drown in spam and disturbing imagery. At the same time, content moderation practices correspond only loosely to First Amendment values. Leaked internal training manuals from Facebook reveal content moderation practices that are rushed, ad hoc, and at times incoherent.
The time has come to consider legislation that would guarantee meaningful speech rights in online spaces. This Article evaluates a range of possible approaches to the problem. These include (1) an administrative monitoring-and-compliance regime to ensure that content moderation policies hew closely to First Amendment principles; (2) a "personal accountability" regime handing control of content moderation over to users; and (3) a relatively simple requirement that companies disclose their moderation policies. Each carries serious pitfalls, but none is as dangerous as option (4): continuing to entrust online speech rights to the private sector.