FTC Chair Andrew Ferguson begged Donald Trump for his job by promising he would “end Lina Khan’s politically motivated investigations.” And, yet, one of his first orders of business upon getting the job was to… kick off a politically motivated investigation regarding “big tech censorship,” which he (falsely) claimed was potentially illegally targeting conservative speech and violating the policies and promises of these platforms.
It was an odd decision for many reasons, not the least of which is that it seemed to be describing a fantasy world scenario that never existed, and even if it had ever existed, it certainly no longer does. The biggest social media platforms of the day are now all controlled by the ultra-rich who lined up (literally) behind Donald Trump and have agreed to do his bidding. ExTwitter is owned by Elon Musk, Donald Trump’s largest donor and his right-hand man in destroying the government. Mark Zuckerberg is now running content policy changes by Trump’s top advisor Stephen Miller.
If there is any “bias” in content moderation, it is very much in favor of MAGA Trump views. Which, to be clear, is their right to do under the First Amendment.
But the entire premise of the inquiry seemed to simply misunderstand nearly everything about content moderation. So, yesterday, the Copia Institute filed our comment with the FTC highlighting the myriad problems and misunderstandings that the FTC seemed to embrace with this inquiry.
The crux of our argument:
The FTC’s inquiry into “platform censorship” fundamentally misunderstands three critical realities about online expression:
First, as the Supreme Court recently affirmed in Moody v. NetChoice, government scrutiny of platform moderation decisions directly violates First Amendment protections of private editorial discretion. It would violate those protections even if a platform were a legitimate chokepoint for information, but that is far from the case. We live in an era of unprecedented speech abundance, where anyone can reach global audiences through countless online channels, and anyone can consume information through countless online channels. The premise of investigating “censorship” ignores this surfeit of options in how we communicate, where we’ve moved away from a world of gatekeepers who limit speech to one of intermediaries who enable it, and indeed threatens to reverse that important, speech-fostering progress.
Second, content moderation ultimately enables, rather than constrains, more speech. For all the talk of certain websites being “the modern public square,” it is the wider open internet itself that should be seen as that public square. The metaphor only works in so much as the internet can facilitate such a wide variety of online expression through differentiated and competing offerings and communities. The multitude of platforms built upon that open internet make all that possible, so long as they are free to serve as private venues that cultivate distinct communities through their editorial choices. These choices are constitutionally protected editorial judgments that allow different platforms to serve different needs and communities.
Which is why, third, government interference with platform moderation would paradoxically reduce speech opportunities by threatening the entire ecosystem of services that make online expression possible. From content hosts to payment processors to infrastructure providers, countless specialized intermediaries enable platforms like ours to serve an ever growing and changing set of communities. Regulatory scrutiny of editorial decisions would force many of these services to refuse to facilitate all sorts of lawful speech, if not shut down or stop supporting user content entirely.
As both a content creator and platform operator who relies on this complex web of intermediary services to advance our own speech interests, we see this inquiry as a threat to our own expressive freedom as well as that of countless others. It is fundamentally misguided, and we urge the FTC to terminate it immediately, before it damages the very speech interests it claims to protect.
We then go into much greater detail on all three points. You can read the whole thing if you want, but I wanted to call out a few key things. Lots of comments address — as we did — the obvious First Amendment problems, but there were a few points we thought were unique.
For example, the entire premise that there’s a “censorship” problem is bizarre, given just how much the internet — through its variety of private platforms — now enables and encourages speech. We’re in a golden age of speech, not some censorial hellhole:
Historically, if you wanted to express yourself beyond those in the narrow geographical vicinity around you, you were dependent on gatekeepers and had to hope that some publisher, printer, editor, record label, studio, or other media middleman would be willing to distribute your expression, promote it, and help you monetize it. Those gatekeepers ultimately allowed only a minuscule percentage of expression to reach public audiences, and an even smaller percentage of that content was successfully promoted and monetized.
The rise of the internet changed the role of intermediaries from being mostly about gatekeeping expression to being mostly about enabling it, and as a result expression has on the whole proliferated, even though the intermediaries still have the right and ability to filter what messages they facilitate. As the Supreme Court noted in the Moody majority, the fact that the new platforms “convey the lion’s share of posts” does not change their rights under the First Amendment.
It remains bizarre to me that, in this much more expansive speech universe, so many people act as though their speech is restricted. To highlight this absurdity, we point to how ridiculous it would be if this same inquiry were directed at traditional media:
This notion misunderstands the nature of content moderation and how it is no different than editorial discretion, which is constitutionally incapable of being policed, no matter how it is marketed. For instance, when Fox News used to claim that its coverage was “Fair & Balanced,” everyone recognized that it would be an absurd abuse of the First Amendment for the FTC to investigate whether that coverage was either “fair” or “balanced” as a potential “unfair practice,” because of how inherently subjective such editorial discretion is.
Consider a more direct parallel: if the New York Times decides to reject an op-ed submission, it would be constitutionally farcical for the FTC to investigate whether its editorial decisions properly align with its stated mission of “all the news that’s fit to print.” These decisions are inherently subjective editorial judgments, protected by the First Amendment and not for the government to interfere with.
Also, we highlight that content moderation rules are inherently subjective and can’t be any other way. Ask multiple people how to deal with specific content moderation decisions and they will all give you different answers. So many of the misunderstandings around content moderation are based on the myth that there is a single right answer to questions regarding moderation.
The same is true of content moderation. It is no different than the practices of any news media organization, in which editorial policies may be put in place, but where subjective editorial judgment calls are made every day. Online platforms must make these decisions on a scale far beyond what any traditional media outlet experiences. We have coined the eponymous “Masnick’s Impossibility Theorem” in recognition that there is never going to be an objectively “correct” way to moderate content. No matter how moderation may be intended, it simply cannot translate to perfect practice, let alone one all would agree is “perfect,” which is why the freedom to decide needs to be out of the government’s hands entirely.
We have empirically demonstrated the inherent subjectivity that inevitably informs moderation decisions through our “You Make the Call” event, where we challenged policy experts, regulators, and industry professionals to apply the same content moderation policy to multiple examples. The results of the exercise were telling: even with clearly articulated policies, experienced professionals consistently reached different conclusions about appropriate moderation actions. In every single case we presented, participants split their votes across all available options, highlighting the impossibility of “objective” content moderation.
Every person may also evaluate content against a policy differently. We have further demonstrated this tendency with two interactive online games the Copia Institute has created, allowing people to test their own abilities to do content moderation, both at the moderator level and at the level of running a trust & safety team.
We probably should have pointed out that even the FTC inherently recognizes this. After all, it was moderating and restricting access to many of the comments that came in, claiming they were “inappropriate.”
And finally, as a service that regularly relies on a large number of third-party intermediaries to host, distribute, promote, and monetize our speech, we wanted to make clear that these efforts would inevitably limit ours (and others’) ability to speak, by destroying the intermediary services we rely on.
As both a content creator and platform operator, we rely on dozens of specialized intermediary services to reach our audience: social media for community engagement, podcast and video hosts for content distribution, chat services for communication, crowdfunding for monetization, and cloud services for infrastructure. Each of these services maintains its own editorial policies that align with its unique community and business goals.
If government agencies could second-guess these editorial decisions, the impact would be severe and immediate:
- Service differentiation would become impossible. Communities focused on specific interests — from knitting to weightlifting — could no longer maintain their distinct character through specialized content policies.
- Compliance costs would force smaller platforms to shut down. Even basic content hosting would require extensive legal review and documentation of every moderation decision. Not only would the direct compliance costs be ruinous for many smaller services, the uncertainty and risk of liability would lead many to decide it would not be worth the hassle to facilitate anyone’s online speech at all.
- Innovation would stagnate. Entrepreneurs who might launch new specialized platforms would be deterred by the inability to shape their services around their communities’, and customers’, needs.
The result? A dramatic reduction in online speech options. Content creators like us would face fewer channels for distribution and engagement. Communities would lose their specialized spaces. And the vibrant ecosystem of online expression would collapse into a handful of generic, risk-averse platforms.
In short, it would be a disaster for speech, and would lead to an information environment significantly more censorial than the world we currently live in, where a private company can freely choose to enforce its own rules in whatever way makes the most sense for it.
Thousands of comments were submitted to the FTC (though, admittedly, many of them are angry screeds from people complaining about how their conspiracy theories and threats of violence were moderated and just how unfair it all is). I have little faith that anyone at the FTC will take our comment seriously.
But they should. What they are looking to do would be an outright disaster for free speech. And, yes, that might be Ferguson’s real goal. Just like FCC Chair Brendan Carr, he may wish to use the language and trappings of “free speech advocacy” to make himself a government censor. But, we should use the tools at our disposal today to call that out, and try to prevent that kind of actual censorship from being allowed.