
Content Moderation, Competition, and Claims of Social Media Censorship

Jennifer Huddleston

On February 20, the Federal Trade Commission (FTC) released a request for information (RFI) on “Technology Platform Censorship.” The last section of the RFI asks, “Were platforms’ adverse actions made possible by a lack of competition? Did the practices and policies affect competition?” Such questions indicate that, once again, the connection between antitrust and concerns about content moderation may be under consideration. Yet this misunderstands the appropriate use of antitrust enforcement, the likely outcomes for content moderation under such enforcement, and the ways the current liability protection of Section 230 actually encourages competition in content moderation.

Antitrust Enforcement is Not an Appropriate Tool for Concerns About Content Moderation

Antitrust enforcement and competition policy are powerful tools that can result in significant government intrusion into a market. The current US standards of enforcement support such intervention only in specific per se cases or when there is sufficient evidence of harm to consumer welfare. Neither scenario applies to content moderation policy, and neither would support intervention into an otherwise competitive market over content moderation concerns.

Elon Musk’s X has a different content moderation strategy than Facebook or Reddit. Despite being general-purpose social media platforms, they have found distinct roles in the markets for both consumers and advertisers. And that is not to mention the plethora of other sites built on user-generated content that compete for more specific audiences, whether Bluesky, President Trump’s Truth Social, or more targeted sites like Goodreads or Tripadvisor. Users may choose a social media site in part for its content moderation policies, as seen when policy changes lead users to seek out readily available substitutes. If anything, platforms’ ongoing evolution of their content moderation policies illustrates the continuing competition in this regard.

Using antitrust policy beyond its intended and objective purposes risks harming consumers rather than helping them. In the last administration, Chair Lina Khan’s FTC sought to deviate from these objective standards in favor of policy goals and its own belief about what the market should look like. The result was a string of losses in cases that relied on creative claims or unrealistic market definitions, such as a purported future market for virtual reality fitness. A theory that treats disliked content moderation decisions as a violation of the antitrust laws would likely fail in the courts as well.

Such deviations from traditional economic-based analysis also risk allowing the use of antitrust in the future for policy objectives it is ill-designed to achieve. That could allow significant government intervention into competitive markets. The result is a shift away from a focus on consumers to a focus that emphasizes competitors or the government’s preferred policies at the expense of the benefits they would have otherwise had from the market.

Antitrust Remedies Would Not Resolve Content Moderation Concerns

Suppose a content-moderation-based antitrust case succeeded in court. Would the remedies actually improve content moderation in the way those concerned about censorship believe? In short, it is unlikely, and the remedies could even make social media experiences worse.

As Adam Thierer and I first wrote in 2019, there is no reason to believe that structural remedies would resolve other tech policy concerns like “censorship.”

First, there is no reason to believe that two newly separated platforms, each still trying to retain its audience, would change their content moderation policies merely because of the separation. Breaking up a company would not change what a platform believes best serves its audience or its advertisers in the type of content it seeks to curate.

But things could actually get worse rather than better. These now-separate platforms might have fewer resources to respond to more obvious problematic content like animal cruelty or spam, thus diminishing the consumer experience. They would also be less able to invest in or deploy state-of-the-art tools to respond to novel problems that might arise, like the viral “Tide Pods” challenge or AI-generated sexual content. 

Additionally, rather than resolving content moderation concerns, a forced separation might result in more intense content moderation based not on what audiences want but on what advertisers demand, as the smaller platforms scramble to stay afloat.

Section 230 and Current Content Moderation Policy Encourage Competition

In conversations around social media “censorship,” Section 230 is often the elephant in the room. This 1996 law has two provisions that, in most cases, shield platforms hosting user-generated content from litigation over both their users’ content and their own content moderation decisions. Critics often call the law a handout to big tech, but Section 230’s liability protection actually encourages competition in the market for social media and other user-generated content.

While Section 230 is important to large platforms engaging in content moderation at scale, it also enables small platforms to host user-generated content without the fear that a bad actor or a wrong decision could result in business-ending litigation costs. Even when a platform’s decision is ultimately vindicated in court, the cost of defending a case can easily exceed six figures and derail a potential new entrant into the market. Such protection is particularly critical for platforms that allow the discussion of controversial ideas or serve marginalized communities.

While content moderation should not be thought of as an antitrust or competition issue, it is important to recognize how the legal frameworks that support private actors’ content moderation decisions can support competition. Changes that increase the possibility of litigation over content moderation decisions risk locking in the positions of the largest players, who can afford to litigate, and could lead many platforms to further limit discussion or voices for fear of potential lawsuits. Once again, consumers would be the ones ultimately harmed by such changes.

Conclusion

There is much further discussion to be had on claims of social media “censorship”; however, it is important to separate any such claims from conversations around competition policy or antitrust in the technology sector. Using antitrust enforcement for such a policy approach risks removing the important, consumer-focused objective standard in ways that could invite further intervention into a wide array of markets and violate the First Amendment; it is also unlikely to provide a remedy for the alleged concerns.

Although the FTC positions its request as seeking to protect consumers against “censorship,” government action that dictates the decisions private actors can make regarding the content they host or amplify on their platforms raises far more significant concerns about government censorship and the First Amendment than the actions of the platforms themselves. The current policy approach to content moderation under Section 230 has allowed competition among platforms hosting user-generated content to flourish. Changes could limit rather than encourage that competition.
