Republicans Shouldn’t Scapegoat the Open Internet

Republicans want to punish political censorship, but messing with Section 230 will only make things worse

John Kristof
Arc Digital


Sen. Josh Hawley (R-Mo.) | Photo: Bill Clark (Getty)

Top Republicans think popular tech companies are censoring conservatives online, and they claim to know how to make it stop. The Trump administration is circulating drafts of an executive order that would restrict Section 230 of the 1996 Communications Decency Act, enabling government intervention into websites’ practices.

The White House is following conservative ideologues toward internet regulation. Toward the end of the Big Tech hearings on July 16, Sen. Ted Cruz (R-Texas) suggested Congress reevaluate Section 230. Sen. Josh Hawley (R-Mo.) has introduced legislation that would revoke large tech companies’ “immunity” unless they can prove to the Federal Trade Commission (FTC) that their content policies are politically neutral.

Section 230 defined some crucial internet boundaries, legally distinguishing online content platforms from the users who put content on them. Contrary to what these politicians say, Section 230 has nothing to do with censorship, or even "immunity," especially since websites remain responsible for taking down dangerous or illegal content. Republican efforts to toss it are more about punishment than prevention.

But Section 230 is far more crucial to Americans' everyday internet experiences than these political leaders realize. It is what allows the spontaneity and user-driven content that have transformed our economy and our entertainment. Messing with it could have catastrophic consequences: killing careers, disconnecting friends, and increasing the very censorship that Section 230's critics want to stop.

The Impossible Task of Content Moderation

Social experience and the internet have become inextricable. When a village in India began to receive its first smartphones, the first things people there learned to do were to take selfies, use social media, and watch videos on YouTube.

But people will inevitably use the internet for ill. For an extreme example, a murderer posted photos of his victim on several online platforms, and they remained online long after he posted them. Even after the websites shut down his accounts, users shared screenshots of the images, and the platforms struggled to keep up.

That was a disgusting incident. But disgust isn't sufficient justification for regulation. Before changing the laws, we should consider how difficult it is to monitor massively popular social content. It's mainly an issue of volume. Instagram users post 95 million photos and videos every day. YouTubers upload over 500 hours of video every minute; at that rate, a single day's uploads amount to roughly 80 years of continuous footage, so it would take a lifetime to watch the content posted in a single day. A fifth of all webpage views in the U.S. happen on Facebook. The idea that even the largest companies can monitor all this content with human eyes, much less steer the content we produce, is grossly naïve.

Instead of throwing human hours and brainpower at monitoring, these companies rely heavily on computer algorithms to filter out offensive, dangerous, and illegal posts. YouTube, for instance, has AI that detects offending content. According to Google, it successfully catches and takes down 98 percent of extremist or violent posts before any users see them.

This automation usually works well for its intended goal. But it has also been overbearing at times, and the companies have had to tune and update it regularly. Algorithmic censorship is a double-edged sword: stricter moderation could help prevent illegal activity, but it also could restrict acceptable or even socially beneficial content.

For example, YouTube recently modified its AI to more effectively restrict videos containing "instructional hacking and phishing [illicitly obtaining sensitive information]" and "showing users how to bypass secure computer systems." This July, YouTube's AI issued a community strike against a cybersecurity education channel run by the organization Hacker Interchange. The channel teaches viewers how to research and test their own security systems, helping them prevent illegal hacking. After fans and other video creators protested, YouTube publicly acknowledged its mistake and lifted Hacker Interchange's suspension.

Punishing YouTube accounts without cause is a big deal for people who make their living on the platform. So, should AI moderation be pulled back? Probably not. If it were, social sites would have to become much more reliant on human moderators.

We Don’t Want More Human Moderators

Relying on more human moderators would have horrible consequences of its own. When social media users "flag" a video for violating policy, or when an algorithm isn't wholly certain an item should be removed, human moderators review the content and decide whether to take it down. The Wall Street Journal called this the worst job in tech, and with good reason.

“You’d go into work at 9 am every morning, turn on your computer, and watch someone have their head cut off,” said a former content moderator for Facebook. “Every day, every minute, that’s what you see.” In addition to terrorist killings, moderators regularly watch footage of murder, suicide, sexual assault, animal abuse, and other traumatizing sights.

This kind of workload inevitably takes a toll on the human psyche. "Every day people would have to visit psychologists," the former moderator added. "Some couldn't sleep, or they had nightmares." Indeed, psychologists warn that repeated exposure to such intense content can lead to secondary trauma, a condition similar to PTSD that shares many of its symptoms.

Other flagged content includes conspiratorial, neo-Nazi, and otherwise dangerous information. Some moderators charged with watching this kind of content have reported that they came to embrace the fringe views they reviewed. Casey Newton of The Verge spoke to one moderator who had begun to doubt the Holocaust. Another became paranoid and now believes 9/11 was an inside job.

Some moderation might always require the involvement of human judgment, but there’s a clear argument for letting AI take as much of the workload as possible.

Meanwhile, social media and social publishing sites will continue to face backlash for leaving dangerous content on their platforms for too long. And their users will criticize the AI moderation systems for demonetizing videos and issuing strikes injudiciously. Caught between these competing pressures, these websites also have to maintain the spontaneous, user-generated atmosphere that makes them valuable parts of millions of people's daily lives. The balance a large platform must work toward is delicate and dynamic, and no platform is likely ever to strike it perfectly.

Section 230 Lets Platforms Strive for Balance

Natural market pressures already push online platforms to improve their content moderation. Advertisers have made clear to YouTube that they don't want their commercials running in front of videos that are "brand unsafe," such as those featuring graphic content or conspiracy theories. YouTube also faces pressure from its viewers, such as parents who don't want the site promoting dodgy content alongside family-friendly videos.

Advertisers and viewers are the two ingredients of YouTube's sustainability, so the company continuously updates its AI to better identify the hundreds of thousands of hours of video uploaded every day and apply the appropriate level of sanction. Ultimately, it is in platforms' interests to make their websites places where users can find healthy engagement and entertainment.

Removing Section 230 and making internet platforms liable to regulators for offending content would dramatically shift the incentive structures at play. If these sites were legally culpable for the illicit content users post, they would have to err on the side of assuming the worst about user-generated uploads.

To protect themselves, content platforms would certainly broaden their filters and become less forgiving; they couldn't afford to give users the benefit of the doubt. AI moderation would have to grow in scope and scrutiny. And since AI still struggles to notice crucial nuances between, say, an educational history of the Axis powers and genuinely anti-Semitic messaging, a lot of innocent content would be blocked. In extreme cases, platforms could try to head off some of these moderation problems altogether by running background checks to decide whether an individual can be trusted as a responsible user.

This is the irony in Trump's, Hawley's, and Cruz's proposals to "defend" conservatives online. In an effort to prevent censorship, they would create an internet that censors any content that could be considered even remotely unsafe. Attacking Section 230 would be shooting themselves in the foot.

Don’t Change What You Don’t Know

Without Section 230, things would be worse. Regulatory reviews under the proposed new regime would be too subjective, too delayed, and too vulnerable to political influence. To protect themselves, tech companies would have to assume they’ll be liable for more rather than less of what appears on their sites, and that would mean they’ll err way too far on the side of caution.

This would be a tragedy. Section 230 has shaped an internet culture of free expression, one that has created opportunities to share enjoyment and creativity, for both income and fun. A stricter, less user-driven internet would destroy careers and dreams, along with entertainment that matters to millions of people. And Republicans' misguided attempts to foster neutrality in online political coverage would, ironically, push platforms toward safe, mainstream material, which tends to be either not conservative or anti-conservative.

Most likely, President Trump and Senators Cruz and Hawley haven’t considered the complexities of the online ecosystem they’re threatening to upset. Perhaps they want some healthier version of the current internet rather than to permanently change it. But whatever their intentions, politicians should not regulate what they do not understand.

Section 230 is too important to our economy and culture to recklessly rip away.

John Kristof is an economics writer whose works have appeared in Arc Digital, The American Conservative, The Daily Caller, and elsewhere. Follow him on Twitter @jmkristof.
