
Why the most controversial US internet law is worth saving

US president Donald Trump and his Democratic opponent, Joe Biden, agree on at least one issue: the arcane federal law known as Section 230 of the Communications Decency Act. On September 8, Trump tweeted that Republican lawmakers should “repeal Section 230, immediately.” With similar urgency, Biden had told the New York Times last December that “Section 230 should be revoked, immediately.”

Enacted in 1996 to bolster the nascent commercial internet, Section 230 protects platforms and websites from most lawsuits related to content posted by users. And it guarantees this immunity even if companies actively police the content they host.

By legally insulating online businesses, Section 230 has encouraged innovation and growth. Without the law, new internet companies would have more difficulty getting off the ground, while established platforms would block many more posts in response to heightened litigation risks. Pointed political debate might be removed, and free expression would be constricted.

But many people have rightly questioned whether internet companies do enough to counter harmful content, and whether Section 230 effectively lets them off the hook. On Capitol Hill, at least a half-dozen bills have been introduced to curtail the law in various ways.

Driving this debate is the widely felt sense that the major social-media platforms—Facebook and its subsidiary Instagram; Twitter; and YouTube, which is owned by Google—do not properly manage the content they host. Evidence includes the spread of false information about elections and covid-19, conspiracy theories like QAnon, cyber-bullying, revenge porn, and much more. 

There are real problems with the way Section 230 is worded today, but that doesn’t mean lawmakers should toss the whole thing out. Its core ought to be preserved, primarily to protect smaller platforms and websites from lawsuits. At the same time, the law should be updated to push internet companies to accept greater responsibility for the content on their sites. Moreover, the US needs a specialized government body—call it the Digital Regulatory Agency—to ensure that this responsibility is fulfilled. I argue for these positions in a new report for the NYU Stern Center for Business and Human Rights.

Revoke or reform?

Drafted in an era of optimism about the internet, Section 230 established a distinctly laissez-faire environment for online business. In the mid-1990s, few anticipated the overwhelming pervasiveness of today’s social-media behemoths—or the volume and variety of deleterious material they would spread. 

This doesn’t mean all critiques of Section 230 are created equal. President Trump’s hostility to the law stems from his contention that platforms censor conservative speech. In an executive order he signed in late May, he singled out Twitter for having added warning labels to some of his tweets. The order called for a multi-agency assault on Section 230, involving the commerce and justice departments, the Federal Communications Commission, and the Federal Trade Commission. This appears to violate the Constitution, as the president seeks to punish Twitter for exercising the company’s First Amendment right to comment on his tweets. 

Meanwhile, Senator Josh Hawley, a Missouri Republican, has introduced legislation that would encourage individuals to sue platforms for making content decisions in “bad faith”—an unsubtle invitation to conservatives who feel they’ve been the targets of politically motivated slights. In fact, there’s scant evidence of systematic anti-right bias by social-media platforms, according to two analyses by The Economist and a third by a researcher at the conservative American Enterprise Institute. 

Other skeptics say Section 230 allows platforms to profit from hosting misinformation and hate speech. This is Biden’s position: that by providing a shield against litigation, the law creates a disincentive for companies to remove harmful content. In a December 2019 conversation with the New York Times editorial board, Biden responded to questions about Section 230 with pique at Facebook for failing to fact-check inaccurate Trump campaign ads about him. The law “should be revoked because [Facebook] is not merely an internet company,” he said. “It is propagating falsehoods they know to be false.”

Biden’s mistake, though, is urging revocation of Section 230 to punish Facebook, when what he really seems to want is for the company to police political advertising. He has said nothing publicly in the intervening months indicating that he has altered this position. 

Several more nuanced, bipartisan reform proposals do contain ingredients worth considering. A bill cosponsored by Senators John Thune, a South Dakota Republican, and Brian Schatz, a Hawaii Democrat, would require internet companies to explain their content moderation policies to users and provide detailed quarterly statistics on which items were removed, down-ranked, or demonetized. The bill would amend Section 230 to give larger platforms just 24 hours to remove content determined by a court to be unlawful. Platforms would also have to create complaint systems that notify users within 14 days of taking down their content and provide for appeals. 

More smart ideas come from experts outside government. A 2019 report (pdf) by scholars convened by the University of Chicago’s Booth School of Business suggests transforming Section 230 into a “quid pro quo benefit.” Platforms would have a choice: adopt additional duties related to content moderation or forgo some or all of the protections afforded by Section 230.

Quid pro quo

In my view, lawmakers should adopt the quid pro quo approach for Section 230. It provides a workable organizing principle to which any number of platform obligations could be attached. The Booth report provides examples of quids that larger platforms could offer to receive the quo of continued immunity. One would “require platform companies to ensure that their algorithms do not skew towards extreme and unreliable material to boost user engagement.” Under a second, platforms would disclose data on content moderation methods, advertising practices, and which content is being promoted and to whom.

Retooling Section 230 isn’t the only way to improve the conduct of social-media platforms. It would also be worth creating a specialized federal agency devoted to the goal. The new Digital Regulatory Agency would focus on making platforms more transparent and accountable, not on debating particular pieces of content. 

For example, under a revised Section 230, the agency might audit platforms that claim their algorithms do not promote sensational material to heighten user engagement. Another potential responsibility for this new government body might be to oversee the prevalence of harmful content on various platforms—a proposal that Facebook put forward earlier this year in a white paper. 

Facebook defines “prevalence” as the frequency with which detrimental material is actually viewed by a platform’s users. The US government would establish prevalence standards for comparable platforms. If a company’s prevalence metric rose above a preset threshold, Facebook suggests, that company “might be subject to greater oversight, specific improvement plans, or—in the case of repeated systematic failures—fines.” 
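To make the prevalence idea concrete, here is a minimal sketch, in Python, of how such a metric and threshold check might be computed. The function names, the sample view counts, and the 0.05% threshold are illustrative assumptions, not Facebook's actual methodology or any regulator's standard.

```python
# Illustrative sketch of a "prevalence" metric as described above:
# the share of all content views that land on harmful material.
# All names and numbers here are hypothetical, not Facebook's method.

def prevalence(harmful_views: int, total_views: int) -> float:
    """Fraction of content views that were views of harmful material."""
    if total_views == 0:
        return 0.0
    return harmful_views / total_views

def exceeds_threshold(harmful_views: int, total_views: int,
                      threshold: float = 0.0005) -> bool:
    """True if prevalence rises above a preset regulatory threshold
    (0.05% here is an assumed value, chosen only for illustration)."""
    return prevalence(harmful_views, total_views) > threshold

# Example: 12,000 views of flagged content out of 40 million total views.
print(prevalence(12_000, 40_000_000))         # 0.0003, i.e. 0.03%
print(exceeds_threshold(12_000, 40_000_000))  # False: below the threshold
```

Note what the sketch highlights: prevalence measures exposure (how often harmful material is actually seen), not volume (how many harmful posts exist), so a harmful item that no one views contributes nothing to the metric.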

Facebook, which is already estimating prevalence levels for certain categories of harmful content on its site, concedes that the measurement could be gamed. That’s why it would be important for the new agency to have a technically sophisticated staff and meaningful access to company data.

Reforming Section 230 and establishing a new digital regulator may turn, like so much else, on the outcome of the November election. But regardless of who wins, these and other ideas are available, and could prove useful in pushing platforms to take more responsibility for what’s posted and shared online. 

Paul M. Barrett is the deputy director of the NYU Stern Center for Business and Human Rights.
