Wednesday, June 5, 2024

It’s Time to Restore Some Sanity to the Internet


Why we need a 180 on Section 230.

[Image: Social media app icons for Facebook, TikTok, Snapchat, X, YouTube, and Instagram appear protected by a bubble. Credit: Mother Jones; Unsplash; Getty]


There is a vigorous debate as to whether Section 230 of the Communications Decency Act should be repealed, reformed, or replaced. Close to 30 bills have been introduced in Congress proposing to repeal or revise Section 230, the almost accidental law that protects platforms from liability and helped launch the internet, and Congress continues to hold hearings on reform. Politicians of all stripes—from Amy Klobuchar to Josh Hawley—have called for changes to the law. Even Mark Zuckerberg, whose Facebook platform depends on Section 230, has agreed that changes are probably needed. There’s been no shortage of news about the law and the controversy over it, but unless you are a lobbyist or a lawyer, you may not know what’s at stake or why it’s so important to take a hard look at changing Section 230 now.

As many—perhaps most—people know, Section 230 provides sweeping immunity from liability for online publishers in disseminating content that originates from others. I’ve practiced law for more than 30 years, mostly defending news media, like Mother Jones, and other clients in legal actions arising from their distribution of content, including in defamation, invasion of privacy, and copyright infringement cases. I have successfully invoked Section 230, to my clients’ benefit.

However, over the last decade, I have reluctantly come to the conclusion that something needs to be done about Section 230—that the law is now doing more harm than good. Since its enactment, there has been a steady and unremitting degradation of social discourse. Online harassment is epidemic. Misinformation and foreign interference in elections are commonplace.

These ills are not due exclusively to Section 230. But meaningful social discourse requires, at a minimum, a willingness to engage, and some degree of shared, reasonably accurate information. It also requires some assurance that engagement will not unleash responses so vile as to chill speech itself. Thanks largely to Section 230, it is increasingly doubtful that these conditions will continue to exist, if, indeed, they exist today.

There is no way to put the genie of the internet back in the bottle. But there is no need to do so. The internet is capable of granting our wishes without destroying us, if it is subjected to reasonable limitations—limitations that Section 230 virtually eliminates.

So how did this awkwardly named law that dictates how we experience the internet come to pass? “Section 230” is shorthand for a provision of the 1996 Communications Decency Act. It was part of a compromise adopted by Congress to encourage “interactive computer service providers” (ISPs) to police the internet and block or remove illegal or offensive content. Congress’ motivation was simple: It wanted to make it illegal to distribute obscene or offensive content to children. This would have required ISPs to monitor the content they distribute. However, shortly before the CDA was introduced, a New York court held that an ISP that monitored and moderated content could be subject to liability for claims based on the content it allowed to be posted. If followed by other courts, that decision would have meant that ISPs could subject themselves to liability by engaging in the very policing of content Congress wanted to encourage.

So Congress added Section 230. It has two primary components. First, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” These 26 words mean, in essence, that an ISP (or user of an ISP) is not subject to liability for distributing information that originates from anyone other than the ISP (or the user) itself. A central result of this provision is that platforms and websites can’t be held liable for content and comments other people post on them.

The second provision provides publishers immunity for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” This means that an ISP can block, filter, or remove content it considers offensive, as long as this is done in “good faith.” This is sometimes referred to as moderator immunity.

The provisions of the Communications Decency Act that prohibited distributing obscene or indecent content to minors were subsequently held unconstitutional by the United States Supreme Court. However, the CDA included a provision that if any portion of the Act was found to be invalid, the other portions would remain in effect. Thus, the immunity provisions (and certain other provisions) of the CDA survived.

The upshot is that while ISPs are not subject to liability for content simply because they monitor or remove it, they also have no legal incentive to do so. They are pretty much immune from liability for distributing content provided by others—no matter how harmful it may be—as long as doing so does not constitute a federal crime or violate intellectual property rights. This is true even if they know content is defamatory, invasive of privacy, fraudulent, extortionate, or otherwise unlawful. What this means in practice is that social media platforms and other ISPs largely refrain from monitoring or removing content, even when its false or damaging nature is brought to their attention. Other than child pornography, content that clearly violates intellectual property rights, and perhaps sex trafficking, they don’t care because they don’t have to.

Most of the companies that leap to mind when we think of “Big Tech” did not exist at the time the CDA was enacted in 1996. Google was founded in 1998, Facebook in 2004, Twitter in 2006. Their existence was facilitated—if not enabled—by Section 230. But their phenomenal success and cultural ubiquity were not anticipated. Nor were the profound negative consequences that have resulted from divorcing the distribution of content from any meaningful responsibility.

No one with any actual experience using social media can reasonably dispute that platforms like Facebook and X (to say nothing of the likes of 4Chan or Omegle) have a dramatically polarizing effect. The most dangerous result is the degradation of social discourse: the loss of civility and the retreat of communities into insular tribes. Surveys indicate that a great majority of the U.S. population perceives an extraordinary lack of civility in social and political life, and attributes it largely to social media. And no wonder: Social media algorithms promote and reward content that prompts high levels of reaction, and people are naturally drawn to, and react most strongly to, content that is emotionally charged. Thus, as the Wall Street Journal reported of what Facebook employees told senior executives: “The company’s algorithms weren’t bringing people together. They were driving people apart. ‘Our algorithms exploit the human brain’s attraction to divisiveness,’ read a slide from a 2018 presentation. ‘If left unchecked,’ it warned, Facebook would feed users ‘more and more divisive content in an effort to gain user attention & increase time on the platform.’”
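The mechanism is simple enough to sketch in code. Below is a minimal, hypothetical Python illustration of engagement-weighted ranking. The weights, field names, and scoring function are all invented for this example, since no platform’s actual ranking code is public, but the basic pattern of scoring posts by predicted reactions and sorting the feed accordingly is the one the reporting describes.

# Hypothetical sketch of engagement-weighted feed ranking.
# The weights and fields are invented for illustration; real
# platform rankers are far more complex and are not public.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_comments: float  # model's guess at comments provoked
    predicted_reshares: float  # model's guess at reshares
    predicted_likes: float     # model's guess at likes

def engagement_score(post: Post) -> float:
    # High-arousal reactions (comments, reshares) are weighted more
    # heavily than passive likes, so content that provokes argument
    # tends to rank highest.
    return (3.0 * post.predicted_comments
            + 2.0 * post.predicted_reshares
            + 1.0 * post.predicted_likes)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed is simply the posts sorted by predicted engagement.
    # Nothing in this objective distinguishes divisive content from
    # civil content; if outrage reliably draws comments and reshares,
    # outrage rises to the top.
    return sorted(posts, key=engagement_score, reverse=True)

Nothing in an objective like this penalizes divisiveness; the ranker optimizes for attention, and divisive content is simply what that optimization tends to surface.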

Heated rhetoric is hardly a recent development in social discourse. But an entire industry, used by hundreds of millions of Americans, that creates and drives a vicious cycle of extremism in social discourse, is. And Section 230 enables its worst excesses by ensuring that the platforms can promulgate algorithms promoting the worst content without concern for liability.

This degradation of social discourse is married with an increase in “disinformation” and “misinformation.” Studies indicate that the dissemination of false information through social media has become a serious problem. For example, one study found that “falsehoods are 70% more likely to be retweeted on Twitter than the truth, and reach their first 1,500 people six times faster.” Because Section 230 immunizes ISPs from virtually any liability for the publication of false information, it obviously exacerbates this problem.

Cyber-bullying, a weak term for a category of behavior that includes doxing, revenge porn, “sextortion,” and other horrific forms of online harassment, is perhaps the worst and most direct consequence of Section 230. And it’s not just random Twitter trolls. As noted in Ars Technica: “Section 230 can protect online forums where users post revenge pornography, coordinate mass harassment campaigns, and spread terrorist propaganda. It’s hard to imagine a site like 4Chan—an anonymous message board known for hosting a range of vile content—surviving for 17 years without Section 230.” Then there is the deliberate and systematic exploitation of social media platforms to distribute false information and engage in social engineering in an effort to affect the outcome of elections in the U.S.

And all of the problems caused by Section 230 are exacerbated because platforms allow users to post anonymously, making it nearly impossible to shame or sue those who create the malicious content.

Supporters of Section 230 argue that the elimination of publisher immunity will not prevent the dissemination of all false, disparaging, or inflammatory content. That’s true. No measure consistent with the First Amendment could possibly accomplish that. But we can reduce the extent—and consequences—of such content.

When Congress enacted Section 230, it stated that a primary purpose was “to promote the continued development of the Internet and other interactive computer services and other interactive media.” As one thoughtful analyst of Section 230, Santa Clara law professor Eric Goldman, has put it: “Section 230 can be characterized as a type of legal privilege. Section 230 ‘privileges’ online publishers over offline publishers by giving online publishers more favorable legal protection. This leads to a financial privilege by reducing online publishers’ pre-publication costs and post-publication financial exposure. These legal and financial privileges make sense in the 1990s context, where most major newspapers were de facto monopolies in their local communities and many online publishers of third-party content were small hobbyists.”

At the time of passage, these immunities from liability were seen as necessary to protect new businesses entering into web-based ventures, and the continued development of a then-nascent internet. But today the three largest internet companies in the U.S. have a combined market cap of almost $5.1 trillion, and 5.35 billion people—more than half the world’s population—are online. The internet is hardly a newborn. It more closely resembles a healthy—and rebellious—teenager.

Section 230 is not essential for the protection of freedom of expression. Thanks to the First Amendment, and its interpretation by the courts, freedom of expression has flourished in the United States for well over two centuries. This remained true even as new media, such as the telegraph, radio, and television, transformed communication as radically as the internet.

Some argue that Section 230 is nonetheless still essential to protect new entrants into the online business world; otherwise, the financial hurdles to challenging incumbents would be insurmountable. But the tech industry is sufficiently vigorous to continue generating new business models without Section 230, and the risk tolerance of online entrepreneurs is notoriously high. Furthermore, amid the devastation of traditional print and broadcast media since the advent of the internet, there has been an explosion of new online news sites. They are not protected by Section 230 for the material they originate, so their existence demonstrates that Section 230 probably isn’t necessary to ensure that new businesses will continue to emerge on the internet.

Supporters of Section 230 have also raised cogent concerns regarding the potential consequences of its elimination, principally the following.

1.         The Moderator’s Dilemma: Section 230’s repeal would mean that ISPs could once again be subjected to the “moderator’s dilemma”: either monitor and moderate to reduce the amount of harmful content they distribute and accept liability for unlawful content that evades detection, or avoid liability by refraining from moderation.

Supporters of Section 230 assert that the likely result would be less moderation, and so more unlawful content. But that is unlikely. Prior to Section 230, under the First Amendment and state law, distributors of content (publishers, bookstores, and movie theaters) were not subject to liability unless they could be shown to have had knowledge that content was, in fact, unlawful. That was (and is) a very demanding standard. In effect, it establishes a “notice-and-takedown” regime, in which liability can be imposed only if unlawful content is brought to the attention of a distributor who then fails to remove or otherwise remedy it. Thus, ISPs would have a strong incentive to remove unlawful content, but would still enjoy protection from liability for unknowingly distributing unlawful content.

Some worry that this puts ISPs in the position of having to determine whether or not content is actually unlawful. In many cases this isn’t difficult, but in some it can be. In this situation, ISPs could probably avoid liability simply by removing any content that is the subject of an objection. This leads to the next concern.

2.         The Heckler’s Veto: If Section 230 were repealed, supporters believe that the subjects of accurate (or at least legally protected) but unwelcome attacks or disclosures would be able to suppress such information by simply sending a notice claiming that it was unlawful (defamatory, private, confidential, etc.), even if it was not. No doubt, in a “notice-and-takedown” regime, some abuse of this kind would occur. But it occurs now, despite Section 230. Moreover, procedural mechanisms, such as providing for counter-notification and restoration of removed content (without liability), can likely prevent much of this kind of abuse.

3.         Censorship: Supporters of Section 230 also worry that repeal could mean the restriction of controversial expression. In this scenario, ISPs wouldn’t retreat from monitoring and moderation but go into overdrive, refusing to distribute any content perceived as risky or controversial.

But the distinction between publishers and distributors—which predates Section 230 and is still the law—would provide protection for ISPs and relieve them from the need to censor in advance. A case from the 1990s applied this distinction to the early internet, holding that a business that provided a platform for material posted by others was not subject to liability for distributing it, unless and until the unlawful nature of the content was brought to its attention and it failed to take action. On the other hand, the platform user (that is, the one who posted the material) was responsible for it as its publisher, and could be subject to liability. Thus, even without Section 230, ISPs could rely upon this established law to avoid liability for unknowingly distributing unlawful content. This means that ISPs would have little incentive to engage in censorship by filtering content at the outset. Indeed, their incentive would be to refrain from monitoring and moderating content unless and until its unlawful nature was brought to their attention.

In sum, neither the benefits conferred by Section 230 nor the consequences of its repeal or reform appear to be sufficient to justify the considerable harm it is causing. Is there another way?

As noted at the outset, many bills have been introduced proposing to repeal or reform Section 230. These bills have taken many different approaches. Other approaches have been proposed by organizations and people outside of Congress. Here are some highlights.

1.         Repeal: Repeal would eliminate the immunities afforded by Section 230 for both content distribution and content blocking or removal. Repeal of the former would deprive ISPs of a significant existing legal protection. Repeal of the latter probably would not have great legal consequences, because the First Amendment probably already protects the ISPs most likely to face liability for blocking or removing content. ISPs can also protect themselves through their ability to require their users to waive such claims contractually, through their terms of service.

2.         Create Exceptions: Instead of repeal, some have called for adding to Section 230’s exceptions, so that ISPs would be subject to liability—criminal and/or civil—for certain categories of claims. These proposals follow the approach taken in the Allow States and Victims to Fight Online Sex Trafficking Act of 2017 (“FOSTA”), which eliminated ISP immunity for violations of federal and state criminal statutes regarding sex trafficking.

3.         Knowledge or Notice and Takedown: Many proposals would tinker with the way Section 230 works. A proposed bill called the Platform Accountability and Consumer Transparency Act (“PACT Act”), for example, would require ISPs to establish a complaint process and would permit them to remove content based on complaints or the ISP’s own decisions, while providing an “appeal process” for users who post content that is removed. It also provides that ISPs may be held liable if they fail to stop illegal activity or remove unlawful content. But to trigger that potential liability, the notice provided must include “a copy of the order of a Federal or State court under which the content or activity was determined to violate Federal law or State defamation law.” The PACT Act exempts “Internet infrastructure services” (essentially, ISPs that do not actively engage in content search or delivery or provide services directly to Internet users, but merely provide services that enable other ISPs to do so, such as GoDaddy) from potential liability. In addition, ISPs would have no affirmative duty to monitor conduct or content on their services or to seek out illegal activities. A version of this regime is currently in place in many countries in Europe, and because all multinational ISPs doing business in the European Union must and do already comply with it, it stands to reason this reform is neither logistically nor financially infeasible.

4.         Permit Injunctions: A less prominent pathway may be reflected in the recent United States-Mexico-Canada Agreement (“USMCA”), the successor to NAFTA, which appears to open a door for certain causes of action against an ISP by allowing injunctions that would prevent the continued distribution of content. It would be possible to revise Section 230 to expressly permit actions against ISPs and/or users for injunctive relief to prevent the continued dissemination of unlawful content. In fact, it appears that actions for injunctions directed to users would be permitted under the PACT Act.

My own proposal for reforming Section 230 would be similar to the approach taken by the PACT Act, with some significant modifications. The primary components of a reformed version of Section 230 should be as follows:

1.         No Affirmative Duty to Monitor: Unlike the European eCommerce Directive, Section 230 does not expressly state that ISPs have no duty to actively monitor their users’ content or activities. Of course, the absence of such a duty is implicit in providing nearly complete immunity. However, if the immunity granted by Section 230 is more limited, as it should be, the law should be clear that it does not impose a duty to monitor the conduct or content of users or other “information content providers.”

2.         Retain the Immunity for Good Faith Removal or Blocking of Content: This immunity provided by Section 230 should be retained, but, for the sake of transparency, with an added requirement that ISPs adopt and maintain an acceptable use policy and remove content only in accordance with that policy. This appears to be the approach taken by the PACT Act.

3.         Permit Liability to Be Imposed on ISPs That Receive Sufficient Notice but Fail to Remove Unlawful Content: Rather than permitting ISPs to continue to distribute indisputably false and harmful material, as is presently the norm, Section 230 should be modified to establish a notice-and-takedown regime. While the requirements for such notifications should be reasonably stringent, they should not require a court order or decision, which would result in the law providing relief only for the wealthy. Rather, similarly to the DMCA, the complaining party should be required to submit a statement under penalty of perjury specifying the unlawful content, and the reasons it is unlawful. If an ISP receives such a notice, it should be obligated to promptly remove, disable access to, or otherwise terminate the unlawful content or other activity. An ISP that receives notice and fails to take action within, say, 72 hours, would be subject to suit. This does not mean that the ISP would necessarily be held liable, only that it could be sued. The ISP could still defend itself by showing that the content was not unlawful.

4.         Permit Counter-Notification: Upon receipt of a takedown notification, ISPs should be required to provide notice to the user or other content provider that originated the content at issue. If that user or other provider wanted to stand by the material, it could provide a counter-notification, signed under penalty of perjury, attesting to and explaining the legality of the content, providing the identity of and contact information for the user or content provider and, if they were outside the U.S., consenting to jurisdiction (so that the person or entity that provided the takedown notification could contact the originating party and, if necessary, litigate over the disputed material). If the ISP received an adequate counter-notification, it would be required to restore or maintain the disputed material unless and until it received a court order holding that the content was in fact illegal. As long as it acted in accordance with these requirements, an ISP would be immune from liability. (A sketch of how this component and the preceding one would work together appears after this list.)

5.         Immunity for Most “Pipeline” ISPs: As noted, the PACT Act would retain immunity from liability for ISPs that are not the primary providers of content but, rather, operate infrastructure that is used by ISPs that are. Because they are not primarily responsible for the activities that are typically the subject of disputes, such “pipeline ISPs” should remain immune from liability—as long as the party who seeks to address unlawful online content or activities has a meaningful remedy. Unfortunately, this often is not the case. Miscreants often use ISPs located in countries beyond the reach of U.S. authorities and courts. In such circumstances, the only way to address unlawful content or activities is by seeking relief from the pipeline ISPs that permit them to reach Internet users in the U.S. Thus, where the ISP that hosts or is otherwise primarily responsible for making the unlawful content or activity available on the Internet is located outside the U.S., pipeline ISPs should be subject to the same rules as other ISPs.
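To make the interaction between components 3 and 4 concrete, here is a minimal sketch, in Python, of the proposed notice-and-takedown workflow. Every class, field, and method name is hypothetical, invented for this illustration; the substantive rules (sworn notices, the 72-hour window, sworn counter-notices with consent to jurisdiction, and restoration pending a court order) come from the proposal above.

# Hypothetical sketch of the notice-and-takedown workflow proposed in
# components 3 and 4. All names are invented for illustration; this is
# a model of the proposal, not of any enacted statute.

from dataclasses import dataclass
from datetime import datetime, timedelta

TAKEDOWN_WINDOW = timedelta(hours=72)  # the "say, 72 hours" in component 3

@dataclass
class TakedownNotice:
    content_id: str
    claimed_basis: str   # e.g. "defamation", "invasion of privacy"
    sworn: bool          # statement made under penalty of perjury
    received_at: datetime

@dataclass
class CounterNotice:
    content_id: str
    provider_contact: str           # identity and contact info of the poster
    sworn: bool                     # also under penalty of perjury
    consents_to_jurisdiction: bool  # required if the poster is outside the U.S.

class Platform:
    def __init__(self) -> None:
        self.visible: set[str] = set()                # content currently online
        self.pending: dict[str, TakedownNotice] = {}  # notices awaiting action

    def receive_notice(self, notice: TakedownNotice) -> None:
        # An unsworn or ill-formed notice triggers no obligation; the
        # stringent requirements are what deter a heckler's veto.
        if notice.sworn and notice.content_id in self.visible:
            self.pending[notice.content_id] = notice

    def take_down(self, content_id: str, now: datetime) -> bool:
        # Remove the content; returns True if the removal was timely,
        # preserving immunity. A late removal means the ISP can be sued
        # (though it may still prove the content lawful in court).
        notice = self.pending.pop(content_id, None)
        if notice is None:
            return True  # nothing pending, no exposure
        self.visible.discard(content_id)
        return now - notice.received_at <= TAKEDOWN_WINDOW

    def receive_counter_notice(self, counter: CounterNotice) -> None:
        # An adequate counter-notice restores (or maintains) the disputed
        # material until a court holds it illegal; the ISP, having followed
        # the procedure, remains immune either way.
        if counter.sworn and counter.consents_to_jurisdiction:
            self.visible.add(counter.content_id)
            self.pending.pop(counter.content_id, None)

The design point is that the ISP never has to adjudicate the underlying dispute: it follows the procedure, and liability, if any, is litigated between the complaining party and the poster.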

The question is not whether Section 230 needs to be reformed, but how to go about it. We can retain the primary benefits of Section 230 while restoring a degree of responsibility and restraint to the internet, and a measure of accuracy and civility to social discourse and debate. That’s why it’s time for a 180 on Section 230.

MOTHER JONES
