Marshall Auerback – The Battle Over Free Speech Online Is a Volcano That’s Ready to Blow

Regulating social media cannot be done via executive fiat; it likely entails the involvement of the FCC, as well as the revival of the fairness doctrine.

Marshall Auerback is a market analyst and commentator

This article was produced by Economy for All, a project of the Independent Media Institute


Donald Trump threatened to shut Twitter down a day after the social media giant marked his tweets with a fact-check warning label for the first time. The president followed up this threat with an executive order that would encourage federal regulators to allow tech companies to be held liable for the comments, videos, and other content posted by users on their platforms. As is often the case with this president, his impetuous actions were more than a touch self-serving, and legally dubious absent a congressionally legislated regulatory framework.

Despite himself, Trump does raise an interesting issue: namely, whether and how we should regulate social media companies such as Twitter and Facebook, as well as the search engines (Google, Bing) that disseminate their content. Section 230 of the Communications Decency Act largely immunizes internet platforms from any liability as a publisher or speaker for third-party content (in contrast to conventional media).

The statute directed the courts not to hold providers liable for removing content, even if the content is constitutionally protected. On the other hand, it doesn't direct the Federal Communications Commission (FCC) to enforce anything, which calls into question whether the FCC in fact has the legal authority to regulate social media (see this article by Harold Feld, senior vice president of the think tank Public Knowledge, for more elaboration on this point). Nor is it clear that vigorous antitrust remedies via the Federal Trade Commission (FTC) would solve the problem, even though FTC Chairman Joe Simons suggested last year that breaking up major technology platforms could be the right remedy to rein in dominant companies and restore competition.

In spite of Simons’ enthusiasm for undoing past mergers, it is unclear how breaking up the social media behemoths and turning them into smaller entities would automatically produce competition that would simultaneously solve problems like fake news, revenge porn, cyberbullying, or hate speech. In fact, it might produce the opposite result, much as the elimination of the “fairness doctrine” laid the foundations for the emergence of a multitude of hyper-partisan talk radio shows and later, Fox News.

Under the current conditions, the Silicon Valley-based social media giants have rarely had to face consequences for disseminating misinformation or outright distortion (in the form of fake news), and they have profited mightily from it.

Lawmakers have made various attempts to rein in social media companies over the past few years: proposals to extend existing TV and radio ad regulations to social media, privacy legislation in California, and congressional hearings at which the CEOs of Facebook, Twitter, and Google testified about social media's role in spreading disinformation during the 2016 election. But an overarching regulatory framework for social media has seldom found consensus among the power lobbies in Washington, and, consequently, legislative efforts have foundered.

As the 2020 elections near, the GOP has little interest in censoring Donald Trump. Likewise, Silicon Valley elites have largely seized control of the Democratic Party’s policy-making apparatus, so good luck expecting the Democratic Party to push hard on regulating big tech, especially if their dollars ultimately help to lead the country to a Biden presidency and a congressional supermajority. As things stand today, there’s not even a hint of a regulatory impulse in this direction in Joe Biden’s camp. As for Donald Trump, he can fulminate all he likes about having Twitter calling into question the veracity of his tweets, but that very conflict is red meat for his base. Trump wants to distract Americans from the awful coronavirus death toll, which recently topped 100,000, civil unrest on the streets of America’s major cities, and a deep recession that has put 41 million Americans out of work. A war with Twitter is right out of his usual political playbook.

By the same token, social media companies cannot solve this problem simply by making themselves, rather than an independent regulatory body, the final arbiters of fact-checking. Twitter attaching a fact check to a tweet from President Trump looks like a self-serving attempt to forestall a more substantial regulatory effort. Even under the generous assumption that the social media giants had the financial resources, knowledge, or people to do this correctly, as a general principle it is not a good idea to let the principal actors of an industry regulate themselves, especially when that arbiter is effectively one person, as is the case at Facebook. As Atlantic columnist Zeynep Tufekci wrote recently, “Facebook’s young CEO is an emperor of information who decides rules of amplification and access to speech for billions of people, simply due to the way ownership of Facebook shares are structured: Zuckerberg personally controls 60 percent of the voting power.” At least Zuckerberg (unlike Twitter’s Jack Dorsey) has personally acknowledged that “Facebook shouldn’t be the arbiter of truth of everything that people say online… Private companies probably shouldn’t be, especially these platform companies, shouldn’t be in the position of doing that.”

One thing we can quickly dismiss is a revival of the old fairness doctrine, which, until its abolition in 1987, required any media company holding an FCC broadcast license to allow the airing of opposing views on controversial issues of public importance. That doctrine first came under challenge in 1969 on First Amendment grounds in the case of Red Lion Broadcasting Co., Inc. v. Federal Communications Commission. Dylan Matthews explained in the Washington Post that “[t]he Court ruled unanimously that while broadcasters have First Amendment speech rights, the fact that the spectrum is owned by the government and merely leased to broadcasters gives the FCC the right to regulate news content.” In theory, the idea that the broadcast spectrum is still owned by the government and merely “leased” to private media could arguably be extended to the internet broadband spectrum, so that social media companies and digital platforms, like broadcast media companies, would have to abide by a range of public interest obligations, some of which may infringe upon their First Amendment freedoms. “However,” Matthews went on to point out, “First Amendment jurisprudence after Red Lion started to allow more speech rights to broadcasters, and put the constitutionality of the Fairness Doctrine in question.” It is unlikely that this would change, especially given the configuration of the Supreme Court under Chief Justice John Roberts, which has exhibited a strongly pro-corporate bias in the majority of its rulings.

The FCC still retains some discretion to regulate conventional media on the basis of public interest considerations, but Philip M. Napoli, James R. Shepley Professor of Public Policy in the Sanford School of Public Policy at Duke University, has argued that “the FCC’s ability to regulate on behalf of the public interest is in many ways confined to the narrow context of broadcasting.”

Consequently, there would likely have to be some reimagining of the FCC’s concept of the public interest in order to justify expanding its regulatory remit into the realm of social media. Napoli has suggested that:

“Massive aggregations of [private] user data provide the economic engine for Facebook, Google, and beyond.…

“If we understand aggregate user data as a public resource, then just as broadcast licensees must abide by public interest obligations in exchange for the privilege of monetizing the broadcast spectrum, so too should large digital platforms abide by public interest obligations in exchange for the privilege of monetizing our data.”

Any such changes would still have to be initiated by Congress. As things stand today, existing legal guidelines for digital platforms in the U.S. fall under Section 230 of the Communications Decency Act. The goal of that legislation was to establish some guidelines for digital platforms in light of the jumble of (often conflicting) case law that had arisen well before we had the internet. The legislation broadly immunizes internet platforms from any liability as a publisher or speaker for third-party content. By contrast, a platform that publishes digitally can still be held liable for its own content, of course. So a newspaper such as the New York Times or an online publication such as the Daily Beast could still be held liable for one of its own articles online, but not for its comments section.

While the quality of public discourse has suffered mightily from the immunity granted by Section 230, the public doesn’t have much power to do anything about it. There is, however, a growing coalition of business interests that have bristled for many years at their inability to hold these platforms accountable for the claims made by critics and customers of their products, and at their failure to keep Section 230 out of international trade agreements; it has already seeped into parts of the new USMCA agreement with Mexico and Canada. A New York Times story about the fight explained that “companies’ motivations vary somewhat. Hollywood is concerned about copyright abuse, especially abroad, while Marriott would like to make it harder for Airbnb to fight local hotel laws. IBM wants consumer online services to be more responsible for the content on their sites.” Never underestimate the sophistication and capacity of business lobbies in Washington to harness a national controversy, such as Trump’s recent clashes with Twitter and Facebook, in the service of a long-term regulatory goal.

Oregon Senator Ron Wyden, a co-author and advocate of Section 230, argued that “companies in return for that protection—that they wouldn’t be sued indiscriminately—were being responsible in terms of policing their platforms.” In other words, the quid pro quo for such immunity was precisely the kind of moderation that is conspicuously lacking today. However, Danielle Citron, a University of Maryland law professor and author of the book Hate Crimes in Cyberspace, suggested there was no such quid pro quo in the legislation, noting that “[t]here are countless individuals who are chased offline as a result of cyber mobs and harassment.”

In addition to the intimidation of targeted groups cited by Citron, there are further problems, such as the dissemination of content designed to interfere with the functioning of democracy (as seen in the 2016 presidential election) or to otherwise disrupt society. This is not a problem unique to the United States. Disinformation was spread during the Brexit referendum, for starters. Another overseas example comes from a Wall Street Journal article, which reported in June that “[a]fter a live stream of a shooting spree at New Zealand mosques last year was posted on Facebook, Australia passed legislation that allows social-media platforms to be fined if they don’t remove violent content quickly.” Likewise, Germany passed its NetzDG law, which was designed to compel large social media platforms, such as Facebook, Instagram, Twitter, and YouTube, to block or remove “manifestly unlawful” content, such as hate speech, “within 24 hours of receiving a complaint but have up to one week or potentially more if further investigation is required,” according to an analysis of the law written by Human Rights Watch.

It is unclear whether Section 230 imposes similar obligations in the U.S.

Given this ambiguity, many still argue that the immunity conferred by Section 230 is too broad. Last year, Republican Senator Josh Hawley introduced the Ending Support for Internet Censorship Act, the aim being to narrow the scope of immunity conferred on large social media companies by Section 230 of the Communications Decency Act. The stated goal of the legislation was to compel these companies to “submit to an external audit that proves by clear and convincing evidence that their algorithms and content-removal practices are politically neutral.”

Under Hawley’s proposals, for example, Google or Bing would not be allowed to arbitrarily limit the range of political ideology available. The proposed legislation would also require the FTC to examine the algorithms as a condition of continuing to give these companies immunity under Section 230. Any change in the search engine algorithm would require pre-clearance from the FTC.

Hawley’s proposal would fundamentally alter the business models of social media companies, which today depend on huge volumes of user-generated content. But it certainly will not solve the problem of fake news, which has emerged as an increasingly controversial flashpoint in the discussion of how to regulate social media. The problem with Hawley’s proposal is that proving political neutrality could itself require digital platforms to engage in extensive content moderation. Ironically, then, efforts to preserve a platform’s political neutrality could well create disincentives against moderating at all, and in fact encourage platforms to err on the side of tolerating extremism (which might inadvertently include the dissemination of misinformation).

Public Knowledge’s Harold Feld noted that Section 230’s protections for third-party content do not exempt platforms from federal or state criminal laws, such as those governing sex trafficking or illegal drugs. But he recognized that this by no means constitutes a complete solution to the problems raised here. In his book The Case for the Digital Platform Act, he proposes that Congress create a new agency with permanent oversight jurisdiction over social media. Such an agency could “monitor the impact of a law over time, and… mitigate impacts from a law that turns out to be too harsh in practice, or creates uncertainty, or otherwise has negative unintended consequences.” To maintain ample flexibility and democratic legitimacy, Feld proposed that the agency have the “capacity to report to Congress on the need to amend legislation in light of unfolding developments.”

Regulating the free-for-all on social media is unlikely to circumscribe our civil liberties or our democracy, so First Amendment enthusiasts can breathe easy. The experiment in letting anybody say whatever he or she wants, true or false, and be heard instantly around the world at the push of a button has done less to serve the cause of free speech, or to enhance the quality of journalism, than it has to turn a few social media entrepreneurs into multi-hundred-millionaires or billionaires.

We managed to have the civil rights revolution even though radio, TV, and Hollywood were regulated, so there is no reason to think that a more robust set of regulations for social media will throw us back into the political dark ages or stifle free expression. Even in the internet era, major journalistic exposés have largely emerged from traditional newspapers and magazines, online publications such as the Huffington Post, or curated blogs, not from random tweets or Facebook posts. Congress should call Trump’s bluff on social media by crafting regulation appropriate for the 21st century.

That may have to wait until after the 2020 election, but it is a problem that won’t go away.

