Section 230 In the Age of Social Media

Supreme Court decision could end its unbridled reign

Jack Nargundkar
5 min read · Oct 10, 2022

In its October 4, 2022 report, “India’s Tech Regulation Onslaught Poses Dilemma for U.S. Companies,” the Wall Street Journal presents a very interesting predicament posed by the Indian government’s “plans for the formation of a Grievance Appellate Committee, or GAC, to hear users’ complaints against technology platforms’ content-moderation decisions.” On the one hand, there is the WSJ’s conclusion: “The risk for American technology companies is that however much they give in to Indian government demands to stay in the market could set precedents in other markets around the world.” On the other hand, a GAC that appears to be constituted “of the government, by the government, and for the government” does not bode well for the functioning of a democracy whose commitment should primarily be to a government “of the people, by the people, and for the people.”

Having said that, we have learned some hard lessons in the United States about the egregious role played by American social media companies in disseminating misinformation and disinformation on various fronts, including domestic politics, global terrorism, and international geopolitics. From a domestic politics standpoint, this activity increased dramatically during the 2016 presidential election cycle, endured through the pandemic, and continues to this day. Meanwhile, content on popular social media platforms is constantly labeled “fake news,” accurately or not; this sort of frivolous tagging is itself a problem, because the truth often gets lost in the process.

Nonetheless, we are faced with a classic quandary: honoring the boundaries of free speech while preserving a government’s ability to limit the harm, especially violence, caused by social media platforms that, wittingly or unwittingly, push those boundaries. In this regard, the Justice Department’s ongoing review of Section 230 of the Communications Decency Act of 1996, which provides online platforms with immunity from civil liability for third-party content and for the removal of content in certain circumstances, is likely to provide much-needed clarity in this matter. Per the DOJ:

“First, it addresses unclear and inconsistent moderation practices that limit speech and go beyond the text of the existing statute. Second, it addresses the proliferation of illicit and harmful content online that leaves victims without any civil recourse. Taken together, the Department’s legislative package provides a clear path forward on modernizing Section 230 to encourage a safer and more open internet.”

But even as the Justice Department proceeds with its review, the Supreme Court (SCOTUS) began its new term earlier this week by agreeing to take up a case on content policing that directly impacts Section 230 and the protections that it has thus far afforded to social media companies. In an October 5, 2022 report, “Supreme Court takes case on content policing: Here’s how a Section 230 ruling could impact social media,” ABC News suggests that the Supreme Court’s ruling in this case “could dramatically change how those platforms operate,” and alter the “legal liability that could result from content posted by users.”

Per the ABC News report, a SCOTUS decision could have material impacts on two aspects of the way social media companies manage content on their platforms. The primary engine that drives social media companies is their recommendation algorithm. So, if SCOTUS eliminates legal protection for recommended content, it “could significantly alter the type of posts that appear before users on Facebook’s News Feed or Twitter’s timeline,” and the report goes on to caution, “Posts that could concern social media sites after the ruling include libelous comments and instructions for committing criminal acts, not just the terrorist propaganda at issue in the Supreme Court case.” The second, and possibly more contentious, impact is that a SCOTUS ruling could force social media companies to rely more heavily on professionally generated content. While this would reduce the companies’ liability, it would strike a blow against individually generated content, and hence against free speech for the common man.
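To make concrete what “recommended content” means in this context, here is a minimal, hypothetical sketch in Python (the Post, engagement_score, and recommend names are my own illustration, not any platform’s actual code). It ranks third-party posts by a crude engagement heuristic and surfaces only the top few; it is this ranking-and-surfacing step, as distinct from merely hosting posts, whose Section 230 protection the case puts in question.

```python
# Hypothetical illustration of engagement-based ranking; not any platform's real algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    shares: int
    age_hours: float

def engagement_score(post: Post) -> float:
    # A simplified "predicted engagement" heuristic: weight shares more than likes
    # and decay older posts.
    return (post.likes + 3 * post.shares) / (1 + post.age_hours)

def recommend(feed: list[Post], k: int = 3) -> list[Post]:
    # The act at issue: the platform chooses which third-party posts to surface,
    # rather than merely hosting them.
    return sorted(feed, key=engagement_score, reverse=True)[:k]

if __name__ == "__main__":
    sample_feed = [
        Post("alice", "Local bake sale this weekend", likes=12, shares=1, age_hours=2.0),
        Post("bob", "Inflammatory rumor with no sources", likes=250, shares=90, age_hours=5.0),
        Post("carol", "City council meeting notes", likes=40, shares=4, age_hours=1.0),
    ]
    for post in recommend(sample_feed, k=2):
        print(f"{post.author}: {post.text}")
```

Even in this toy version, the scoring function happily promotes the most sensational post to the top of the feed, which is why liability for the recommendation step, rather than for hosting alone, is the crux of the dispute.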

In a more nuanced discussion in Vox’s October 6, 2022 article, “A new Supreme Court case could fundamentally change the internet,” Ian Millhiser provides valuable insight into this very complex case, going back to why Section 230, with its “two protections to websites that host third-party content online,” was enacted even before social media companies existed. It is these twin safeguards that fundamentally shaped the internet’s development and led to the birth of social media. Social media companies have since evolved into behemoths that manage content across the entire content value chain (creation, curation, and distribution), just as newspapers, magazines, TV, and radio do, but they have been treated differently, primarily because they came into being at the dawn of the web-based internet, when its full potential was still being realized. After a detailed analysis of the pros and cons of Section 230 and its impact on the proliferation of social media content, Mr. Millhiser concludes, “In an ideal world, Congress would step in to write a new law that strikes a sensible balance between ensuring that important websites continue to function, while also maybe including some safeguards against the promotion of illegal content.” But he recognizes that, given the highly polarized nature of our politics, Congress is not going to make this happen any time soon. Nonetheless, holding social media companies accountable for the content that traverses their networks has become an imperative, and it now falls to SCOTUS to decide the question. It seems highly ironic to me that a technophobic court, one that still does not allow live video cameras during its proceedings, is set to determine the accountability and liability, if any, of social media companies in one of the most complex, technology-driven “content value chain” issues of our lifetime.

In any event, coming full circle to where I began this commentary, whether it is the Indian government’s GAC or a privately run social media company trying to moderate content published on its platform, there is no easy answer. The stakes keep rising: social media is degenerating into a set of anti-social platforms that deepen divisions and polarize societies across the globe. Autocratic governments all over the world have recognized social media’s awesome power to instantly sway public opinion, which is why they seek to control it.

Technology has invariably preceded societal norms and often rocked the boat of established governmental regulation. So it is incumbent upon technology companies to be part of the solution if they don’t want one forced upon them by disparate governments around the world or their respective courts. We must find a resolution on Section 230 that works for both business and society, at least in the world’s democratic nations. If SCOTUS gets it wrong, the future of social media, as we’ve come to love, hate, or disregard it, will depend on Congress, which must work with our social media and technology companies to find an optimal solution. The resulting “new and improved” Section 230 should define communications decency and standards for the foreseeable future and cover emerging internet technologies such as Web 3.0, blockchain, and the metaverse. American innovation created the internet, the world wide web, and social media; let’s not allow foreign paranoia to strangle the freedom and productivity that these technologies have unleashed across the planet.

Jack Nargundkar

Jack Nargundkar is an author, freelance writer, and marketing consultant, who writes about high-tech, economics, foreign policy and politics.