This week, the Supreme Court is hearing two cases that could upend the way we’ve come to understand freedom of speech on the internet. Both Gonzalez v. Google and Twitter v. Taamneh ask the court to reconsider how the law interprets Section 230, a statute that shields companies from legal liability for user-generated content. Gizmodo will be running a series of pieces about the past, present, and future of online speech.
When I was little, my mom laid down her two golden rules: “don’t be mean” and “don’t hurt others.” As I grew up, I did my best to uphold these values and stay away from haters, first in the real world and then in the digital one. In the early aughts, when people were slowly starting to adopt the internet en masse, this was largely easy. However, as the years went by and social media began to rise, it became harder. Hate, extremism, vitriol, harassment, and misinformation filled up screens—the internet was inescapable and awful.
As part of my job, I cover what goes on in online communities across the internet, which involves some pretty horrible content. You have high-profile people spouting misinformation about antidepressants, covid-19, and “herbal abortion teas” that in some cases are literal poisons. There’s also a lot of hate—hate toward the Jewish community, hate toward experts who attempt to correct misinformation, and hate directed at someone who literally broke their back in a horrible accident. And that’s only the tip of the iceberg.
It seemed crazy to me that companies could get away with allowing such vile, and in many cases dangerous, content on their platforms. It’s not like they can’t legally do something about it. Under Section 230, a provision of the Communications Decency Act of 1996, online platforms are allowed to moderate objectionable content. Most importantly, though, Section 230 gives platforms a shield that frees them from legal liability for much of the content their users post.
Given social platforms’ sketchy track record in content moderation, it seemed to me like Section 230 was a cushy law they didn’t deserve. Don’t get me wrong: I’m not against free speech. I mean, look at my profession. But I do think that the nauseating, harassing, and dangerous speech on the internet can be harmful to us as individuals and as a society. Therefore, when the Supreme Court agreed to take up a case related to Section 230, I saw it as a good thing.
The Supreme Court will hear oral arguments in Gonzalez v. Google on Tuesday. The case was brought by the family of Nohemi Gonzalez, a 23-year-old American college student who was among the 130 people killed in Paris in 2015 by members of ISIS. Gonzalez’s family argues that Google aided ISIS when it recommended the terrorist group’s videos on YouTube, in violation of federal anti-terrorism law. Google, meanwhile, claims Section 230 protects it from such claims. The court is expected to deliver its decision this summer.
Let them strike it down, I thought dramatically. Maybe that would be the incentive companies needed to clean up their swampy platforms. I’m far from the only person who wants to see Section 230 gone. Both Democrats and Republicans dislike the provision, although for different reasons. President Joe Biden has called for reforming Section 230 and removing platforms’ liability shield, while former President Donald Trump wanted to throw it out altogether.
Despite my strong feelings about how Section 230 has contributed to the internet’s toxic landscape, today I’m here to tell you that I don’t think Section 230 should be repealed. I came to this conclusion after speaking with Jeff Kosseff, a cybersecurity law professor at the U.S. Naval Academy and author of “The Twenty-Six Words That Created the Internet,” which analyzes Section 230 in depth and weighs the costs and benefits of protecting online platforms.
Kosseff is widely considered one of the preeminent Section 230 experts. When I shared my concerns about Section 230 and the state of the internet, he told me he agreed that “there are substantial harms out there” that need to be addressed. However, he doesn’t think Section 230 is responsible for most of our complaints.
Overall, speaking with Kosseff helped me separate Section 230 from the angry public discourse on both ends of the political spectrum.
That doesn’t mean I think Section 230 is perfect. Even Kosseff is in favor of modest amendments. I’ve come to think of the internet like a house, with Section 230 as its foundation. It’s a good base, but the house also needs things like a frame and a roof. It needs to be cared for and maintained, repaired, and even modified over time—or else it all comes crashing down.
Check out our full Q&A with Kosseff below.
This interview has been edited for length and clarity.
Gizmodo: What would you say to people like me who believe that the internet under Section 230 has turned into a dangerous swamp that can, in some cases, threaten lives?
Jeff Kosseff: I fully agree that there are substantial harms out there and they’re a serious problem that we need to address. But the [question] is: Is Section 230 to blame for them? And I think you have to look at why Section 230 was passed in the first place.
There was a case that [Section 230] was specifically addressing, which said that the way platforms reduce their liability is to not do any moderation. Section 230 addresses that by saying, ‘we’re going to remove this disincentive and let the platforms come up with moderation policies and procedures that best serve their users.’ So when you’re blaming Section 230 for the internet and all of the harms on there, I think you have to break it up into what you’re looking at.
There are certain things that Section 230 does protect platforms from liability for, and it’s primarily been defamation. That’s been the main issue. [When it comes to sex trafficking,] there is actually an explicit exception in Section 230 for sex trafficking. There’s also an explicit exception that’s always been in Section 230 for federal criminal law. There’s also an explicit exception for intellectual property law. And all of those things kind of get conflated.
Then there’s also a lot of what’s known as lawful-but-awful content. And the bottom line is that the First Amendment protects a whole lot of really bad stuff. With or without Section 230, the government can’t impose penalties for that content. That’s stuff like misinformation and hate speech—the stuff that Section 230 is often blamed for. But Section 230 actually facilitates the ability of platforms to develop policies to block content without becoming liable for everything.
So, I fully agree with you that there’s a lot of really harmful stuff out there. I just think that it would be too easy to attribute all of that to Section 230 when the bottom line is that we’re not Europe. In Europe, they have things like hate speech laws, which themselves have been abused. There have been politicians in Europe who have used the hate speech laws to get content that’s [critical of] them taken down. A number of countries around the world have passed misinformation laws over the past five years, and what they use them for is to take down and punish speech that criticizes the official government line.
But that’s also not a Section 230 issue in the United States. Under our current First Amendment precedent, we could never have a misinformation law. Perhaps the Supreme Court would radically reinterpret the First Amendment, but I think that would be a really bad place to go. Because while I think misinformation is a real problem, I think it’s a bigger problem to give the government the ability to define what’s misinformation.
Gizmodo: That’s a great point. I know some people, myself included, probably blame Section 230 for a lot of harmful things that aren’t Section 230’s fault. Do you have any ideas on why people have turned Section 230 into a scapegoat for everything that’s wrong with the internet?
Kosseff: Well, I’m sure it has nothing to do with the fact that there is a book titled “The Twenty-Six Words That Created the Internet.” That does not play any role in this subject. So, I think Section 230 is responsible for the business models of platforms, large and small, that host user content. They frankly could not exist in their current format without Section 230. They could exist, but they’d be very different. So because of that, everything bad that happens on the platforms is attributed to Section 230, when in fact, Section 230 often is part of the solution.
There are some specific types of cases where Section 230 is a problem or does pose a barrier for plaintiffs, though it tends to be things like defamation [or] certain types of harassment, if it rises to the level of actually being a viable action. But even then, there are still First Amendment protections for the platforms that are really hard to overcome. I mean, defamation, even against the person who posted, is a really hard claim to bring in the United States, even without Section 230. It becomes even harder when you’re bringing it against the platform that distributed that content.
Gizmodo: I wanted to ask you a question about the origins of Section 230 and how the situation in the ‘90s differs from what we have now. One of the original intentions of Section 230 was to prevent fledgling technology companies from being slammed with tons of lawsuits over user-generated content that would just be impossible to handle. Today we have Facebook, Google, Amazon, etc. ruling the internet. These companies have armies of lawyers and money at their disposal.
In other words, the times have changed and it seems like keeping Section 230 in place to protect these companies isn’t necessary anymore because they can certainly fend for themselves. What do you think about that?
Kosseff: In D.C., for the past few years, you can’t go anywhere without seeing a Facebook campaign about reforming Section 230. It’s not because Facebook is this benevolent company that suddenly realizes that reforming Section 230 is in the best interests of society. It’s because a company the size of Facebook or Google would be able to absorb the costs of litigation. They might change their behaviors a little bit depending on what the rules end up being. But the reason, I think at least, is that repealing Section 230 would not harm Facebook and Google nearly to the extent that it would harm companies that want to be the next Facebook and Google, which is already a really tall task.
I think there’s a misconception that Section 230 is that Big Tech law. Big Tech certainly does benefit from Section 230, there’s no doubt. But Section 230 is existential for mid-sized and small platforms that host user content. A site like Yelp would have to fundamentally shift how it does business if Section 230 were repealed, because Yelp, while it’s a decent-sized company, is not Facebook-sized.
And Glassdoor. They’re probably one of the top examples. They get sued all the time and they rely on Section 230 because they basically are a place where employees go to expose what it’s really like to work at a company. I don’t know how that exists without 230. Wikipedia—that’s a nonprofit that takes contributions from people all around the world. That’s an organization that absolutely couldn’t exist without Section 230. [Wikipedia co-founder] Jimmy Wales says that Section 230 is really existential for Wikipedia. I think that if Section 230 only applied to Facebook and Google, I would agree with your point. But it applies to like a local news website that hosts user comments. It’s not just something for big companies.
Gizmodo: In your book, you’ve mentioned that you’re in favor of modestly amending Section 230, but not completely throwing it out. What would a modestly amended version look like and how would it work?
Kosseff: I think part of it is we need a little more clarity as to what our goals are here. I speak with members of Congress and staffers all the time. I speak with members of both parties, and I’ll say that their goals are fundamentally different. We have primarily Democrats who have similar concerns to yours about harmful content and want to use Section 230 as a way to reduce harmful content. But then I speak with a lot of other members, primarily Republicans, who are concerned that the platforms have been unfairly censoring conservative speech. Their general argument is: why are we giving protection to these companies that are blocking our speech more than others’? And is there a way to amend Section 230 to reduce what they believe to be censorship?
I think that to reconcile those views, it’s really hard to find anything that’s crafted politically [that would be] practical to pass. My big concern is about material that rises to the very high level of being defamatory. I speak with a lot of people who have had really awful things posted online that really ruin their lives, and their goal isn’t to get money from the platforms. Their goal is to get the content taken down. What Section 230 says is that you can’t sue a platform for content that a user posted.
So, if I went on Facebook and defamed you, you couldn’t sue Facebook, but you would be able to sue me. Section 230 doesn’t prevent you from suing me for defamation. Now, getting a court to rule that something is defamatory is really hard, because there are constitutional protections, statutory protections, and common law protections. But let’s say the content was adjudicated to be defamatory. We’ve had some courts say that even in that situation, Section 230 would prevent a platform from being required to remove content that’s been adjudicated defamatory. I don’t think that should be the case. In a lawsuit between the poster and the subject, if the content actually is adjudicated defamatory or otherwise outside the scope of First Amendment protection, I think Section 230 shouldn’t protect a platform from having to take it down.
Gizmodo: Speaking of taking content down, I’ve been reporting on online misinformation, especially in the age of covid-19, for a while now. That’s primarily the reason I went to my editor and said, ‘Well, I think Section 230 should be repealed.’ I am, of course, not against free speech, but I’ve seen the harm that this type of misinformation can do. It just seems like platforms aren’t doing enough to moderate this type of content. What do you think?
Kosseff: I have a book coming out later this year on just this topic. It’s about why the First Amendment protects a great deal of false speech and why it should continue to. I think COVID is a good example. Think about what the government’s line on COVID was in March and April of 2020 and then see how it evolved.
Remember, it was ‘don’t wear masks’ and ‘masks don’t work.’ [They said] the idea that COVID could have come from a lab is complete misinformation—absolute misinformation. There’s no possibility. Platforms actually enforced that policy, and if people posted that, they would take the content down or kick the users off. Our understanding of truth evolved with more knowledge.
When you have the government stepping in and saying, ‘This is the government line. Anything else is misinformation and we are prohibiting it,’ that is really dangerous. A lot of countries have that. In the past five years, and it accelerated under COVID, we’ve seen a lot of fairly authoritarian countries adopt fake news laws in really dangerous ways that prevent our knowledge from evolving.
Now, it’s obviously not a perfect situation when you put all of that power in the hands of centralized platforms. Facebook’s idea of what [qualifies as] misinformation might not be much better than the government’s idea of misinformation. But there’s a difference because Facebook cannot deprive you of your liberties. Facebook cannot throw you in prison. Facebook can take you off of its site, which can be damaging, but the consequences are different. And you also, theoretically at least, have other choices.
That’s kind of the entire premise that Section 230 is based on. It’s an idea that there will be multiple platforms that might have different policies, and the market will eventually determine what wins out. Now, obviously, we have barriers to that where we don’t have this perfect competition or anything close to it. Frankly, I think the best development in social media in recent years has been the fediverse. I think that addresses some of [these issues] because it allows different instances that might have different rules. I think that’s closer to a market-based system. The problem is that it’s a really terrible platform, so it’s not very usable. I tried using it and it was awful.
The reason I spent the past two years writing a book about misinformation is that, while we have real concerns about misinformation, I worry that we would address those concerns by giving the government more power over speech. Once you give up that power, you don’t get it back, and different people may be in power.
I’m not going to call out the person, but I was on a panel with a congressional staffer whose boss had been proposing a bill about misinformation. It was a Democrat, and I said, ‘This is really dangerous because you end up giving every presidential administration in perpetuity the ability to regulate what is true and what is false.’ And the staffer said, ‘Well, we’re doing it while Biden’s president.’ And I was thinking, ‘But even if you agree with him, he’s not always going to be president. There’s going to be someone after him, and that might be someone you disagree with.’ And that’s the whole point. We don’t want our speech to be governed based on who we agree with. We should not have someone who has the power of law enforcement behind them determining what’s true and what’s false. That’s when we get into really dangerous authoritarian territory.
Gizmodo: Going back to content moderation. I know it’s very hard work. I’m not discounting that. But platforms don’t seem to have made much progress in content moderation. That makes me wonder whether a new law or a new agency or even repealing Section 230 would be required to compel them to do a better job. Can you address this?
Kosseff: I would strongly question the premise that you’re presenting, which is that platforms have not made progress. They’re far from perfect. But the job of moderating this amount of content at scale with limited information is impossible.
They’ve made some big screw-ups, in hindsight, on both keeping stuff up and taking stuff down. I don’t know that, if I were making the decision at the time, I would be able to do any better. I’m not giving them a free pass because, again, they always can do better. But I mean, they have gigantic teams of people who deal with this. They’re doing a lot. They have contractors, they have staffers, and lawyers. They’re trying to make a lot of really difficult calls where there’s often not going to be a right answer that people agree on. Things like misinformation and hate speech, where one person may say ‘this violates your hate speech policies’ and someone else would say ‘this is our legitimate political expression.’
And then you have a private company at the center, and it doesn’t really want to be doing this. Frankly, these private companies are advertising companies. They want to be selling ads to people and vacuuming up all your personal data. They don’t want to be making calls about free speech. They have no desire to, but they have to. That’s part of their product. As for creating a regulatory commission, I think it depends on what you’re regulating. If you’re talking about privacy, absolutely, sign me up. I think that we need privacy regulation, and I think that would address a lot of the problems that people have. That’s not a Section 230 issue. The failure to have any meaningful privacy laws in the United States is really shameful. And when I say meaningful, I do not mean the GDPR.
I mean something that actually says, ‘there are certain types of things that you cannot do.’ Right now, if you wanted to, you could go to a data broker and buy a huge set of geolocation data from people’s cell phones. That is incredibly dangerous, and we don’t have restrictions on that. But that’s not Section 230; that’s not about user content. That sort of thing often gets thrown into the mix. But I don’t think that you should have a regulatory commission over online speech. Obviously, if it’s outside the scope of First Amendment protection, then absolutely the government has a role in things like child sex abuse material. There’s no doubt. The DOJ has enforcement, and I think they need more enforcement along those lines, but that’s also not covered by Section 230. That’s federal criminal law.
Gizmodo: This Supreme Court has already created waves with its past rulings. Do you think there’s a chance that the court could make significant decisions that affect Section 230 when it hears the cases and ultimately issues its decisions?
Kosseff: I’ve been a lawyer long enough to know not to predict how the Supreme Court justices are going to rule, especially in this case where the only justice who’s ever said anything publicly about Section 230 is Justice Thomas. I’m more comfortable predicting that if anyone wants to reinterpret Section 230 it would be Justice Thomas just because we have something that he said. I don’t think we really know at all how the other justices are going to rule. They could do anything from maintaining the status quo of Section 230 to very narrowly addressing personalized algorithms to entirely reinterpreting 230 and saying that the lower courts have gotten it wrong for more than a quarter century. But we just don’t know.
Gizmodo: What message would you send to the general public at this time regarding Section 230 and the issues surrounding it?
Kosseff: Well, I think that what I would point out is kind of what we started with. When discussing Section 230, we first have to look at whether the problem we’re talking about actually stems from Section 230 or if it’s something that we’re just angry about [when it comes to] Big Tech. There’s a lot of stuff we’re angry about with Big Tech that is not related to Section 230. It’s either First Amendment protected or it’s a privacy issue or it’s a business decision that has nothing to do with user content. I think that we need a much more focused debate about what Section 230 does and does not do.