Section 230 in Those Public Entries

Feb. 15, 2023, 11:15 a.m. | Public

I joined Nebula earlier this year, and most of the people I follow on YouTube are also on Nebula, which means there’s exclusive content there that isn’t on YouTube. One of those exclusives is a recent LegalEagle video (the channel is run by Devin Stone, a for-really-real lawyer) about the Section 230 lawsuit, Gonzalez v Google, currently before the Unsupreme Court, and how the ruling could destroy the internet as we know it. LegalEagle’s video is not up on YouTube at the time I’m writing, so here’s a link to a Verge article about the lawsuit. If and when LegalEagle’s video is posted on YouTube, I’ll link it here.

Update: the video has since been posted to YouTube.

So, background: Section 230 of the Communications Decency Act basically says that websites cannot be held liable for the content their users post on them, and it also shields them when they moderate or curate that content in good faith. The question in this case is whether that shield extends to the algorithms that recommend and “diversify” what viewers see. (Stick a pin in that, we’re coming back to it.)

The lawsuit, Gonzalez v Google, was brought by the family of Nohemi Gonzalez, who was murdered in an ISIS attack (one of a series of coordinated attacks across the city that night) in Paris on November 13, 2015. 129 other people were also murdered in the course of those attacks; ISIS claimed responsibility by, among other channels, posting to YouTube about it. Nohemi’s family sued Google (which owns YouTube), claiming that the company should have done more to keep ISIS and other terroristic content off the platform. (There is a companion case being decided alongside it, Twitter v Taamneh, which raises closely related questions, so I’m not going to reiterate it here.)

According to Devin’s video, repealing Section 230 would effectively make companies like Google responsible for all content posted to their various websites, and could spell the end of both user-generated content and recommendation algorithms. As Devin points out, we generally don’t hold utilities responsible for what their users do while using them: if, for example, a Verizon customer leaves threatening voicemails or sends threatening texts to their ex, Verizon isn’t held liable for the messages or for providing the service. Plus, as we saw when the Orange Shitgibbon signed FOSTA-SESTA (the Fight Online Sex Trafficking Act and the Stop Enabling Sex Traffickers Act) into law, all it did was take the titties of consenting adults off Tumblr and feed that weird-as-fuck, QAnon-pushed Wayfair child trafficking conspiracy theory. Not to mention that FOSTA-SESTA has had absolutely no measurable effect in stopping sex trafficking online.

Additionally, Devin’s defense of YouTube’s algorithms is solid: over a million videos are uploaded to the site every day. Even if uploads stopped tomorrow, it would take almost 18,000 years to watch every single video already there. And let’s just be real, the vast majority of those videos are crap. The greatest strength and greatest weakness of YouTube is that anyone with access to a computer and video equipment can post on it. Occasionally you get brilliance: your Natalie Wynns and Lindsay Ellises and LegalEagles and Jimquisitions, but more often you get Nostalgia Critic and his knock-offs. And without a recommendation algorithm, you’d have to dig through every video chronologically, which would be a nightmare at the very least.
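Quick sanity check on that 18,000-years figure, because numbers that big always make me suspicious. Here’s a back-of-the-envelope sketch in Python; both inputs are my own rough guesses for illustration (total catalog size and average video length), not figures from Devin’s video or from YouTube.

    # Back-of-the-envelope check on the "almost 18,000 years" claim.
    # Both inputs are rough guesses for illustration, not official YouTube numbers.
    total_videos = 800_000_000   # assumed total videos on the site, circa 2022
    avg_minutes = 12             # assumed average video length, in minutes

    total_watch_minutes = total_videos * avg_minutes
    minutes_per_year = 60 * 24 * 365

    print(f"{total_watch_minutes / minutes_per_year:,.0f} years of nonstop watching")
    # prints: 18,265 years of nonstop watching

Even if my guesses are off by a factor of two in either direction, the order of magnitude holds: there is no version of this where human beings pre-screen the entire catalog.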

All that said… I don’t think I have ever been more conflicted about a case like this before. I believe in free speech, but at the same time, freedom of speech does NOT mean “freedom from consequences.” I don’t think that everyone should be allowed to barf out whatever shit crosses their mind and use the “it’s a free country, I can say whatever I want!” defense. Or, worse, and something I saw all the fucking time when I was still on the bird app, “1ST AMENDMENT ABSOLUTIST!” Because that is simply not how it works; the First Amendment doesn’t protect all speech -which these chucklefucks would know if they had the relatively low level of reading comprehension it takes to understand what 1A actually says (stick a pin in this, too)- nor should it. The First Amendment protects the rights of the individual to criticize the government, to protest the government, to oppose the government. It says fuck-all about interpersonal speech or communications on privately-owned websites.

And then there’s the simple fact that third party tools are used in committing crimes literally all the time. You know how the NRA loves to say “but the guns were purchased legally!” whenever a mass shooting happens? Who gives a shit? Those legally-purchased guns were used in an illegal manner. I bought my house completely above-board, but if I started cooking meth in it, I’m still using my legally-purchased house in the manufacture and distribution of an illegal substance. Or, if I get drunk, drive my legally-purchased car into an assisted-living facility, and then start grabbing needles and screaming “I’M SERVING HAROLD SHIPMAN REALNESS, HUNTY!”, I’ve still used that legally-purchased car to commit at least three crimes that I can name (DWI, destruction of property, attempted murder). If I bought a hammer and brained someone with it, I still committed murder; no jury outside Wisconsin is going to be like, “Well, she definitely obliterated that guy’s skull, but she bought the hammer at Lowe’s and didn’t even ask for a discount, Not Guilty.” Whether you obtained the tools through legal channels doesn’t matter if you use them to commit a crime.

Also, YouTube’s algorithm is both fucky -to be as kind as possible about it- and aided by humans. Which means that a lot of genuinely problematic content not only gets onto YouTube, but stays there, while content that’s trying to combat it gets flagged and taken down. Best example I can think of: a few months ago, Ann Reardon (How to Cook That) included a segment in one of her videos on the dangers of fractal wood burning, which had dozens of tutorial videos on YouTube alone (and likely hundreds more on TikTok), telling viewers not to do it and to discourage anyone they knew who wanted to try it. She was perfectly within her rights to do so; fractal wood burning was, at the time she posted that video, responsible for 34 reported deaths in the US (and only god knows how many unreported or misattributed ones), and her channel reaches a pretty wide audience. This is what should correctly be called “harm reduction”. But YouTube banned her video while keeping up all of the tutorials on fractal wood burning. Which makes absolutely zero sense. Ann pointed out in the follow-up video that if her video had been banned and the tutorials were also banned, that would be overzealous, but at least it would show that YouTube was taking viewers’ safety seriously. But that’s not what happened. Which means that (a) someone with an agenda reported Ann’s original video and/or (b) the algorithm flagged it for some “disturbing” imagery.

The way I understand how algorithms work, it’s probably a lil’ column A, lil’ column B, but mostly column A. And I don’t actually fault the moderators for that; YouTube’s human moderators are notoriously underpaid and underprepared for the job. At least one former moderator, identified as “Jane Doe,” sued YouTube, claiming that the job gave her PTSD from watching videos featuring extreme violence, murder, child rape, suicides, abortions, and animal mutilation. And Jane Doe is not the only one; the Verge also reported on moderators who exclusively monitor the “violent extremism” category, and how psychologically taxing it is to see videos like that day in and day out. If we’re talking about videos that suggest, or even outright demonstrate, how to do something as stupid and deadly as fractal wood burning, that’s not going to catch many people’s attention, nor will it strike the moderators as something they should ban. But how far is it from “let a video on how to do a stupid, deadly thing get posted” to “suing YouTube because my child/spouse died while trying fractal wood burning”? Enabling a crime is often an actionable offense; e.g., when someone is charged with being an accessory to a crime. By that legal measure, YouTube could be considered the ultimate accessory before the fact.

This is where my conflict comes in: I agree with Devin, I don’t think Section 230 should be repealed. That strikes me as a massive overreaction and over-correction, along the lines of FOSTA-SESTA, which, again, has been basically useless for its stated purpose. And I also agree with Devin that getting rid of the YouTube algorithm entirely would be an absolute horrorshow (and god only knows how many truly illegal and horrific videos would end up being viewed by hundreds, thousands, even millions of people before being flagged and removed).

But… Something has to be done. I also agree with the Gonzalez family that YouTube isn’t doing enough to keep actual terrorists off the platform. And I’m not even talking about ISIS; I’m talking about incel and blackpilling videos. I’m talking about the NRA. I’m talking about Tucker Carlson and Steven Crowder, and other channels that push white supremacy and domestic terrorism as societal goods. And yes, the literal Nazis. This is the content that, because it’s not viscerally objectionable, slips past both the algorithm and the mods. Plus, the algorithm is way too easily manipulated, even by searching for criticisms of those channels: I’ve had my algorithm, which is trained mostly on leftist content and video game longplays, completely fucked by watching one HBO Vice video about “a day in the life of an incel”; I got recommended so many blackpill videos afterward that I had to clear out both my search and viewing history. If the algorithm worked the way most people think it does, that shouldn’t happen, but the algorithm doesn’t work that way. I’ll be honest, I’m not an expert in algorithms and how they work -I probably understand them about as well as most people do- but it really shouldn’t be that easy for an algorithm I have spent years -honest-to-god, years- carefully training and curating to be completely ruined by watching one video. So I can’t help but think that the algorithm might be part of the problem.
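For what it’s worth, here’s a toy sketch of why a single view can yank recommendations that hard. It’s purely hypothetical: a naive tag-overlap recommender I made up for illustration (the video names and tags are invented, and this is emphatically not anything YouTube has published). The point is just that if recent watches get weighted heavily, one outlier can drag the whole profile with it.

    from collections import Counter

    # Hypothetical catalog: video -> set of topic tags (all made up for illustration)
    CATALOG = {
        "video_essay_a":     {"leftist", "essay"},
        "video_essay_b":     {"leftist", "essay"},
        "longplay_zelda":    {"games", "longplay"},
        "longplay_myst":     {"games", "longplay"},
        "incel_documentary": {"incel", "documentary"},
        "blackpill_rant":    {"incel", "rant"},
        "blackpill_podcast": {"incel", "podcast"},
    }

    def profile(history, recency_weight=3.0):
        """Tag profile of the viewer; the most recent watch gets extra weight."""
        scores = Counter()
        for i, video in enumerate(history):
            weight = recency_weight if i == len(history) - 1 else 1.0
            for tag in CATALOG[video]:
                scores[tag] += weight
        return scores

    def recommend(history, k=2):
        """Rank unwatched videos by how much their tags overlap the profile."""
        prof = profile(history)
        candidates = [v for v in CATALOG if v not in history]
        return sorted(candidates, key=lambda v: -sum(prof[t] for t in CATALOG[v]))[:k]

    years_of_history = ["video_essay_a", "longplay_zelda"]
    print(recommend(years_of_history))
    # -> ['longplay_myst', 'video_essay_b']  (more of what I already watch)

    print(recommend(years_of_history + ["incel_documentary"]))
    # -> ['blackpill_rant', 'blackpill_podcast']  (one view, and the profile tips over)

The real system is obviously vastly more complicated, but the failure mode -recency swamping years of carefully curated history- is the same shape as what I ran into.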

And that’s not even mentioning the misinformation that gets spread all over YouTube and Facebook and the bird app, which, again, isn’t blatant enough to automatically alert the mods, and which the mods won’t remove because they themselves might not be aware that it’s incorrect. I want to be absolutely clear: everyone has been snookered by misinformation at some point, myself included.

Compounding the misinformation problem is the fact that many people lack the reading and media comprehension skills to understand the news they consume. Per statistics gathered between 2012 and 2017 by the Department of Education, half of all US adults read below a sixth-grade level. If reading that didn’t make your blood run cold, let me explain why that’s a problem. Most reputable newspapers write their articles at roughly a sixth-grade reading level, which means half of all US adults lack the reading comprehension to get through a newspaper. And the lower your level of reading comprehension, the less likely you are to read at all. That means this half of the US population is going to turn to TV for their news, and since they aren’t educated enough to understand that the person who screams the loudest isn’t the most reliable, they tend to get their information from FOX and InfoWars and other far-right channels. And while you absolutely could point fingers at the US education system and its many, many, many shortcomings for this (correct), that’s really not the point, nor would it help.

Put it this way: I have a college education. My reading comprehension is at least at the post-graduate level. I regularly read research papers and legal briefs, which is not something that all college-educated people can do. I have been taught how to properly read these papers, and I continue to build my research skills through every possible channel. I spend at least six hours every day reading news stories, fact-checking them, vetting their sources and their authors, and thinking critically not just about what the article directly says, but what it might be indirectly saying and what the agenda behind it could be. My reading comprehension has been very high since I was a child (I started kindergarten at a fifth-grade reading level, and it just got higher from there) and I have worked tirelessly to make my level of media literacy as high as it can possibly be. I say none of this as a flex. I say it to point out that, as hyper-literate as I am, as media literate as I am (which is the far more valuable skill, let’s be honest), as good as my research skills are, I have still been taken in by misinformation, because there is simply too much misinformation out there and not enough hours in the day to combat even a fraction of it. If it happens to me, with all my knowledge and skill, what fucking chance does anyone who reads at one-third my level have?

And again, half of all US adults read below a sixth-grade level. This is not their fault, and I don’t blame them for it; the Powers That Be have a vested interest in keeping as much of the population as under-educated as possible, because an educated populace is the biggest threat to tyranny and kleptocracy. They know it. I, and the people reading this post, know it. But we are in the minority, and as long as that is true, and the majority of people lack both the reading comprehension and media literacy to combat misinformation, we’re all fucked.

So, what should be done? Truth is, I don’t know. I don’t think that repealing Section 230 of the Communications Decency Act is the right thing; it’s a severely under-thought overreaction to a serious problem that, to be clear, absolutely needs to be addressed. We do need to do more about harmful content on the internet. We do need to crack down on these websites and YouTube channels that push vulnerable people towards extremism. We do need to more heavily monitor incel forums and other sites where disenfranchised young men gather. And above all, we have got to do something about the unfuckingbelievable amount of misinformation there is on the internet. If we don’t do something, and soon, then honestly? The internet as we know it is probably doomed anyway. Even if Section 230 isn’t repealed, how long will it take for corporate interests to completely eradicate user-generated content?

This is an extremely complicated problem, and I don’t think any one person can come up with a solution. We will need lawyers, sociologists, psychologists, educators, philosophers, and above all that, time, to come up with even a serviceable solution, and unfortunately, we don’t have much time. SCROTUM hears oral arguments in Gonzalez on February 21st, so unless a group of all the people I just mentioned can come together, work around the clock, and, by some miracle, produce even a barely-workable solution, I think we are just plain fucked.

Cheers to us, Denizens of the Box of Prose and other user-generated online content. It was beautiful while it lasted.


Last updated February 15, 2023


Park Row Fallout ⋅ February 15, 2023

It is certainly an issue. I think some content moderation regarding safety, copyright, and personal privacy is valid. But I think finding Google responsible for the content via this lawsuit would fly in the face of justice (at least as it currently exists). To go back to some of the examples you were using... if you drink to intoxication and crash your car... the victim cannot sue the Beverage Producer or the Car Company for your actions. If you buy a hammer and brain a guy, his survivors cannot sue Lowe's over your actions. And worst of all (a common legal discussion) when a Six Year Old shoots his teacher, she can't sue the gun manufacturer over the damages. So... just from that cold analytical place, doing away with 230 and finding Google responsible and punishing it would be far outside of current legal precedent (not that I believe our current SCOTUS even cares about precedent, sadly).

Pretend Mulling ⋅ in reply to Park Row Fallout ⋅ February 25, 2023

I had to wait and see, but it looks like SCROTUM might actually make the right decision? It's not often that I agree with Kagan and Brown Jackson and Kavanaugh and Thomas, but...

https://www.cnbc.com/2023/02/21/supreme-court-justices-in-google-case-hesitate-to-upend-section-230.html
