The Supreme Court will soon hear a case that -- according to most articles I've read -- could upend "Section 230", the law that protects Internet platforms from the consequences of user-contributed content. For example, if you post something on Facebook and there's some legal problem with it, that falls on you, as the author, and not on Facebook, which merely hosted it. This law was written in the days of CompuServe and AOL, when message boards and the like were the dominant forms of Internet discourse. While there's a significant difference between these platforms and the phone company -- platforms can alter or delete content -- this still feels like basically the "common carrier" argument. This makes sense to me: you're responsible for your words; the public place where you happened to post them isn't.
Osewalrus has written a lot about Section 230 over the years -- he explains this stuff better and way more authoritatively than I do. (Credit is his; errors and opinions are mine.)
When platforms moderate content, things get more complicated, and I'm seeing a lot of framing of the current case rooted in this difference. From what I understand, that aspect is irrelevant, and unless the Supreme Court is going to be an activist court that legislates from the bench, hosting user-contributed content shouldn't be in danger. But we live in the highly-polarized US of 2023 with politically-motivated judges, so this isn't at all a safe bet.
The reason none of that should matter is that the case the court is hearing, Gonzalez v. Google, isn't about content per se. It's about the recommendation algorithm, Google's choice to promote objectionable content. This is not passive hosting. That should matter.
The key part of Section 230 says:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. (47 U.S.C. § 230(c)(1)).
The court can rule against Google without affecting this clause at all. The decision shouldn't be about whether Google is the "publisher" or "speaker". Rather, in this case Google is the advertiser, and Section 230 doesn't appear to cover promotion at all.
I'm not a lawyer, and I'm not especially knowledgeable about Section 230. I'm a regular person on the Internet with concerns about the proper placement of accountability. Google, Twitter, Facebook, and others choose to promote user-contributed content, while platforms like Dreamwidth, Mastodon, and many forums merely present content in the order in which it arrives. That should matter. Will it? No idea.
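To make that distinction concrete, here's a minimal sketch -- mine, not anything from the case or from any actual platform's code -- of the difference between the two kinds of feeds. The `engagement_score` ranking is a hypothetical stand-in for whatever signals a real recommender uses.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    arrived_at: float              # timestamp of submission
    engagement_score: float = 0.0  # hypothetical ranking signal

def chronological_feed(posts: list[Post]) -> list[Post]:
    # Passive hosting: show everything, newest first.
    # The platform makes no choice about what to surface.
    return sorted(posts, key=lambda p: p.arrived_at, reverse=True)

def recommended_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    # Active promotion: the platform picks which posts to amplify,
    # ranked by a score of its own devising.
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)[:limit]
```

Both functions host exactly the same content; only the second embodies a choice by the platform about what to put in front of you, and that choice is the thing I'm hoping the court focuses on.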
Moderation is orthogonal. Platform owners should be able to remove content they do not want to host, just as the owner of a physical bulletin board can. In a just world, they would share culpability only if objectionable content were brought to their attention and they failed to act; at that point they've tacitly said it's ok, as opposed to saying nothing at all because nobody can read everything on a platform of even moderate size. This is how I understand the "safe harbor" provision of the Digital Millennium Copyright Act to work: hosts aren't liable for infringing content they don't know about, but they must act on proper notices. The same principle should apply here. In a just world, as I said, which isn't the world we live in. (I, or rather my job title, am a registered agent for DMCA claims, and I'm required to respond to the claims I receive.)
I really hope that the court, even a US court in 2023, focuses on the key points and doesn't use this case as an excuse to muck with things unrelated to the question actually before it.