Blog: October 2022

Most of these posts were originally posted somewhere else and link to the originals. While this blog is not set up for comments, the original locations generally are, and I welcome comments there. Sorry for the inconvenience.

Social media and moderation

I've participated in a lot of online communities, and a lot of types of online communities, over the decades -- mailing lists, Usenet, blogging platforms like Dreamwidth, web-based forums, Q&A communities... and social media. With the exception of blogging platforms, where readers opt in to specific people/blogs/journals and the platform doesn't push other stuff at us, online communities tend to end up with some level of moderation.

We had (some) content moderation even in the early days of mailing lists and Usenet. Mostly[1] this was gatekeeping -- reviewing content before it was released, because sometimes people post ill-advised things like personal attacks. Mailing lists and Usenet were inherently slow to begin with -- turnaround times were measured in hours if you were lucky and more typically days -- so adding a step where a human reviewed a post before letting it go out into the wild didn't cost much. Communities were small and moderation was mostly to stop the rare egregiously bad stuff, not to curate everything. So far as I recall, nobody back then was vetting content for accuracy, say by declaring posts to be misinformation.

On the modern Internet with its speed and scale, moderation is usually after the fact. A human moderator sees (or is alerted to) content that doesn't fit the site's rules and handles it. Walking the moderation line can be tough. On Codidact[2] and (previously) Stack Exchange, I and my fellow moderators have sometimes had deep discussions of borderline cases. Is that post offensive to a reasonable person, or is it civilly expressing an unpopular idea? Is that link to the poster's book or blog spam, or is the problem that the affiliation isn't disclosed? How do we handle a case where a very small number of people say something is offensive and most people say it's not -- does it fail the reasonable-person principle, or is it a new trend that a lot of people don't yet know about? We human moderators would examine these issues, sometimes seek outside help, and take the smallest action that corrects an actual problem (often an edit, maybe a word with the user, sometimes a timed suspension).

Three things are really, really important here: (1) human decision-makers, (2) who can explain how they applied the public guidelines, with (3) a way to review and reverse decisions.

Automation isn't always bad. Most of us use automated spam filtering. Some sites have automation that flags content for moderator review. As a user I sometimes want to have automation available to me -- to inform me, but not to make irreversible decisions for me. I want my email system to route spam to a spam folder -- but I don't want it to delete it outright, like Gmail sometimes does. I want my browser to alert me that the certificate for the site I'm trying to visit isn't valid -- but I don't want it to bar me from proceeding anyway. I want a product listing for an electronic product to disclose that it is not UL-certified -- but I don't want a bot to block the sale or quietly remove that product from the seller's catalogue.

These are some of the ways that Twitter has been failing for a while. (Twitter isn't alone, of course, but it's the one everyone's paying attention to right now.) Twitter is pretty bad, Musk's Twitter is likely to be differently bad, and making it good is a hard problem.[3]

Twitter uses bots to moderate content, and those bots sometimes get it badly wrong. If the bots merely flagged content for human review, that would be ok -- but to do that at scale, Twitter would need to make fundamental changes to its model. No, the bots block the tweets and auto-suspend the users. To get unsuspended, a user has to delete the tweets, admit to wrongdoing, and promise not to do it "again" -- even if there's nothing wrong with the tweet. The people I've seen be hit by this were not able to find an appeal path. Combine this with opaque and arbitrary rules, and it's a nightmare.

Musk might shut down some of the sketchier moderation bots (it's always hard to know what's going on in Musk's head), but he's already promised his advertisers that Twitter won't be a free-for-all, so that means he's keeping some bot-based moderation, probably using different rules than last week's. He's also planning to fire most of the employees, meaning there'll be even fewer people to review issues and adjust the algorithms. And it's still a "shoot first, ask questions later" model. It's not assistive automation.

A bot that annotates content with "contrary to CDC guidelines" or "not UL-certified" or "Google sentiment score: mildly negative" or "Consumer Reports rating: 74" or "failed NPR fact-check" or "Fox News says fake"? Sure, go for it -- we've had metadata like the Good Housekeeping seal of approval and FDA nutrition information and kashrut certifications for a long time. Want to hide violent videos or porn behind a "view sensitive content" control? Also ok, at least if it's mostly not wrong. As a practical matter a platform should limit the number of annotations, or let users choose which kinds of assistance they want, but in principle, fine.
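To make the inform-don't-act idea concrete, here is a minimal sketch (purely hypothetical -- not any real platform's code, and the checker rules are invented for illustration): automated checks attach labels to a post as metadata, and each reader's own settings decide what gets hidden behind a click-through. Nothing is deleted and nobody is suspended.

```python
def label_post(post, checkers):
    """Run each automated checker; collect labels instead of taking action."""
    labels = [label for check in checkers if (label := check(post))]
    return {"text": post, "labels": labels}

def render_for_user(labeled_post, hidden_labels):
    """Apply one reader's preferences: hide behind a click-through, never remove."""
    if set(labeled_post["labels"]) & hidden_labels:
        return "[hidden: " + ", ".join(sorted(labeled_post["labels"])) + " -- click to view]"
    return labeled_post["text"]

# Hypothetical checkers, standing in for whatever real analysis a platform runs.
def looks_spammy(post):
    return "spam-label" if "buy now" in post.lower() else None

def sensitive(post):
    return "sensitive" if "graphic" in post.lower() else None

post = label_post("Buy now! Limited offer!", [looks_spammy, sensitive])
# post carries the label "spam-label"; one reader hides labeled posts,
# another sees the text as-is -- the decision stays with the user.
```

The point of the design is that the bots only ever add information; the destructive step (hiding) happens per-reader, under that reader's control, and is trivially reversible.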

But that's not what Twitter does. Its bots don't inform; they judge and punish. Twitter has secret rules about what speech is allowed and what speech is not, uses bots to root out whatever it doesn't like today, takes action against the authors, and causes damage when it gets it wrong. There are no humans in the loop to check the bots' work, and there's no transparency.

It's not just Twitter, of course. Other platforms, either overwhelmed by scale or just trying to save some money, use bots to prune out content. Even with the best of intentions that can go wrong; when intentions are less pure, it's even worse.

Actual communities, and smaller platforms, can take advantage of human moderators if they want them. For large firehose-style platforms like Twitter, it seems to me, the solution to the moderation problem lies in metadata and user preferences, not heavy-handed centralized automated deletions and suspensions. Give users information and the tools to filter -- and the responsibility to do so, or not. Take the decision away, and we're stuck with whatever the owner likes.

The alternative would be to use the Dreamwidth model: Dreamwidth performs no moderation that I'm aware of, I'm free to read (or stop reading) any author I want, and the platform won't push other content in front of me. This works for Dreamwidth, which doesn't need to push ads in front of millions of people to make money for its non-existent stockholders, but such slow growth is anathema to the big for-profit social networks.


  1. It was possible to delete posts on Usenet, but it was spotty and delayed.

  2. The opinions in this post are mine and I'm not speaking for Codidact, where I am the community lead.

  3. I'd say it's more socially hard than technically hard.

B'reishit: generations

D'var torah given in the minyan yesterday morning.

Ten generations.

At the beginning of this parsha, God created humanity as the pinnacle of creation, and declared it tov meod -- very good. Before even the first Shabbat, Adam had transgressed the divine will and been expelled from the garden, but that didn't merit further destruction. Adam and Chava produced children and their descendants began to fill the earth, as commanded. It might not have been tov meod any more, but it was apparently still ok with God.

Ten generations later, at the end of this same parsha, things have descended to the point where God is ready to blot it all out. The world had become corrupt and lawless, filled with wickedness and violence.

Ten generations isn't a lot. Many of us are blessed to have known three or four generations of our families, maybe more. As a child I met a great-grandparent and my niece now has a child -- that's six. It's hard to imagine that the distance from my great-grandparents to my grand-niece spans half the distance from tov meod to unredeemable evil.

And yet... it's been roughly ten generations since the founding of the United States. The US didn't start out as tov meod -- slavery was normal, native peoples were badly mistreated, and sexism and racism were the way of the world. But the people of that generation also pursued values we would call at least tov: basic freedoms of speech and assembly and religion and personal autonomy, protections from government abuses, and fostering a society where people could live securely and pursue happiness.

Ten generations later, how are we doing? We've made progress in some areas, but we've also done a lot of harm. We've pursued the destruction of the planet we were given to care for, there is widespread corruption and injustice from local jurisdictions all the way up to the international level, crusaders on both the left and the right seek to blot out perspectives they disagree with, and we've become a polarized, combative, and intolerant society. I'm going to focus on this last one, both because it's the one we can do the most about at an individual level and because I want to avoid the appearance of political advocacy in a tax-exempt synagogue right before an election.

Within just a single generation, we've become more polarized, more isolated in our bubbles, and more certain that we are right and anybody who doesn't agree with us completely is evil. We could blame social media for filtering what we see, but aren't we complicit? There was Internet before Twitter and there was mass media before the Internet, and we've always tended to gravitate toward people like us, haven't we? And yet, we used to more easily have civil conversations with people we disagreed with; we used to be better at respectful discourse and its give-and-take. Going farther back, Beit Hillel and Beit Shammai disagreed with each other on almost everything, yet they found common ground in the study hall, maintained friendships, and intermarried. They taught each other's positions, not just their own, to their students. They disagreed, vehemently, without being disagreeable.

Very few issues in our society are cut-and-dried. We can't stay in echo chambers, only hearing perspectives we already agree with, and expect to get anywhere. We need to be open to diversity. Diversity means people and ideas that aren't exactly like us. Diversity means complexity. It means setting aside the goal of "winning" in favor of the goal of understanding the human beings we're interacting with. It means having civil conversations that are nuanced and complex. It means being open to new ideas. It means asking questions rather than jumping to the conclusions that would be most convenient for us, like "he's a bigot" or "she hates America" or "you're not capable of understanding". The results won't align completely with any side's talking points, but they just might help us move forward together constructively.

Try it. Try having a conversation with someone who disagrees with you on something. It doesn't have to be something extreme and emotional.
Try asking the person to explain the reasoning.
Try asking questions.
Try to understand, and resist the urge to prepare your counter-arguments while half-listening for keywords to pounce on.
Assume your conversational partner is as principled, ethical, and thoughtful as you are.
Assume good intentions.
See how long you can keep it up. Then ask yourself: based on what I've learned, do I need to re-evaluate anything in my own thinking?

It's hard, isn't it? But what's the alternative? Can we afford to continue our descent? What comes after "uncivil"? How many generations do we have before our society is unredeemable?

Ten generations of social decay, hatred, and violence led from Adam to Noach. But that wasn't the end. After the flood, another ten generations led from Noach to Avraham. After sinking to the depths of evil, society climbed back toward tov.

Our society hasn't sunk as far as Noach's generation -- yet. We do not need to reach bottom, when only the divine promise prevents the heavens and the depths opening up again, in order to start climbing back up. At Yom Kippur we confessed to many sins including sinat chinam, baseless hatred, and we also said that we can return from our errors. We can turn from ways that are uncivil or worse -- individually, one interaction at a time. We are not obligated to complete the work, but neither are we free from trying. Let's see how far we can get together.

Holidays

My synagogue has gone through some changes in the last couple years, on top of the changes forced on all of us by the pandemic. Last year we hired a new rabbi and this year we hired a new cantor, and in-person services are more of a thing than they were, so a lot of things are new at once.

The rabbi and the cantor work well together. I already knew this from the morning minyan, but it also carried over to the formal high-holy-day services with all their extra stuff. Later, when all the holidays are over (they aren't yet), I want to ask the rabbi about some of the choices he made, but it was generally fine. It was nice to be together again.

I was asked to read torah, even though I said I'd pretty much have to memorize it because of the vision issues that led me to stop reading torah on Shabbat. The readings for Rosh Hashana aren't that long, so I could memorize them, and since I don't know the special trope for the day, I had to learn the music by rote anyway. That all went fine. I had the last aliyah, and I noticed that other people were translating after their readings, so I followed suit on the spur of the moment. Later I realized that most of the others were reading translations, not translating on the fly. (I'm not fluent in Hebrew, but I knew this part.) Ironically, I did need to look at the scroll for that part and there were some stumbles as a result, but on Yom Kippur several people stopped me to tell me how much they liked my RH reading, with specific compliments. Wow.

We have programming all day on Yom Kippur so you don't have to leave if you don't want to. The "learning" slot had two class options, fewer than in the past, but I think it worked out. I went to a very good class on the Vidui (confessional) prayer, taught by someone who was our associate rabbi 15-20 years ago. (He moved away for another pulpit and returned to Pittsburgh a couple years ago, taking an educational position rather than a pulpit.) We did a close reading of the text alongside the translation in our prayerbook and talked a lot about the word aval.

In some years I've gotten to the end of Yom Kippur on a high, feeling scrubbed clean and energized. That didn't happen this year. I think some of that is due to liturgical choices they made, and I wonder how much is due to having finally attended traditional Yom Kippur services the last two years, which has made me more keenly aware of the differences.

For festivals we combine with another congregation and Sukkot was there not here. "There" is a two-mile walk each way for me, so I went to Beth Shalom, a Conservative congregation that also has an occasional musical Shabbat evening service that I've gone to. The people there were very welcoming, the service was complete and yet efficient, and the leaders and speakers were good. I was surprised to be offered an honor (carrying the first torah scroll). I had pleasant conversations with several people I didn't know at the kiddush after. I wonder if I should try to go there next Yom Kippur.

We've been able to have most of our meals in the sukkah this week, though a couple got rained out. This late in the year I didn't have expectations.

Ugly CSA week 12 (final)

The final week of the 412 Rescue Ugly CSA:

  • 1 spaghetti squash
  • 1 large cucumber
  • 5 medium-large yams
  • 1 large tomato
  • 8 Bosc pears
  • 3 heads garlic

Weight: about 10.5 pounds.

Definitely all stuff I can use! Winter squashes and root veggies make this my favorite season.

The end-of-season survey included a question about interest in a winter share. Winter shares are uncommon, but the one my previous CSA did (before they shut it down) was very nice. I'd be happy to join one this year.