Blog: Tech

Most of these posts were originally posted somewhere else and link to the originals. While this blog is not set up for comments, the original locations generally are, and I welcome comments there. Sorry for the inconvenience.

Some Twitter-related links

If you are using your Twitter account to sign in to other sites (the "sign in with Google/Facebook/Twitter/etc." system), you should stop doing that now. Also, if you are using SMS for two-factor authentication with Twitter, that same article has advice for you. Some parts of their 2FA setup have stopped working, and apparently SMS validation is now unreliable.

There is an outstanding thread -- on Twitter, natch -- about the kinds of things that SREs (site reliability engineers, the people who keep large systems running) worry about. Parts of large systems fail all the time; in a healthy setup you'll barely notice. Twitter is, um, not healthy.

Debirdify is a tool for finding your Twitter friends on the Fediverse (Mastodon), for those who've shared that info. It looks for links in pinned tweets and Twitter profile ("about") blurbs.

I'm at https://indieweb.social/@cellio, for anyone else who's there. I'm relatively new there, like lots of other folks, but so far the vibe takes me back to the earlier days of the Internet -- people are friendly, help each other, presume good intent, and have actual conversations. It is not Twitter; some intentional design choices appear to encourage constructive use and hinder toxicity. I hope to write more about Mastodon later.

The trust thermocline

John Bull wrote a post (in tweet-sized pieces, naturally) that rings true for me, and he put a name to the phenomenon we're seeing with Twitter, saw with LiveJournal, and partially saw with Stack Overflow. The thread starts here on Twitter and here on Mastodon (the Fediverse). Selected quotes:

One of the things I occasionally get paid to do by companies/execs is to tell them why everything seemed to SUDDENLY go wrong, and subs/readers dropped like a stone. So, with everything going on at Twitter rn, time for a thread about the Trust Thermocline.

So: what's a thermocline?

Well, large bodies of water are made of layers of differing temperatures. Like a layer cake. The top bit is where all the waves happen and has a gradually decreasing temperature. Then SUDDENLY there's a point where it gets super-cold.

The Trust Thermocline is something that, over (many) years of digital, I have seen both digital and regular content publishers hit time and time again. Despite warnings (at least when I've worked there). And it has a similar effect. You have lots of users then suddenly... nope. [...]

But with a lot of CONTENT products (inc social media) that's not actually how it works. Because it doesn't account for sunk-cost lock-in.

Users and readers will stick to what they know, and use, well beyond the point where they START to lose trust in it. And you won't see that.

But they'll only MOVE when they hit the Trust Thermocline. The point where their lack of trust in the product to meet their needs, and the emotional investment they'd made in it, have finally been outweighed by the physical and emotional effort required to abandon it. [...]

Virtually the only way to avoid catastrophic drop-off from breaching the Trust Thermocline is NOT TO BREACH IT.

I can count on one hand the times I've witnessed a company come back from it. And even they never reached previous heights.

Social media and moderation

I've participated in a lot of online communities, and a lot of types of online communities, over the decades -- mailing lists, Usenet, blogging platforms like Dreamwidth, web-based forums, Q&A communities... and social media. With the exception of blogging platforms, where readers opt in to specific people/blogs/journals and the platform doesn't push other stuff at us, online communities tend to end up with some level of moderation.

We had (some) content moderation even in the early days of mailing lists and Usenet. Mostly[1] this was gatekeeping -- reviewing content before it was released, because sometimes people post ill-advised things like personal attacks. Mailing lists and Usenet were inherently slow to begin with -- turnaround times were measured in hours if you were lucky and more typically days -- so adding a step where a human reviewed a post before letting it go out into the wild didn't cost much. Communities were small, and moderation was mostly to stop the rare egregiously bad stuff, not to curate everything. So far as I recall, nobody back then was vetting posts for accuracy, like declaring them to be misinformation.

On the modern Internet with its speed and scale, moderation is usually after the fact. A human moderator sees (or is alerted to) content that doesn't fit the site's rules and handles it. Walking the moderation line can be tough. On Codidact[2] and (previously) Stack Exchange, I and my fellow moderators have sometimes had deep discussions of borderline cases. Is that post offensive to a reasonable person, or is it civilly expressing an unpopular idea? Is that link to the poster's book or blog spam, or is the problem that the affiliation isn't disclosed? How do we handle a case where a very small number of people say something is offensive and most people say it's not -- does it fail the reasonable-person principle, or is it a new trend that a lot of people don't yet know about? We human moderators would examine these issues, sometimes seek outside help, and take the smallest action that corrects an actual problem (often an edit, maybe a word with the user, sometimes a timed suspension).

Three things are really, really important here: (1) human decision-makers, (2) who can explain how they applied the public guidelines, with (3) a way to review and reverse decisions.

Automation isn't always bad. Most of us use automated spam filtering. Some sites have automation that flags content for moderator review. As a user I sometimes want to have automation available to me -- to inform me, but not to make irreversible decisions for me. I want my email system to route spam to a spam folder -- but I don't want it to delete it outright, like Gmail sometimes does. I want my browser to alert me that the certificate for the site I'm trying to visit isn't valid -- but I don't want it to bar me from proceeding anyway. I want a listing for an electronic product to disclose that it is not UL-certified -- but I don't want a bot to block the sale or quietly remove that product from the seller's catalogue.
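As a sketch of the difference between assistive and destructive automation (Python, with invented names and a toy classifier): the automation only attaches a label and takes a reversible default action; nothing is destroyed.

    from dataclasses import dataclass, field

    @dataclass
    class Message:
        body: str
        labels: set = field(default_factory=set)
        folder: str = "inbox"

    def classify(msg):
        # Advisory only: attach a label; never alter or delete content.
        if "claim your prize" in msg.body.lower():
            msg.labels.add("suspected-spam")

    def route(msg):
        # Reversible default: move it where the user can still inspect it.
        if "suspected-spam" in msg.labels:
            msg.folder = "spam"  # moved, not deleted -- the user has the last word

    msg = Message("Click here to claim your prize!")
    classify(msg)
    route(msg)
    assert msg.folder == "spam" and msg.body  # content intact, just relocated

The point is the shape, not the toy classifier: any action the automation takes can be undone by the person it was taken for.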

These are some of the ways that Twitter has been failing for a while. (Twitter isn't alone, of course, but it's the one everyone's paying attention to right now.) Twitter is pretty bad, Musk's Twitter is likely to be differently bad, and making it good is a hard problem.[3]

Twitter uses bots to moderate content, and those bots sometimes get it badly wrong. If the bots merely flagged content for human review, that would be ok -- but to do that at scale, Twitter would need to make fundamental changes to its model. No, the bots block the tweets and auto-suspend the users. To get unsuspended, a user has to delete the tweets, admit to wrongdoing, and promise not to do it "again" -- even if there's nothing wrong with the tweet. The people I've seen hit by this could not find an appeal path. Combine this with opaque and arbitrary rules, and it's a nightmare.

Musk might shut down some of the sketchier moderation bots (it's always hard to know what's going on in Musk's head), but he's already promised his advertisers that Twitter won't be a free-for-all, so that means he's keeping some bot-based moderation, probably using different rules than last week's. He's also planning to fire most of the employees, meaning there'll be even fewer people to review issues and adjust the algorithms. And it's still a "shoot first, ask questions later" model. It's not assistive automation.

A bot that annotates content with "contrary to CDC guidelines" or "not UL-certified" or "Google sentiment score: mildly negative" or "Consumer Reports rating: 74" or "failed NPR fact-check" or "Fox News says fake"? Sure, go for it -- we've had metadata like the Good Housekeeping seal of approval and FDA nutrition information and kashrut certifications for a long time. Want to hide violent videos or porn behind a "view sensitive content" control? Also ok, at least if it's mostly not wrong. As a practical matter a platform should limit the number of annotations, or let users choose which kinds of assistance they want, but in principle, fine.

But that's not what Twitter does. Its bots don't inform; they judge and punish. Twitter has secret rules about what speech is allowed and what speech is not, uses bots to root out whatever it doesn't like today, takes action against the authors, and causes damage when it gets things wrong. There are no humans in the loop to check the bots' work, and there's no transparency.

It's not just Twitter, of course. Other platforms, either overwhelmed by scale or just trying to save some money, use bots to prune out content. Even with the best of intentions that can go wrong; when intentions are less pure, it's even worse.

Actual communities, and smaller platforms, can take advantage of human moderators if they want them. For large firehose-style platforms like Twitter, it seems to me, the solution to the moderation problem lies in metadata and user preferences, not heavy-handed, centralized, automated deletions and suspensions. Give users information and the tools to filter -- and the responsibility to do so, or not. Take the decision away, and we're stuck with whatever the owner likes.
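To make that concrete, here's a small sketch of moderation-as-metadata in Python (the labels, preference names, and policy values are all invented for illustration): bots attach labels, and each user's own settings decide whether a post is shown, blurred, or hidden.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Post:
        text: str
        labels: frozenset  # e.g. {"sensitive-media"}, {"failed-fact-check"}

    def render(post, prefs):
        # prefs maps a label to "show", "blur", or "hide";
        # labels the user hasn't configured default to "show".
        actions = {prefs.get(label, "show") for label in post.labels}
        if "hide" in actions:
            return "[hidden by your settings]"
        if "blur" in actions:
            return "[click to view sensitive content]"
        return post.text

    post = Post("graphic video", frozenset({"sensitive-media"}))
    print(render(post, {"sensitive-media": "blur"}))  # blurred for this user
    print(render(post, {"sensitive-media": "show"}))  # shown, by choice, to another

Swap in whatever annotation sources you like -- fact-checkers, sentiment scores, UL certification -- the platform's job here is distributing labels, not enforcement.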

The alternative would be to use the Dreamwidth model: Dreamwidth performs no moderation that I'm aware of, I'm free to read (or stop reading) any author I want, and the platform won't push other content in front of me. This works for Dreamwidth, which doesn't need to push ads in front of millions of people to make money for its non-existent stockholders, but such slow growth is anathema to the big for-profit social networks.


[1] It was possible to delete posts on Usenet, but deletion was spotty and delayed.

[2] The opinions in this post are mine and I'm not speaking for Codidact, where I am the community lead.

[3] I'd say it's more socially hard than technically hard.

"What's your contribution on the Internet?"

Somebody on Dreamwidth asked (as part of a research project):

How do you make yourself useful to other people on the internet? What's your contribution to the internet?

That's not how I generally think about my activity online, but I said a few things in the moment:


Since the early days of Usenet I've been using the net to learn (self-enrichment), teach or share my knowledge and experience (I hope this helps others), and get to know people who are not like me and who I would never have met otherwise. I like to think that I have similarly contributed to others meeting diverse people from different cultures and contexts. The reasons were originally self-focused, but that's changed over time and with experience.

More actively, after close to a decade contributing to another Q&A network (asking, answering, curating, helping newcomers, moderating), I'm now working on an open-source, transparent, community-driven platform for knowledge-sharing. We're small and trying to grow and only time will tell if we truly helped others, but it's where I invest my community-building and platform-building efforts now.

I guess I served as a canary when that other place turned evil. No one ever signs up to be a canary.

I have used email, and restricted email lists, to both give and get counsel on personal matters. I think I've helped a bunch of people who were considering conversion to Judaism. I consider it a success that some of them did and some of them decided not to; it's not about recruiting but about helping people evaluate the fit.

One of those people was a seeker in Iran, where it was very dangerous to be out about that sort of thing. I think we (one other person and I, in a private chat room with this person) might have saved some lives that day, but I'll never know.

I had a remote intern a few years ago (pre-pandemic); I met her once, about halfway through the internship when I traveled to her location, but otherwise it was all done remotely. I've had in-person interns and junior hires before and I enjoy mentoring them; this was my first time doing it remotely. (I've since done it a couple more times.) Kind of relatedly, I received email last night from an SCA contact who's looking for a mentor for a student for a Girl Scout project. I don't know where this student lives.

I was contacted by a schoolteacher in Myanmar several years ago; her students were building a yurt based on an article I had allowed someone to publish online (it was originally in a paper SCA newsletter), and she had a question. Myanmar. My jaw dropped. Another time, I got email from somebody in Scotland asking me if it would stand up to force-12 winds (which I had to look up). This article was kind of a one-off; it's just a thing I wrote up, after learning from someone else (credited of course) and building one, because I needed something to live in at Pennsic. It wasn't a focus area for me; I've never been part of online yurt communities and stuff; I never promoted it anywhere. I don't even have direct access to edit it. A chance "sure, go ahead and put it on your site if you want" was pretty much my entire contribution to it being online. It makes me wonder how much the stuff that I've intentionally published and maintained has helped people that I'll never know about.

I'll never know most of the impact I have on others. I do the best I can to help it be positive impact.

Always read the reviews

I needed a new thumb drive, so I figured I'd just get one from Amazon along with some other stuff I needed. I found a reasonable-looking candidate but looked at the reviews, the first few of which were bad. How can a thumb drive be bad? The first review said it was unreliable (not described further); the second said it came with malware. I looked at a couple other options, and -- same sort of complaints.

Hmm, I said. These are all third-party sellers (different ones, in the few product pages I looked at). Amazon isn't vetting them and never gets its own hands on the products. They're just an aggregator. I would buy a thumb drive from Amazon, but their credibility does not extend to other sellers they happen to host -- I shouldn't trust a thumb drive being sold by "Joe's Anonymous Store" any more than I should trust one I find lying around waiting to spread the malware within. Even if Amazon eventually boots sellers with lots of complaints, that doesn't help me now.

I had an errand to run today anyway and figured I'd pick one up in person at Best Buy. That's how I found out my local Best Buy isn't there any more. Oops.

I've bought electronics online from NewEgg before and that's always been fine, so I headed there next -- where I saw that the products I was looking at were being sold by third-party sellers. I didn't know NewEgg did third-party sellers. I wouldn't have thought to look if not for those Amazon reviews.

I finally ordered from Best Buy online; I figure it's probably really them, and if there's a problem I can, if necessary, go to a (less-local) brick-and-mortar store to deal with it.

Decisions as barriers to entry

I've been hearing a lot about Mastodon for a while and thought I'd look around, see if I know anyone there, see what it's like, see if it seems to work better than Twitter... and the first step is to choose a host community/server, from dozens of options. The options are grouped into categories like "Tech" and "Arts" and "Activism" and there's also "General" and "Regional". None of the regional offerings are my region, so I browsed General and Tech.

All of the communities have names and short blurbs. Some sound serious and some sound less so. Mastodon is a Twitter-like social network, so -- unlike topic-focused Q&A sites, subreddits, forums, etc -- one should expect people to bring their "whole selves". That is, a person on a tech server is likely to also post about food and hobbies and world events and cats. From the outside, I can't tell whether the mindset of the Mastodon-verse is "well yeah, duh, the server you choose is really just a loose starting point because you need to start somewhere" or if there's more of a presumption that you'll stay on-topic (more like Reddit than Twitter, for example).

A selling point of Mastodon is that it's distributed, not centrally managed; anybody is free to set up an instance and set the rules for that instance. Someone considering options might reasonably want to know what those rules are -- how will this instance be moderated? But I see no links to such things. Many instances also require you to request access, which further deters the casually curious.

I guess the model is that you go where your friends are -- you know someone who knows someone who knows someone with a server and you join and you make connections from there. That's a valid and oft-used model, though I wasn't expecting it here.

What's confusing my phone?

I have a problem with my (older) Android phone and am not sure how to debug it.

Four times in the last six months, I have used the navigation in Google Maps while in a car (audio, not looking at the screen). Every time, the trip has ended the same way: the app informs me that I have reached my destination, I reach for the phone to exit the app, and the phone crashes. On restarting, it tells me I have 1% battery and crashes again. (The battery was not low at the start of the trip.) Now here's the interesting part: when I plug it in to charge, it reports something in the range of 30-40%. So, something is confusing the phone about its battery state, because no way does my phone charge that quickly (especially on a car charger).

Here's tonight's case: I was at something over 60% when I turned on nav for a 15-minute trip. Crashed on arrival, plugged in (in the car) and turned on, it said 32%, I unplugged, and it crashed again (back to 1%). I left it off while I completed my errand, but plugged it in to charge on the drive home. At home, it was 40% and, this time, did not crash when I unplugged it from the charger.

To determine whether the problem is specific to Google Maps, I installed another navigation app (Waze). When the installation finished I opened the app...and the phone crashed. When I connected it to the charger, it said it was at 31%. I let it charge for a bit (I turned it on while it was connected to the charger), and disconnected it around 50% with no issues.

Here's all that in pictorial form:

[image: battery-level readings over time around each of the crashes described above]

Also, the power manager reports no fast-drain apps. iDrive, a backup app, used to be a fast-drain app and is the only entry in the history, but I've nerfed it and it hasn't popped up recently. Could its mere presence be a problem?

Now, I'm pretty sure the battery isn't actually being drained to practically nothing, because it wouldn't bounce back that quickly. And apparently it's not just Google Maps or GPS, because Waze didn't even finish opening before that crash. But something, either Android or something in hardware or firmware, sure thinks there's a problem that calls for shutting down.

How do I find it?
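One avenue: if USB debugging is enabled, adb from a computer can at least capture what the phone believes at each step. A sketch of the kind of evidence-gathering I have in mind (the output file names are mine; /proc/last_kmsg in particular varies by device and may not exist):

    # What Android currently thinks the battery state is (level, status, health):
    adb shell dumpsys battery

    # Per-app and historical battery accounting since the last full charge:
    adb shell dumpsys batterystats > batterystats.txt

    # Dump the buffered system log right after a crash; look for shutdown
    # or thermal messages near the end:
    adb logcat -d > logcat.txt

    # On many older devices, the kernel log from before the last reboot survives here:
    adb shell cat /proc/last_kmsg > last_kmsg.txt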

I have not had crashes with other apps -- though I also don't stream videos or play games on my phone, so I'm not taxing it. I have noticed the pattern of "steps" you can see in the picture here -- the battery will drop noticeably, then stay level for a while, then do it again. I don't know what's causing that or if it's related.

The phone is old -- ZTE Axon 7, bought in 2017, running Android 7.1.1 and apparently not eligible for newer -- but it otherwise works, has the (rare) aspect ratio I crave, and already has all my stuff on it. I'd like to keep using it for a while (and let the 5G world sort itself out in the meantime).

Help wanted (involves git)

Dear Brain Trust,

I have a technical problem that I'm a few clues shy of solving. Can you help?

I have a personal web site, which I built using a static site generator (SSG) called Yellow. I'm using a few of their extensions, most importantly Blog. The way you use Yellow is to download and unpack a ZIP file, download any extensions you want into that directory structure, and add your content (also into that directory structure). The source is on GitHub, but they also give you these ZIP files.

Last summer I downloaded those ZIP files, unpacked them, started tweaking things, and added my own content. I never cloned their repositories; I just took the ZIP files. Eventually I figured out that the easiest way for me to deploy my site was to use GitHub: I created a private repository, into which I added my then-current versions of both the tooling and the content, and I update it as needed (for example to add this post).

Yes I now know this was the wrong way to go about it. Apparently we won't have gotten "send clue back in time" working in my lifetime.

Since then, they've made some updates that I would like to take advantage of. I want to update to the new version while incorporating the changes I made to the previous version, which means figuring out what those changes were and how to reapply them. And I want to figure out a better way to organize all this so that the next upgrade is more straightforward.

I imagine that what I should have done instead was to fork their repos, apply my changes, make a separate repo for my content, and (do magic here) so it all works together. I don't know what that magic is, and I'd like to check my assumption that this is a better approach. Is there some other way I should be managing this? Another way to think about it is that my project (my site) has GitHub dependencies (those other two repositories), and I'm not familiar with how such dependencies are typically managed.
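From what I understand, the usual answer to "GitHub dependencies" is git submodules: my repo would record a pointer to a specific commit of each upstream repo. A sketch, with illustrative URLs, paths, and tag names:

    # In my site's repo: record the upstream tooling as pinned dependencies.
    git submodule add https://github.com/datenstrom/yellow vendor/yellow
    git submodule add https://github.com/datenstrom/yellow-extensions vendor/extensions
    git commit -m "Track upstream Yellow as submodules"

    # Later, to take an upstream update:
    git -C vendor/yellow fetch
    git -C vendor/yellow checkout <new-release-tag>
    git add vendor/yellow
    git commit -m "Update Yellow to <new-release-tag>"

Since Yellow expects extensions and content to live inside its own directory tree, some assembly step (a copy, symlinks, or a small script) would presumably still be needed to build the deployable tree from the submodules plus a content directory.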

I mentioned I'm using GitHub for deployment. More specifically: I make edits on my personal machine, commit and push, and then on the hosting server I pull and, wham, the site is up to date. There's no explicit build step and I'm not fussing with rsync. My "aha" moment was that git can already figure out what's changed and needs to be pulled, so why should I have to? I like this simplicity.

I have found the version of the blog extension I started from (thank you for explicit version numbering), so it is possible to identify the changes I made to the original.

Should I create new repos (or forks) from the previous version, apply my changes, get that working, and then try to do the upgrade from there? How should I manage the multiple git repositories so that everything ends up in the right places? There's one repo for the base system (yellow), one for all the extensions (which overlays the file structure of the base system), and then I need a place for my actual content. How do I do this?
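One possible recovery plan, sketched below with placeholder tags, paths, and branch names: recreate enough history that git can do a three-way merge for me.

    # Clone the upstream repo and branch at the release I originally downloaded:
    git clone https://github.com/datenstrom/yellow
    cd yellow
    git checkout -b my-changes <old-release-tag>

    # Overlay my current tree and commit it. (The * glob skips dotfiles,
    # which conveniently leaves my site repo's own .git directory behind.)
    cp -R ~/my-site/* .
    git add -A
    git commit -m "My changes relative to <old-release-tag>"

    # git now knows the common ancestor, so merging the latest upstream
    # replays their changes against mine, surfacing conflicts explicitly:
    git merge main   # or whatever the upstream default branch is called

From there, splitting my content out into its own repo (or submodule) should make the next upgrade just another merge.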

LiveJournal user agreement updated

I still have an LJ account, though I stopped posting there after they changed the terms of service in problematic ways. Today I got email notifying me of an update to those terms of service, so out of curiosity I took a look. That's the new version; I didn't look for the old one or attempt a direct comparison. A few things jumped out on a quick skim (conclusion: still not using them):

  • Section 6.1 says this about termination of accounts: "The Administration reserves the right to delete Account and Blog if User did not access the Account or the access was restricted for more than 2 years due to a breach hereof." They don't say what "access" means, but if you left LJ and thought your posts would remain until you removed them, you might want to check into that, or log in once a year, or something.

  • Section 7.4, about blogs and comments, says that the commenter and blog owner are "jointly and severally liable" for their content. (If someone posts a problematic comment and you don't nuke it, you're complicit.) The "severally" part means the parties can be sued independently, or at least that's what it means under US law as I understand it. Russian law? No idea. I bring this up because in the next section, about communities (shared blogs), it says in 8.4 that a poster or commenter and the community owner are "subsidiarily liable" with respect to the content. I don't know what that means or why it's different from the blog case.

  • Section 9.2.6 says that users may not "without the Administration’s special permit, use automatic scripts (bots, crawlers etc.) to collect information from the Service and/or to interact with the Service". Do they mean userscripts too? Other clients? That cron job that posts a quote of the day?

  • Users may also not "post advertising and/or political solicitation materials" without permission, but these terms are not defined. Are you allowed to pitch your new book (with purchase link)? Link to the feedback form for legislation that's out for public comment? I assume the purpose is to support the goals of the Russian government, but the language is more expansive.

  • Section 11.3 (under liability) says (my emphasis): "Please note that in accordance with the Russian Federation Act No. 2300-1 dated February 7, 1992, the provisions of the said act related to consumer rights protection do not apply to the relationship between the Administration and Users as the Service is provided for free." I paid for a permanent account. On the other hand, they also say (in 10.6): "The Administration may at its own discretion and without User’s prior notice supplement, reduce or otherwise modify any Service function and it’ [sic] procedures." So I guess they have cancelled or can cancel permanent accounts at will.

  • As with the 2016 change, the English-language document they post isn't legally relevant in any way; you are agreeing to the Russian-language TOS. Can you read Russian?

Scanning for WordPress?

Every now and then I remember to look at my web site's traffic. Every month my site produces a few hundred "URL not found" errors, and almost all of them are related to WordPress -- wp-login.php, xmlrpc.php, and wlwmanifest.xml (each tried at a bunch of entry points, exactly 30 times in the last 30 days, presumably a daily probe).

I don't run WordPress -- never have. But I guess it's popular enough, and has enough bugs and security holes, that people find it worthwhile to send their bots to look for it on every web site they can find?
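If you're curious whether your own logs show the same pattern, a one-liner along these lines tallies the probes (assuming a combined-format access log; the path is illustrative):

    grep -E 'wp-login\.php|xmlrpc\.php|wlwmanifest\.xml' /var/log/apache2/access.log \
        | awk '{print $7}' | sort | uniq -c | sort -rn | head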