As some of you may know, Reddit recently started using some sort of bot or LLM called Anti-Evil Operations (AEO) to rummage through comments and ban users unilaterally. You've probably noticed it when you've gone through a post and found a long list of removals saying "[ Removed by Reddit ]". That's the AEO bot. My own personal experience is that its ban reasons can be rather asinine.
Case in point: earlier this year I got a 1-week Reddit ban. This was in /r/ShittyLifeProTips, where a post proposed a silly method for dealing with drivers who park in handicapped spots. My comment was very short and simply said to make a piss disc (the usual SLPT joke answer) and drop it through the car window. That was the entire comment. It got me a Reddit-wide ban for Rule 1: promoting violence. My appeal was declined. I get it, it's a dumb joke, but that's just what they do there.
I am now on another 1-week ban for a similar kind of silly joke that AEO took too seriously. I'm not even going to bother appealing because it's clear they've taken humans out of the loop and just have LLMs processing the user side of things.
Oddly enough I don't have any problems with individual subreddits or moderators. Most of them, except for a handful of power tripping individuals on the big subreddits, do a rather thankless job keeping their communities running. Reddit's AEO bots are where the problem is.
Just from Googling around, it appears this has come up among some of the moderators: 1, 2, 3, 4, etc.
Reddit is not only getting aggressive about deleting legitimate users (as per the other posts people have made) but is also trying to completely automate sitewide moderation, and it appears they're willing to take a tremendous amount of collateral damage to do it. LLM-based tools don't maintain an understanding of a community's evolving culture and can't gauge intent or tone. Friendly "trash talk" between regulars gets flagged as toxicity, satirical content mocking bigotry gets flagged, ironically, as bigotry, and so forth. They'll definitely get rid of the toxic content like they want, but at the expense of killing communities and driving off old-timers.
Hopefully this is a case study for Lemmy to not start rolling out those tools here. For the time being I'm going to look around and use my ban period to get more familiar with Lemmy. I was surprised that my Lemmy front page had a lot more fresh content than I remember last time... that's a good sign, and it makes me wonder if a slow exodus is already underway. Reddit's overaggressive moderation seems to be helping it along.
I wonder if anyone else has stories about ridiculous reasons for getting flagged by the AEO bot.
Some perspective from the mod side
The 'Removed by Anti-Evil' isn't a new thing. It used to be the admin side spam/site wide rule breaking content remover.
It acted like Lemmy's purge function. When something is removed on Reddit, it's still visible to mods. Sometimes after something extra awful had been removed, anti-evil would come along and clean it up.
It would be an indication that something is against sitewide rules. If the mods don't take care of reported content that's clearly against sitewide rules, and anti-evil has to step in, then it's a sign that the subreddit might need to be doing more.
Recently though, it's been coming along and removing comments before any of the mods can see what the comment was. That makes it hard to take any further action, since the mods can't know what the problem was. So far when that's happened, the thread had nothing controversial and the user's history was normal and tame, so I have to assume the new version of anti-evil has a few screws loose. It's not even that they've lowered the threshold for what's inappropriate, since awful content still gets through just as often.
The only reason Reddit's moderation tooling is considered better than the threadiverse's is the standard regex-based automod rules. The other Reddit tools continue to be hot garbage.
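For readers unfamiliar with those rules: AutoModerator-style moderation is basically a list of regex patterns matched against comment text, each tied to an action. A minimal sketch in Python (the rule names, patterns, and actions here are illustrative placeholders, not Reddit's actual config format):

```python
import re

# Illustrative automod-style rules: (rule name, compiled pattern, action).
# These patterns and actions are made up for the example.
RULES = [
    ("link-spam", re.compile(r"https?://bit\.ly/\S+", re.IGNORECASE), "remove"),
    ("phone-scam", re.compile(r"\bcall\s+now\b", re.IGNORECASE), "report"),
]

def check_comment(body: str) -> list[tuple[str, str]]:
    """Return (rule_name, action) for every rule the comment trips."""
    return [(name, action) for name, pat, action in RULES if pat.search(body)]
```

The appeal is transparency: unlike an LLM classifier, a regex rule always tells the mods exactly which pattern matched and why.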
They seem to have turned it up several notches in terms of what they detect, and of course now they aren't even telling the mods what they're detecting or how. Plus there's already the other overly sensitive site-wide bot-detection/spam-detection going on too.
Yup, and meanwhile a lot of the spam we manually remove after users report it should be obvious to detect with automated tools. For example, when a "user" is posting the same link in every comment, or posting same-length comments in many unrelated subs 24/7, every few minutes, etc.
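The first pattern mentioned above (the same link in nearly every comment) is simple enough to sketch as a heuristic. This is a toy illustration, not any real Reddit or Lemmy API; the function name and threshold are assumptions:

```python
import re
from collections import Counter

URL_RE = re.compile(r"https?://\S+")

def looks_like_link_spammer(comments: list[str], threshold: float = 0.8) -> bool:
    """Flag an account whose recent comments mostly repeat one URL.

    `comments` is the account's recent comment bodies; `threshold` is the
    fraction of comments that must contain the same URL to trip the flag.
    Both are illustrative choices, not anything Reddit actually exposes.
    """
    if not comments:
        return False
    urls = Counter(u for body in comments for u in URL_RE.findall(body))
    if not urls:
        return False
    _, top_count = urls.most_common(1)[0]
    return top_count / len(comments) >= threshold
```

A real tool would also rate-limit by posting frequency and spread across communities, but even this single check would catch the "same link in every comment" accounts users keep reporting by hand.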
it's annoying