I was thinking about moderation in PieFed after reading @rimu@piefed.social mention he doesn’t want NSFW content because it creates more work to moderate. But if done right, moderation shouldn’t fall heavily on admins at all.
One of the biggest flaws of Reddit is the imbalance between users and moderators—it leads to endless reliance on automods, AI filters, and the usual complaints about power-mods. Most federated platforms just copy that model instead of adopting proven alternatives like Discourse's trust level system.
On Discourse, moderation power gets distributed across active, trusted users. You don’t see the same tension between "users vs. mods," and it scales much better without requiring admins to constantly police content. That sort of system feels like a much healthier direction for PieFed.
Implementing this system could involve establishing trust levels based on user engagement within each community. Users could earn these trust levels by spending time reading discussions. This could either be community-specific, letting users build trust separately in each community, or instance-wide, granting broader trust recognition based on overall activity across the instance. If not executed carefully, however, this could lead to overmoderation similar to Stack Overflow, where genuine contributions get stifled, or it might encourage karma farming akin to Reddit, where users game the system with bots that repeatedly repost popular content.
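To make the idea concrete, here is a minimal sketch of how reading-time-based trust levels could be computed, either per-community or instance-wide. The thresholds, level names, and `User` structure are all hypothetical illustrations, not anything PieFed actually implements:

```python
from dataclasses import dataclass, field

# Hypothetical thresholds: minutes of reading required for each level.
# Real values would need tuning to avoid the Stack Overflow-style
# overmoderation problem mentioned above.
TRUST_THRESHOLDS = [
    (0, "new"),         # fresh account, no privileges
    (60, "basic"),      # ~1 hour of reading
    (600, "member"),    # ~10 hours
    (3000, "trusted"),  # ~50 hours; could flag or hide content
]

@dataclass
class User:
    name: str
    # minutes spent reading, keyed by community name
    reading_minutes: dict = field(default_factory=dict)

def trust_level(user: User, community: str, instance_wide: bool = False) -> str:
    """Return the highest trust level the user qualifies for.

    With instance_wide=True, reading time across all communities counts;
    otherwise only time spent in the given community does.
    """
    minutes = (sum(user.reading_minutes.values()) if instance_wide
               else user.reading_minutes.get(community, 0))
    level = "new"
    for threshold, name in TRUST_THRESHOLDS:
        if minutes >= threshold:
            level = name
    return level
```

The community-specific mode is what prevents reputation earned elsewhere from transferring automatically, which matters for the vote-manipulation scenario discussed later in the thread.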
Worth checking out this related discussion:
Rethinking Moderation: A Call for Trust Level Systems in the Fediverse.
PieFed, unlike Lemmy, allows access to community-specific reputation values, "karma" if you will. So if someone builds up a strong reputation and a long membership elsewhere, that will not help one iota within the specific community in question, if the mods choose those settings (disclosure: I've only read about these features and have no direct mod experience on a PieFed instance).
Also, votes can be differentially weighted according to how "trusted" the originating instance is, say, discounting instances known as spreaders of disinformation. This works at least at the instance level, though it would probably be helpful to extend the model to individual communities as well.
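A rough sketch of what instance-weighted vote counting could look like. The weight table, default weight, and function name are all made up for illustration; PieFed's actual mechanism may differ:

```python
# Hypothetical per-instance trust weights. A real deployment would
# likely manage these through admin tooling, not a hardcoded dict.
INSTANCE_WEIGHT = {
    "trusted.example": 1.0,
    "unknown.example": 0.5,
    "spammy.example": 0.0,  # known disinformation source, votes ignored
}

DEFAULT_WEIGHT = 0.5  # assumed partial weight for unrecognized instances

def weighted_score(votes):
    """Sum votes, scaling each by the trust weight of its home instance.

    `votes` is a list of (instance, direction) pairs, direction = +1 or -1.
    """
    return sum(INSTANCE_WEIGHT.get(instance, DEFAULT_WEIGHT) * direction
               for instance, direction in votes)
```

With weights like these, an upvote from a blocked-weight instance simply contributes nothing to the total, without requiring any manual intervention per vote.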
So someone could spin up 10 private instances with 10 accounts on each to attempt to influence vote counts. Since Lemmy only offers a flat "upvote" vs. "downvote", it is susceptible to this kind of malicious interference, whereas PieFed offers multiple methods to limit and minimize it. For example, each of those 100 alt accounts would need to be considered a helpful member of the community and be upvoted often in order to karma farm sufficiently to influence voting patterns. Though let's face it: if someone is willing to go to that much trouble, could they really be kept at bay by any automated, or even entirely manual, system? Generally the best that can be done is to raise the level of effort required so that it isn't worth the reward, and PieFed certainly does that! (Lemmy does little to nothing directly, although some instance admins have their own approaches, using a mixture of automated help and manual decision-making.)
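The karma-gating idea above can be sketched as a simple filter: votes only count once an account has earned enough community-specific reputation. The threshold and data shapes here are hypothetical, chosen just to show why a fleet of fresh alt accounts contributes nothing:

```python
MIN_KARMA_TO_VOTE = 20  # hypothetical threshold, would need tuning

def effective_votes(votes, karma):
    """Count only votes from accounts with enough community karma.

    `votes` maps username -> +1/-1 for a single post;
    `karma` maps username -> reputation earned in this community.
    A freshly spun-up alt with no posting history is simply ignored.
    """
    return sum(v for user, v in votes.items()
               if karma.get(user, 0) >= MIN_KARMA_TO_VOTE)
```

This doesn't make manipulation impossible, but it raises the cost: every alt would first have to earn genuine upvotes from the community before its votes register, which is exactly the effort-versus-reward tradeoff described above.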