crashdoom

joined 2 years ago
[–] crashdoom@pawb.social 6 points 1 month ago (2 children)

Restarting the federation pods appears to have at least lowered the failure queue a little, but it's showing that while we've sent things to lemmit.online (as the example), we haven't seen anything from them since 4/14/2025, 10:14:27 PM. So I'm presuming the next time they try to sync and send something to us, it SHOULD update...

The federation state checker seems to indicate that we're no longer behind in sending them posts, so outgoing should be fixed, but I'm unsure how to troubleshoot incoming further. I'm wondering if there's a bug in the latest version of Lemmy?
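If it helps anyone else poking at this, the kind of staleness check I'm doing can be sketched like so. The payload shape only loosely mirrors Lemmy's /api/v3/federated_instances response; the field names here are illustrative, not exact:

```python
from datetime import datetime, timezone

def lagging_instances(payload: dict, now: datetime, max_age_hours: float = 24.0) -> list:
    """Return domains whose last successful activity is older than max_age_hours.

    Field names below are stand-ins for whatever the real federation-state
    endpoint returns; adapt them to the actual response.
    """
    stale = []
    for inst in payload.get("federated_instances", {}).get("linked", []):
        state = inst.get("federation_state") or {}
        last = state.get("last_successful_published_time")
        if last is None:
            continue  # never heard from them; skip rather than guess
        last_dt = datetime.fromisoformat(last)
        if (now - last_dt).total_seconds() > max_age_hours * 3600:
            stale.append(inst["domain"])
    return stale

# Hypothetical sample data: lemmit.online is weeks behind, the other is fresh.
sample = {
    "federated_instances": {
        "linked": [
            {"domain": "lemmit.online",
             "federation_state": {"last_successful_published_time": "2025-04-14T22:14:27+00:00"}},
            {"domain": "example.social",
             "federation_state": {"last_successful_published_time": "2025-05-14T09:00:00+00:00"}},
        ]
    }
}
now = datetime(2025, 5, 14, 12, 0, tzinfo=timezone.utc)
print(lagging_instances(sample, now))  # only lemmit.online exceeds the threshold
```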

[–] crashdoom@pawb.social 7 points 1 month ago (3 children)

Morning all! It does appear that we're not receiving some instances' posts (you can find a list by using https://phiresky.github.io/lemmy-federation-state/site?domain=pawb.social). I'm currently investigating and I appreciate folks poking on this :3

[–] crashdoom@pawb.social 13 points 1 month ago

😂 We've given it a stern talking to and helped it recover through a restart, and image uploads appear to be working as normal again!

[–] crashdoom@pawb.social 2 points 2 months ago (1 children)

via here (I'm setting up an RSS feed monitor), via email (network[at]pawb.social), or with a DM to highlight it. It seems mentions in the original post don't alert the mentioned person, so I only got a notice when you replied to a message from me / mentioned me in a comment. >~<;

[–] crashdoom@pawb.social 2 points 2 months ago (3 children)

Media uploads should be working again. My apologies.

[–] crashdoom@pawb.social 2 points 2 months ago

Media uploads should now work again!

[–] crashdoom@pawb.social 5 points 2 months ago

We're investigating! Appreciate the report!

[–] crashdoom@pawb.social 2 points 2 months ago

Taking a peek now, appreciate the heads up!

[–] crashdoom@pawb.social 6 points 2 months ago (4 children)

The server hosting the database crashed for a still-unknown reason. Sometime around 5 AM MT, the database server had a kernel panic and locked up; it stayed that way until we woke up to the alert this morning.

We’ve got a watchdog on the server and an auto-restart in case it hangs, and we’re working to migrate it to a known-stable server while we troubleshoot.
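The watchdog-plus-auto-restart pattern is roughly this (a minimal sketch; `check` and `restart` are placeholders for a real health probe and a real restart hook such as an IPMI power cycle, not our actual tooling):

```python
import time

def watchdog(check, restart, interval=1.0, max_failures=3, cycles=10, sleep=time.sleep):
    """Restart a host once `check` fails `max_failures` times in a row.

    `check` and `restart` are injected callables so the loop can be driven
    by a real ping/TCP probe in production or by fakes in a test.
    """
    failures = 0
    restarts = 0
    for _ in range(cycles):
        if check():
            failures = 0  # healthy probe resets the counter
        else:
            failures += 1
            if failures >= max_failures:
                restart()
                restarts += 1
                failures = 0
        sleep(interval)
    return restarts

# Simulated run: the host stops responding from the 4th probe onward.
responses = iter([True, True, True] + [False] * 7)
events = []
n = watchdog(lambda: next(responses), lambda: events.append("restart"),
             sleep=lambda _: None)
print(n, events)
```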

[–] crashdoom@pawb.social 7 points 3 months ago (1 children)

Honestly, like you said, I genuinely have no idea what the US will do; currently things are so shaken up that anything is plausible right now.

For outside the US, I don't really know what reach other countries have. The UK's Online Safety Act is the one I'm currently watching, since it would require "highly effective age assurance" which... if the folks writing the law had ever actually been online, they'd have realized is an utterly stupid requirement, unless they plan to enforce ID verification via a credit agency (Experian, Equifax) that does personal question-based verification. (IMO that's a huge breach of user privacy: while WE might still not know who you are, the credit agency does, and where you signed up.)

[–] crashdoom@pawb.social 7 points 3 months ago (3 children)

Yes, several US states, along with other nations, have been considering porn ban laws, and on top of that there's the rather blatant anti-LGBTQ stance of the currently elected US party.

[–] crashdoom@pawb.social 6 points 3 months ago (1 children)

In the works! We're planning to have some info very soon, though we're still building out an improvement to the NAS to expand storage capacity for long-term, large-file storage specifically for Pixelfed!

 

On Feb 14th we migrated Lemmy from its standalone Docker setup to the same Kubernetes cluster operating furry.engineer and pawb.fun, discussed in https://pawb.social/post/6591445.

As of 5:09 PM MT on Feb 14th, we are still transferring the media to the new storage, which may result in broken images. Please do still reply to this thread if your issue is media related, but please check again after a few hours and edit your comment to say "resolved" if it's rectified by the transfer.

As of 11:02 AM MT on Feb 15th, we have migrated all media and are waiting for the media service to come back online and perform a hash check of all files. Once this completes, uploads should work as normal.


To make it easier for us to go through your issues, please include the following information:

  • Time / Date Occurred
  • Page URL where you encountered the issue
  • What you were trying to do at the time you encountered the issue
  • Any other info you think might be important / relevant
 

tl;dr: furry.engineer and pawb.fun will be down for several hours this evening (5 PM Mountain Time onward) while we migrate data from the cloud to local storage. We'll post updates via our announcements channel at https://t.me/pawbsocial.


In order to reduce costs and expand our storage pool, we'll be migrating data from our existing Cloudflare R2 buckets to local replicated network storage, and from Proxmox-based LXC containers to Kubernetes pods.

Currently, according to Mastodon, we're using about 1 TB of media storage, but according to Cloudflare, we're using nearly 6 TB. This appears to be due to Cloudflare R2's implementation of the underlying S3 protocol that Mastodon uses for cloud-based media storage, which prevents Mastodon from properly cleaning up no-longer-used files.
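The cleanup problem boils down to a set difference: objects present in the bucket but no longer referenced by the database. A minimal sketch (in practice the keys would come from an S3 object listing and the media tables; both lists here are hypothetical stand-ins, and this is not Mastodon's actual cleanup code):

```python
def orphaned_objects(bucket_keys, referenced_keys):
    """Objects that exist in the storage bucket but are no longer
    referenced by the application database, and are safe to reclaim."""
    return sorted(set(bucket_keys) - set(referenced_keys))

# Illustrative data: two live attachments, two stale cache files.
bucket = ["media/a.png", "media/b.png", "cache/old1.png", "cache/old2.png"]
referenced = ["media/a.png", "media/b.png"]
print(orphaned_objects(bucket, referenced))  # the two stale cache files
```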

As part of the move, we'll be creating / using new Docker-based images for Glitch-SOC (the fork of Mastodon we use) and hooking that up to a dedicated set of database nodes and replicated storage through Longhorn. This should allow us to seamlessly move the instances from one Kubernetes node to another for performing routine hardware and system maintenance without taking the instances offline.
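For those curious, a Longhorn-backed volume claim looks roughly like this (a hypothetical sketch; names and sizes are illustrative, not our actual config):

```yaml
# Hypothetical example only -- names and sizes are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glitch-soc-media
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # volume replicated across nodes by Longhorn
  resources:
    requests:
      storage: 500Gi
```

Because the volume is replicated at the storage layer, the pod that mounts it can be rescheduled onto another node during maintenance without the data moving with it.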

We're planning to roll out the changes in several stages:

  1. Taking furry.engineer and pawb.fun down for maintenance to prevent additional media being created.

  2. Initiating a transfer from R2 to the new local replicated network storage, for locally generated user content first, then remote media. (This will happen in parallel with the other stages, so some media may be unavailable until the transfer fully completes.)

  3. Exporting and re-importing the databases from their LXC containers to the new dedicated database servers.

  4. Creating and deploying the new Kubernetes pods, and bringing one of the two instances back online, pointing at the new database and storage.

  5. Monitoring for any media-related issues, and bringing the second instance back online.

We'll be beginning the maintenance window at 5 PM Mountain Time (4 PM Pacific Time) and have no ETA at this time. We'll provide updates through our existing Telegram announcements channel at https://t.me/pawbsocial.

During this maintenance window, furry.engineer and pawb.fun will be unavailable until the maintenance concludes. Our Lemmy instance at pawb.social will remain online, though you may experience longer-than-normal load times due to high network traffic.


Finally, and most importantly, I want to thank those who have been donating through our Ko-Fi page, as this has allowed us to build up a small war chest to make this transfer possible, covering both new hardware and the inevitable data egress fees we'll face bringing content down from Cloudflare R2.

Going forward, we're looking into providing additional fediverse services (such as Pixelfed) and extending our data retention length to allow us to maintain more content for longer, but none of this would be possible if it weren't for your generous donations.

Lemmy v0.19.3 (pawb.social)
submitted 1 year ago* (last edited 1 year ago) by crashdoom@pawb.social to c/pawbsocial_announcements@pawb.social
 

We've updated to Lemmy v0.19.3!

For a full change log, see the updates below:

Major changes

Improved Post Ranking

There is a new scaled sort which takes into account the number of active users in a community, and boosts posts from less-active communities to the top. Additionally there is a new controversial sort which brings posts and comments to the top that have similar amounts of upvotes and downvotes. Lemmy’s sorts are detailed here.

Instance Blocks for Users

Users can now block instances. Similar to community blocks, it means that any posts from communities hosted on that instance are hidden. However, the block doesn't affect users from the blocked instance; their posts and comments can still be seen normally in other communities.
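The distinction can be sketched as a simple filter (field names are illustrative, not Lemmy's actual data model): the block keys off the community's host instance, never the author's.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_instance: str
    community_instance: str
    title: str

def visible(posts, blocked_instances):
    """Hide posts whose *community* lives on a blocked instance, while
    keeping posts *authored* by users from those instances elsewhere."""
    return [p for p in posts if p.community_instance not in blocked_instances]

posts = [
    Post("blocked.example", "lemmy.world", "kept: author on blocked instance"),
    Post("lemmy.world", "blocked.example", "hidden: community on blocked instance"),
]
kept = visible(posts, {"blocked.example"})
print([p.title for p in kept])
```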

Two-Factor Auth Rework

Previously, 2FA was enabled in a single step, which made it easy to lock yourself out. This is now fixed by using a two-step process: the secret is generated first, and then 2FA is enabled by entering a valid 2FA token. It also fixes the problem where 2FA could be disabled without passing any 2FA token. As part of this change, 2FA is disabled for all users, which allows users who are locked out to get into their accounts again.
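The two-step pattern is: hand the user the secret, but only flip 2FA on after they echo back a token that verifies against it. A sketch using standard RFC 6238 TOTP (the `enable_2fa` flow here is illustrative and not Lemmy's actual implementation):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP over HMAC-SHA1."""
    counter = struct.pack(">Q", for_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def enable_2fa(secret: bytes, user_token: str, now=None) -> bool:
    """Step two of a two-step enable: only activate 2FA once the user
    proves their authenticator app produces a valid token (sketch only)."""
    now = int(time.time()) if now is None else now
    return hmac.compare_digest(user_token, totp(secret, now))

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890",
# T=59 seconds, 8 digits -> "94287082".
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```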

New Federation Queue

Outgoing federation actions are processed through a new persistent queue. This means that actions don’t get lost if Lemmy is restarted. It is also much more performant, with separate senders for each target instance, which avoids problems when instances are unreachable. Additionally, it supports horizontal scaling across different servers. The endpoint /api/v3/federated_instances contains details about the federation state of each remote instance.
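The per-instance-sender idea can be modeled in a few lines (a toy sketch of the shape of the design, not Lemmy's actual code): each target instance has its own queue, so one unreachable instance can't stall deliveries to the others.

```python
from collections import defaultdict, deque

class FederationQueue:
    """Toy model of per-instance outgoing queues with independent senders."""

    def __init__(self, send):
        self.send = send                  # send(domain, activity) -> bool
        self.queues = defaultdict(deque)  # domain -> pending activities

    def enqueue(self, domain, activity):
        self.queues[domain].append(activity)

    def run_once(self):
        """Attempt one delivery per instance; failed items stay queued."""
        delivered = []
        for domain, q in self.queues.items():
            if q and self.send(domain, q[0]):
                delivered.append((domain, q.popleft()))
        return delivered

# "down.example" is unreachable, so its activity stays queued while
# delivery to "up.example" proceeds unaffected.
fq = FederationQueue(send=lambda domain, a: domain != "down.example")
fq.enqueue("up.example", "post/1")
fq.enqueue("down.example", "post/2")
first = fq.run_once()
print(first)  # [('up.example', 'post/1')]
```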

Remote Follow

Another new feature is support for remote follow. When browsing another instance where you don’t have an account, you can click the subscribe button and enter the domain of your home instance in the popup dialog. It will automatically redirect you to your home instance where it fetches the community and presents a subscribe button. Here is a video showing how it works.

Moderation

Reports are now resolved automatically when the associated post/comment is marked as deleted, which reduces the amount of work for moderators. There is a new log for image uploads which stores the uploader. For now it is used to delete all of a user's uploads when their account is purged; later, the list can be used for other purposes and made available through the API.
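The auto-resolve behavior amounts to a small hook on deletion (an illustrative sketch; the record shape is hypothetical, not Lemmy's schema):

```python
def resolve_reports_for(deleted_post_id, reports):
    """On post deletion, mark any open reports against it as resolved,
    leaving reports on other posts untouched (sketch only)."""
    for r in reports:
        if r["post_id"] == deleted_post_id:
            r["resolved"] = True
    return reports

reports = [
    {"post_id": 1, "resolved": False},
    {"post_id": 2, "resolved": False},
]
resolve_reports_for(1, reports)  # post 1 was deleted; its report resolves
print(reports)
```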

 
  • Instance: pisskey.io
  • Type: Defederation
  • Affects: Pawb.Social, furry.engineer, pawb.fun
  • Reason: Nazi imagery, affiliated with poa.st and other known abusive instances
  • Fediseer Action: Censured
submitted 1 year ago* (last edited 1 year ago) by crashdoom@pawb.social to c/adminlog@pawb.social
submitted 1 year ago* (last edited 1 year ago) by crashdoom@pawb.social to c/adminlog@pawb.social
 
  • Instance: lab.nyanide.com
  • Type: Defederation
  • Affects: Pawb.Social, furry.engineer, pawb.fun
  • Reason: Trolling, harassment, homophobia, nazi imagery, admin / mod engaged abuse
  • Fediseer Action: Censured

Block has been applied to the entire domain.

Evidence

 

cross-posted from: https://pawb.social/post/3393854

  • Instance: cunnyborea.space
  • Type: Defederation
  • Affects: Pawb.Social, furry.engineer, pawb.fun
  • Reason: Racism, antisemitism, homophobia, abusive admin, nazi imagery
  • Fediseer Action: Censured

Evidence

 

 

cross-posted from: https://pawb.social/post/3337642

  • Instance: bv.umbrellix.org
  • Type: Defederation
  • Affects: Pawb.Social, furry.engineer, pawb.fun
  • Reason: Toxicity, abusive admin, death threats, emotional abuse
  • Fediseer Action: Censured

Evidence

Admin Note: This block applies to the root domain (umbrellix.org) and all associated sub-domains.

 

 

Currently, we’re running the Ubiquiti Dream Machine directly as the modem via PPPoE, but there appears to be an intermittent issue with its software implementation that results in periodic downtimes of a few minutes while it reconnects.

We’re looking at swapping this back out for the ISP-provided router in pass-through mode to eliminate the PPPoE connectivity drops.

We don’t expect this to take longer than 1 hour to switch over and test for reliability before bringing the services back up.

We’ll be performing this maintenance around 11 AM US Mountain Time, and will provide updates via the Telegram channel at https://t.me/pawbsocial.

liberdon.com (pawb.social)
submitted 2 years ago* (last edited 2 years ago) by crashdoom@pawb.social to c/fediblock@lemmy.dbzer0.com