rs5th

joined 2 years ago
[–] rs5th@lemmy.scottlabs.io 1 points 2 years ago* (last edited 2 years ago)

This is a change with 0.17.4. You cannot have both federation enabled and the private instance box checked. You might try downgrading to the 0.17.3 image (so that you can get into the UI) and unchecking either the private instance or federation box (whichever way you wanna go). I’d also suggest pinning the Docker image versions, as I bet you’ve got latest set (or nothing set, which I believe also grabs latest), and the VM reboot prompted Docker to go grab the latest image on startup. Surprise upgrades probably aren’t what you want.

[–] rs5th@lemmy.scottlabs.io 5 points 2 years ago (1 children)

There’s no function in Lemmy to track reasons in the admin interface; it’s just a text box where you paste in a list of blocked instances. The Beehaw admins may maintain a list separately.

[–] rs5th@lemmy.scottlabs.io 1 points 2 years ago (3 children)

beehaw.org/instances

[–] rs5th@lemmy.scottlabs.io 2 points 2 years ago

You should be able to look up the last hop that responds (via ARIN, or whichever regional internet registry covers your area) and see who that ISP is. The annoying part is that some ISPs just drop ICMP at their border, so a dead trace isn’t a smoking gun that they’re the issue.
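
If you’d rather script that lookup than click around the registry websites, here’s a rough sketch using the third-party ipwhois Python package (that package choice and the placeholder hop address are just my assumptions, a plain whois from the command line works fine too):

```python
# Rough sketch: ask the regional registry (ARIN/RIPE/APNIC/...) who owns
# the last hop that answered your traceroute. Requires: pip install ipwhois
from ipwhois import IPWhois

last_responding_hop = "203.0.113.45"  # placeholder -- use your real last hop

info = IPWhois(last_responding_hop).lookup_rdap(depth=1)
print("ASN:      ", info.get("asn"))
print("AS name:  ", info.get("asn_description"))
print("Registry: ", info.get("asn_registry"))  # arin, ripencc, apnic, ...
print("Network:  ", (info.get("network") or {}).get("name"))
```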

[–] rs5th@lemmy.scottlabs.io 15 points 2 years ago (3 children)

“bit off more than they could chew”

By starting a Lemmy instance a year and a half before Rexxit? I never saw them claim to want to be the next Reddit. The Fediverse had an influx of users and Lemmy doesn’t currently have the mod or admin tools to deal with that situation gracefully. My understanding is that most of the bad actors were external to Beehaw.

They didn’t bite off anything; shit was being shoved into their mouth, so they closed it.

Personally, I’m using my very own Lemmy instance so that I can choose who I federate with (including Beehaw). I totally understand why some folks might want to have their home instance elsewhere, and it’s cool that federation gives us that ability.

[–] rs5th@lemmy.scottlabs.io 2 points 2 years ago

This wasn’t the impression I got from the Beehaw admins. I believe they felt that blocking lemmy.world users from Beehaw, while still letting Beehaw users interact with lemmy.world, would have been better than full defederation, but I don’t think that was the ideal solution either. Something like an approval process for external users to interact with Beehaw communities would be preferable.

Also, Beehaw could go fully private today if they wanted to, but that definitely doesn’t seem to be their intention.

[–] rs5th@lemmy.scottlabs.io 2 points 2 years ago* (last edited 2 years ago)

I’ve seen reports of people who haven’t tried to reset their password getting a reset notification; I bet the email issue Beehaw is having is exposing some buggy behavior in Lemmy’s notification system. The random email address “working” makes sense, as that email likely isn’t attached to a user, so the send function never gets hit. At the same time, to prevent enumeration attacks, Lemmy probably doesn’t show an error that says “That email address doesn’t exist.”
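
For what it’s worth, the anti-enumeration pattern usually looks something like this (a made-up Python sketch, not Lemmy’s actual code):

```python
# Made-up sketch of the anti-enumeration pattern described above -- not
# Lemmy's actual code. The point: the API responds identically whether or
# not the email exists, so nobody can probe which addresses have accounts.

def queue_reset_email(user_id: str) -> None:
    # Stand-in for the real mail-sending path (the part that appears to be
    # broken on Beehaw right now).
    print(f"queued reset email for {user_id}")

def request_password_reset(email: str, users: dict) -> str:
    user_id = users.get(email)
    if user_id is not None:
        # Only a known address ever reaches the send function.
        queue_reset_email(user_id)
    # Same message either way, so the response leaks nothing.
    return "If that address is registered, a reset link has been sent."

if __name__ == "__main__":
    accounts = {"alice@example.com": "u1"}
    print(request_password_reset("alice@example.com", accounts))   # queues mail
    print(request_password_reset("random@example.com", accounts))  # silent no-op
```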

[–] rs5th@lemmy.scottlabs.io 3 points 2 years ago

When did you try? I think emails haven’t been working for about the last hour.

[–] rs5th@lemmy.scottlabs.io 2 points 2 years ago

I’ve got two Synology NASes. My current backup strategy is to back up everything between the two NASes so I have two copies of everything locally. Then I back up documents, photos, pretty much everything except TV shows and movies, to Backblaze.

[–] rs5th@lemmy.scottlabs.io 2 points 2 years ago

With that many disks, I'd compare what it would cost to build a desktop PC to hold all the drives versus a commercial NAS. When I pulled the trigger on my Synology, the thing that really sold me was the hot-swappable drive bays. I use mine to back VMware storage, so if a drive failed, I didn't want to have to take down all my VMs to offline the storage and swap a disk.

Another thing you might look at is used hard drives. I know you've got some, but they're pretty small, and drives have gotten pretty cheap. NASes with more than 4-5 drive bays get pretty $$$. I just bought an 8TB HGST Ultrastar "refurb" drive for $75. Lots of options, but the bottom line is, I think you'll love having your own media.

[–] rs5th@lemmy.scottlabs.io 1 points 2 years ago* (last edited 2 years ago) (2 children)

How many drives do you have lying around? Synologys are nice (I have 2) but they’re a little pricey. You could go with one of the Plus units and run Plex directly from there.

One of my coworkers got a TerraMaster NAS and installed Xpenology (basically the Synology OS) on it. Their hardware is a little cheaper.

[–] rs5th@lemmy.scottlabs.io 4 points 2 years ago (2 children)

Couple questions:

  • What’s your ISP at home?
  • What’s the ISP of the remote IPv6 server?
  • Are the other networks you’ve tried on the same ISP or different ones?

I’d start with traceroute and see how far your IPv6 traffic gets before it fails. It could very well be a peering or routing issue among the ISPs between you and wherever that IPv6 address lives. If this ends up identifying where the traffic dies, a lot of the Tier 1 ISPs have BGP looking glass servers, so you can get an idea of what they know about that subnet.
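
If it helps, this is roughly what I mean, just shelling out to the system traceroute from Python (the target address is a placeholder, and it assumes a Linux box with the traceroute package installed; macOS ships traceroute6 instead):

```python
# Minimal sketch: run an IPv6 traceroute and eyeball where the path dies.
# Assumes a Linux host with traceroute installed; the target is a
# documentation placeholder -- swap in the unreachable IPv6 address.
import subprocess

target = "2001:db8::1"  # placeholder address

# -6 forces IPv6, -n skips reverse DNS so the hop IPs are easy to read.
proc = subprocess.run(
    ["traceroute", "-6", "-n", target],
    capture_output=True,
    text=True,
)
print(proc.stdout)

# Hops that show only "* * *" stopped answering. The last hop with a real
# address is the one to look up with the RIR sketch above, or to punch into
# a Tier 1 provider's BGP looking glass.
```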
