RagingHungryPanda

joined 2 years ago
[–] RagingHungryPanda@lemm.ee 3 points 1 week ago

I've been there. I recommend it. I dovetailed it with a trip to Baden-Baden (a hot spring town with a great sauna resort) and a day trip to Karlsruhe. The Black Forest is nearby as well.

Do it!

[–] RagingHungryPanda@lemm.ee 2 points 1 week ago

You need to follow it, but the thing is, you're probably just as well off posting a link from your own account. It comes up as a separate account for me. I don't think the federation there is really worth it.

[–] RagingHungryPanda@lemm.ee 2 points 1 week ago (1 children)

I'm hosting WriteFreely with Picsur as the image host for my blog. They're pretty lightweight. Otherwise, a static site generator like others suggested.

[–] RagingHungryPanda@lemm.ee 3 points 1 week ago (1 children)

What format are you looking for? Something like a comic layout?

[–] RagingHungryPanda@lemm.ee -1 points 2 weeks ago (1 children)

I asked it to be more concise:

Response:

"Great point! It's crucial to stay vigilant against bot activity on any platform. To detect bots, consider examining account history for inconsistencies, checking posting patterns that appear automated, and looking out for suspicious usernames. For server admins, collaborating with other instance admins can help share information and best practices. While no system is foolproof, proactive measures can significantly mitigate risks. Let's work together to foster trust and maintain authenticity within our communities!"

[–] RagingHungryPanda@lemm.ee 0 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

I asked my deepseek r1:14b model and it gave this. It is quite wordy and likes bullet points. I wouldn't bother typing all this out.

Thank you for bringing up these important questions about bot detection and trust in online communities. These are valid concerns, especially as technology evolves and becomes more sophisticated.

Verification on Reddit: While Reddit does employ measures to detect bots (such as CAPTCHAs and rate limiting), it's true that no system is foolproof. However, the platform has been actively working on improving its anti-bot mechanisms over time. As a user or admin, staying informed about these measures can help us understand how communities are moderated.

Lemmy and Other Instances: On platforms like Lemmy (a federated social network), trust in other instances can be challenging because each instance operates independently. However, as a server admin, you can take steps to verify the credibility of an instance:
* Manual Verification: You can manually check accounts on other instances by reviewing their posting patterns and content for consistency with human behavior.
* Community Standards: Look for instances that follow similar community guidelines or moderation practices to yours. This can help ensure a level of trustworthiness.
* Transparency: Instances that are transparent about their moderation policies and allow communication between admins are generally more trustworthy.

Future Concerns: As AI and bot technology advance, the risk of sophisticated bots mimicking human behavior will increase. However, staying proactive by implementing verification systems (like PII checks) and educating your community about suspicious activity can mitigate these risks.

Spam vs. Human-like Bots: While traditional spam bots are easier to detect, human-like bots that use statistical data to mimic patterns are more insidious. This is why relying solely on automated tools isn't enough—it requires a combination of technology and human oversight.

As server admins, we must remain vigilant but not overly discouraged. By fostering open communication within our communities and collaborating with other instance admins, we can build a network that prioritizes trust and authenticity.

[–] RagingHungryPanda@lemm.ee 9 points 2 weeks ago (1 children)

It IS more helpful than "This should never happen."

[–] RagingHungryPanda@lemm.ee 1 points 3 weeks ago

Thanks, I'll check that out

[–] RagingHungryPanda@lemm.ee 4 points 3 weeks ago

I enjoyed this read. It's short, but it's a look into someone totally different from just about anyone else in his position.

[–] RagingHungryPanda@lemm.ee 8 points 3 weeks ago

I saw a joke where someone in Germany said they arrived too late for the 7:30am train, but were just in time for the 6:30am train. It's like a meme how late they are.

[–] RagingHungryPanda@lemm.ee 1 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

I cannot for the life of me see where to add a tag or a label. I checked in 3 different UIs, including the main one.

[–] RagingHungryPanda@lemm.ee 3 points 3 weeks ago

I've wondered whether there should be Lemmy, Pixelfed, and maybe Mastodon instances for local cities.

 

It's based on indigenous legends by studio Ninakami. Instagram is @ninakami.studio

 

Lessons from event-driven architecture

 

Hey all, I've been trying to get up and running with Friendica in a docker container behind Nginx Proxy Manager and connected to a separate MariaDB container.

The scenario I have is that the only combination I've gotten to work is the Alpine Docker image with the complete web server built in, as putting the configs into separate nginx instances wasn't getting me anywhere. Using the full image, though, I can get to the UI and run the installer, which asks me to set things I already set in environment variables.

But when the installer goes to save, it fails to write to the DB, and MariaDB says it rejected the connection for an unauthenticated user from an unknown host (implying to me that it rejected the connection before it even read the credentials). The thing is, I've been able to shell into the Friendica container and connect to MariaDB, and I can use Adminer to log in as the friendica user to the friendica database that I created. Has anyone run into this?

I'm starting to wonder if I need to use actual MySQL instead. This is a very strange issue, as I've been able to create the database, create the user with privileges, and log in as that user. The host name for the user is '%', so 'friendica'@'%'.

I'd appreciate any help that I could get there.
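
For reference, this is roughly the shape of the compose setup I'm describing (a minimal sketch: passwords, ports, and image tags are placeholders, and the env variable names are from memory of the Friendica image README, so double-check them there):

```yaml
# Sketch only -- verify the variable names against the friendica and mariadb image docs.
services:
  db:
    image: mariadb:10.11
    restart: always
    environment:
      - MARIADB_ROOT_PASSWORD=changeme-root
      - MARIADB_DATABASE=friendica
      - MARIADB_USER=friendica
      - MARIADB_PASSWORD=changeme
    volumes:
      - db:/var/lib/mysql

  friendica:
    image: friendica:latest
    restart: always
    depends_on:
      - db
    environment:
      # MYSQL_HOST should be the db service name on the compose network,
      # not localhost/127.0.0.1 -- a wrong host here is one common cause of
      # MariaDB logging "unauthenticated user" / unknown host rejections.
      - MYSQL_HOST=db
      - MYSQL_DATABASE=friendica
      - MYSQL_USER=friendica
      - MYSQL_PASSWORD=changeme
    ports:
      - "8080:80"

volumes:
  db:
```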

 

If you have any experience in this field, please say so in your reply. Over time I've seen a lot of criticism of the peer review process and of how journals hyper-exploit academics simply because the journals are able to monetize scarcity/exclusivity. I saw another post on it today and thought, "what if this were federated?"

Looking around, I see there are tools for the writing portion of the process, such as PubPub or Manubot, which essentially use git and markdown - but that's not the main point, since that's the authoring side of things. What about the review process?

Let's say there's software that's federated and can be run by anyone from individuals to universities and consortiums. When a user or team is ready to publish, they can "submit their work" for publishing, which would federate out as works pending publication.

How to handle reputation for who can review is a separate issue; I think there are ways to do it, but it's beyond the scope of this post, as I imagine it could get pretty complicated and would require feedback from people actually in the industry.

The reviewers can submit comments and reviews back to the author via federation, but this time the process can be open instead of behind closed doors. The authors revise, comment, etc. At some point a determination is made that this work is "published."
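
To make that concrete, here's a rough sketch of the kind of object that could federate out when someone submits (written as YAML just for readability; apart from the standard ActivityStreams actor/object/to ideas, the type and field names here are made up):

```yaml
# Hypothetical submission activity -- field names are invented for illustration.
type: Create
actor: https://physics.uni-example.edu/users/jdoe
to:
  - https://reviews.example.org/fields/condensed-matter  # reviewer pool / consortium inbox
object:
  type: PendingPublication       # made-up extension type
  title: "Example paper title"
  source: https://git.example.org/jdoe/paper   # the PubPub/Manubot-style repo
  status: awaiting-review
  reviews: []                    # reviewers federate their Review objects back here
```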

This seems like a feasible premise. Just brainstorming: you'd get history, open reviews, and no one charging $1,000 to submit a publication that they then make bank on while you get scraps or nothing.

I could see a reputation system within a given field and/or overall, with certain users being "review board" or "reviewers" on their instance. There could also be additional reputation if, say, a group of universities creates consortiums for different fields and then that consortium "publishes" a work. There'd have to be additional process to block people from spamming works that aren't ready or whatever, but that's not really the point for now.

Am I barking up the wrong tree here? At first thought, it seems like there are ways to allow federation of research papers and peer review and to put a dent in the grip of technical journals.

 

ActivityPods is supposed to let you have one account across the fediverse, and it's still in early development. I see that they have some Docker images, but there are no descriptions of what they're for, and their instructions involve running make scripts to get up and running.

I can do that inside a Docker container, but running TrueNAS I'm limited to running containers, which is fine, I can do that. The other thing that seems a bit confusing is that it looks like they want you to define "shapes" for different services to communicate with.

It might just look more complicated than it is. Has anyone successfully gotten up and running with it?

 

And I'm making everyone go to my gotosocial post because the server is running, so I'm going to use it!

 

I have a GL.iNet router with an nginx config on it that sends traffic to Nginx Proxy Manager, and DDNS set up with Cloudflare.

I'm trying to get some kind of local DNS set up so that if I'm on the local network, traffic stays within the network. The problem I'm running into is SSL certificates. NPM (on the server) handles those, and I thought I could go into the AdGuard Home config (on the GL.iNet router), add a DNS rewrite pointing to the router, and traffic would flow as it normally does.

This DOES work, technically. traceroute shows only one hop for any of my subdomains, e.g. files.mydomain.com.

But I can't actually get access in a browser because the SSL certificates aren't set up there.

It seems like the options are to manually copy certificates from the server to the router (not ideal) or to not do it at all. I notice that if I go to a service by IP address, it changes the address to the domain name, e.g. 192.168.8.111:30027 -> files.mydomain.com.

This isn't a HUGE deal, but it's not preferable. How have you all solved this?

Edit: I solved the issue in probably the most hilarious way. I was trying to get the forwarding and everything set up and then borked my router's firewall so badly that I couldn't get to the outside at all, so I did a hard reset.

I then moved the admin UI ports up by one each (81/444), re-set up GoodCloud, DDNS, and the WireGuard server on the router, then set up port forwarding for 80/443 on the router to 80/443 on the TrueNAS server. I switched NPM to listen on those ports (since I'd moved the web UI to different ports), then added the AdGuard Home DNS rewrites. It's now all working as expected.

Local traffic only has one hop and is accessible without SSL warnings, and same for WAN traffic. Thank you all for the help!
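
For anyone trying the same thing, the rewrite part boils down to something like this. It's easiest to add in the AdGuard Home web UI under Filters → DNS rewrites; the YAML below is just a sketch of what ends up in AdGuardHome.yaml (which section it lives under varies by version, and the router IP is an example):

```yaml
# Sketch of the AdGuard Home rewrite on the router.
# The router forwards 80/443 on to NPM on the TrueNAS box, which still serves
# the Let's Encrypt certs, so local clients get no SSL warnings.
filtering:
  rewrites:
    - domain: '*.mydomain.com'   # covers files.mydomain.com etc.
      answer: 192.168.8.1        # the router's LAN IP (use whatever yours is)
```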

 

I've been getting into self-hosting, the fediverse, and federated blogging. I contacted Freaking Nomads and they suggested that I write about my experiences, so here it is! I hope you enjoy.

Comments aren't fully federated from the blog site, so I'm using Mastodon as well.
