Self-hosting


Hosting your own services. Preferably at home and on low-power or shared hardware.

1

Beneath the dark and uncertain clouds of big tech, hidden among the declassed byte workers and the false technological prophets who with siren songs offer their digital services to "facilitate" digital life, rises an anarchic and countercultural community that seeks to reclaim the Internet and fight against those who squeeze our identities into data to generate wealth and advertising for mass social manipulation and coercion. Navigating the network of networks with a small fleet of self-managed servers, geographically distributed yet cohesively united by cyberspace, the self-hosting community emerges as a way of life, a logic of inhabiting the digital, a way of fighting for an open, human network, free from the oligarchy of data.

To the now-crystallized phrase "the cloud is just someone else's computer" we add that this "someone else" is nothing more than a conglomerate of corporations that, like a hungry kraken, devours and controls the oceans of cyberspace. Against this we arm ourselves with community action, direct and self-managed, by and for those of us who inhabit and fight for a more sovereign and just Internet. Our objectives are clear, and our principles are precise. We seek to break the mirage and charm that these beasts impose through ISPs and blacklists, and we promote the ideal of a community organized around its own computing needs, without the intermediation of outlaws and byte smugglers.

The big tech companies disembarked on the net with a myriad of free services that came to replace standards established during years of work among users, developers, communities, technocrats and other enthusiasts of the sidereal tide of cyberspace. By commoditizing basic Internet services and transforming them into objects of consumption, they led us to their islands of stylized products, built entirely with the aim of commercializing every aspect of our lives in an attempt to digitize and direct our consumption. Sending an email, chatting with family and friends, saving files on the network or simply sharing a link: everything gets duly indexed, tagged and processed by someone else's computer. An other that is not a friend, nor a family member, nor anyone we know, but a megacorporation that, on the basis of coldly calculated decisions, tries to manipulate and modify our habits and consumption. Anyone who has inhabited these digital spaces has seen how these services changed our social behaviors and perceptions of reality. Or will we continue to turn a blind eye to the tremendous disruption social networks cause in young people, and to the absurd waste of resources involved in sustaining the applications of technological mega-corporations? Perhaps those who so praise the Silicon Valley technogurus do not see the disaster of having to replace your phone or computer because you can no longer surf the web or send an email.

If this is the technosolutionism that crypto-enthusiasts, evangelists of the web of the future and false shamans of programming offer us, we reject it out of hand. We are hacktivists and grassroots free software activists: we appropriate technology in pursuit of a collective construction shaped by our communities and not by the spurious designs of a hypercommercialized IT market. If today the byte worker plays the same role as the charcoal burner or workshop hand at the end of the 19th century, it is imperative that they politicize themselves and appropriate the means of production to build an alternative to this data violence. Only when this huge mass of computer workers awakens from its lethargy will we be able to take the next step towards the refoundation of cyberspace.

But we do not have to build on the empty ocean, as if we were lost overseas far from any coast; there is already a small but solid fleet of nomadic islands, dodging and cutting off the tentacles of the big tech kraken. Those islands are the computers of others, but real others, self-managed and organized around personal, community and social needs. Self-hosting consists of materializing what is known as "the cloud", but stripped of the tyranny of data and the waste of energy to which the big tech companies have accustomed us. These islands are not organized to commoditize our identities, but to provide email, chat, file hosting, voice chat or any other existing digital need. Our small server-islands demonstrate that it is possible to stay active on the network without violent tracking and data theft, and without the imposed need to constantly replace our equipment: self-hosted services, designed by and for the community, are built for the highest possible efficiency, not for the immoral waste that directly feeds the climate crisis.

For this reason we say to you, declassed byte workers: train yourselves, question yourselves, and appropriate the tools you use in order to form a commonwealth of hacktivists! Only through the union of computer workers with the communities of self-hosting and hacktivism will we be able to build alternatives for the refoundation of a cyberspace at the service of the people and not of the byte oligarchy.

But we need not only the working masses but also ordinary digital citizens: let's wake up from the generalized apathy to which we have been accustomed! No one can claim anymore that technology is not their thing or that computing does not concern them when all our lives are mediated by digital systems. That Android phone that still works but no longer lets you check your email or chat with your family is simply the technological reality hitting you in the face, as much as the anxiety and distraction that have been instilled in you over the last 15 years. Imagine the brain of a 14-year-old teenager, totally moth-eaten by the violent algorithms of big tech!

Community digital needs are settled on the shores of our server-islands, not on the flagships of the data refineries. Let's unite by building small servers in our homes, workplaces and cultural spaces; let's unite by building data networks that provide federated public instant messaging services that truly respect our freedoms and privacy. Let's publish robust, low-latency voice services; let's encourage services with low computational demands, to democratize voices whether you sail a rowboat or a state-of-the-art racing yacht. Let's create specialized forums and interconnect communities to unite us all; let's set our sails with the protocols and standards that already exist, which let us navigate the network with the device we want and not the one imposed on us. Let's lose the fear that keeps us from taking the first step and begin this great learning journey, which as an extra benefit will restore not only our technological sovereignty but also control of our digital essence. It is not a matter of cutting off the private data networks of big tech, but of building self-managed, self-hosted and self-administered spaces from the hacktivist grassroots, together with the workers of the byte and the digital citizenry: an Internet of the community, for the community.

2

cross-posted from: https://piefed.social/c/piefed_meta/p/1568066/easily-set-up-your-piefed-instance-using-yunohost

After 5 months of chipping away at it, PieFed is now an installable 'app' in the Yunohost store! It's actually been there for a couple of weeks, but until this weekend it had a scary red exclamation mark because some automated tests hadn't run yet. That's gone now, so I feel confident recommending it to others.

Yunohost is a Linux distro for servers with a web GUI for installing and managing services; it takes all the hassle out of self-hosting. How to get started with Yunohost.
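If you prefer the command line to the web GUI, installing should be roughly this (the app id 'piefed' is an assumption on my part; check the store listing):

# on the YunoHost server, as root
yunohost app install piefed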

@squirrel@piefed.kobel.fyi and @michael@piefed.chrisco.me have had good success setting up their instances already: https://piefed.social/c/piefed_meta/p/1561141/thanks-to-rimu-ericgaspar-and-tituspijean-yunohost-has-a-working-piefed-setup

4

Last week I bought a Nuki Smart Lock Pro 5 to be able to open the door remotely in case it is needed.

As I don't want any IoT device to have internet access and send telemetry, I tried to add it to my isolated VLAN where all my sensors are connected, but I had some issues setting up local MQTT (I'm not alone on this). DISCLAIMER: you need their mobile app to set up the device, but I was able to do it mostly without an internet connection, with only Bluetooth and GPS enabled.

After some digging, I found this troubleshooting FAQ, which suggested either blocking the DNS port or just the HTTPS port in the firewall.

In my case, as I provide DNS to some local services within that isolated network, I cannot simply block DNS on the firewall; instead, DNS querying is restricted to my local zone, and anything else is refused. Internet forwarding is blocked, too. Under these conditions, the MQTT setup still refused to connect to my server, although I saw some attempts in the mosquitto server logs.

My solution was to force nuki.io to return 127.0.0.1 for any record (i.e. set up *.nuki.io IN A 127.0.0.1 in my DNS server for that network), as the device seems to use DNS resolution as a LAN connectivity healthcheck: when it was unable to resolve some nuki.io records, it disconnected from the WLAN.
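Depending on your DNS server, the override looks something like this (sketches only; adapt to your own setup):

# dnsmasq: answer 127.0.0.1 for nuki.io and all its subdomains
address=/nuki.io/127.0.0.1

# unbound: redirect the whole zone to a single local record
local-zone: "nuki.io." redirect
local-data: "nuki.io. IN A 127.0.0.1"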

With that set up, I was able to make it work without internet connectivity. Note that even so I got an error (8E) in the app, but if you go back, the MQTT connection data is stored and it connects after a few seconds.

Hope this helps anyone facing the same issue.

OC by @morpheus17pro@lemmy.ml

5

Didn't read the full post yet, but the introduction on how much load these carpets create was interesting.

6

I have been able to self-host many essential services: social media accounts (microblogging, photo sharing, video sharing), a LinkTree alternative for all my links and a powerful private file hosting service. How was I able to do all this?

Simple: with YunoHost, a system that anyone can easily install on a server with little technical knowledge and that gives people access to hundreds of free, open source apps.

Run your own PieFed instance using Yunohost!

7

Experiences with the Matrix protocol, Matrix Synapse server, bridges, and Element mobile app. There are some you-just-have-to-know-this issues.

TL;DR:

  • Matrix Synapse: works fine, but requires constant manual maintenance.
  • Bridges: work pretty well.
  • Element: generally OK; some issues with timely notifications, no feature parity between Element Classic and Element X, terrible onboarding (with my current setup).
8

Hi, over the last few days my services have sporadically been unreachable via their domains. They are scattered across 2-3 VMs, which are working fine and can be reached by their domains (usually x.my.domain subdomains) via my nginx reverse proxy (running in its own Debian VM). The services themselves were running fine. My monitoring (Node Exporter/Prometheus) notified me that the conntrack limit on the nginx VM was reached in the timeframes when my services were unreachable, so that seems to be the obvious issue.

As for the why: it seems my domains are now known to more spammers/scripters. The nginx error.log grew by a factor of 100 from one day to the next. Most of my services are restricted to local IPs, but some, like this Lemmy instance, are entirely open (the nginx VM has ports 80 and 443 forwarded).

I had never heard of conntrack before but tried to read up on it a bit. It keeps track of the VM's connections. The limit seems rather low; apparently it depends on the VM's memory, which is also low. I can increase the memory and the limit, but some posts suggest disabling it entirely if not strictly needed. The VM does nothing but reverse proxying, so I'm not sure I really need it. I usually stick to Debian's defaults though. I would appreciate input on this, as I don't really see what the consequences would be. Can it really just be disabled?
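For reference, checking and raising the limit looks roughly like this (the new value is purely illustrative, not a recommendation):

# current usage vs. limit
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max

# raise the limit persistently
echo 'net.netfilter.nf_conntrack_max = 262144' > /etc/sysctl.d/99-conntrack.conf
sysctl --system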

But that's just making symptoms go away, and I'd like to stop the attackers even before they reach the VM/nginx. I basically have 2 options.

  • The VM has ufw enabled and I can set up fail2ban (should've done that earlier); a minimal jail sketch follows this list. However, I'm not sure if this helps with the conntrack issue, since attackers need to make a connection before getting banned, and that connection stays in the table for a bit.
  • There's an OPNsense between the router and the nginx VM. I have to figure out how, but I bet there's a way to subscribe to known-attacker IP lists and auto-block them, or the like. I'd like some transparency here though, and would also want to see which of the blocked IPs actually try to get in.
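For the first option, the jail I have in mind would be something like this (a minimal sketch using the nginx-botsearch filter that ships with fail2ban; the log path is an assumption about my own layout):

# /etc/fail2ban/jail.local
[nginx-botsearch]
enabled  = true
port     = http,https
logpath  = /var/log/nginx/access.log
maxretry = 5
bantime  = 1h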

Would appreciate thoughts or ideas on this!

9

cross-posted from: https://infosec.pub/post/38030158

Ethan Sholly, the driving force behind selfh.st, one of the most recognized communities uniting self-hosting enthusiasts, has published the latest results of his annual survey on the community’s preferences, collecting 4,081 responses from self-hosting practitioners worldwide.

No surprise there: Linux is overwhelmingly dominant, chosen by more than four out of five self-hosters (81%). In other words, whether self-hosters run bare-metal, virtualised, or container-based infrastructure, Linux remains the backbone.

In fact, this result aligns closely with broader trends: according to Wikipedia, Linux holds a 63% share of global server infrastructure. Aside from the hobby aspect, most respondents said privacy was their main reason for self-hosting, which, as you know, remains one of Linux’s strongest selling points. Now, back to the numbers.

10

cross-posted from: https://lemmy.blahaj.zone/post/34623175

How realistic is this architecture? It's been a while since I've set something like this up for work.

The thought behind this layout is that having only one machine exposed, running just Apache and ssh (reachable from LAN only, on a non-standard port), and forwarding via mod_proxy any services I might want to share with non-LAN friends/family (photos, docs), is a smaller exposure than hosting all my VMs in a DMZ and hoping that no single server gets nuked.

Something like: DNS -> public-zone{ www-serv } <-> firewall-1 <-> lan{ vm-host <-> firewall-2 <-> (printers, laptops, etc) }
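For concreteness, the mod_proxy forwarding on www-serv would be something like this (a sketch; hostname and backend address are placeholders, TLS directives are omitted, and mod_proxy/mod_proxy_http need to be enabled):

# Apache vhost on www-serv, proxying a photos app assumed to live at 192.168.10.20:2342 on the LAN
<VirtualHost *:443>
    ServerName photos.example.com
    ProxyPass        "/" "http://192.168.10.20:2342/"
    ProxyPassReverse "/" "http://192.168.10.20:2342/"
</VirtualHost>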

firewall-1 is actually a router running Tomato, with custom iptables rules. That way, if www-serv is compromised, the attacker can't just drop the rules.

firewall-2 is just iptables rules on vm-host.

All LAN computers' iptables are a little more permissive, with holes for Samba, CUPS, and ssh on a non-standard port.

What do you think? Is this sufficient? What would you do differently?

11

I am attempting to set up a little phpMyAdmin host in an LXC container running Devuan 6 (based on Debian 13), and I am running into problems with the setup script run by apt install, which I believe is actually dbconfig-mysql presenting its own set of debconf-style prompts.

This is for my homelab, and my MariaDB server has self-signed SSL certs. The problem is that I can't figure out either of the following:

  • how to make the dbconfig-mysql setup procedure skip SSL certificate verification
  • how to manually run the part of the procedure that sets up a database for phpMyAdmin, in order to work around the dbconfig-mysql setup process entirely

Regarding the former issue, in /etc/mysql/mariadb.conf.d/50-client.cnf I set

[client]
ssl-verify-server-cert = off

and

[client-mariadb]
disable-ssl-verify-server-cert

but this doesn't affect the setup or the dpkg-reconfigure process for phpMyAdmin.

Regarding the second issue, I'm just stumped.
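From skimming the dbconfig-common docs, I assume the manual equivalent would be something along these lines (untested; the user, password and schema path are my guesses):

-- what I believe dbconfig-mysql would do on the MariaDB server
CREATE DATABASE phpmyadmin;
CREATE USER 'phpmyadmin'@'%' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON phpmyadmin.* TO 'phpmyadmin'@'%';
FLUSH PRIVILEGES;

-- then import phpMyAdmin's schema, something like:
-- mariadb phpmyadmin < /usr/share/phpmyadmin/sql/create_tables.sql

but I don't know how to tell the package afterwards that the database already exists.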

Any hints would be appreciated!

edit: changed dbconfig-common to dbconfig-mysql

13

It manages schedules, deliveries, donations, volunteers, and data for volunteer shifts. See also: https://rootable.org/

14

Fedfree is a website aimed at teaching people how to run their own servers, of various kinds, on libre operating systems such as Linux and BSD. It aims to do this using libre software exclusively, teaching people about the importance of libre software and hardware as it pertains to freedom: the right to use, study, adapt and share. The right to read. Universal access to knowledge... education. Education is the goal.

Fedfree's mission is to bring the real internet back to normal people: the one where you can have your own unique voice on the internet, without plugging into the hive mind of websites like Twitter or YouTube.

Most of the internet's problems exist precisely because modern centralised providers hold us back from true innovation.

The goal is to spread libre software ideology while providing practical guidance, such as:

  • Advice about how to start your own projects
  • How to get into computer science, electronics and other computer-related fields... with a view towards libre ideology
  • How to run and maintain your own infrastructure, free from interference, completely hardened against intrusion
  • Links to resources covering many different topics: for example, when to use FreeBSD vs OpenBSD (or vice versa), situations where Linux might be better, or comparisons of various servers, e.g. postfix vs OpenSMTPD
  • Other examples could include links to books about programming languages and networking concepts, from beginner level all the way to BOFH (Bastard Operator From Hell)

The real internet exists. Fedfree's mission is to teach you how to use it, every part of it. To most people it is hidden: your ISP might put you behind CGNAT, for example, or outright ban you from opening ports. One of Fedfree's goals is to teach you how to set up various kinds of tunnel connections, e.g. SSH port forwarding, PPP over L2TP, WireGuard/OpenVPN, etc.
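As a taste of the tunnelling approach: punching a web server through CGNAT with plain SSH can be as simple as this (host names are placeholders; a VPS with a public IP is assumed):

# expose the home web server's port 80 as port 8080 on the VPS
ssh -N -R 8080:localhost:80 user@vps.example.com

With GatewayPorts enabled in the VPS's sshd_config, the forwarded port becomes reachable from the outside world.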

The mentality behind Fedfree is that all the organisations out there, like SFC, GNU, EFF, FSF... April... are good, but they can only do so much. We as libre software activists must organise, but how? First we need infrastructure, our own infrastructure that we control, and we need a charter that defines our movement. By nature the libre movement is loose and free, where people can do whatever they like, but most people today use centralised hosting services like GitHub, which means we have huge single points of failure.

15

My Nextcloud instance broke yet again a month ago: the iOS app could not connect anymore, with no helpful error code or message. I have never really been satisfied with their products; it always seemed slow and buggy, and they have thousands of open issues on their repos. I used it for photos, online file storage/sharing, and for messaging/video chat. I have now replaced it with ownCloud + Immich + Signal (not self-hosted) and I'm very satisfied. ownCloud is just way snappier, and no bugs so far. The docs are OK, though a little hard to search, and you constantly have to check whether a given page is only relevant to the older PHP implementation. I followed this guide:

https://doc.owncloud.com/ocis/next/depl-examples/ubuntu-compose/ubuntu-compose-prod.html
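For a rough idea of what the deployment looks like, a pared-down compose service would be something like this (a sketch only; the guide above is authoritative, and the hostname is a placeholder):

# docker-compose.yml sketch; run "ocis init" once first to generate config
services:
  ocis:
    image: owncloud/ocis
    ports:
      - "9200:9200"
    environment:
      OCIS_URL: https://cloud.example.com:9200
    volumes:
      - ocis-config:/etc/ocis
      - ocis-data:/var/lib/ocis

volumes:
  ocis-config:
  ocis-data: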

16

Found in this reddit post. The encryption lacking in Komodo is something I miss, and I'm not satisfied with my current way of handling .env files; plus, Komodo is really big for what it does. Of course I discover this the day after migrating one of the last stacks to Komodo, but I'm tempted to give this a try at some point.
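For context, the SOPS + Age workflow that Doco-CD reportedly decrypts natively looks roughly like this (a sketch; the key and file paths are placeholders):

# .sops.yaml at the repo root: encrypt all .env files with an age public key
creation_rules:
  - path_regex: .*\.env$
    age: age1qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq

# encrypt in place before committing
sops -e -i stacks/myapp/.env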

Full Quote from the reddit post:


Hey all, I just felt like making a post about a project that I feel is one of the most important and genuinely game-changing pieces of software I've seen for any homelab. It's called Doco-CD.

I know that's high praise. I'm not affiliated with the project in any way, but I really want to get the word out.

Doco-CD is a Docker management system like Portainer and Komodo, but WAY lighter, much more flexible, and Git-focused. The main features that stand out to me:

  • Native encryption/decryption via SOPS and Age

  • Docker Swarm support

  • And it runs as a single, tiny, rootless Go-based container.

I imagine many here have used Kubernetes and GitOps tools like FluxCD or ArgoCD and enjoyed the automation, but grown to dislike Kubernetes for simple container deployments. GitOps on Docker has been WAY overshadowed. Portainer puts features behind paid licenses; Komodo does much better in my opinion, but getting native decryption to work is pretty hacky, it has zero Docker Swarm support (and removed it from its roadmap), and it's a heavier deployment that requires a separate database.

Doco-CD is the closest thing we have to a true GitOps tool for Docker, and I just came across it last week. I had desperately wanted a tool like this beforehand. I've since deployed a ton of stuff with it, and it's the tool I will be managing the rest of my services with.

It seems to be primarily developed by one guy, which is in part why I want to share the project. Yet he's been VERY responsive: just a few days ago, bind mounts weren't working correctly in Docker Swarm; I made an issue on GitHub, and within hours he had released a new version fixing the problem.

If anyone has been desperately wanting a Docker GitOps tool that really does compete at feature parity with the Kubernetes-based GitOps tools, this is the best one out there.

I think for some the only potential con is that it has no UI (like FluxCD). Yet in some ways that can be seen as a pro.

Go check it out.

18

My personal domain has hundreds of aliases - one for each site I deal with. This is great for identifying the source of spam, and I retire any aliases that get spam.

haveibeenpwned.com lets me add a domain, but wants 3912 USD a year to actually tell me which addresses leaked. This is obviously an insane price for a nice-to-have.

Is there a free or very cheap alternative? A self-hosted tool that pulls down breach lists would be great, but I suppose those lists aren't public.

19

What's going on on your servers? Smooth operations or putting out fires?

I got some tinkering time recently and migrated most of my Docker services to Komodo/Forgejo. Already merged some Renovate PRs to update my containers which feels really smooth.
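For anyone curious, the Renovate side needs very little: a minimal renovate.json along these lines (a sketch; tune it to your repo layout) is enough to get PRs for compose image updates:

{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "enabledManagers": ["docker-compose"]
}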

Have to restructure some of the remaining services before migrating them, and after that I want to automate config backups for my OPNsense and TrueNAS machines.

20

Hi fellow self-hosters! Has anyone run the Element Server Suite or updated their existing Synapse to include Element Call? How many users do you have?

I have been running Matrix Synapse server on a 1 CPU 1 GB RAM VPS for about 5 years. Just a few close people and a WhatsApp bridge (also for just a few people who use that). It worked fairly well.

Now that Element has taken over many of the Matrix projects, they are expanding the server architecture and bundling the server install as Element Server Suite. The Community Edition is said to be aimed at "small to mid-sized deployments (1–100 users)", but looking at the architecture and requirements... the setup requires Kubernetes (!), at least 2 CPUs and 2 GB RAM, and a handful of services, each with its own sub-domain.

Is this corporatesque setup overkill for a handful of users, or is this my inner Luddite talking? For comparison, Snikket (a bundled XMPP server that provides very similar functionality) requires only 128 MB RAM. I'm not sure whether it's worth trying to set up Element Call alongside my existing Synapse, starting over with ESS, or moving to Snikket.

21

Hi all, my fork of Tempo has had a rebrand, which was a requirement to get back into the app stores since the original Tempo still exists in F-Droid/IzzyOnDroid.

Tempus v4.0.7

Attention

This release will not update previous installs, as it is considered a new app: no longer Tempo; new icon, new app ID, and new app name. I hope it will not be a huge inconvenience, but it was necessary in order to publish to app stores like IzzyOnDroid.

Android Auto support should be the same as before; however, I was not able to test any of the icons/visuals, so please let me know if there are any remnants of the Tempo logo/icon, as I believe I removed and replaced them all successfully.

What's Changed

  • fix: crash on share when no expiration date or field is returned from the API
  • fix: check also the underlying transport
  • feat: unhide genre in the album details view
  • fix: persist album sorting on resume
  • chore: Tempus rebrand
  • chore: update Polish translation

Now available via the IzzyOnDroid Repository -> https://apt.izzysoft.de/fdroid/index/apk/com.eddyizm.degoogled.tempus

note:

app-tempo* <- the GitHub release with all the Android Auto/Chromecast features

app-degoogled* <- the IzzyOnDroid release without any of the Google stuff

As usual, any dev contributions are appreciated, as I am not actually a Java/mobile dev, so my progress is significantly slower than those who do this daily.

In particular, I'd welcome any Android dev familiar with Android Auto to help me set up a dev environment.

22

This is my solar-powered setup: a somewhat old Pixel 6a that fell from a foot and a half (really!?), a 10 W solar setup that was around $20 on Amazon, and an old compost container I have too many of. I'll be giving it a proper 3D-printed case when I get a chance (and a host of other changes), but for now this works! It's worth about $40 in total (the phone is now worth about $21 on the open market).


Website: https://solar.chrisco.me/

The website was made with a collection of scripts, apache2 (nginx for some reason did not install; errors), and Termux. I'll open-source the whole setup in a bit. There's not much to it, to be honest.

Hopefully keeping the battery at 80% will help its lifetime. I may bump it up at some point if the server keeps dying from lack of sunlight. But we shall see.

More info at the link. I couldn't get PieFed to repost from a GotoSocial link.

23

When it comes to self-hosting services, there's always the question of whether deploying a local container is worth the extra hassle when you can just use cloud-based platforms. However, Vert is one case where it's always better to ditch the online apps for a containerized solution.

For starters, you’ll have to upload your documents to third-party servers when converting PDFs, JPGs, or other documents via online platforms. While many websites claim to delete your data after a specific span of time, there’s a huge privacy issue every time you send private files to an external server.

24