Self-Hosted Alternatives to Popular Services


A place to share, discuss, discover, assist with, gain assistance for, and critique self-hosted alternatives to our favorite web apps, web...

founded 2 years ago
1
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/hettereloy29g3z on 2025-12-13 23:43:16+00:00.


I've reached a point where my "little internal dashboard" has grown significantly.

Initially, I gave a few trusted coworkers read access through tools like Adminer and pgAdmin. That didn’t go well. One wrong click or one misunderstood query, and I found myself restoring from backups while pretending everything was fine.

So, I started exploring the usual internal tools and low-code options. Retool looked appealing but felt too cloud-focused. Appsmith and ToolJet caught my attention on the open-source side. I also checked out Budibase and NocoBase. They all have potential, but I worried about them randomly breaking late at night once I imagined more than a few users interacting with them.

Recently, I tried the self-hosted version of UI Bakery. What I liked is that it runs within my infrastructure, connects to my database and APIs, and still provides a user interface that isn’t intimidating for non-technical users. The new OpenAPI support in their AI mode was a great bonus since many of our projects already have specs. It’s not perfect; there’s still a learning curve and some rough edges, but it feels less fragile than some of the other options I’ve tested.

I'm curious about what others are doing to tackle this issue.

If you need internal CRUD tools and small workflows for your team, what are you self-hosting?

Did you stay with tools like Retool, Appsmith, Budibase, NocoBase, or UI Bakery, or did you revert to custom code?

Do you have any horror stories about granting the wrong person access to the wrong panel?

I’d love to hear some ideas from those who have advanced further along this path.

2
 
 

The original was posted on /r/selfhosted by /u/GroomedHedgehog on 2025-12-13 15:30:26+00:00.


I am currently self-hosting Gitea (maybe Nextcloud too in the future) and I would like to make it internet accessible without a VPN (I have a very sticky /56 IPv6 prefix so NAT is not a concern).

I'd like to ask more experienced people than me about dangers I should be aware of in doing so.

My setup is as such:

  • Gitea is running containerized in k3s Kubernetes, with access to its own PV/PVC only
  • The VMs acting as Kubernetes nodes are in their own DMZ VLAN. The firewall only allows connections from that VLAN to the internet or to another VLAN for the HTTP/HTTPS/LDAPS ports.
  • For authentication, I am using oauth2-proxy as a router middleware for the Traefik ingress. Unauthenticated requests are redirected to my single sign-on endpoint
  • Dex acts as the OpenID Connect IdP, and oauth2-proxy is configured as an OpenID Connect client for it
  • My user accounts are stored in Active Directory (Samba), with the Domain Controllers in another VLAN. Dex (which has its own service account with standard user privileges) connects to them over LDAPS and allows users to sign in with their AD username/passwords. There should be no way to create or modify user accounts from the web.
  • All services are run over HTTPS with trusted certificates (private root CA that is added to clients' trust stores) under a registered public domain. I use cert-manager to request short lived certs (24 hours) from my internal step-ca instance (in the same VLAN as the DCs and also separate from the Kubernetes nodes by a firewall) via ACME.
  • All my VMs (Kubernetes nodes, cert authorities, domain controllers) are Linux based, with root as the only user and the default PermitRootLogin prohibit-password unchanged
  • I automate as much as possible, using Terraform + Cloud-Init for provisioning VMs and LXC containers on the Proxmox cluster that hosts the whole infrastructure and Ansible for configuration. Everything is version controlled and I avoid doing stuff ad hoc on VMs/LXC Containers - if things get too out of hand I delete and rebuild from scratch ("cattle, not pets").
  • My client devices are on yet another VLAN, separate from the DMZ and the one with the domain controllers and cert authorities.

If I decided to go forward with this plan, I'd allow inbound WAN connections on ports 22/80/443 only to the Kubernetes Traefik ingress IP and add global DNS entries pointing to that address as needed. SSH access would only be allowed to Gitea for Git and nothing else.
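For context, the oauth2-proxy-as-middleware pattern described above is typically wired up with a Traefik forwardAuth middleware. A minimal sketch follows; the service name, namespace, and header list are illustrative, not the poster's actual manifests:

```yaml
# Illustrative Traefik Middleware sending unauthenticated requests to oauth2-proxy.
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: oauth2-proxy-auth
spec:
  forwardAuth:
    # oauth2-proxy's auth endpoint; 4180 is its default port
    address: http://oauth2-proxy.auth.svc.cluster.local:4180/oauth2/auth
    trustForwardHeader: true
    authResponseHeaders:
      - X-Auth-Request-User
      - X-Auth-Request-Email
```

The ingress route then references this middleware, so every request is checked against oauth2-proxy (and, via Dex, against AD) before reaching Gitea.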

3
 
 

The original was posted on /r/selfhosted by /u/fozid on 2025-12-13 13:51:03+00:00.


Overview

Been running this setup for about a year now, although a couple of services have been added in that time. Everything works really well and needs minimal maintenance, as everything is fully automated with scripts. The only manual thing is updates, since I like to do them when I have enough time in case something breaks.

Hardware

Server 1

Trycoo / Peladn mini PC

  • Intel N97 CPU
  • Integrated GPU
  • 32 GB of 3200 MT/s DDR4 (upgraded from 16 GB)
  • 512 GB NVMe
  • 2x 2 TB SSDs (RAID 1 + LVM)
    • StarTech USB-to-SATA cable
    • Atolla 6-port powered USB 3.0 splitter
  • 2x 8 TB HDDs
    • 2-bay USB 3.0 Fideco dock
    • Each 8 TB HDD is split into 2 equal-size partitions, making 4x 4 TB partitions
    • Each night, the 2 TB SSD array backs up to the alternating first partition of the HDDs.
    • On the 1st of each month, the 2 TB SSD array backs up to the alternating second partition of the HDDs.
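The alternating-rotation logic above can be sketched as a tiny script. This is a hypothetical illustration, not the author's actual backup script, and the mount points are invented:

```shell
#!/bin/sh
# Hypothetical sketch of the nightly alternating backup described above.
# Mount points are invented for illustration.

choose_dest() {
    # $1 = day of year; even days target disk 1's first partition,
    # odd days disk 2's, so nightly backups alternate between the two HDDs.
    if [ $(( $1 % 2 )) -eq 0 ]; then
        echo /mnt/hdd1-part1
    else
        echo /mnt/hdd2-part1
    fi
}

# Strip leading zeros from %j so the arithmetic isn't parsed as octal.
DEST=$(choose_dest "$(date +%j | sed 's/^0*//')")
echo "would run: rsync -a --delete /mnt/ssd-array/ $DEST/"
```

The monthly job is the same idea pointed at the second partition pair.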

Server 2

Raspberry Pi 4B

  • 32 GB SD card
  • 4 GB RAM

Services

Server 1

  • Nginx web server / reverse proxy
  • Fail2ban
  • Crowdsec
  • Immich
    • Google Photos replacement
    • External libraries only
    • 4 users
  • Navidrome
    • Spotify replacement
    • 2 users
  • AdGuard Home
    • 1st instance
    • Provides network-wide DNS filtering and a DHCP server
  • Unbound
    • Provides recursive DNS
  • Go-notes
    • Rich-text, live, real-time multi-user notes app
  • Go-llama
    • LLM chat UI / Orchestrator - aimed at low end hardware
  • llama.cpp
    • GPT-OSS-20B
    • Exaone-4.0-1.2B
    • LFM2-8B-A1B
  • Transmission
    • Torrent client
  • PIA VPN
    • Network Namespace script to isolate PIA & Transmission
  • SearXNG
    • Meta search engine - integrates with Go-llama
  • StirlingPDF
    • PDF editor
  • File Browser
    • The project is in maintenance mode only, so I am planning to migrate to File Browser Quantum soon
  • Syncthing
    • Syncs 3 Android and 1 Apple phone for Immich
  • Custom rsync backup script
  • Darkstat
    • Real-time network statistics
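For reference, the AdGuard Home + Unbound pairing in the list above usually means running Unbound as a local recursive resolver and pointing AdGuard's upstream DNS at it. A minimal sketch follows; the port and options are the common convention, not necessarily this setup's actual values:

```yaml
# unbound.conf fragment (illustrative): local recursive resolver that
# AdGuard Home forwards to, e.g. upstream "127.0.0.1:5335" in the AdGuard UI.
server:
  interface: 127.0.0.1
  port: 5335
  do-ip6: no
  harden-glue: yes
  prefetch: yes
```

AdGuard then handles filtering and DHCP while Unbound resolves directly from the root servers instead of forwarding to a third-party resolver.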

Server 2

  • Fail2ban
  • Crowdsec
  • Honeygain
    • Generates a tiny passive income
    • I'm UK based and in the last 6 months it has produced £15
  • AdGuard Home
    • 2nd instance
    • Provides network-wide DNS filtering and a DHCP server
  • Unbound
    • Provides recursive DNS
  • Custom DDNS update script
4
 
 

The original was posted on /r/selfhosted by /u/Stuwik on 2025-12-13 13:52:11+00:00.


I’ve been setting up proper proxying and authentication for my self-hosted home services, and I landed on PocketID as the OIDC provider and primary authentication, with TinyAuth as middleware for unsupported services and LLDAP in the middle for user management. It got me thinking about password management, however: when will the users ever actually need to know or use their LLDAP passwords?

To enroll a new user I will add them to LLDAP with a generated password, sync with PocketID, and then send a token invite for PocketID to them. After this they should never need anything other than their passkey, since authentication for all services should just happen automatically in the background, right? This means that they shouldn’t need access to the LLDAP web UI.

I just want someone to confirm that my thinking is correct or tell me if I’m missing something.

5
 
 

The original was posted on /r/selfhosted by /u/Old_Rock_9457 on 2025-12-13 15:06:30+00:00.


Hi everyone,

I’m happy to announce that AudioMuse-AI v0.8.0 is finally out, and this time as a stable release.

This journey started back in May 2025. While talking with u/anultravioletaurora, the developer of Jellify, I casually said: “It would be nice to automatically create playlists.”

Then I thought: instead of asking and waiting, why not try to build a Minimum Viable Product myself?

That’s how the first version was born: based on Essentia and TensorFlow, with audio analysis and clustering at its core. My old machine-learning background in normalization, standardization, evolutionary methods, and clustering algorithms became the foundation. On top of that, I spent months researching, experimenting, and refining the approach.

But the journey didn’t stop there.

With the help of u/Chaphasilor, we asked ourselves: “Why not use the same data to start from one song and find similar ones?”

From that idea, Similar Songs was born. Then came Song Path, Song Alchemy, and Sonic Fingerprint.

At this point, we were deeply exploring how a high-dimensional embedding space (200 dimensions) could be navigated to generate truly meaningful playlists based on sonic characteristics, not just metadata.

The Music Map may look like a “nice to have”, but it was actually a crucial step: a way to visually represent all those numbers and relationships we had been working with from the beginning.

Later, we developed Instant Playlist with AI.

Initially, the idea was simple: an AI acting as an expert that directly suggests song titles and artists. Over time, this evolved into something more interesting: an AI that understands the user’s request, then retrieves music by orchestrating existing features as tools. This concept aligns closely with what is now known as the Model Context Protocol.

Every single feature followed the same principles:

  • What is actually useful for the user?
  • How can we make it run on a homelab, even on low-end CPUs or ARM devices?

I know the “-AI” in the name can scare people who are understandably skeptical about AI. But AudioMuse-AI is not “just AI”.

It’s machine learning, research, experimentation, and study.

It’s a free and open-source project, grounded in university-level research and built through more than six months of continuous work.

And now, with v0.8.0, we’re introducing Text Search.

This feature is based on the CLAP model, which can represent text and audio in the same embedding space.

What does that mean?

It means you can search for music using text.

It works especially well with short queries (1–3 words), such as:

  • Genres: Rock, Pop, Jazz, etc.
  • Moods: Energetic, relaxed, romantic, sad, and more
  • Instruments: Guitar, piano, saxophone, ukulele, and beyond

So you can search for things like:

  • Calm piano
  • Energetic pop with female vocals

If this resonates with you, take a look at AudioMuse-AI on GitHub: https://github.com/NeptuneHub/AudioMuse-AI

We don’t ask for money, only for feedback, and maybe a ⭐ on the repository if you like the project.

6
 
 

The original was posted on /r/selfhosted by /u/drome691 on 2025-12-13 11:11:44+00:00.


I want something self-hosted-ish but still safe if my house burns down. What setups are people using? Remote server? Family member’s house? Something else?

7
 
 

The original was posted on /r/selfhosted by /u/egehancry on 2025-12-13 10:59:59+00:00.

8
 
 

The original was posted on /r/selfhosted by /u/cvicpp on 2025-12-13 09:54:08+00:00.


.: What is Tududi? :.

Tududi is a self-hosted life manager that organizes everything into Areas → Projects → Tasks, with rich notes and tags on top. It’s built for people who want a calm, opinionated system they fully own:

• Clear hierarchy for work, personal, health, learning, etc.

• Smart recurring tasks and subtasks for real-world routines

• Rich notes next to your projects and tasks

• Runs on your own server or NAS – your data, your rules

What’s new in v0.88.0

Task attachments!!!

• Now you can add your files to a task and preview them. Works great with images and PDFs

https://preview.redd.it/mmy7r2eo1y6g1.png?width=3300&format=png&auto=webp&s=0809a06ca00984b9d6ba5d8cc8334032bc229a0c

Inbox flow for fast capture

• New Inbox flow so you can quickly dump tasks and process them later into the right area/project.

• Designed to reduce friction when ideas/tasks appear in the middle of your day.

https://preview.redd.it/ufwte4dp1y6g1.png?width=3296&format=png&auto=webp&s=8664099a6290f2e1a5a78b3b25618f9bf6c69131

https://preview.redd.it/7nsbtucp1y6g1.png?width=3300&format=png&auto=webp&s=a2b19ba160fc661399579b07951c9630236866bf

Smarter Telegram experience

• New Telegram notifications – get nudges and updates (and enable them individually in profile settings) where you already hang out.

• Improved Telegram processing so it’s more reliable and less noisy.

Better review & navigation

Refactored task details for a cleaner, more readable layout.

Universal filter on tag details page – slice tasks/notes by tag with more control.

Reliability & polish

• Healthcheck command fixes for better monitoring (works properly with 127.0.0.1 + array syntax).

• Locale fixes, notification read counter fixes, and an API keys issue resolved.

• Better mobile layout in profile/settings.

• A bunch of small bug fixes and wording cleanups in the Productivity Assistant.

🧑‍🤝‍🧑 Community.

New contributors this release: u/JustAmply, u/r-sargento – welcome and thank you!

⭐ If you self-host Tududi and like where it’s going, consider starring the repo or sharing some screenshots of your setup.

🔗 Release notes: https://github.com/chrisvel/tududi/releases/tag/v0.88.0.

🔗 Website / docs: https://tududi.com/.

💬 Feedback, bugs, or ideas? Drop them in #feedback or open an issue on GitHub.

9
 
 

The original was posted on /r/selfhosted by /u/TechHutTV on 2025-12-13 07:02:18+00:00.


The vast majority of people with a smartphone are, by default, uploading their most personal pictures to Google, Apple, Amazon, whoever. I firmly believe companies like these don't need my photos. You can keep that data yourself, and Immich makes it genuinely easy to do so.

We're going through the entire Docker Compose stack using Portainer, enabling hardware acceleration for machine learning, configuring all the settings I actually recommend changing, and setting up secure remote access so you can back up photos from anywhere.

Why Immich Over the Alternatives

Two things make Immich stand out from other self-hosted photo solutions. First is the feature set: it's remarkably close to what you get from the big cloud providers. You've got a world map with photo locations, a timeline view, face recognition that actually works, albums, sharing capabilities, video transcoding, and smart search. It's incredibly feature-rich software.

Immich features

Second is the mobile app. Most of those features are accessible right from your phone, and the automatic backup from your camera roll works great. Combining it with NetBird makes backing up your images quick and secure with WireGuard working for us in the background.

Immich hit stable v2.0 back in October 2025, so the days of "it's still in beta" warnings are behind us. The development pace remains aggressive with updates rolling out regularly, but the core is solid.

Hardware Considerations

I'm not going to spend too much time on hardware specifics because setups vary wildly. For some of the machine learning features, you might want a GPU or at least an Intel processor with Quick Sync. But honestly, those features aren't strictly necessary. For most of us CPU transcoding will be fine.

The main consideration is storage. How much media are you actually going to put on this thing? In my setup, all my personal media sits around 300GB, but with additional family members on the server, everything totals just about a terabyte. On top of that, we need room to grow, so plan accordingly.

For reference, my VM runs with 4 cores and 8GB of RAM. The database needs to live on an SSD; this isn't optional. Network shares for the PostgreSQL database will cause corruption and data loss. Your actual photos can live on spinning rust or a NAS share, but keep the database on local SSD storage.

Setting Up Ubuntu Server

I'm doing this on Ubuntu Server running as a VM on Unraid. You don't have to use Unraid, as TrueNAS, Proxmox, and other solutions work great, or you can install Ubuntu directly on hardware. The process is close to the same regardless.

If you're installing fresh, grab the Ubuntu Server ISO and flash it with Etcher or Rufus depending on your OS. During installation, I typically skip the LVM group option and go with standard partition schemes. There's documentation on LVM if you want to read more about it, but I've never found it necessary for this use case.

The one thing you absolutely want to enable during setup is the OpenSSH server. Skip all the snap packages, we don't need them.

Once you're booted in, set a static IP through your router. Check your current IP with:

ip a

Then navigate to your router's admin panel and assign a fixed IP to this machine or VM. How you do this varies by router, so check your manual if needed. I set mine to immich.lan for convenience.

First order of business on any fresh Linux install is to update everything:

sudo apt update && sudo apt upgrade -y

Installing Docker

Docker's official documentation has a convenience script that handles everything. SSH into your server and run:

curl -fsSL https://get.docker.com/ -o get-docker.sh
sudo sh get-docker.sh

This installs Docker, Docker Compose, and all the dependencies. Next, add your user to the docker group so you don't need sudo for every command:

sudo usermod -aG docker $USER
newgrp docker

Installing Portainer

Note: Using Portainer is optional; it's a nice GUI that helps manage Docker containers. If you prefer using Docker Compose from the command line or other installation methods, check out the Immich docs for alternative approaches.

Portainer provides a web-based interface for managing Docker containers, which makes setting up and managing Immich much easier. First let's create our volume for the Portainer data.

docker volume create portainer_data

Spin up Portainer Community Edition:

docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest

Once Portainer is running, access the web interface at https://your-server-ip:9443/. You'll be prompted to create an admin account on first login. The self-signed certificate warning is normal, just proceed.

https://preview.redd.it/1e5q24j76x6g1.jpg?width=3840&format=pjpg&auto=webp&s=023c44345a2ff8e5591d2f9ea65deb326ae44e06

That's the bulk of the prerequisites handled.

The Docker Compose Setup

Immich recommends Docker Compose as the installation method, and I agree. We'll use Portainer's Stack feature to deploy Immich, which makes the process much more visual and easier to manage.

  1. In Portainer, go to Stacks in the left sidebar.
  2. Click on Add stack.
  3. Give the stack a name (e.g., immich), and select Web Editor as the build method.
  4. We need to get the docker-compose.yml file. Open a terminal and download it from the Immich releases page:

https://preview.redd.it/ph1uafov6x6g1.jpg?width=3840&format=pjpg&auto=webp&s=fa2db564e8f1ca62ccc547fc78fd3fbffc80866d

wget -O docker-compose.yml https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
cat docker-compose.yml

  5. Copy the entire contents of the docker-compose.yml file and paste it into Portainer's Web Editor.
  6. Important: In Portainer, you need to replace .env with stack.env for all containers that reference environment variables. Search for .env in the editor and replace it with stack.env.
  7. Now we need to set up the environment variables. Click on Advanced Mode in the Environment Variables section.
  8. Download the example environment file from the Immich releases page:

wget https://github.com/immich-app/immich/releases/latest/download/example.env
cat example.env

  9. Copy the entire contents of the example.env file and paste it into Portainer's environment variables editor or upload it directly.
  10. Switch back to Simple Mode and update the key variables:

https://preview.redd.it/mnqtp2jm6x6g1.jpg?width=3840&format=pjpg&auto=webp&s=07571a7db817c4a0ce44f9e1fbb30146a92dce98

The key variables to change:

  • DB_PASSWORD: Change this to something secure (alphanumeric only)
  • DB_DATA_LOCATION: Set to an absolute path where the database will be saved (e.g., /mnt/user/appdata/immich/postgres). This MUST be on SSD storage.
  • UPLOAD_LOCATION: Set to an absolute path where your photos will be stored (e.g., /mnt/user/images)
  • TZ: Set your timezone (e.g., America/Los_Angeles)
  • IMMICH_VERSION: Set to v2 for the latest stable version

For my setup, the upload location points to an Unraid share where my storage array lives. The database stays on local SSD storage. Adjust these paths for your environment.
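As a concrete illustration, the resulting environment values might look like the sketch below. The paths, password, and timezone are placeholders, not recommendations:

```shell
# Illustrative stack.env values; substitute your own paths and credentials.
# DB_DATA_LOCATION must be on local SSD storage; UPLOAD_LOCATION can be a share.
DB_PASSWORD=changeMe123
DB_DATA_LOCATION=/mnt/user/appdata/immich/postgres
UPLOAD_LOCATION=/mnt/user/images
TZ=America/Los_Angeles
IMMICH_VERSION=v2
```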

Enabling Hardware Acceleration

If you have Intel Quick Sync, an NVIDIA GPU, or AMD graphics, you can offload transcoding from the CPU. You'll need to download the hardware acceleration configs and merge them into your Portainer stack.

First, download the hardware acceleration files:

wget https://github.com/immich-app/immich/releases/latest/download/hwaccel.transcoding.yml
wget https://github.com/immich-app/immich/releases/latest/download/hwaccel.ml.yml

For transcoding acceleration, you'll need to edit the immich-server section in your Portainer stack. Find the immich-server service and add the extends block. For Intel Quick Sync:

immich-server:
  extends:
    file: hwaccel.transcoding.yml
    service: quicksync  # or nvenc, vaapi, rkmpp depending on your hardware

However, since Portainer uses a single compose file, you'll need to either:

  1. Copy the relevant device mappings and environment variables from hwaccel.transcoding.yml directly into your stack, or
  2. Use Portainer's file-based compose method if you have the files on disk

For machine learning acceleration with Intel, update the immich-machine-learning service image to use the OpenVINO variant:

immich-machine-learning:
  image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-openvino

And add the device mappings from hwaccel.ml.yml for the openvino service directly into the stack.
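Inlined into the stack, the result might look roughly like the sketch below. Treat the device list as an assumption and verify it against the hwaccel.ml.yml shipped with your Immich release:

```yaml
# Rough sketch of option 1: OpenVINO image plus device mappings inlined.
# Check hwaccel.ml.yml from your Immich release for the authoritative list.
immich-machine-learning:
  image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-openvino
  devices:
    - /dev/dri:/dev/dri    # Intel iGPU render/card nodes
```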

If you're on Proxmo...


Content cut off. Read original on https://old.reddit.com/r/selfhosted/comments/1plf8aa/self_hosted_immich_and_netbird_for_full_control/

10
 
 

The original was posted on /r/selfhosted by /u/Ordinary_Ad8756 on 2025-12-12 23:13:49+00:00.


Not sure who posted about it originally, but I wanted to give a huge shout-out and thank you! I saw a post mentioning Lube Logger a while ago, checked it out, and just finished using it to log my recent maintenance.

Website: https://lubelogger.com/

It's self-hosted, open-source, and exactly what I needed to track maintenance on multiple vehicles (and tractors!).

The setup was simple, and the interface is incredibly easy to use. I just logged two oil changes, which saved me about $60 compared to the shop quote, and now I have a perfect digital record in my own hands. I'm already looking forward to setting up QR codes for quick logging and eventually tracking fuel use.

If you're looking for a simple, self-hosted solution for vehicle records/fuel tracking, definitely check it out.

11
 
 

The original was posted on /r/selfhosted by /u/The_Food_Scientist on 2025-12-12 17:56:52+00:00.


My family has a small warehouse with 3 workers. Recently the law in our country changed, and we need to present evidence of the times workers clocked in and out of their shifts. I would like to know if there are any self-hosted solutions so they can register their shifts from their phones. The simpler the better: ideally just a portal/app with a clock-in/clock-out button, plus an option to correct an entry in case they forget some day. I just need to download a CSV or Excel sheet with the date, time, and user.

Thanks in advance

12
 
 

The original was posted on /r/selfhosted by /u/Loud_Distribution_60 on 2025-12-12 20:02:25+00:00.

13
 
 

The original was posted on /r/selfhosted by /u/atomwide on 2025-12-12 17:27:46+00:00.


Modern servers are incredibly powerful and reliable. For most workloads, a single well-configured server with Docker Compose or single-node Kubernetes can get you 99.99% of the way there - at a fraction of the cloud cost.

14
 
 

The original was posted on /r/selfhosted by /u/trailbaseio on 2025-12-12 16:49:55+00:00.

15
 
 

The original was posted on /r/selfhosted by /u/Mag37 on 2025-12-12 13:25:26+00:00.


I had the honor of writing an article at selfh.st - and as mentioned there a new version has slowly been in the works for a few weeks and is now released!

The release brings the new option -b N (or config BackupForDays=N) which enables backups and removes backups older then N days. The backups will be handled per container image and will be created (by retagging) just before pulling a new version.

This provide an easy way to roll back to previous image if a new update breaks.

It has been a while since I posted any news, so here's the last 6 months in brief:

  • Snooze function to notifications.
  • Added a function to print what files are sourced.
  • Home Assistant notification template added.
  • Improved search filtering, e.g. dockcheck -yp homer,dozzle.
  • More advanced control of notifications, multiple notification templates etc.
  • Label reworks
  • Option -R to skip recreation, allowing you to pull updates without applying them.
  • Plus a bunch of bugfixes.

Thanks to this community dockcheck keeps evolving! More features, more control, better handling. I'm so grateful that people give feedback and suggestions and help testing things.

16
 
 

The original was posted on /r/selfhosted by /u/Nicoledtg on 2025-12-12 10:10:08+00:00.


I was thinking about creating a self-hosted environment after reading about how a face seek-inspired system gets better through specific steps. I used to switch a lot of services at once, but the setup felt more stable when I divided them into smaller, independent components. Do you prefer to set everything up at once and make adjustments later, or do you build your stack piece by piece? I'm interested in learning how others maintain flexibility while avoiding needless complexity.

17
 
 

The original was posted on /r/selfhosted by /u/holey_shite on 2025-12-12 09:53:14+00:00.


I got my first Raspberry Pi during covid to run Home Assistant, which soon led to me learning about all the other cool stuff like Plex, the arrs, Docker, etc. I have learnt a lot about Linux, DevOps and open-source tools over the last few years.

I recently nuked everything and decided to start fresh because over time all of my stuff was a mess and making a small change sometimes meant hours of debugging and fixing things that I unintentionally broke. This time I decided to use IaC as much as possible (Although I am still learning).

Sharing my repository hoping it helps others and also that I get suggestions to improve this setup.

Anterra: N28M/anterra: Repository for Ansible and Terraform

I don't want to make this a wall of text but adding some explanations for decisions I made on this repo.

1. Cloudflare: I use Cloudflare for managing my domains as well as for DNS. I ended up taking my network down with no one being able to access the internet while playing with DNS, so I am sticking with Cloudflare until I am confident enough to self-host it. (Still don't really get recursive DNS.)

2. Bitwarden Secrets: being able to self-host Vaultwarden is great, but I don't trust myself enough to run my own password manager, especially when so much of my infrastructure now depends on it.

Note: This repo is definitely not beginner friendly but I am happy to try and help if anyone wants to try and set this up themselves.

Note about AI: I used Claude extensively to help me create playbooks and configs, but everything has been tested by me in my own home lab. I would still advise caution using this code.

Looking forward to reading what you guys think!

18
 
 

The original was posted on /r/selfhosted by /u/jsiwks on 2025-12-12 15:58:57+00:00.


Hello everyone, we are back with a BIG update!

TLDR; We built private VPN-based remote access into Pangolin with apps for Windows, Mac, and Linux. This functions similarly to Twingate and Cloudflare ZTNA – drop the Pangolin site connector in any network, define resources, give users and roles access, then connect privately.

Pangolin is an identity aware remote access platform. It enables access to resources anywhere via a web browser or privately with remote clients. Read about how it works and more in the docs.

NEW Private resources page of Pangolin showing resources for hosts with magic DNS aliases and CIDRs.

What's New?

We've built a zero-trust remote access VPN that lets you access private resources on sites running Pangolin’s network connector, Newt. Define specific hosts, or entire network ranges for users to access. Optionally set friendly “magic” DNS aliases for specific hosts.

Platform Support:

Once you install the client, log in with your Pangolin account and you'll get remote network access to resources you configure in the dashboard UI. Authentication uses Pangolin's existing infrastructure, so you can connect to your IdP and use your familiar login flow.

Android, iOS, and native Linux GUI apps are in the works and will probably be released early next year (2026).

Key Features

While still early (and in beta), we packed a lot into this feature. Here are some of the highlights:

  • User and role based access: Control which users and groups have access to each individual IP or subnet containing private resources.
  • Whole network access: Access anything on the site's network without setting up individual forwarding rules - everything is proxied out! You can even be connected to multiple CIDRs at the same time!
  • DNS aliases: Assign an internal domain name to a private IP address and access it using the alias when connected to the tunnel, like my-database.server1.internal.
  • Desktop clients: Native Windows and macOS GUI clients. Pangolin CLI for Linux (for now).
  • NAT traversal (holepunch): Under the right conditions, clients will connect directly to the Newt site without relaying through your Pangolin server.

How is this different from Tailscale/Netbird/ZeroTier/Netmaker?

These are great tools for building complex mesh overlay networks and doing remote access! Fundamentally, every node in the network can talk to every other node. This means you use ACLs to control this cross talk, and you address each peer by its overlay-IP on the network. They also require every node to run node software to be joined into the network.

With Pangolin, we have a more traditional hub-and-spoke VPN model where each site represents an entire network of resources clients can connect to. Clients don't talk to each other and there are no ACLs; rather, you give specific users and roles access to resources on the site’s network. Since each Pangolin site also acts as an intelligent relay, clients use familiar LAN-style addresses and can access any host in the addressable range of the connector.

All of these tools provide various levels of identity-based remote access, but Pangolin focuses on removing network complexity and simplifying remote access down to users, sites, and resources, instead of building out large mesh networks with ACLs.

More New Features

  • Analytics dashboard with graphs, charts, and world maps
  • Site credentials regeneration and rotation
  • Ability for server admins to generate password reset codes for users
  • Many UI enhancements

Release notes: https://github.com/fosrl/pangolin/releases/tag/1.13.0

⚠️ Security Notice

CVE-2025-55182 React2Shell: Please update to Pangolin 1.12.3+ to avoid critical RCE vulnerabilities in older versions!

19
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/gilles_vauvarin on 2025-12-12 15:23:17+00:00.


Hi,

In 2018, I got tired of filling up my web browser's bookmarks. It was a mess, not user-friendly for finding links, and difficult to share.

So I decided to bookmark my finds on a simple website with a small search engine. And I continue to add my discoveries to this site every day. It's useful for me, but also for others, since everything is public.

https://thewhale.cc/

I'll let you browse around—who knows, you might find a rare gem ;-)

Have fun!

20
 
 

The original was posted on /r/selfhosted by /u/mick285 on 2025-12-12 14:49:55+00:00.


I’ve got like 10 containers running now and I’m already losing track of what lives where. Do you guys use labels, dashboards, or some kind of internal wiki to keep things sane?

21
 
 

The original was posted on /r/selfhosted by /u/AnyHour9173 on 2025-12-12 00:34:48+00:00.


I'm not sure if this is the right place to ask and I'm kinda lost at the beginning with trying to find exactly what I need. When I tried to find this on my own, nothing seemed like exactly what I needed (or maybe it was and it just went over my head).

I'm a writer and really, I want a way to work on my books on one device and then have them synced to all my other devices automatically. That way I have safe backups and can pick up working on them from my laptop, tablet or desktop etc. I used to use Google Docs for this but started just using LibreOffice on my desktop.

Having my entire book on one computer is scary though, so for the last while I've just been periodically copying the file to an external SSD, but this system isn't really... great in a lot of ways. I'm a total newbie to all this, sorry if this is an obvious question.

22
 
 

The original was posted on /r/selfhosted by /u/Fab_Terminator on 2025-12-12 08:26:00+00:00.


I’ll be lying in bed or in the middle of work and suddenly think, “I should totally reorganize my entire homelab tonight.” Does this happen to everyone, or is my self-hosting brain just wired weirdly?

23
 
 

The original was posted on /r/selfhosted by /u/Hiryu on 2025-12-11 19:56:00+00:00.


Hey everyone! I’ve been working on a personal project for a while, and it’s finally at a point where I feel comfortable sharing it.

Parker is a self‑hosted comic book server for CBZ/CBR libraries. It focuses on speed, a clean UI, and a “filesystem is truth” approach — metadata is parsed directly from ComicInfo.xml inside archives.

I’ve been a longtime Kavita user, but I wanted to tailor certain things to work the way I prefer — so Parker grew out of that.
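
The “filesystem is truth” approach means there is no separate metadata store to drift out of sync with your files: whatever the archive says wins. As a rough illustration (not Parker's actual code — the helper name and field list are assumptions, following the common ComicRack-style ComicInfo.xml schema), reading that metadata straight out of a CBZ can look like this:

```python
import zipfile
import xml.etree.ElementTree as ET

def read_comicinfo(cbz_path):
    """Extract selected metadata fields from ComicInfo.xml inside a CBZ.

    Returns a dict of field -> text (None for missing fields),
    or None if the archive has no ComicInfo.xml at all.
    """
    with zipfile.ZipFile(cbz_path) as zf:
        # ComicInfo.xml may sit at the archive root or inside a subfolder
        name = next((n for n in zf.namelist()
                     if n.split("/")[-1].lower() == "comicinfo.xml"), None)
        if name is None:
            return None
        root = ET.fromstring(zf.read(name))
    # AlternateSeries / SeriesGroup are what the auto-generated
    # Reading Lists and Collections would key off
    fields = ("Series", "Number", "Title", "AlternateSeries", "SeriesGroup")
    return {f: root.findtext(f) for f in fields}
```

A scanner built this way just walks the library, calls something like this per archive, and rebuilds its index — deleting the database loses nothing.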

Highlights

  • Fast parallel scanning so large libraries import quickly
  • Netflix‑style home page with content rails (On Deck, Up Next, Smart Lists, Random Gems, Recently Updated)
  • Context‑aware Web Reader (series, volumes, reading lists, pull lists)
  • Manga mode, double‑page spreads with smart detection, swipe navigation, and zero‑latency page transitions
  • Smart Lists (saved searches that auto‑update)
  • User‑created Pull Lists with custom ordering
  • OPDS 1.2 support for external readers (Chunky, Panels, Tachiyomi, etc.)
  • Reports Dashboard (missing issues, duplicates, storage analysis, metadata health)
  • WebP transcoding for bandwidth savings
  • Multi‑user support with per‑library permissions
  • Auto‑generated Reading Lists and Collections from <AlternateSeries> and <SeriesGroup> metadata

Tech Stack

FastAPI, SQLAlchemy, Jinja2, Alpine.js, Tailwind, SQLite (WAL) with FTS5, Docker

Repository: https://github.com/parker-server/parker

It’s early but stable, and I’d love feedback from the self‑hosted crowd. If you try it out, let me know how it goes.

https://preview.redd.it/mx61nj8hrm6g1.png?width=1681&format=png&auto=webp&s=9333d304a6252897128b4b0cf34ae8b0ef99a126

https://preview.redd.it/xkd8cmohrm6g1.png?width=1676&format=png&auto=webp&s=443356a97b118a6f5d5e851574ae10aa5a645cab

https://preview.redd.it/bxsocymirm6g1.png?width=1653&format=png&auto=webp&s=d8fb4f725669b59d7c2729d4af513f25a23b3fbc

https://preview.redd.it/6rbh502jrm6g1.png?width=1608&format=png&auto=webp&s=ebafb01905168654b3fbacd809eafaaa92b81bde

https://preview.redd.it/8v2ynlfjrm6g1.png?width=1842&format=png&auto=webp&s=6383c0bcc206d7689018ec11dbbc1c6795f61e28

https://preview.redd.it/imv28tqjrm6g1.png?width=1657&format=png&auto=webp&s=329f59d849c615d61858ddd57d0e5582cf8df0ad

https://preview.redd.it/k7qhs79krm6g1.png?width=1555&format=png&auto=webp&s=c8566dcf68eadefe59556ee75d952e4fea214b76

24
 
 

The original was posted on /r/selfhosted by /u/karant_dev on 2025-12-11 14:58:55+00:00.


Hi everyone,

I'm a first-time Open Source maintainer, and I wanted to share a tool I built to scratch my own itch: AutoRedact.

The Problem: I constantly take screenshots for documentation or sharing, but I hate manually drawing boxes over IPs, email addresses, and secrets. I also didn't trust uploading those images to some random "free online redactor."

The Solution: AutoRedact runs entirely in your browser (or a self-hosted Docker container). It uses Tesseract.js (WASM) to OCR the image, finds sensitive strings via regex, and draws black boxes over them using the OCR bounding-box coordinates.
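
To give a feel for the pipeline, here's a minimal sketch of the "regex over OCR output" step — AutoRedact itself does this in TypeScript in the browser, so the patterns and helper below are illustrative assumptions, not its real source:

```python
import re

# Rough patterns for the kinds of strings the post mentions;
# real-world detectors would be stricter (e.g. octet range checks).
PATTERNS = {
    "ip": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def find_redactions(ocr_words):
    """ocr_words: list of (text, (x, y, w, h)) tuples from an OCR pass.

    Returns the bounding boxes whose text matches a sensitive pattern;
    the renderer would then paint a black rectangle over each box.
    """
    boxes = []
    for text, bbox in ocr_words:
        if any(p.search(text) for p in PATTERNS.values()):
            boxes.append(bbox)
    return boxes
```

Because the matching runs on the OCR word list rather than raw pixels, the black boxes land exactly where the recognized text sits in the image.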

Features:

🕵️‍♂️ Auto-Detection: IPs, Emails, Credit Cards, common API Keys.

🔒 Offline/Local: Your images never leave your machine.

🐳 Docker: docker run -p 8080:8080 karantdev/autoredact

📜 GPLv3: Free and open forever.

Tech Stack: React, Vite, Tesseract.js v6.

I'd love for you to give it a spin. It’s my first real OSS project (and first TS project), so feedback is welcome!

Repo: https://github.com/karant-dev/AutoRedact

Demo: https://autoredact.karant.dev/

Thanks!

25
 
 

The original was posted on /r/selfhosted by /u/pfthurley on 2025-12-11 19:57:34+00:00.


Hey everyone - just wanted to share something we released today that might be interesting to folks running their own AI infrastructure.

CopilotKit is an open-source framework (MIT licensed) for building agentic UIs - think Cursor-for-X tools, agent dashboards, or multi-step AI workflows that you can fully self-host and wire up to any backend or LLM you run locally.

CopilotKit v1.50 is now live, and it includes a major architectural cleanup that makes it much easier to build and self-host agentic applications on your own stack.

It's free, no lock-in, no required cloud, just a lightweight frontend framework you can wire up to whatever backend or LLM host you prefer.

What’s new in 1.50?

  • A cleaner internal architecture built around open protocols (AG-UI)
  • Full backwards compatibility — no breaking changes
  • Support for running UI/agent interactions on your own server
  • New developer interfaces that make it easier to integrate self-hosted LLMs
  • Persistence + threading + reconnection support (useful when running your own infra)
  • A new Inspector for debugging AG-UI events in real time

If you’re experimenting with agent frameworks (LangGraph, PydanticAI, CrewAI, Microsoft Agent Framework, etc.) and want to hook them up to a self-hosted frontend, this release was basically built for that.

Happy to answer questions or hear from anyone who’s tried building agentic UIs on their own stack.
