Self-Hosted Alternatives to Popular Services

201
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Drumstel97 on 2025-12-03 09:45:06+00:00.

202
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/BookHost on 2025-12-03 07:07:48+00:00.


Round 1 recap of my last post:

I counted 68 different credentials across my lab (23 Docker admin users, 18 static API keys, 27 human accounts). Got so fed up that I migrated everything possible to:

  • Single OIDC provider (Authentik, because I like pain)
  • Workload identities + short-lived certs via Spike (formerly Smallstep)
  • Forward auth on Traefik for anything that doesn’t speak OIDC natively

Result: literally one master password + certs that auto-expire every 4–8 h. Felt like ascending.

Then y’all showed up with the war crimes:

  • “1Password/KeePassXC master race. You never forget a password if it’s in the vault.”
  • “Local logins just work. Family accounts change once every five years.”
  • “The only thing your fancy OIDC setup guarantees is that YOU will break it at 3 a.m.”
  • “Half the *arrs and paperless and immich still don’t support OIDC without a paywall or a 400-line proxy hack.”
  • “If you’re offboarding family that often you need therapy, not Keycloak.”

…okay, that last one was fair.

So here’s the actual challenge for the password-manager maximalists and the “static credentials are fine” crowd:

Give me the killer argument why I should rip out Authentik + Spike + all the forward-auth nonsense and go back to:

  1. One shared 1Password/KeePassXC family vault (or separate vaults + emergency kit drama)
  2. Long-lived random passwords for every service
  3. Static API keys that never rotate because “if it ain’t broke”

Specific things I’m currently enjoying that you have to beat:

  • Family member creates their own account once, logs in with Google/Microsoft from phone/TV/browser, never asks me for a password again
  • If someone’s phone gets stolen (that has happened once), I just revoke their OIDC session in Authentik, no password changes anywhere
  • API keys are gone; everything uses mTLS certs that expire before breakfast
  • New service gets added → one line in Traefik middleware → done, no new credential
  • I can see exactly who logged into what and when (yes I’m that guy)
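For anyone wondering what the “one line in Traefik middleware” looks like in practice, here’s a minimal sketch using Authentik’s embedded-outpost forward-auth endpoint; the container names and the `paperless` router are placeholders, not taken from OP’s setup:

```yaml
# Illustrative docker-compose labels; "authentik" and "paperless" are assumed names
labels:
  - "traefik.http.middlewares.authentik.forwardauth.address=http://authentik:9000/outpost.goauthentik.io/auth/traefik"
  - "traefik.http.middlewares.authentik.forwardauth.trustForwardHeader=true"
  - "traefik.http.middlewares.authentik.forwardauth.authResponseHeaders=X-authentik-username,X-authentik-groups"
  - "traefik.http.routers.paperless.middlewares=authentik@docker"
```

Once the middleware exists, protecting a new service really is one extra label on its router.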

Your move. Convince me the complexity budget isn’t worth it for a homelab that’s literally just me + wife + parents + sister. Make it technical, make it brutal, make it real.

Best argument gets gold and I’ll make a full “I was wrong” post with screenshots if I actually revert.

Current mental scoreboard:

Password manager gang — 1

OIDC cult — 0.5 (I’m coping)

(Paperless-ngx password reset PTSD still haunts me. Don’t @ me unless you’ve been there.)

203
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/bobbintb on 2025-12-02 22:50:29+00:00.


I have a problem. I run a media server for my family. They have the choice of using Plex, Emby, or Jellyfin. I'm trying to avoid simply buying more storage every time I run out of space, for a number of reasons. The issue I am facing is how to manage space. It's easy enough when it's just my data. There is stuff they request and could probably just delete afterwards. I know I could grant them permission to delete things that they request, which would be a half-way solution. But someone might be watching a show that someone else requested, so I don't want a situation where the requester deletes it before the other person who wants to watch it gets to. I don't know of any existing features in the media players that may help with this, or maybe even another tool. Right now I've just resorted to manually pruning things and asking in a group text if anyone wants me to keep it. Any suggestions are appreciated.
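One semi-automated middle ground, sketched here as an assumption rather than a tested recipe: Jellyfin's API can filter a user's library by played status, so a small script could build a pruning-candidate list containing only items every relevant user has finished. `$JF_URL`, `$API_KEY`, and `$USER_ID` are placeholders:

```shell
# List movies/series a given Jellyfin user has fully played (pruning candidates).
# Run once per user and intersect the lists before deleting anything.
curl -s "$JF_URL/Users/$USER_ID/Items?Recursive=true&Filters=IsPlayed&IncludeItemTypes=Movie,Series" \
  -H "X-Emby-Token: $API_KEY" | jq -r '.Items[].Name'
```

Anything that shows up in every user's list is safe to ask about in the group text, or to prune outright.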

204
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Hot-Chemistry7557 on 2025-12-03 03:19:27+00:00.


Hey self-hosters!

It has been quite a while since YAMLResume's last update.

I'm excited to share YAMLResume v0.8, a significant milestone in the journey to make "Resume as Code" the standard for developers.

If this is your first time here: YAMLResume allows you to craft resumes in a clean, version-controlled YAML format and compile them into beautifully typeset, pixel-perfect PDFs. No more fighting with Word formatting or proprietary online builders. You own your data.

What's New in v0.8?

The big shift in this version is the introduction of Multiple Layouts. Previously, the pipeline was linear (YAML -> PDF). Now, a single build command can produce multiple artifacts simultaneously.

1. Markdown Output Support

We've added a first-class Markdown engine. Why?

  • LLM Optimization: PDF is great for humans, but bad for AI. You can now feed the generated resume.md directly into ChatGPT/Claude to tailor your resume for specific job descriptions or critique your summary.
  • Web Integration: Drop the generated Markdown file directly into your Hugo, Jekyll, or Next.js personal site/portfolio.
  • Git Diffs: Track changes to your resume content in plain text, making peer reviews in Pull Requests much easier than diffing binary PDFs.

2. Flexible Configuration

You can now define multiple outputs in your resume.yml. For example, generate a formal PDF for applications and a Markdown file for your website in one go:

layouts:
  - engine: latex
    template: moderncv-banking
  - engine: markdown

Quick Demo

You can see the new workflow in action here: https://asciinema.org/a/759578

YAMLResume Markdown output

How to try it

If you have Node.js installed:

npm install -g yamlresume
# or
brew install yamlresume

# Generate a boilerplate
yamlresume new my-resume.yml

# Build PDF and Markdown simultaneously
yamlresume build my-resume.yml

What's Next?

We are working on a native HTML layout engine. Imagine generating a fully responsive, SEO-optimized standalone HTML file that looks as good as the PDF but is native to the browser—perfect for hosting on your self-hosted infrastructure or GitHub Pages.

I'd love to hear your feedback!

Links:

205
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/supz_k on 2025-12-03 02:17:17+00:00.


Hey r/selfhosted

We released Hyvor Relay on Monday after working on it for almost a year. We took on the challenge of building our own email delivery platform. We made it open-source under AGPLv3 and easily self-hostable using Docker Compose or Swarm.

Why we built it

We were working on Hyvor Post, a privacy-first newsletter platform, and wanted a cost-effective email API without any tracking features. We could not find one and decided to build our own.

Self-hosting email?

Yes, we know the cliché. Hyvor Relay helps with the deliverability problem in a few ways:

  • Automates DKIM, SPF, and other DNS records (except PTR). Instead of managing DNS records manually, you delegate them to the built-in DNS server, which takes care of everything dynamically.
  • Automatic DNSBL querying to get notified if any of the sending IPs are listed on them
  • Many other health checks to ensure everything is correctly configured
  • Ability to easily configure multiple servers and fallback IP addresses
  • Extensive documentation for help
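To make the first bullet concrete, these are roughly the kinds of records involved; the domain, selector, key, and IP below are made-up illustrations, since Relay's built-in DNS server serves the real values for you:

```
; SPF: which IPs may send mail for the domain
mail.example.com.                 TXT "v=spf1 ip4:203.0.113.7 -all"
; DKIM: public key used to verify message signatures
sel1._domainkey.mail.example.com. TXT "v=DKIM1; k=rsa; p=<public-key>"
; DMARC: policy for messages failing SPF/DKIM checks
_dmarc.mail.example.com.          TXT "v=DMARC1; p=quarantine"
```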

Tech Stack

  • Symfony for the API
  • Go for SMTP and DNS servers, email and webhook workers
  • SvelteKit and Hyvor Design System for the frontend
  • PostgreSQL for database & queue

Future Plans

  • Incoming mail routing (Email to HTTP)
  • Dedicated IPs / queues
  • Cloud public release next year

Links

We would absolutely love to hear what you think!

206
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/kikootwo on 2025-12-02 23:12:35+00:00.


Hello!

For Context - Here's the initial teaser post

ReadMeABook is getting very close to being done with MVP and I am looking for a couple of savvy users who are using my same media stack to test things out, look for bugs, and provide overall user feedback.

Specific requirements (based on MVP limitations):

  • Plex Audiobook Library
  • Preferably Audnexus metadata management in plex
  • English (other Audible regions not currently supported)
  • qBittorrent as downloading backend (torrent only)
  • Prowlarr indexer management

Some key features added since the last post:

  • BookDate - AI Powered (Claude/OpenAI) book suggestions using your existing library and/or how you rated your library to drive compelling suggestions
  • Managed user account support in plex
  • Cleaned up UI all over the place
  • Interactive search supported for unfound audiobooks
  • Fully hand-held setup with interactive wizard
  • Metadata tagging of audio files (to help plex match)

Some things I know you guys want, but aren't here yet:

  • Audiobookshelf support
  • Usenet support
  • Non-audible results in search and recommended
  • Non-english support

Here's a video sample of walking through the setup wizard

Here's a video of some general use, similar to the last post

If you meet the above requirements and are interested in participating, comment below and let me know!

207
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/psxndc on 2025-12-02 22:13:57+00:00.


Hey there. I don't run much myself, really only FreshRSS, Kavita/Calibre, a couple old websites for my family members, and Trilium-Next.

I've been seeing a lot of comments here lately that effectively say "nothing you host should be publicly visible; put everything behind a tunnel/Tailscale." And I could see retiring the websites for my family (they aren't really used) and doing that for every other service - I don't really need Calibre or Trilium-Next unless I'm at home. But FreshRSS is a different matter. I have that open at work all day and check stuff when I have downtime.

What do folks do for services that they use *all the time*? Just always have a Tailscale connection going? Or is there a better way to access it?

Or is it really not that bad to have a service publicly visible? I don't trust myself to securely lock down a server, which is why I'm thinking I need to pull it from being publicly visible. Thanks.

Edit/Update - I'll look into Cloudflare tunnels. I (maybe naively) thought it was the same thing as a Tailscale connection I had to manually spin up every time, so I hadn't dug into them.

208
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/FloTec09 on 2025-12-02 17:41:14+00:00.

209
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/IliasHad on 2025-12-02 12:28:53+00:00.


Hey r/selfhosted!

A month ago, I shared my personal project here - my self-hosted alternative to Google's Video Intelligence API. The response was absolutely incredible: 1.5K+ upvotes here and 800+ GitHub stars.

Thank you so much for the amazing support and feedback!

Here's what's new:

🐳 Docker Support (Finally!)

The #1 requested feature is here. Edit Mind now runs in Docker with a simple docker-compose up --build:

  • Pre-configured Python environment with all ML dependencies
  • Persistent storage for your analysis data
  • Cross-platform compatibility (tested on macOS)
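For reference, a compose file for this kind of setup might look like the sketch below; the service name and volume path are assumptions for illustration, not the project's actual file:

```yaml
# Illustrative only; check the repo's own docker-compose.yml for real values
services:
  edit-mind:
    build: .                 # pre-configured Python/ML environment
    volumes:
      - ./data:/app/data     # persistent analysis data survives rebuilds
```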

Immich Integration

This was another highly requested feature - you can now:

  • Connect Edit Mind directly to your Immich library
  • Pull face images and their label names
  • Use Immich face labels for Edit Mind's face recognition feature

Other Improvements Based on Your Feedback

  • Multi-LLM support improved: you can use Gemini or a local LLM for NLP (converting your words into a vector DB search query)
  • UI refinements: Dark mode improvements, progress indicators, face management interface

📺 Demo Video (Updated + a bonus feature)

I've created a new video showcasing the Docker setup and Immich integration: https://youtu.be/YrVaJ33qmtg

💬 I Need Your Help

As this moves from "weekend project" to something people actually use:

  1. Docker testers needed: Especially on different hardware configurations
  2. Immich integration feedback: What works? What breaks? What's missing?
  3. Feature priorities: What should I focus on next?
  4. Documentation: What's confusing? What needs better explanation?

🙏 A Genuine Thank You

I built this out of frustration with my own 2TB video library. I never expected this level of interest. Your feedback, bug reports, and encouragement have been incredible.

Special shoutouts to:

  • Everyone who opened GitHub issues with detailed bug reports
  • Those who tested on exotic hardware configurations
  • Those who upvoted or shared their feedback and support over the comments
  • Those who shared the project with other people

This is still very much a work in progress, but it's getting better because of this community. Keep the feedback coming!

210
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/bit-voyage on 2025-12-02 05:23:36+00:00.


Please correct me if my understanding at any stage is incorrect.

I’ve been learning how Cloudflare’s proxy (orange cloud) works, and a friend mentioned that Cloudflare actually terminates TLS at their edge, so I looked into my setup a bit more. This makes sense, but it means all traffic is completely unencrypted from Cloudflare’s point of view: any cookies, headers, or passwords your users send from the client are readable in plain text by Cloudflare as the DNS proxy. After this it is re-encrypted by Cloudflare. This is fine by design, but I feel that others may have been under the impression that TLS meant end-to-end encryption for them.

For my admin services I require mTLS and VPN, but for friends/family I still want something easy like HTTPS and passkeys.

I have been running an alternate solution for some time and would like to get thoughts and opinions on the following


First I will outline my requirements:

  • Hidden public IP; access via HTTPS externally (no VPN for the client)
    • (Passkeys + HTTPS should be enough)
  • No port opening on the home router.

The proposal to be audited:

(VPS-A) Trusted VPS:

  • Caddy L4 TLS Passthrough
  • Wireguard Tunnel to VM-B:443

(VM-B) Proxmox Alpine VM in Segregated VLAN:

  • Caddy TLS Termination
  • Reverse proxy to Authentik

(VM-C) Authentik:

  • Authorise and proxy to App (Jellyfin, Immich etc)

Flow: DNS -> VPS Public IP -> Wireguard Tunnel 443 TLS passthrough -> VM-B Caddy TLS Certs -> VM-C Authentik -> VM-D Jellyfin etc
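For context, the TLS passthrough piece on the VPS can be expressed with the caddy-l4 plugin's JSON config (hence the custom xcaddy build), roughly like this sketch; the SNI name and WireGuard-peer upstream address are placeholders:

```json
{
  "apps": {
    "layer4": {
      "servers": {
        "passthrough": {
          "listen": [":443"],
          "routes": [{
            "match": [{ "tls": { "sni": ["media.example.com"] } }],
            "handle": [{
              "handler": "proxy",
              "upstreams": [{ "dial": ["10.8.0.2:443"] }]
            }]
          }]
        }
      }
    }
  }
}
```

Because the VPS only matches on the SNI and forwards raw bytes, it never holds the certificate keys; termination happens on VM-B at home.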

Pros:

  • Hidden public IP - Zero ports open on home router
  • Complete TLS end-to-end encryption (No man in the middle [orange cloud])
  • Cloudflare can no longer inspect the traffic (passwords typed, cookies, headers passed)
  • I can now also use CGNAT network providers to expose services which was not possible before
  • I now have more granular control over caching images etc which Cloudflare was disallowing before for some reason... Even video stream chunks can be cached now that I am controlling the proxy.

Cons I can see:

  • VPS must be trusted party
  • Losing a bit of selfhosted control due to VPS (must trust **some** party but considering cloudflare is a US entity I am fine with outsourcing this to an offshore service like OrangeWebsite or Infomaniak).

What else would I be losing from moving away from CF proxy (orange cloud) on home lab services?

Do self hosting folks also use CF proxy and are fine with Cloudflare terminating TLS and thus being able to see all traffic unencrypted?

If there is enough interest in the comments I will be happy to do a detailed guide on how to get the VPS set up with a custom xcaddy build for TLS passthrough. I am also writing generic Ansible playbooks for both the L4 passthrough on the VPS and the TLS-terminating Caddy VM.

If I am missing something or could make this flow any more secure please comment.

211
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/chris_socal on 2025-12-02 01:22:59+00:00.


So if I understand correctly the purpose of a reverse proxy is to obfuscate your local network traffic while at the same time providing host names for services you wish to expose to the internet.

So let's say I set up a Caddy server and open ports 80 and 443 on my router. If a bad actor hits my IP, what will they see and what could they do?

As far as I know there have been no known public exploits of Caddy. However, the services behind the proxy must also be secure, and that is where I am having trouble understanding.

The simplest way I can ask this is: can a bad actor probe Caddy and find out what services it is hosting? Let's say I give all my services obscure names; would that make me almost un-hackable? Does the bad guy have to know the names of my services before trying to hack them?

212
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Mrowe101 on 2025-12-02 00:22:13+00:00.


Hi, chief dumbass here,

I bought a new router a while ago and instead of forwarding a single port I opened an entire machine to the internet. I was hosting Immich and then some web projects for testing. I had left the server to do its thing, not paying attention, for quite a while, and then I was alerted to everything being open when I created a Postgres DB with the default user/pass/port and saw my data instantly vanish.

I checked through my auth logs and could see many people/bots were trying to brute force their way into SSH but never succeeded because I had disabled password logins. Looked through my open connections nothing out of the ordinary, no crypto miners in top, nothing from rkhunter. Is there anything I should look for?

Should I wipe the machine completely?

213
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Impossible_Belt_7757 on 2025-12-01 18:53:07+00:00.


While waiting for tunarr to fix Jellyfin support I made a thing to auto-create TV channels for my Jellyfin Server

  • Simulate any decade of TV from your Jellyfin library
  • Docker

JellyfinTV

214
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/asciimoo on 2025-12-01 15:23:15+00:00.


I'm working on Omnom with the goal of being able to locally collect, store and categorize information from the internet making it always available in one place no matter what happens with the original sources. Currently the core functionality covers

  • Bookmark Creation with Website Snapshots: Save web pages along with static snapshots capturing their in-browser visual state, including dynamic content. Snapshots are searchable, comparable and downloadable as a single file.
  • Feed Aggregation: RSS and Atom feed reader.
  • ActivityPub Support: Integrate with the Fediverse by sharing your bookmarks or following and consuming content from ActivityPub-enabled platforms and users.
  • Unified Filtering: Allows for precise content retrieval through extensive filtering by date, free text search, tags, users, domains, URLs, and more.

The code is free (AGPLv3+), the whole project is packed into a single binary file for quick deployment.

It's still a work in progress and has some rough edges, but the core feature set is usable and hopefully some folks here can find it useful/interesting.

The code is available at https://github.com/asciimoo/omnom

A small read-only showcase instance: https://omnom.zone/

Longer description: https://omnom.zone/docs/

I'd highly appreciate any kind of feedback/advice/idea/feature request helping future development. <3

215
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/bits-hyd-throwaway on 2025-12-01 12:41:57+00:00.


Ferron is a fast and memory safe web server written in Rust. It supports automatic TLS via Let's Encrypt out of the box and uses the KDL configuration language for its configuration.

Ferron's reverse proxy performance is on par with NGINX without the difficult configuration which comes with NGINX. Ferron is available as a Docker container for easy deployment.

Github Link: https://github.com/ferronweb/ferron

216
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/ChiefAoki on 2025-12-01 16:49:53+00:00.


Good day, been a hot minute since I posted on this sub, we've been working on some shiny new features for LubeLogger which we think our userbase would really benefit from.

Inspections

First up, Inspections. This is pretty much a custom forms feature for your vehicle. You can create re-usable inspection forms for pretty much every aspect of your vehicle and create action items for failed inspections.

Documentation

Youtube Walkthrough

Household

One of the most requested features for LubeLogger is the ability to allow users to inherit vehicles in a garage and also to limit what actions they can perform for the vehicles. With the new Household feature, you no longer have to manually add a user to each vehicle and instead you can add them to your household once and they will automatically have access to all the vehicles in your garage. Household members can be assigned Viewer, Editor, or Manager roles.

Viewer has read-only permissions, Editor can Add and Edit records, and Manager can Add, Edit, and Delete records.

Documentation

AI

It's a controversial topic, we're well aware of that, which is why instead of adding AI directly into LubeLogger and asking for an API key, we have decided to create an MCP server for LubeLogger that you have to spin up separately; this will serve as a bridge between any AI agents capable of tool-calling and LubeLogger.

This integration allows you to add fuel records from receipts, odometer records from a picture of your dashboard, and even service/repair/upgrade records from invoices. Note that this MCP server is still in an experimental stage and is not considered stable whatsoever.

Youtube Walkthrough

GitHub Repository

Ending Notes

We know these changes might not seem huge compared to other projects, but we sincerely do believe that these are some key features that will reduce friction when it comes to user experience.

Anyways, if you've never heard of LubeLogger and you're looking to start logging lube, here's our details:

GitHub

Website

217
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/iamhereunderprotest on 2025-12-01 11:28:10+00:00.


I spent years deploying different services using NAS-IP:port number. I’ve heard about reverse proxies for a while, and have been worried about taking the next step.

Is deploying caddy as simple as launching another docker container, editing all the other docker compose files, and … pointing my router at caddy?
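If it helps to visualize the usual shape: Caddy joins the same Docker network as your services, the other containers stop publishing ports, your router forwards 80/443 to Caddy, and a Caddyfile maps hostnames to container names. A minimal sketch, where the hostnames are assumptions and the upstream names/ports must match your actual containers:

```
# Hypothetical hostnames; reverse_proxy targets are container-name:port
photos.example.com {
    reverse_proxy immich:2283
}
notes.example.com {
    reverse_proxy trilium:8080
}
```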

218
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Local-Comparison-One on 2025-12-01 12:25:56+00:00.


Hey r/selfhosted!

I've been working on Relaticle, a modern open-source CRM built with Laravel and Filament. After years of using various SaaS CRMs and being frustrated with data ownership concerns and subscription costs, I decided to build something that can be fully self-hosted.

Why I built this

  • Complete data ownership - your customer data stays on your servers
  • No per-seat pricing or usage limits
  • Full customization through custom fields
  • Modern tech stack that's easy to maintain

Tech Stack

  • Backend: Laravel 12, PHP 8.4
  • Frontend: Livewire 3, Alpine.js, TailwindCSS
  • Admin Panel: Filament 4
  • Database: PostgreSQL (recommended) or MySQL
  • Search: Meilisearch (optional)
  • Queue: Redis + Laravel Horizon

Features

  • Company & Contact management with relationship linking
  • Sales pipeline with custom stages
  • Task management with assignments and notifications
  • Notes system linked to any entity
  • AI-powered record summaries
  • Custom fields - add any field type to any entity
  • Multi-workspace support for teams
  • CSV import/export for data portability
  • Role-based permissions

Deployment

Works great with:

  • Docker / Docker Compose
  • Laravel Forge / Ploi
  • Any VPS with PHP 8.4+
  • Coolify, CapRover, or similar PaaS

Links

Would love to hear your feedback! What features would you want to see in a self-hosted CRM?

219
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Urittaja023984 on 2025-12-01 09:02:29+00:00.


So I've been running variations of my own stack for a long time, but have always avoided the great and terrible public Internet. This has meant local network only, WireGuard, getting frustrated with telling people how to WireGuard, switching to Tailscale so people can just "install app and connect", and so forth. My current setup is a home server (some old piece of office computing with a server motherboard I picked up cheap used) fitted with a 2TB SSD running Proxmox, where I host most of my services like a true pagan: a single VM with a single file of docker compose spaghetti, allocating 90% of the disk to that one VM.

This weekend, after yet another manual configuration session of doom with Nginx Proxy Manager, Pi-hole local DNS and Tailscale, I figured I'm tired of the GUI. Everything else in my stack is infrastructure as code (IaC), so why not the rest of it too. I'm also tired of logging into every service one by one, so I decided to knock out SSO at the same time, because why not (spoilers: it was not simple, should have guessed).

What resulted was half of my weekend spent configuring, tinkering, hitting my LLM usage maxes and a lot of RTFM moments, but in the end I can now happily report that the whole stack is now accessible from internet and behind some sweet, sweet SSO.

After a few tests I ended up going with Cloudflare (DNS + Tunnels) + Traefik + Authelia. I split my services into two groups: User facing software I want to be accessible from Internet directly and admin stuff only via Authelia. I figured because Jellyfin+Jellyseer work so nicely together, my users already have and know their credentials there and nobody except me really requires the SSO stuff for the underlying stack, I'll just keep those using their own auth and move myself alone to the SSO (and just use my own Jellyfin account like my users).

In the end the result was:

       Internet
           |
Cloudflare DNS + Tunnel
           |
          /\
  Authelia  Media (Jellyfin, Jellyseer, Wizarr)
     |
     |
Admin (Dashy, Glances, *arrs)

This way my users get an invite via Wizarr explaining everything (plus I get easy visibility and user management) and can connect to Jellyfin / Jellyseer with just my domain, no tricks required. Users use their basic Jellyfin account and auth for both Jellyfin and Jellyseer.

Authelia sits in front of all the admin stuff, making it easy for me to just handle the login there. For now I'm the only admin, so I figured I'd just use a local user in Authelia to log in.

Surprising amount of time was spent on:

  • Figuring out how to make the Cloudflare tunnel use HTTP and let Traefik handle HTTPS/SSL termination
  • Traefik-Authelia and required middleware
  • Making sure Cloudflare tunnel is not using caching for the Jellyfin. My understanding is that this is enough for the ToS, but would appreciate if anyone knows definitively.
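For anyone following along, the Traefik-Authelia middleware from the second bullet boils down to a forwardAuth middleware pointed at Authelia's verify endpoint. A rough sketch, where the container names, auth domain, and the protected `dashy` router are placeholders:

```yaml
# Illustrative docker-compose labels for the Authelia forward-auth middleware
labels:
  - "traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/verify?rd=https://auth.example.com"
  - "traefik.http.middlewares.authelia.forwardauth.trustForwardHeader=true"
  - "traefik.http.middlewares.authelia.forwardauth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Email,Remote-Name"
  - "traefik.http.routers.dashy.middlewares=authelia@docker"
```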

Anyways I wish I kept a better install journal, as there was a bazillion things I fixed on the way here, as the stack had been running for a while without intervention. I also set up UptimeRobot with integration to my Discord to ping me in case the media services aren't working.

Only thing left unsatisfactory in the stack was the Cloudflared docker container setup:

The Cloudflare panel GUI was even worse than nginx proxy manager, but fortunately they have API-access. Unfortunately I didn't get the Cloudflared docker container to be able to create the required tunnels itself and had to resort to a bash script that does it via the API. It works, but it's still half manual as it doesn't handle migrations and deletes, only does updates and requires update in the script in case my paths change. That's hopefully rare enough that it doesn't matter too much.
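As a sketch of what such an API-driven script can look like, assuming a remotely managed tunnel; the account/tunnel IDs, token, and hostnames are placeholders:

```shell
#!/bin/sh
# Replace the whole ingress config of a remotely managed Cloudflare tunnel.
# Note: this PUT overwrites the existing rules; it does not diff or migrate them,
# which matches the "updates only" limitation described above.
curl -sX PUT \
  "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/cfd_tunnel/$TUNNEL_ID/configurations" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"config":{"ingress":[
    {"hostname":"media.example.com","service":"http://traefik:80"},
    {"service":"http_status:404"}
  ]}}'
```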

I think I spent over 8 hours on this during the weekend (other obligations, so in one-to-two-hour increments) and I'm overall happy. Huge increase in requests, bots and crawling: my domain used to get 100 hits a month and now it's thousands per day, but that's ofc also because the applications themselves are much more request-heavy than what I used to host (only a static homesite that didn't get much traffic).

What surprised me was the lack of comprehensive guides for this. I'm still not sure if my stack is what you'd call "optimal", but at least it works for me and my users right now :)

220
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/BookHost on 2025-12-01 06:32:30+00:00.


Did a quick audit tonight:

23 Docker containers with their own admin users
18 services still using static API keys
27 human logins (me + family)

That’s sixty-eight ways this can break at 3 a.m.

Just migrated everything I could to workload identities + JIT certs + single OIDC provider for humans. Cut the list down to literally one master password + certs that expire before I wake up.

If you’ve ever cried while resetting a forgotten Paperless-ngx password at 2 a.m., you’ll get it. What’s your actual credential count right now? Be honest.

221
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Neither_Buy_7989 on 2025-12-01 01:10:40+00:00.

222
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/DeepanshKhurana on 2025-11-30 21:38:50+00:00.

223
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/One_Housing9619 on 2025-11-30 19:09:30+00:00.


So we all spent hours, if not days, creating and designing our perfect NAS or homelab, but a lot of us don't think about backups. I understand: it seems too complicated, too costly, etc. But hear me out. I'm not saying back up your whole server; just back up your app data folder (which contains the configuration of all your apps) plus the important data that would either take hours to set up again or is irreplaceable (like photos). You can install Duplicati or similar software, which even optimises your backups, and use cloud storage like Backblaze or whichever you like. Don't trust your hard disks; something can go wrong anytime.

PS: My last month's bill was ~₹10 (i.e. ~$0.1)
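If a CLI fits your workflow better than Duplicati's GUI, the same selective-backup idea can be sketched with restic; the repo location, password handling, and paths below are placeholders, and a `b2:bucket:path` repository targets Backblaze B2 instead of local disk:

```shell
# One-time setup: create the repository (swap in b2:my-bucket:homelab for B2)
export RESTIC_REPOSITORY=/mnt/backup/restic-repo
export RESTIC_PASSWORD=change-me
restic init

# Nightly (e.g. via cron): back up only app data + photos, then thin old snapshots
restic backup /srv/appdata /srv/photos
restic forget --keep-daily 7 --keep-monthly 6 --prune
```

restic deduplicates between snapshots, which is what keeps the monthly cloud bill tiny.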

224
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/kunalhazard on 2025-11-30 16:03:20+00:00.


Void is an open-source, modern, powerful, and feature-rich client for Jellyfin, written from scratch (not a fork). It features a clean UI and solid playback support, designed to take full advantage of the Jellyfin API.

Feature List

  • Faster login using QR code
  • Full ASS subtitle support thanks to MPV
  • HDR fallback support if Dolby Vision is not supported by the device (fixes black-screen issues) (experimental)
  • Audio passthrough
  • Hi10P hardware decoding support
  • Transcoding
  • Theme music
  • Skip intro
  • Special features / extras support (behind the scenes, deleted scenes, etc.)
  • Subtitle offset and size adjustment
  • Improved multi-version support with preferred parent-folder logic (e.g., if you play an episode from Folder B, the next episode will also play from Folder B, instead of switching back to Folder A)
  • Remembered audio and subtitle selections (e.g., if you use English subtitles with Japanese audio for episode 1, the next episode will also use English subtitles with Japanese audio instead of the default)
  • Auto-player mode (automatically switches between ExoPlayer and MPV based on content)
  • Version tagging based on filename or parent folder name (such as REMUX, Blu-ray, etc.)
  • Collections support
  • Alpha scroller
  • MPV config edit support
  • And more…

In progress

  • mTLS

Planned

  • Cast and crew page
  • Multi-user support
  • Music support

Hi10P hardware decoding is supported on Fire TV 4K (1st gen, 2nd gen, and Max).

(It was very painful to figure out how to enable this!)

If someone with a Google TV streamer can confirm whether Hi10P playback works on their device, I will enable support for it as well.

This is a hobby project built around my own library and structure so I can enjoy my media better. If you have any feature requests, feel free to ask or open an issue on GitHub :)

Github TV | Mobile | Playstore | Amazon Appstore | Discord

Screenshot

https://preview.redd.it/n3a6asdj2f4g1.png?width=1920&format=png&auto=webp&s=075c314082baee1d9753f4d736a058aee39e4865

https://preview.redd.it/xco1a3dx1f4g1.png?width=1920&format=png&auto=webp&s=1bccc5c0c2a510552d3331422baecaba09a67515

https://preview.redd.it/nyar8gdy1f4g1.png?width=1920&format=png&auto=webp&s=6d6970667f47fb36d923554b0e0de168db4efc63

https://preview.redd.it/7dovolom2f4g1.png?width=1920&format=png&auto=webp&s=e8232813b5bf5fbc9c2573378512713136b8df09

225
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/CompetitiveCod76 on 2025-11-30 08:09:00+00:00.


I have a VPS that I had planned to use for two purposes: a Headscale server so I can access self-hosted services when away from home, and to route all outgoing traffic through it as a replacement for my VPN subscription (a tailnet 'exit node'). I was hoping to have AdGuard on there too.

After doing some research/testing I think I might need a different solution. It appears that the server you use for Headscale can't also be used as an exit node. I'd either have to buy another VPS for that (the exit node is more important tbh), or just use Tailscale. I am against Tailscale as I don't want to set it up with an MS/google/github etc account or have to go to the trouble of setting up a webfinger for OIDC.

I've been looking at Pangolin and it seems pretty neat - I like that it also handles reverse proxy, auth, CrowdSec etc. The only unknown is: if I set that up on the VPS, can I still route outgoing traffic through it?

I could just use WireGuard, but tbh I'm looking at low-effort solutions that won't take up a lot of free time to maintain. That's why Tailscale and Pangolin appeal.

Have I overlooked something here? Maybe my requirements are niche, or perhaps there is a better solution out there.
