Self-Hosted Alternatives to Popular Services


A place to share, discuss, discover, assist with, gain assistance for, and critique self-hosted alternatives to our favorite web apps, web...

founded 2 years ago
176
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/DejavuMoe on 2025-12-04 08:08:09+00:00.


I use Docker containers and a cloud server to host services mainly for my personal workflow. Here are my favorite self-hosted projects in 2025 — all of them have been extremely useful to me!

  1. Blinko – A self-hosted AI-powered knowledge base and note-taking app
  2. Ollama – Works perfectly with Blinko for local embedding models
  3. Gitea – Where I host the source code of my Hugo blog
  4. Woodpecker – My CI/CD tool paired with Gitea (e.g., automatically builds my blog)
  5. wakapi – Self-hosted API for tracking my coding time
  6. Plausible CE – My favorite privacy-friendly web analytics with zero bloat
  7. nahpet – A simple and clean URL shortener
  8. Twikoo – A self-hosted comment system I use on my Hugo blog
  9. immich – The best Google Photos alternative — powerful and impressive
  10. IT Tools – A collection of simple web utilities running entirely in the browser
  11. bark server – Sends APNs notifications to iOS/iPadOS
  12. Uptime Kuma – Monitors the uptime and health of all my sites and containers
  13. Cloudreve Pro – My private cloud storage solution
  14. Stirling PDF – A powerful PDF toolkit, though the commercialization is getting heavy… I’m looking for alternatives

For domains, I purchase from Porkbun because Cloudflare doesn’t support my TLD. DNS and CDN are provided by Cloudflare, and my server uses Nginx as a reverse proxy with Cloudflare-only access to the origin. Cloudflare Zero Trust adds another layer of protection for secure access to my services.
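
For reference, a minimal sketch of that Cloudflare-only restriction in Nginx (the two ranges are just a subset of Cloudflare's published list, so treat this as an illustration rather than a complete config):

# Allow only Cloudflare edge ranges to reach the origin (partial list; see https://www.cloudflare.com/ips/)
allow 173.245.48.0/20;
allow 103.21.244.0/22;
deny all;

# Restore the real visitor IP from Cloudflare's header
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 103.21.244.0/22;
real_ip_header CF-Connecting-IP;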

If you have more recommendations, please share them! I’d love to discover more awesome self-hosted tools. Thanks, everyone!

177
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/rlnerd on 2025-12-04 04:52:50+00:00.


TL;DR: Self-hosted containerized services with custom domains, all behind Tailscale. Tailscale + Traefik + valid SSL = zero public exposure

Detailed guide series coming soon...

————————————

After spending way too much time trying to figure out how to secure my homelab setup, I finally figured out how to get clean custom domains with valid SSL certificates for self-hosted services while keeping everything behind Tailscale (zero public ports).

What This Achieves

You can access your application services like this:

  • https://app.yourdomain.com/ (valid SSL, no warnings)
  • Accessible from anywhere via Tailscale
  • Selectively share with friends/family by inviting them to your Tailnet
  • No port forwards, no public exposure, no VPN configs for users

The Approach

Tailscale + Traefik + DNS challenge

[User on Tailscale] → [Tailscale Container] → [Traefik] → [Your Apps]
                                                  ↓
                                           [DNS Challenge]

Point your custom domain to your Tailscale IP (100.x.x.x), use DNS challenge for cert validation, and let Traefik handle routing.
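
As a rough sketch, the DNS-challenge piece of the Traefik configuration looks something like this (Cloudflare as the DNS provider, the resolver name "le", and the token variable are assumptions; swap in your own provider):

# Fragment of the Traefik service definition (Docker Compose style)
command:
  - "--providers.docker=true"
  - "--entrypoints.websecure.address=:443"
  - "--certificatesresolvers.le.acme.email=you@example.com"
  - "--certificatesresolvers.le.acme.storage=/letsencrypt/acme.json"
  - "--certificatesresolvers.le.acme.dnschallenge=true"
  - "--certificatesresolvers.le.acme.dnschallenge.provider=cloudflare"
environment:
  - CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}   # read by the Cloudflare DNS provider during the challenge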

Key Technical Bits

The trick that took forever to figure out:

  • Run Tailscale as a sidecar Docker container
  • Use network_mode: service:tailscale-container so Traefik shares the Tailscale network
  • Set the correct commands and labels on Traefik and the exposed application containers (see the Compose sketch after this list)
  • Ensure Tailscale container also joins your internal Docker network (so Traefik can reach backend services)
  • Use DNS challenge (not HTTP) since your IP is private
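
A minimal Compose sketch of the sidecar pattern described above (service names, the "backend" network, and the auth-key handling are placeholders):

services:
  tailscale:
    image: tailscale/tailscale:latest
    hostname: homelab
    environment:
      - TS_AUTHKEY=${TS_AUTHKEY}        # pre-generated Tailscale auth key
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - ./ts-state:/var/lib/tailscale
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - NET_ADMIN
    networks:
      - backend                          # joins the internal network so Traefik can reach backends

  traefik:
    image: traefik:v3.0
    network_mode: service:tailscale      # Traefik shares the Tailscale network namespace
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    # entrypoint and cert-resolver flags as sketched earlier

  n8n:
    image: n8nio/n8n
    networks:
      - backend
    labels:
      - "traefik.http.routers.n8n.rule=Host(`automation.mydomain.com`)"
      - "traefik.http.routers.n8n.entrypoints=websecure"
      - "traefik.http.routers.n8n.tls.certresolver=le"

networks:
  backend: {}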

Sample use case: I have n8n accessible at https://automation.mydomain.com/ - valid SSL, works from my phone/laptop anywhere. Friends/family can access it if invited to the Tailnet.

Why Not Tailscale Serve/Funnel?

The solution I am suggesting gives you:

  • Custom domains (not *.ts.net);
  • Full Traefik middleware control;
  • Multiple services behind one Tailscale node;
  • Better integration with existing Docker setups;
  • External HTTPS management, without relying on Tailscale's limited HTTPS settings.

What’s Next

Planning to create a detailed blog/video series covering:

  • Complete Docker Compose setup
  • Traefik configuration and routing
  • DNS provider setup (Cloudflare/others)
  • Tailscale ACLs for restricted access
  • Common pitfalls and solutions

Wanted to share the approach here first and see if anyone’s tackled this differently or has been thinking about doing something similar for their setup!

178
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Master-Variety3841 on 2025-12-04 02:34:31+00:00.


https://github.com/EmbarkStudios/wg-ui

Just went back to look at the repo, was bummed that it was archived and... then noticed that it was Embark Studios that built it.

I know it's not /r/selfhosted specific, but a ton of people in this community use Wireguard and variations of a web UI (maybe even still run wg-ui) and also enjoy their games.

Anyway, cool to see a game developer have some cross-over to the /r/selfhosted world, even found the post where I discovered it originally: https://www.reddit.com/r/selfhosted/comments/o4fqnu/trying_and_failing_to_make_rpi_seedbox/

🤯

179
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Ifrahimm on 2025-12-04 01:40:04+00:00.


As the title states, what are your most-used services in 2025 that you self-host?

For me, it's:

  1. Forgejo (Git Version Control)
  2. OpenWebUI w/ Ollama
  3. Immich (Photos)
  4. Jellyfin

PS

If you have any suggestions for a calorie-tracking / health and wellness app, let me know. I was thinking of creating one as a personal project.

180
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Green_hammock on 2025-12-03 21:42:13+00:00.


Just wondering if there is something equivalent for self-hosted music platforms?

181
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/GeoSabreX on 2025-12-03 18:47:18+00:00.


Hi all,

Anyone have good documentation resources or opinions on using a single (or at least a few) Docker Compose files instead of separate files per container?

I've always kept them separate, and as I am figuring out my backup solution, it seems easier to back up my /a/b/docker folder, which then has /container/config folders for each of the containers.

BUT, I'm also getting into Caddy now, where I am having to specify the correct Docker network on each .yml file separately, and it's getting a little old.
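
For what it's worth, one common way around re-declaring the network (a sketch; caddy_net is a placeholder name) is to create it once with docker network create caddy_net and then mark it as external in each compose file:

services:
  someapp:
    image: nginx:alpine        # placeholder service
    networks:
      - caddy_net

networks:
  caddy_net:
    external: true             # reuse the pre-created network instead of redefining it

That keeps per-service compose files independent while still letting Caddy reach them on the shared network.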

For things like the *arr stack, or everything running on Caddy, it seems intuitive to include them on the same file.

But I'm not sure what best practice is here. Does that make redeployment easier or harder? Should I group by type, or by "Caddy network" vs. not (i.e., exposed vs. not)? I'm not sure.

Thoughts?

I've been doing a lot of cd /a/b/docker/container during troubleshooting lately....

182
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Open-Coder on 2025-12-03 17:36:45+00:00.

183
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/value1338 on 2025-12-03 09:21:18+00:00.

184
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Gryphonics on 2025-12-03 17:30:58+00:00.

185
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/jaydrogers on 2025-12-03 16:41:35+00:00.


Hours ago, MinIO published this in their GitHub README:

https://preview.redd.it/lhkxrfzeo05g1.png?width=1848&format=png&auto=webp&s=79ff7e17e7d6e7aef54ef2e7b7339729cd5d7b96

It seems the project has come to an abrupt halt (at least on their open source side). I know this leaves a bad taste for many people as we're all scrambling to figure out what to migrate to next.

I know there have been prior discussions about what people are moving to, but I just wanted to check in on how your experiences are going.

Many people talked about Garage (https://garagehq.deuxfleurs.fr/), but I am not sure how many people actually made the switch.

What alternatives did you roll with, and how did the migration go? Do you feel any features are missing compared to when you used MinIO?

186
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Drumstel97 on 2025-12-03 09:45:06+00:00.

187
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/BookHost on 2025-12-03 07:07:48+00:00.


Round 1 recap of my last post:

I counted 68 different credentials across my lab (23 Docker admin users, 18 static API keys, 27 human accounts). Got so fed up that I migrated everything possible to:

  • Single OIDC provider (Authentik, because I like pain)
  • Workload identities + short-lived certs via Spike (formerly Smallstep)
  • Forward auth on Traefik for anything that doesn’t speak OIDC natively (rough sketch below)

Result: literally one master password + certs that auto-expire every 4–8 h. Felt like ascending.
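
Roughly, that forward-auth piece is a Traefik middleware along these lines (the outpost address and header list are illustrative, not exact values from my setup):

labels:
  # Ask Authentik's embedded outpost to authorize each request before it hits the app
  - "traefik.http.middlewares.authentik.forwardauth.address=http://authentik-server:9000/outpost.goauthentik.io/auth/traefik"
  - "traefik.http.middlewares.authentik.forwardauth.trustForwardHeader=true"
  - "traefik.http.middlewares.authentik.forwardauth.authResponseHeaders=X-authentik-username,X-authentik-groups,X-authentik-email"
  # Attach the middleware to a router for an app that doesn't speak OIDC
  - "traefik.http.routers.someapp.middlewares=authentik"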

Then y’all showed up with the war crimes:

  • “1Password/KeePassXC master race. You never forget a password if it’s in the vault.”
  • “Local logins just work. Family accounts change once every five years.”
  • “The only thing your fancy OIDC setup guarantees is that YOU will break it at 3 a.m.”
  • “Half the *arrs and paperless and immich still don’t support OIDC without a paywall or a 400-line proxy hack.”
  • “If you’re offboarding family that often you need therapy, not Keycloak.”

…okay, that last one was fair.

So here’s the actual challenge for the password-manager maximalists and the “static credentials are fine” crowd:

Give me the killer argument why I should rip out Authentik + Spike + all the forward-auth nonsense and go back to:

  1. One shared 1Password/KeePassXC family vault (or separate vaults + emergency kit drama)
  2. Long-lived random passwords for every service
  3. Static API keys that never rotate because “if it ain’t broke”

Specific things I’m currently enjoying that you have to beat:

  • Family member creates their own account once, logs in with Google/Microsoft from phone/TV/browser, never asks me for a password again
  • If someone’s phone gets stolen (that has happened once), I just revoke their OIDC session in Authentik, no password changes anywhere
  • API keys are gone; everything uses mTLS certs that expire before breakfast
  • New service gets added → one line in Traefik middleware → done, no new credential
  • I can see exactly who logged into what and when (yes I’m that guy)

Your move. Convince me the complexity budget isn’t worth it for a homelab that’s literally just me + wife + parents + sister. Make it technical, make it brutal, make it real.

Best argument gets gold and I’ll make a full “I was wrong” post with screenshots if I actually revert.

Current mental scoreboard:

Password manager gang — 1

OIDC cult — 0.5 (I’m coping)

(Paperless-ngx password reset PTSD still haunts me. Don’t @ me unless you’ve been there.)

188
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/bobbintb on 2025-12-02 22:50:29+00:00.


I have a problem. I run a media server for my family. They have the choice of using Plex, Emby, or Jellyfin. I'm trying to avoid simply buying more storage every time I run out of space, for a number of reasons. The issue I am facing is how to manage space. It's easy enough when it's just my data.

There is stuff they request and could probably just delete afterwards. I know I could probably grant them permission to delete things they request, which would be a halfway solution. But someone might be watching a show that someone else requested, so I don't want a situation where the requester deletes it before the other person who wants to watch it gets to it. I don't know of any existing features in the media players that may help with this, or maybe even another tool.

Right now I've just resorted to manually pruning things and asking in a group text if anyone wants me to keep it. Any suggestions are appreciated.

189
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Hot-Chemistry7557 on 2025-12-03 03:19:27+00:00.


Hey self-hosters here

It is been quite a while since YAMLResume's last update.

I'm excited to share YAMLResume v0.8, a significant milestone in the journey to make "Resume as Code" the standard for developers.

If this is your first time here: YAMLResume allows you to craft resumes in a clean, version-controlled YAML format and compile them into beautifully typeset, pixel-perfect PDFs. No more fighting with Word formatting or proprietary online builders. You own your data.

What's New in v0.8?

The big shift in this version is the introduction of Multiple Layouts. Previously, the pipeline was linear (YAML -> PDF). Now, a single build command can produce multiple artifacts simultaneously.

1. Markdown Output Support

We've added a first-class Markdown engine. Why?

  • LLM Optimization: PDF is great for humans, but bad for AI. You can now feed the generated resume.md directly into ChatGPT/Claude to tailor your resume for specific job descriptions or critique your summary.
  • Web Integration: Drop the generated Markdown file directly into your Hugo, Jekyll, or Next.js personal site/portfolio.
  • Git Diffs: Track changes to your resume content in plain text, making peer reviews in Pull Requests much easier than diffing binary PDFs.

2. Flexible Configuration

You can now define multiple outputs in your resume.yml. For example, generate a formal PDF for applications and a Markdown file for your website in one go:

layouts:
  - engine: latex
    template: moderncv-banking
  - engine: markdown

Quick Demo

You can see the new workflow in action here: https://asciinema.org/a/759578

YAMLResume Markdown output

How to try it

If you have Node.js installed:

npm install -g yamlresume
# or
brew install yamlresume

# Generate a boilerplate
yamlresume new my-resume.yml

# Build PDF and Markdown simultaneously
yamlresume build my-resume.yml

What's Next?

We are working on a native HTML layout engine. Imagine generating a fully responsive, SEO-optimized standalone HTML file that looks as good as the PDF but is native to the browser—perfect for hosting on your self-hosted infrastructure or GitHub Pages.

I'd love to hear your feedback!

Links:

190
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/supz_k on 2025-12-03 02:17:17+00:00.


Hey r/selfhosted

We released Hyvor Relay on Monday after working on it for almost a year. We took on the challenge of building our own email delivery platform. We made it open source under AGPLv3 and easily self-hostable using Docker Compose or Swarm.

Why we built it

We were working on Hyvor Post, a privacy-first newsletter platform, and wanted a cost-effective email API without any tracking features. We could not find one and decided to build our own.

Self-hosting email?

Yes, we know the cliché. Hyvor Relay helps with the deliverability problem in a few ways:

  • Automates DKIM, SPF, and other DNS records (except PTR). Instead of managing DNS records manually, you delegate them to the built-in DNS server, which takes care of everything dynamically (example records after this list).
  • Automatic DNSBL querying so you get notified if any of the sending IPs are listed
  • Many other health checks to ensure everything is correctly configured
  • Ability to easily configure multiple servers and fallback IP addresses
  • Extensive documentation for help
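
For context, these are the kinds of records that automation covers; the values below are purely illustrative, not Hyvor Relay's exact output:

; SPF: which IPs may send mail for the domain
example.com.                     IN TXT "v=spf1 ip4:203.0.113.10 -all"
; DKIM: public key used to verify message signatures
default._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."
; DMARC: policy for handling SPF/DKIM failures
_dmarc.example.com.              IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"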

Tech Stack

  • Symfony for the API
  • Go for SMTP and DNS servers, email and webhook workers
  • Sveltekit and Hyvor Design System for frontend
  • PGSQL for database & queue

Future Plans

  • Incoming mail routing (Email to HTTP)
  • Dedicated IPs / queues
  • Cloud public release next year

Links

We would absolutely love to hear what you think!

191
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/kikootwo on 2025-12-02 23:12:35+00:00.


Hello!

For Context - Here's the initial teaser post

ReadMeABook is getting very close to being done with MVP and I am looking for a couple of savvy users who are using my same media stack to test things out, look for bugs, and provide overall user feedback.

Specific requirements (based on MVP limitations):

  • Plex Audiobook Library
  • Preferably Audnexus metadata management in plex
  • English (other audible regions not supported currently)
  • qBittorrent as the downloading backend (torrent only)
  • Prowlarr indexer management

Some key features added since the last post:

  • BookDate - AI-powered (Claude/OpenAI) book suggestions based on your existing library and/or your library ratings, to drive compelling suggestions
  • Managed user account support in plex
  • Cleaned up UI all over the place
  • Interactive search supported for unfound audiobooks
  • Fully hand-held setup with interactive wizard
  • Metadata tagging of audio files (to help plex match)

Some things I know you guys want, but aren't here yet:

  • Audiobookshelf support
  • Usenet support
  • Non-audible results in search and recommended
  • Non-english support

Here's a video sample of walking through the setup wizard

Here's a video of some general use, similar to the last post

If you meet the above requirements and are interested in participating, comment below and let me know!

192
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/psxndc on 2025-12-02 22:13:57+00:00.


Hey there. I don't run much myself, really only FreshRSS, Kavita/Calibre, a couple old websites for my family members, and Trilium-Next.

I've been seeing a lot of comments here lately that effectively say "nothing you host should be publicly visible; put everything behind a tunnel/Tailscale." And I could see retiring the websites for my family (they aren't really used) and doing that for every other service - I don't really need Calibre or Trilium-Next unless I'm at home. But FreshRSS is a different matter. I have that open at work all day and check stuff when I have downtime.

What do folks do for services that they use *all the time*? Just always have a Tailscale connection going? Or is there a better way to access it?

Or is it really not that bad to have a service publicly visible? I don't trust myself to securely lock down a server, which is why I'm thinking I need to pull it from being publicly visible. Thanks.

Edit/Update - I'll look into Cloudflare tunnels. I (maybe naively) thought it was the same thing as a Tailscale connection I had to manually spin up every time, so I hadn't dug into them.

193
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/FloTec09 on 2025-12-02 17:41:14+00:00.

194
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/IliasHad on 2025-12-02 12:28:53+00:00.


Hey r/selfhosted!

A month ago, I shared my personal project here - my self-hosted alternative to Google's Video Intelligence API. The response was absolutely incredible (1.5K+ upvotes here and 800+ GitHub stars).

Thank you so much for the amazing support and feedback!

Here's what's new:

🐳 Docker Support (Finally!)

The #1 requested feature is here. Edit Mind now runs in Docker with a simple docker-compose up --build:

  • Pre-configured Python environment with all ML dependencies
  • Persistent storage for your analysis data
  • Cross-platform compatibility (tested on macOS)

Immich Integration

This was another highly requested feature - you can now:

  • Connect Edit Mind directly to your Immich library
  • Pull face images and their label names
  • Use the Immich face labels for Edit Mind's face recognition feature

Other Improvements Based on Your Feedback

  • Improved multi-LLM support: you can use Gemini or a local LLM for NLP (converting your words into a vector DB search query)
  • UI refinements: Dark mode improvements, progress indicators, face management interface

📺 Demo Video (Updated + a bonus feature)

I've created a new video showcasing the Docker setup and Immich integration: https://youtu.be/YrVaJ33qmtg

💬 I Need Your Help

As this moves from "weekend project" to something people actually use:

  1. Docker testers needed: Especially on different hardware configurations
  2. Immich integration feedback: What works? What breaks? What's missing?
  3. Feature priorities: What should I focus on next?
  4. Documentation: What's confusing? What needs better explanation?

🙏 A Genuine Thank You

I built this out of frustration with my own 2TB video library. I never expected this level of interest. Your feedback, bug reports, and encouragement have been incredible.

Special shoutouts to:

  • Everyone who opened GitHub issues with detailed bug reports
  • Those who tested on exotic hardware configurations
  • Those who upvoted or shared their feedback and support over the comments
  • Those who shared the project with other people

This is still very much a work in progress, but it's getting better because of this community. Keep the feedback coming!

195
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/bit-voyage on 2025-12-02 05:23:36+00:00.


Please correct me if my understanding at any stage is incorrect.

I've been learning how Cloudflare's proxy (orange cloud) works, and a friend mentioned that Cloudflare actually terminates TLS at their edge, so I looked into my setup a bit more. This makes sense, but it means all traffic is completely unencrypted for Cloudflare: any cookies, headers, or passwords your users send from the client are readable in plain text by Cloudflare as the DNS proxy. After that, the traffic is re-encrypted by Cloudflare. This is fine, but I feel that others may have been under the impression that TLS meant end-to-end encryption for them.

For my admin services I require mTLS and VPN, but for friends/family I still want something easy like HTTPS and passkeys.

I have been running an alternate solution for some time and would like to get thoughts and opinions on the following

Flow: DNS -> VPS Public IP -> Wireguard Tunnel 443 TLS passthrough -> VM-B Caddy TLS Certs -> VM-C Authentik -> VM-D Jellyfin etc

First I will outline my requirements:

  • Hidden public IP - Access via HTTPS externally (no vpn for client)
    • (Passkeys + HTTPS should be enough)
  • No port opening on Home router.

The proposal to be audited:

(VPS-A) Trusted VPS:

  • Caddy L4 TLS Passthrough
  • Wireguard Tunnel to VM-B:443

(VM-B) Proxmox Alpine VM in Segregated VLAN:

  • Caddy TLS Termination
  • Reverse proxy to Authentik

(VM-C) Authentik:

  • Authorise and proxy to App (Jellyfin, Immich etc)

Flow: DNS -> VPS Public IP -> Wireguard Tunnel 443 TLS passthrough -> VM-B Caddy TLS Certs -> VM-C Authentik -> VM-D Jellyfin etc
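
As a rough illustration of the tunnel leg (addresses and keys are placeholders; VM-B dials out to the VPS, so nothing is opened on the home router):

# VPS-A: /etc/wireguard/wg0.conf (sketch)
[Interface]
PrivateKey = <vps-private-key>
Address = 10.8.0.1/24
ListenPort = 51820

[Peer]
# VM-B, the Caddy TLS terminator
PublicKey = <vm-b-public-key>
AllowedIPs = 10.8.0.2/32

# VM-B: /etc/wireguard/wg0.conf (sketch)
[Interface]
PrivateKey = <vm-b-private-key>
Address = 10.8.0.2/24

[Peer]
# VPS-A; the connection is initiated outbound from home
PublicKey = <vps-public-key>
Endpoint = <vps-public-ip>:51820
AllowedIPs = 10.8.0.1/32
PersistentKeepalive = 25

Caddy's L4 passthrough on the VPS then forwards :443 traffic to 10.8.0.2:443 over this tunnel, and Caddy on VM-B terminates TLS.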

Pros:

  • Hidden public IP - Zero ports open on home router
  • Complete TLS end-to-end encryption (No man in the middle [orange cloud])
  • Cloudflare can no longer inspect the traffic (passwords typed, cookies, headers passed)
  • I can now also use CGNAT network providers to expose services which was not possible before
  • I now have more granular control over caching images etc which Cloudflare was disallowing before for some reason... Even video stream chunks can be cached now that I am controlling the proxy.

Cons I can see:

  • VPS must be trusted party
  • Losing a bit of self-hosted control due to the VPS (must trust **some** party, but considering Cloudflare is a US entity, I am fine with outsourcing this to an offshore service like OrangeWebsite or Infomaniak).

What else would I be losing from moving away from CF proxy (orange cloud) on home lab services?

Do self hosting folks also use CF proxy and are fine with Cloudflare terminating TLS and thus being able to see all traffic unencrypted?

If there is enough interest in the comments, I will be happy to do a detailed guide on getting the VPS set up with a custom xcaddy build for TLS passthrough. I am also writing generic Ansible playbooks for both the L4 passthrough on the VPS and the TLS-terminating Caddy VM.

If I am missing something or could make this flow any more secure please comment.

196
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/chris_socal on 2025-12-02 01:22:59+00:00.


So, if I understand correctly, the purpose of a reverse proxy is to obfuscate your local network traffic while at the same time providing host names for services you wish to expose to the internet.

So let's say I set up a Caddy server and open ports 80 and 443 on my router. If a bad actor hits my IP, what will they see and what could they do?

As far as I know there have been no known public exploits of Caddy. However, the services behind the proxy must also be secure, and that is where I am having trouble understanding.

The simplest way I can ask this is: can a bad actor probe Caddy and find out what services it is hosting? Let's say I give all my services obscure names; would that make me almost un-hackable? Does the bad guy have to know the names of my services before trying to hack them?

197
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Mrowe101 on 2025-12-02 00:22:13+00:00.


Hi, chief dumbass here,

I bought a new router a while ago, and instead of forwarding a single port I opened an entire machine to the internet. I was hosting Immich and then some web projects for testing. I had left the server to do its thing, not paying attention, for quite a while, and then I was alerted to everything being open when I created a Postgres DB with a default user/pass/port and saw my data instantly vanish.

I checked through my auth logs and could see many people/bots trying to brute-force their way into SSH, but they never succeeded because I had disabled password logins. Looked through my open connections: nothing out of the ordinary, no crypto miners in top, nothing from rkhunter. Is there anything I should look for?

Should I wipe the machine completely?

198
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Impossible_Belt_7757 on 2025-12-01 18:53:07+00:00.


While waiting for tunarr to fix Jellyfin support I made a thing to auto-create TV channels for my Jellyfin Server

  • Simulate any decade of TV from your Jellyfin library
  • Docker

JellyfinTV

199
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/asciimoo on 2025-12-01 15:23:15+00:00.


I'm working on Omnom with the goal of being able to locally collect, store, and categorize information from the internet, making it always available in one place no matter what happens to the original sources. Currently, the core functionality covers:

  • Bookmark Creation with Website Snapshots: Save web pages along with static snapshots capturing their in-browser visual state, including dynamic content. Snapshots are searchable, comparable and downloadable as a single file.
  • Feed Aggregation: RSS and Atom feed reader.
  • ActivityPub Support: Integrate with the Fediverse by sharing your bookmarks or following and consuming content from ActivityPub-enabled platforms and users.
  • Unified Filtering: Allows for precise content retrieval through extensive filtering by date, free text search, tags, users, domains, URLs, and more.

The code is free (AGPLv3+), the whole project is packed into a single binary file for quick deployment.

It's still a work in progress and has some rough edges, but the core feature set is usable, and hopefully some folks here can find it useful/interesting.

The code is available at https://github.com/asciimoo/omnom

A small read-only showcase instance: https://omnom.zone/

Longer description: https://omnom.zone/docs/

I'd highly appreciate any kind of feedback/advice/idea/feature request helping future development. <3

200
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/bits-hyd-throwaway on 2025-12-01 12:41:57+00:00.


Ferron is a fast and memory safe web server written in Rust. It supports automatic TLS via Let's Encrypt out of the box and uses the KDL configuration language for its configuration.

Ferron's reverse proxy performance is on par with NGINX's, without NGINX's difficult configuration. Ferron is available as a Docker container for easy deployment.

Github Link: https://github.com/ferronweb/ferron

view more: ‹ prev next ›