Self-Hosted Alternatives to Popular Services


A place to share, discuss, discover, assist with, gain assistance for, and critique self-hosted alternatives to our favorite web apps, web...

126
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/pgilah on 2025-12-09 16:15:47+00:00.


Hi there! Some old machines lack Wake-on-LAN (WOL) or BIOS boot timers, making it difficult to reuse them as home servers. Some months ago I shared WakeMyPotato, a service that keeps scheduling rtcwake calls a short time into the future and safely powers down the laptop if AC power fails. It then turns your server back on once AC is restored.

The community response was awesome, and after some suggestions I have now implemented an IP check, which triggers the emergency shutdown if a ping to an IP of your choosing fails. That IP can be anything: your router's local address, a Cloudflare IP, or even a friend's machine!
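For anyone wondering how the pieces fit together, here is a rough sketch of the idea; it is not the actual WakeMyPotato code, and the target IP and timings are placeholders:

# if the chosen IP stops answering, assume AC is gone:
# arm the RTC alarm a few minutes out, then power down safely
CHECK_IP=192.168.1.1
if ! ping -c 3 -W 2 "$CHECK_IP" > /dev/null; then
    rtcwake -m no -s 300   # set a wake-up alarm 5 minutes from now
    shutdown -h now        # clean shutdown; the RTC alarm can power the box back up (firmware permitting)
fi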

Hope you enjoy this update and please let me know if it can be improved in any way :D

https://github.com/pablogila/WakeMyPotato

127
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Daniel31X13 on 2025-12-09 13:12:48+00:00.


Hello everyone,

Before we talk about today’s announcement, let's take a moment to appreciate what this community has built together. What started as a project to preserve webpages and articles has quietly grown into Linkwarden, a tool used by researchers, journalists, and knowledge collectors all over the world.

As we’ve grown, the Linkwarden community has helped us reach:

  • 16,000+ GitHub stars
  • 11M+ Docker downloads
  • Thousands of self-hosted instances running in different companies, universities, agencies, and homelabs
  • A thriving ecosystem of contributors, donors, and Cloud subscribers keeping the project sustainable

None of this would've happened without you. Thank you! 🚀

Today, we’re excited to launch something you’ve been asking for since the very beginning: the official Linkwarden mobile app, now available on iOS and Android.

Different screens (iPad, Pixel, and iPhone)

Here are the highlights so far:

  • 🧩 Create, organize, and browse your links: A native, mobile-first experience with collections, tags, and powerful search.
  • 📤 Save links directly from the share sheet: Send interesting articles from the browser or any other app straight into Linkwarden, no copy-paste required.
  • 📚 Cached data for offline reading: Catch up on long reads, articles, or saved blog posts when you’re away from Wi-Fi.
  • ☁️ Works with Linkwarden Cloud and self-hosted: Use the same app whether you’re on Linkwarden Cloud or your own self-hosted instance, just point it at your server and sign in.
  • 📱 Built for different screen sizes: Supports iOS / iPadOS, and Android (phones and tablets).
  • 🔜 And more coming soon: This first release is just the foundation, expect many improvements and new features soon.

Get the app

To use the app you’ll first need a Linkwarden account (version v2.13+ recommended).

You can choose between:

  • Linkwarden Cloud – instant setup, and your subscription directly supports ongoing development.
  • Self-hosted Linkwarden – free, but you’ll need to deploy and maintain a Linkwarden instance on a server.
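If you go the self-hosted route, the deployment is the usual app-plus-Postgres Compose stack. The snippet below is only a rough sketch from memory (the image name and environment variables may not match the current release), so treat the official installation docs as the source of truth:

services:
  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_PASSWORD=change-me
    volumes:
      - ./pgdata:/var/lib/postgresql/data
  linkwarden:
    image: ghcr.io/linkwarden/linkwarden:latest
    environment:
      - NEXTAUTH_SECRET=generate-a-long-random-string
      - NEXTAUTH_URL=http://localhost:3000/api/v1/auth
      - DATABASE_URL=postgresql://postgres:change-me@postgres:5432/postgres
    ports:
      - 3000:3000
    depends_on:
      - postgres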

After creating an account, download the app from your preferred store:

App Store

Google Play

How you can support Linkwarden

Linkwarden exists because of people like you. Besides using our official Cloud offering and donations, here are the other ways to help us grow and stay sustainable:

Thank you for being part of this community. 💫

128
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Doc_CoBrA on 2025-12-09 10:54:57+00:00.


Hey r/selfhosted,

I’ve been using Slskd (Soulseek) to find music and Beets to organize my library for a bit. Both tools are great, but the workflow between them has always been annoying for me. I’d download something, SSH into my server, find the folder, run beet import, then move it to my Navidrome library.

I wanted a "click and forget" experience, so I built Soulbeet.

It’s a self-hosted web app that acts as the glue between the two.

What it actually does:

  1. Unified Search: You search for an album/track in the UI (it queries MusicBrainz for metadata).
  2. Finds Sources: It asks your existing slskd instance to find the files on Soulseek.
  3. Automates the rest: Once you click download, it grabs the files, and automatically runs the beets CLI in the background to tag, organize, and move the files to your library.
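In other words, it automates roughly the same handful of commands you would otherwise run by hand over SSH; something along these lines (paths are illustrative, and this is not necessarily Soulbeet's exact invocation):

# tag, rename, and move a finished download into the library non-interactively
beet import -q /downloads/Artist-Album
# Navidrome (or any other player) then picks it up from the configured library path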

The Tech Stack:

  • Backend/Frontend: Rust (using Dioxus Fullstack), Tailwind.
  • Database: SQLite (PostgreSQL support is only a few lines of code away; I can add it if requested).
  • Integrations: Slskd API & Beets CLI.

Setup: It’s packaged as a Docker container. You basically just need to mount your music volume and tell it where Slskd is running.

services:
  soulbeet:
    image: docker.io/docccccc/soulbeet:master
    environment:
      - SLSKD_URL=http://[slskd_ip]:5030
      - SLSKD_API_KEY=your_key
    volumes:
      - /path/to/slskd/downloads:/downloads 
      - /path/to/music:/music

(Full compose file is in the repo)

Current State & TODOs:

It's stable enough for daily use (I use it), but it's definitely still a work in progress.

  • Search scoring: Could be enhanced, works well though.
  • No dedicated mobile app yet, but the web UI is responsive-ish. The mobile app is a few lines of code away too, thanks to dioxus.
  • I need to clean the code a bit
  • Improve Slskd search, it's a bit tricky.
  • I'd like to add previews too, to listen to the track before downloading.
  • Add versioning for the releases

Repo: https://github.com/terry90/soulbeet

Let me know if you run into any issues or have feature requests. I'm specifically looking for feedback on the default Beets configuration and your experience with the app.

Contributions are welcome of course.

Cheers!

129
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Uiqueblhats on 2025-12-09 10:38:46+00:00.


For those of you who aren't familiar with SurfSense, it aims to be the open-source alternative to NotebookLM, Perplexity, or Glean.

In short, it's a Highly Customizable AI Research Agent that connects to your personal external sources and Search Engines (SearxNG, Tavily, LinkUp), Slack, Linear, Jira, ClickUp, Confluence, Gmail, Notion, YouTube, GitHub, Discord, Airtable, Google Calendar and more to come.

Here’s a quick look at what SurfSense offers right now:

Features

  • RBAC (Role Based Access for Teams)
  • Notion Like Document Editing experience
  • Supports 100+ LLMs
  • Supports local Ollama or vLLM setups
  • 6000+ Embedding Models
  • 50+ File extensions supported (Added Docling recently)
  • Podcasts support with local TTS providers (Kokoro TTS)
  • Connects with 15+ external sources such as search engines, Slack, Notion, Gmail, Confluence, etc.
  • Cross-Browser Extension to let you save any dynamic webpage you want, including authenticated content.

Upcoming Planned Features

  • Agentic chat
  • Note Management (Like Notion)
  • Multi Collaborative Chats.
  • Multi Collaborative Documents.

Installation (Self-Host)

Linux/macOS:

docker run -d -p 3000:3000 -p 8000:8000 \
  -v surfsense-data:/data \
  --name surfsense \
  --restart unless-stopped \
  ghcr.io/modsetter/surfsense:latest

Windows (PowerShell):

docker run -d -p 3000:3000 -p 8000:8000 `
  -v surfsense-data:/data `
  --name surfsense `
  --restart unless-stopped `
  ghcr.io/modsetter/surfsense:latest

GitHub: https://github.com/MODSetter/SurfSense

130
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/TheNick0fTime on 2025-12-09 03:05:39+00:00.


Hey there,

I've just released v0.8.0 of my open source program HandBrake Web. For all the details, check out the release notes over at GitHub!

Intro

As I'm sure many of you know, HandBrake is a fantastic video transcoding program that has been around for ages. The two primary ways to use it are via its desktop GUI application or via its CLI. Unfortunately, this means it's not super convenient to use on headless devices like a server or a NAS. HandBrake Web hopes to solve this by providing a native, modern, and responsive web interface for interacting with HandBrake from your favorite web browser. HandBrake Web supports additional features (compared to the desktop version of HandBrake) such as:

  • Distributed Encoding - Transcode multiple videos from a single queue at once with multiple devices/nodes/workers.
  • Directory Monitoring - Create directory "Watchers" to automatically create jobs based on various criteria.

For additional details about the program's features, check out the project's README over at GitHub.
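For context, this is the kind of one-off command you would otherwise be running by hand on a headless box (the preset and file names here are just examples):

HandBrakeCLI -i input.mkv -o output.mkv --preset "Fast 1080p30"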

v0.8.0 Release

The goal of this release was to improve the state of things under the hood and make the program easier to maintain moving forward. Here are some of the changes I'd like to highlight:

  • The bundled version of HandBrakeCLI has been updated from 1.6.1 to 1.10.2, using a custom build process (rather than using binaries from a package manager).
  • The entire build process of the application has been overhauled, resulting in massive image size improvements:
    • The server image has been reduced from 1.04 GB to 222 MB
    • The worker image has been reduced from 1.29 GB to 394 MB
  • The entire client application has been refactored to more closely adhere to best practices, with a variety of styling and functionality improvements.
  • Intel QSV support has been improved with updated drivers that allow previously unsupported Intel Arc GPUs to be used.
  • Documentation actually exists with the creation of the project's Wiki.

There's a lot more to what went into this release, so check out the previously mentioned release notes if you would like to know more!

A Quick "Thanks"

It's been quite some time since the last release, over a year in fact (sorry I've been busy!). In that time some cool milestones have happened:

  • The project has reached over 500 stars on GitHub
  • The handbrake-web-server image has been downloaded over 200,000 times

Just wanted to say thanks to everyone who has taken the time to check out my program, file a bug report or feature request, and especially to anyone who has donated. With donations to the project (in addition to donations people have made to my blog), I was able to purchase a second-hand Intel Arc B770 at no cost to my personal wallet. This allowed me to actually test Intel QSV support this time around, since I previously only had an NVIDIA card. So once again, thanks: the self-hosting and FOSS communities in general are incredible!

131
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Ok_Equipment4115 on 2025-12-09 00:01:44+00:00.

132
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/RabidHunt86 on 2025-12-08 19:46:33+00:00.


I was experimenting with a tiny concept inspired by face-search websites, just to organize my personal photo library better. It made me wonder whether anyone here has tried running a lightweight face-matching or tagging workflow entirely on their own server.

I'm not looking for specific tools or recommendations, just curious whether people have gone down this path and what kind of setup worked well for you. Any insights on resource requirements or common pitfalls would be great!

133
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Far-Wedding-5751 on 2025-12-08 21:25:13+00:00.


I am looking for a cheap VPS to act as a secondary MX relay. I usually use Hetzner for this, but I want a different ASN for redundancy. Virtarix caught my eye because of its RAM/storage ratio, which is perfect for a mail archive.

My main concern is port 25 blocking. I know a lot of these budget providers block SMTP by default to prevent spam. Do I need to jump through hoops to get it opened? Has anyone checked their IP ranges against blacklists (Spamhaus etc.) recently?
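Both concerns are easy to sanity-check from a trial VPS before committing; something like the following (the target host is just Gmail's MX, and 203.0.113.7 stands in for the VPS IP):

# is outbound port 25 open at all?
nc -vz -w 5 gmail-smtp-in.l.google.com 25

# Spamhaus ZEN lookup for 203.0.113.7 (octets reversed); a 127.0.0.x answer means listed,
# NXDOMAIN means clean. Note that Spamhaus may not answer via big public resolvers.
dig +short 7.113.0.203.zen.spamhaus.org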

If you are sending mail through them, are you landing in the Gmail spam folder, or is delivery clean?

134
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/compromised_roomba on 2025-12-09 00:00:58+00:00.


https://theonion.com/plex-submits-35-bid-for-warner-bros/

I thought you all would enjoy this bit of satire.

135
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/davidlbowman on 2025-12-08 19:22:49+00:00.


Hey Everyone,

I've been a long-term user of the Kavita platform. If you're unfamiliar, it's a self-hosted e-reading platform, where you can store your epub, pdf, and image-based reading (e.g., manga, comics, etc.). It's an incredible tool to share your library with others, both locally and abroad.

Recently, Kavita released an annotations module, giving users the ability to create annotations, highlights, and notes. These are accompanied by quite a bit of useful metadata (e.g., book, chapter, tags).

While this is incredible, it's not the best way to review annotations for self-learning. Recently, I've started using Obsidian to organize notes, another incredible tool for self-learning.

For this reason, I've developed an Obsidian Plugin, which syncs annotations from your Kavita service to your Obsidian vault. I've gone through a few versions with Kavita members, and with the approval of the core development team, I've released version 1.0.0 of the Kavita to Obsidian plugin.

If you're a Kavita and Obsidian user, I'd love for you to try the plugin. If you happen to run into any issues, please create a GitHub issue, and I'll resolve them as quickly as possible. I've also currently applied for Obsidian Community Plugin status.

Please feel free to share your experience here or via GitHub.

136
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/nbtm_sh on 2025-12-08 16:27:11+00:00.


Now yes, I'm fully aware this creates a single point of failure. As such, I still have local admin accounts on all my Linux PCs. If you're crazy enough to do something like this, make sure you have failsafes.

I've been going kind of insane recently and have been setting up SSO, LDAP, etc. I was already sharing my home folder over SMB from my NAS, but I was just mounting it on my PC and copying files over manually.

I don’t really like having files on my PC. They aren't accessible from outside my PC, and they aren’t backed up. So I set up autofs on my gaming PC and TV PC to mount /home/user from my NAS over NFS. I’ve configured SSSD to ensure the UIDs match on all my desktops.
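For anyone who wants to replicate this, the autofs side is only a couple of lines; the hostname and export path below are placeholders for whatever your NAS exports:

# /etc/auto.master -- hand /home over to an automount map
/home  /etc/auto.home  --timeout=300

# /etc/auto.home -- mount each user's home from the NAS on demand
*  -fstype=nfs4,rw  nas.lan:/export/home/&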

I've been running this for about a month now and it's been amazing. Any document I download or edit is automatically snapshotted and backed up. Nothing except games, the OS, and caches lives physically on my desktop's SSDs, which naturally means more space for games. I can access all my documents on my phone over SMB when I'm out of the house, too. I also get access to far more storage than I could ever fit in my computer; there's no way I'm fitting 144 TB of redundant storage in there.

Another unexpected benefit: I can come downstairs to the PC connected to my TV, log in with the same account, and everything is just as it was on my gaming PC (more or less). Same desktop config, same wallpapers, same software configs, etc. All my files are exactly as they were before.

This is a little dangerous, but if something gets messed up, I can just roll back to a daily snapshot. If my house burns down, well, basically my entire computer is (by default) backed up to a server at my parents' house.

Sure, it's a little bit slower, but not that much. I can even do photo/video editing from my NAS like this (2.5GbE). I barely notice it, especially since I keep games on the local NVMe drive.

137
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/grnrngr on 2025-12-08 18:18:53+00:00.

Original Title: A Practical Appliance Combining Self-Hosted Services: This is "KitchenAide," my DIY kitchen appliance. In our apartment, Mealie is king. So is Home Assistant. We wanted a way to bring our lab smarts into the kitchen safe and convenient.

138
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Solid-Dog-6616 on 2025-12-08 16:27:38+00:00.


I am working on a small project and I need to test account creation flows on a few platforms. The issue is that some of them require phone verification, and I do not want to use my personal number for every test.

What is the simplest way to generate temporary or disposable phone numbers that actually work for verification? I see a lot of sketchy sites online and I do not know which ones are safe or reliable.

How do developers or self-hosters usually handle this? I'm looking for something easy to manage that will not leak my real number or expose it to random services.

139
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/No-Card-2312 on 2025-12-08 06:39:19+00:00.


Hey folks, I’d like to hear how you prepare a fresh Linux server before deploying a new web application.

Scenario: A web API, a web frontend, background jobs/workers, and a few internal-only routes that should be reachable from specific IPs only (though I’m not sure how to handle IP rotation reliably).

These are the areas I’m trying to understand:


  1. Security and basic hardening

What are the first things you lock down on a new server?

How do you handle firewall rules, SSH configuration, and restricting internal-only endpoints?
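As a concrete baseline for the kind of thing being asked about here, a very common first pass on a fresh box looks something like this (ports, policies, and distro specifics will vary):

# firewall: default deny inbound, allow SSH and web traffic
ufw default deny incoming
ufw allow OpenSSH
ufw allow 80,443/tcp
ufw enable

# /etc/ssh/sshd_config: key-based logins only, no direct root login
PasswordAuthentication no
PermitRootLogin no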

  2. Users and access management

When a developer joins or leaves, how do you add/remove their access?

Separate system users, SSH keys only, or automated provisioning tools (Ansible/Terraform)?

  3. Deployment workflow

What do you use to run your services: systemd, Docker, PM2, something else?

CI/CD or manual deployments?

Do you deploy the web API, web frontend, and workers through separate pipelines, or a single pipeline that handles everything?

  4. Monitoring and notifications

What do you keep an eye on (CPU, memory, logs, service health, uptime)?

Which tools do you prefer (Prometheus/Grafana, BetterStack, etc.)?

How do you deliver alerts?

  5. Backups

What exactly do you back up (database only, configs, full system snapshots)?

How do you trigger and schedule backups?

How often do you test restoring them?

  6. Database setup

Do you host the database on the same VPS or use a managed service?

If it's local, how do you secure it and handle updates and backups?

  7. Reverse proxy and TLS

What reverse proxy do you use (Nginx, Traefik, Caddy)?

How do you automate certificates and TLS management?
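As one data point for the certificate-automation question: with Caddy, TLS is handled automatically with nothing more than a Caddyfile like the following (the domain and upstream port are placeholders):

app.example.com {
    reverse_proxy 127.0.0.1:8080
}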

  8. Logging

How do you handle logs? Local storage, log rotation, or remote logging?

Do you use ELK/EFK stacks or simpler solutions?

  9. Resource isolation

Do you isolate services with containers or run everything directly on the host?

How do you set CPU/memory limits for different components?

  10. Automatic restarts and health checks

What ensures your services restart automatically when they fail?

systemd, Docker health checks, or another tool?
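For the systemd route, the restart behaviour boils down to a few lines in the unit file (the service name and command below are placeholders):

# /etc/systemd/system/myapp.service
[Unit]
Description=My web API

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target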

  11. Secrets management

How do you store environment variables and secrets?

Simple .env files, encrypted storage, or tools like Vault/SOPS?

  12. Auditing and configuration tracking

How do you track changes made on the server?

Do you rely on audit logs, command history, or Git-backed config management?

  13. Network architecture

Do you use private/internal networks for internal services?

What do you expose publicly, and what stays behind a reverse proxy?

  14. Background job handling

On Windows, Task Scheduler caused deployment issues when jobs were still running. How should this be handled on Linux? If a job is still running during a new deployment, do you stop it, let it finish, or rely on a queue system to avoid conflicts?

  15. Securing tools like Grafana and admin-only routes

What’s the best way to prevent tools like Grafana from being publicly reachable?

Is IP allowlisting reliable, or does IP rotation make it impractical?

For admin-only routes, would using a VPN be a better approach—especially for non-developers who need the simplest workflow?


I asked ChatGPT these questions as well, but I'm more interested in how people actually handle these things in the real world.

140
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/juli409 on 2025-12-08 15:53:20+00:00.

141
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/MarceloLinhares on 2025-12-08 14:33:42+00:00.

142
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/No-Anchovies on 2025-12-08 13:03:49+00:00.

143
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/stefufu on 2025-12-08 10:57:02+00:00.


Traefik introduced a more restrictive way of handling encoded characters in paths.

Link: https://doc.traefik.io/traefik/migrate/v3/#v364

This made Collabora (or Nextcloud Office) not work anymore, with the error "Failed to establish socket connection or socket connection closed unexpectedly. The reverse proxy might be misconfigured, please contact the administrator. For more info on proxy configuration please checkout https://sdk.collaboraonline.com/docs/installation/Proxy_settings.html"

The fix I found consists of adding the options allowEncodedSlash and allowEncodedQuestionMark to Traefik's static configuration.

The link shows the configuration option for the CLI.

Below you can find the options for the YAML file (traefik.yaml):

entryPoints:
  <name>:
    http:
      encodedCharacters:
        allowEncodedSlash: true
        # allowEncodedBackSlash: true
        # allowEncodedNullCharacter: true
        # allowEncodedSemicolon: true
        # allowEncodedPercent: true
        allowEncodedQuestionMark: true
        # allowEncodedHash: true

(Note that only allowEncodedSlash and allowEncodedQuestionMark are needed here; the others are commented out, but I left them in case anyone needs that configuration for other situations.)
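For completeness, the CLI/static-argument form should simply mirror the YAML structure above, roughly as below; double-check the exact option names against the linked migration guide:

--entryPoints.<name>.http.encodedCharacters.allowEncodedSlash=true
--entryPoints.<name>.http.encodedCharacters.allowEncodedQuestionMark=true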

I wanted to share this fix hoping it will help others, but I'm no expert! So if you find problems with my fix, or if you found a better solution, feel free to post a comment below!

PS: I didn't specify it, but I'm using Nextcloud AIO on Ubuntu 24.04 with the latest Docker version.

I assume that it's the same for other ways of running Nextcloud, though.

144
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/OkSwordfish8878 on 2025-12-08 06:19:46+00:00.


I've been running my Docker stack and a couple of small VMs on local hardware for a long time. It works, but I'm kind of over the random shutdowns and worrying about drives failing. I'm not trying to migrate everything away, I just want to offload 1–2 heavier services to a simple cloud VM so my home box can breathe a bit.

Most people recommend the usual Hetzner/Vultr type options, but I'm curious whether anyone has experience with smaller EU-based cloud hosts, ideally something with fast VM provisioning and straightforward pricing.

Would love to hear what people here are using.

145
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/skiguy0123 on 2025-12-07 18:32:48+00:00.


I recently noticed that my Nextcloud instance was missing photos. I have the Android app set to automatically upload them. When I need to clear up space on my phone, I make a separate backup (because I'm a paranoid SOB and hard drives are relatively cheap). Comparing against that backup, I noticed that Nextcloud auto-upload had missed about 10% of the photos. I'm not going to bash the Nextcloud devs, as I recognize that I am using a free product and am owed nothing, but I'm making this post so others are aware of the risk. Apparently I'm not alone: https://help.nextcloud.com/t/android-client-does-not-auto-upload-all-images/216849/14
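If you want to check your own instance for the same problem, one crude but effective approach is to diff the filename lists between the phone backup and the Nextcloud upload folder, for example (both paths are placeholders):

# list files that exist in the phone backup but are missing from Nextcloud
comm -23 <(ls /backups/phone/DCIM | sort) <(ls /srv/nextcloud/data/USER/files/Photos | sort)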

146
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/cookiedude25 on 2025-12-08 00:02:25+00:00.


Hi, I'm currently developing an alternative to Sonarr/Radarr/Jellyseerr that I called MediaManager.

Since I last posted here, I added the ability to import media from an existing library!

Why you might want to use MediaManager:

  • OAuth/OIDC support for authentication
  • movie AND tv show management
  • multiple qualities of the same Show/Movie (i.e. you can have a 720p and a 4K version)
  • you can select if you want the metadata from TMDB or TVDB on a per show/movie basis
  • Built-in media requests (kinda like Jellyseerr)
  • support for torrents containing multiple seasons of a tv show (Season packs)
  • Support for multiple users
  • config file support (.toml)
  • addition of Scoring Rules, which kinda mimic the functionality of Quality/Release/Custom Format profiles
  • addition of media libraries, i.e. multiple library sources not just /data/tv and /data/movies
  • addition of Usenet/Sabnzbd support
  • addition of Transmission support

MediaManager also doesn't completely rely on a central service for metadata: you can self-host the MetadataRelay or use the public instance that is hosted by me.

Notable changes since I last posted:

  • Added the ability to import media from an existing library!

Features like these are a lot of work, please consider supporting my work ❤️

Github Repo Link: https://github.com/maxdorninger/MediaManager

Main dashboard

TV Show Details View

147
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/pozinux84 on 2025-12-07 21:17:43+00:00.


I built a self-hosted app. People download it and I can see some activity on the demo instance, but aside from GitHub stars and increasing Docker pulls, I have no real way to know whether the app is actually being used or at what scale.

When I had an Android app on the Play Store, I could at least see active install stats and user comments, so even without exact numbers I could tell it was being used.

For those of you who maintain open-source apps: how do you get even a rough sense of real-world usage without adding telemetry? Is telemetry the only realistic option? Would something like a built-in comment/feedback system make sense?

148
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Appropriate_Monk1552 on 2025-12-07 18:16:12+00:00.


Not paid, not involved with the project other than using it at home (I'm a part-time Infoblox engineer at my day job). I had been running nebula-sync to keep two Pi-hole servers in sync and switched over to Technitium a couple of months ago because #big_kid_dns and/or wanting more of a challenge or something.

Technitium does DNS blacklists just fine, so that's covered. And?

Technitium just released clustering. Yes, I had been doing primary/secondary zones and serials and all that between the two DNS servers. But now I'm managing the cluster from one place and not relying on a third-party service to sync records and settings between two DNS servers.

Astounding project for DNS. It truly deserves way more attention in self-hosting circles and anywhere else, IMHO.

EDIT: I run these on two Dell Wyse 3040 thin clients with minimal Debian, which takes up about 40% of the local storage. Installing the OS just takes one tweak using the advanced install mode.

149
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/rickrock6666 on 2025-12-07 16:00:01+00:00.


Been running Stash for a while and it always bugged me that generating previews and sprites would peg my CPU at 100% for hours while my GPU sat there doing nothing. Turns out Stash only uses hardware acceleration for playback, not for generating stuff.

Patched it to use CUDA for decoding and NVENC for encoding on all generation tasks: previews, sprites, phash, screenshots, markers. Everything generates 3-5x faster now.

Pre-built container if anyone wants it:

docker pull ghcr.io/rufftruffles/stash-nvenc-patches:latest
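To actually hand the GPU to the container you still need the usual NVIDIA runtime bits; something along these lines, assuming the nvidia-container-toolkit is installed (the port and mount paths follow the stock Stash image and may differ, so check the repo's README):

docker run -d --name stash --gpus all \
  -p 9999:9999 \
  -v /path/to/config:/root/.stash \
  -v /path/to/media:/data \
  ghcr.io/rufftruffles/stash-nvenc-patches:latest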

Repo: https://github.com/rufftruffles/stash-nvenc-patches

Only works with NVIDIA cards, hardcoded for CUDA/NVENC.

Built this with help from Claude; I'm not a Go developer, but I wanted this to exist.

150
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/redux_0x5 on 2025-12-07 08:51:04+00:00.
