Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
I don't know what kind of server you're running, but if you plan to host any video then you want a dedicated GPU or a CPU with integrated graphics (and even if you're not, I think it's a good idea anyway), which the 5600X lacks.
I also think it's overboard to get $90 watercooling. Just get an air cooler for half or a third of the price.
Someone else mentioned the cooler too, so that's out for sure. To be honest, I never really thought about graphics in the traditional sense, but I do need something for at least Jellyfin transcoding, and maybe a small LLM. Would it be better to get a dedicated GPU or a CPU with integrated graphics?
People build Jellyfin media servers out of NASes that have complete garbage computing power. It really doesn't take much unless you have like four 4K TVs and fell for the Dolby Atmos scam.
The gold standard for transcoding these days is a newer gen Intel CPU with integrated graphics.
The integrated GPU on those Intel chips can do Intel QuickSync (QSV), which will handle a dozen streams at once without breaking a sweat.
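For what it's worth, getting QSV working in a containerized Jellyfin is mostly a matter of exposing the host's render device. A minimal sketch, assuming Docker on Linux with the iGPU at /dev/dri and illustrative host paths:

```shell
# Run Jellyfin with the host iGPU passed through for QSV/VA-API transcoding.
# /dev/dri is where the render node lives on most Linux hosts; the /srv
# volume paths here are just example locations for config and media.
docker run -d \
  --name jellyfin \
  --device /dev/dri:/dev/dri \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media \
  -p 8096:8096 \
  jellyfin/jellyfin
```

After that you still need to pick QuickSync as the hardware acceleration backend in Jellyfin's playback settings, or it will keep transcoding on the CPU.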
Dedicated GPUs are obviously going to be more powerful. I've never run AI before, so maybe someone else can weigh in on the requirements for it, but I can say for sure that an iGPU is good enough for Jellyfin transcoding. It also depends on your budget: do you want to spend the extra money just for a dedicated GPU?
If you go the iGPU route, I think Intel is recommended over AMD, but you should probably do extra research on that before buying.
My AMD iGPU works just fine for Jellyfin. LLMs are a little slow, but that's to be expected.
Yeah, I'm not sure if I really want to deal with an LLM. It would mostly be for Home Assistant, so nothing too crazy.
I have a very similar NAS I built. The Home Assistant usage doesn't really even move the needle. I'm running around 50 Docker containers and chilling at about 10% CPU.
The LLM for Home Assistant, or just HA in general, doesn't move the needle? My HA is also pretty low-key, but I was considering the idea of running my own small LLM to use with HA to get off of OpenAI. My current AI usage is very small, so I wouldn't need too much on the GPU side, I'd imagine, but I don't know what's sufficient.
Just Home Assistant doesn't move the needle. The LLMs hit the iGPU hard, and my CPU usage spikes to 70-80% when one is thinking.
But the LLMs I'm running are through Ollama and InvokeAI, each with several different models, just for fun.
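For anyone wondering what's sufficient, the cheapest experiment is to stand Ollama up in a container on whatever you have now and see how a small model feels before spending GPU money. A sketch assuming Docker; this runs CPU-only, and the model tag is just one example of a small model from the Ollama library:

```shell
# Start the Ollama server (CPU-only here; GPU passthrough needs extra
# flags that depend on your vendor). Models persist in the named volume.
docker run -d \
  --name ollama \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama

# Pull a small model and chat with it interactively to gauge the
# tokens-per-second you'd get for Home Assistant on this hardware.
docker exec -it ollama ollama run llama3.2
```

If responses are tolerable on CPU, a modest iGPU or dedicated card only gets better from there; if they're painful, that tells you your GPU budget before you buy.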