this post was submitted on 05 Nov 2025

Selfhosted


Fresh Proxmox install, having a dreadful time. Trying not to be dramatic, but this is much worse than I imagined. I'm trying to migrate services from my NAS (currently docker) to this machine.

How should Jellyfin be set up, LXC or VM? I don't have a preference, but I do plan on using several Docker containers (assuming I can get this working within 28 days) in case that makes a difference. I tried WunderTech's setup guide, which uses one LXC for Docker containers and a separate LXC for Jellyfin. However, that guide isn't working for me: curl doesn't work on my machine, most install scripts fail, nano edits crash, and mounts are inconsistent.

My Synology NAS is mounted to the host, but adding mount points to the LXC doesn't actually expose the data. For example, if my NAS's media is in /data/media/movies or /data/media/shows and the host's SMB mount is /data/, choosing the LXC mount point /data/media should work, right?
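(For reference, the kind of bind mount in question is a one-line entry in the container's config; the container ID 101 here is a placeholder:)

```
# /etc/pve/lxc/101.conf -- bind the host path into the container at the same path
mp0: /data/media,mp=/data/media
```

The same thing can be done from the host shell with `pct set 101 -mp0 /data/media,mp=/data/media`. Note that on an unprivileged container the files may appear owned by nobody:nogroup until the UID/GID mapping is sorted out, which can look like the mount "isn't connecting" even when it is.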

Is there a way to pass the iGPU through to an LXC or VM without editing a .conf in nano? When I try to make the suggested edits, the LXC freezes for over 30 minutes and seemingly nothing happens, as the edits don't persist.
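(For reference, iGPU access for an LXC is usually just a couple of config lines; the container ID 101 and group ID 104 below are placeholders - 104 is the render group on a stock Debian template, but check with `getent group render` inside the container:)

```
# /etc/pve/lxc/101.conf
# Newer style (Proxmox 8.2+):
dev0: /dev/dri/renderD128,gid=104

# Older style:
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```

On recent Proxmox this can also be done without nano at all: `pct set 101 -dev0 /dev/dri/renderD128,gid=104` writes the config entry for you.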

Any suggestions for resource allocation? I've been looking for guides, or a formula to follow, for what to give an LXC or VM, to no avail.

If you suggest command lines, please keep them simple as I have to manually type them in.

Here's the hardware: Intel i5-13500, 64GB Crucial DDR5-4800, ASRock B760M Pro RS, 1TB WD SN850X NVMe

[–] LazerDickMcCheese@sh.itjust.works 1 points 1 week ago (1 children)

I solved the LXC boot error; there was a typo in the mount (my keyboard sometimes double-presses letters, which makes command lines rough).

So just to recap where I am: main NAS data share is looking good, jelly's LXC seems fine (minus transcoding, "fatal player error"), my "docker" VM seems good as well. Truly, you're saving the day here, and I can't thank you enough.

What I can't make sense of is this: I made 2 NAS shares, "A" (main, which has been fixed) and "B" (currently used Docker configs). "B" is correctly connected to the Docker VM now, but "B" refuses to connect to the Proxmox host, which I think I need in order to move Jellyfin's user data and config. Before I go down the road of trying to force the NFS or SMB connection, is there any easier way?
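(For reference, the NFS route on the Proxmox host is usually just an fstab entry; the NAS IP and export path below are placeholders - check the Synology's Shared Folder > NFS Permissions panel for the real export path and make sure the Proxmox host's IP is allowed there, which is the usual reason a share that works for one client refuses another:)

```
# /etc/fstab on the Proxmox host -- nasip and /volume1/docker are placeholders
nasip:/volume1/docker  /mnt/nas-docker  nfs  vers=4,soft  0  0
```

Then `mkdir -p /mnt/nas-docker` and `mount /mnt/nas-docker` to bring it up.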

[–] curbstickle@anarchist.nexus 2 points 1 week ago (1 children)

Great!

Transcoding we should be able to sort out pretty easily. How did you make the LXC - manually, with one of the Proxmox community scripts, etc.?

For transferring all your JF goodies over, there are a few ways you can do it.

If both are on the NAS (I believe you said you have a Synology), you can open http://nasip:5000/ in a browser and just copy around what you want, provided it's stored on the NAS as a mount and not inside the container. If it's only inside the container, it's going to be trickier: something like mounting the host as a volume on the container, copying to that mount, then moving things around. Even Jellyfin says it's complex - https://jellyfin.org/docs/general/administration/migrate/ - so be aware that it could be rough.
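Once you can reach both sides, the copy itself is simple. A minimal sketch, with throwaway dirs standing in for the real paths (your NAS mount as the source, and the LXC's data dir, /var/lib/jellyfin on a Debian-packaged install, as the destination):

```shell
# Throwaway dirs for illustration; substitute your NAS mount (SRC)
# and the LXC's /var/lib/jellyfin (DST) in practice.
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir -p "$SRC/data"
echo 'demo' > "$SRC/data/library.db"

cp -a "$SRC/data/." "$DST/"   # -a preserves ownership, modes, timestamps
ls "$DST"                     # -> library.db
```

`cp -a` matters because Jellyfin's data dir is permission-sensitive; if the IDs don't line up afterwards, follow with `chown -R jellyfin:jellyfin` on the destination.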

The other option is to bring your Docker container over to the new VM, but then you've got a new complication: you'd need to pass the GPU through entirely, rather than just giving the LXC access to the host's resources, which is much simpler IMO.

[–] LazerDickMcCheese@sh.itjust.works 2 points 1 week ago (1 children)

I used the community script's LXC for Jellyfin. That said, the Docker compose I've been using is great, and I wouldn't mind just transferring that over 1:1 either... whichever has the best transcoding and streaming performance. Either way, I'm unfortunately going to need a bit more hand-holding.

[–] curbstickle@anarchist.nexus 2 points 1 week ago* (last edited 1 week ago) (1 children)

LXC is going to be better, IMO. And we can definitely get hardware acceleration going.

So first, let's do this from the console of the lxc:

ls -la /dev/dri/

Is there something like card0 and renderD128 listed?

[–] LazerDickMcCheese@sh.itjust.works 1 points 1 week ago (1 children)

LXC is fine with me, the "new Jellyfin" instance is mostly working anyway. It just has a few issues:

  1. Config and user data from "old Jellyfin" isn't there and doesn't want to connect. I tried connecting my NAS's Docker data to the Proxmox host like the previous mount, but it doesn't like it.
  2. The aforementioned HWA errors (I'm guessing I checked an incorrect box)
  3. Most data from the NAS isn't showing up. I added all the libraries and did a full rescan and reboot, but most of the media still isn't there. I'm hoping passing the config data over will fix that.

And yes, I see card0 and renderD128 entries. 'vainfo' shows VA-API version: 1.20 and Driver version: Intel iHD driver...24.1.0

[–] curbstickle@anarchist.nexus 2 points 1 week ago (1 children)

Ok, let's start with that rendering - seeing those is good! You should only need to add some group access, so run this:

groups jellyfin

The output should just say "jellyfin" right now. That's the user that's running the Jellyfin service. So let's go ahead and...

usermod -a -G video,render jellyfin
groups jellyfin

You should now see the jellyfin user as a member of jellyfin, video, and render, which gives the jellyfin user access to the GPU for hardware acceleration.

Now restart Jellyfin and try again!

[–] LazerDickMcCheese@sh.itjust.works 1 points 1 week ago (1 children)

Ok, consider it done! My concern is this section of the admin settings:

I followed Intel's decode/encode specs for my CPU, but there's no feedback on my selection. I'm still getting "Playback failed due to a fatal player error."

[–] curbstickle@anarchist.nexus 1 points 1 week ago (1 children)

What do you have above that?

There should be a Hardware Acceleration dropdown, with a Device field below it. Since you have /dev/dri/renderD128, that should go in the Device field, and the Hardware Acceleration dropdown should be set to QSV or VAAPI (if one doesn't work, try the other).

[–] LazerDickMcCheese@sh.itjust.works 1 points 1 week ago* (last edited 1 week ago) (1 children)

QSV and '/dev/dri/renderD128'. I'll switch to VAAPI and see... Edit: no luck, same error

[–] curbstickle@anarchist.nexus 1 points 1 week ago (1 children)

Just checked one of mine: VAAPI is what I'm set to, with acceleration working. That box is a 7th or 8th gen or so, so VAAPI should do the trick for you.
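A quick way to check VAAPI outside Jellyfin, from the LXC's console, is a one-off ffmpeg encode (input.mp4 is a placeholder; point it at any file in your library):

```
ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v h264_vaapi output.mp4
```

If that succeeds, the device and driver are fine and the remaining problem is in Jellyfin's codec settings; if it fails, the error message is usually much more specific than "fatal player error".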

[–] LazerDickMcCheese@sh.itjust.works 1 points 1 week ago (1 children)

So should I be disabling some hardware decoding options then?

[–] curbstickle@anarchist.nexus 1 points 1 week ago (1 children)

Might be a better question for someone who knows JF's ffmpeg configs, but I think the HEVC option up top should be checked and the range-extended HEVC at the bottom unchecked. You should have AV1 support too.

Worst case, start with h264 and move down the list.

[–] LazerDickMcCheese@sh.itjust.works 1 points 1 week ago (1 children)

Great point, actually; time for c/jellyfin, I think. Would you mind helping me with transferring the config and user data? Is "NFS mount NAS docker data to host" > "pass NFS to jelly LXC" > "copy data from NAS folder to LXC folder" the right idea?

[–] curbstickle@anarchist.nexus 2 points 1 week ago (1 children)

Also may be one for c/jellyfin, but what I'd see is whether you can leverage a backup tool: export and download, then import, all from the web UI. I know there's a built-in backup function, and I recall a few plugins that handled backups as well.

Seems to me that might be the most straightforward method - but again, probably better asked in a more Jellyfin-focused comm. I've moved that LXC around between a bunch of machines at this point, so snapshots and backups via Proxmox Backup Server are all I need.
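As a concrete example, a one-off backup of the container from the Proxmox host looks like this (the container ID 101 and the storage name "local" are placeholders for your setup):

```
vzdump 101 --mode snapshot --storage local --compress zstd
```

Restoring that archive on another node recreates the whole LXC, Jellyfin config included, which is what makes moving it between machines painless.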

[–] LazerDickMcCheese@sh.itjust.works 2 points 1 week ago (1 children)

Yeah, it seems like transplanting LXCs, VMs, and Docker containers is fairly pain-free... where I really shot myself in the foot was starting on an underpowered NAS, and network transfers are clearly not my friend.

I'm not familiar with the backup stuff, but I remember hearing it was added recently. I'll look into it; thanks for the recommendation.

You've taught me a lot in just a couple of days. The overwhelming, anxiety-inducing part of Proxmox for me is still passing data through from outside devices. VMs aren't bad at all, but everything else seems like a roll of the dice as to whether the machine will allow the connection or not.

[–] curbstickle@anarchist.nexus 2 points 1 week ago (1 children)

It definitely is, especially if you get a cluster going. FWIW, my media is all on a Synology NAS (well, technically two, but one is a backup) that I got used through work, so your setup isn't the wrong approach (IMO) by any stretch.

What it comes down to is how you look at the connection: a VM is a full-fledged system, all by its lonesome, that just happens to live inside another computer. A container, though, is an extension of the host, so think of it less like a VM and more like resource sharing, and you'll start to see where the different approaches have different advantages.

For example, I have transcode nodes running on my Proxmox cluster. If I ran JF as a VM, I'd need another GPU for that - but since both JF and my transcode node are containers, they get to share that resource happily. The right answer will always depend on individual needs, though.

And glad I could be of some help!

In case you want to keep following, I did make that post in c/jellyfin