It's A Digital Disease!

This is a sub that aims to bring data hoarders together to share their passion with like-minded people.

The original post: /r/datahoarder by /u/miscawelo on 2025-07-02 02:37:37.

I'm in the process of moving my ZFS pool from my Proxmox server to a dedicated TrueNAS (Community Edition) server, and since I'm upgrading to larger drives, I'm also testing different pool configurations.
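
For context, this is roughly how the test pool is set up (I actually did it through the TrueNAS UI, and the pool/dataset names and device paths here are placeholders):

  # Create a pool with a single mirrored vdev
  zpool create tank mirror /dev/sda /dev/sdb

  # Dataset for large media files; 1M recordsize is a common tuning
  # for big sequential files like MKVs
  zfs create -o recordsize=1M tank/media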

So far performance has been as expected with my test config, but I'm seeing some behavior I'm unsure about. For testing I created a pool with a single mirrored vdev (Toshiba N300 drives: 7200 rpm, 512 MB buffer, if it matters), some datasets, and different share types. The issue appears on the NFS share: when transferring a single large file (a ~120 GiB MKV) from Proxmox, I initially get the expected speed of around a gigabit, but then see consistent dips in both network and disk I/O.
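
This is how I've been watching the dips while the transfer runs (pool name is a placeholder again):

  # Per-vdev throughput, refreshed every second
  zpool iostat -v tank 1

  # Per-disk latency/utilization, to check whether both mirror
  # members dip together (iostat comes from the sysstat package)
  iostat -x 1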

I've been digging through docs and forum posts to learn about vdev types and performance tuning. I don't think I'd benefit much from most of the special vdev types (maybe a special metadata vdev, but even that seems unnecessary for my use case).

That said, I've read that a SLOG (a dedicated device for the ZIL) might help in this specific case, since NFS writes are synchronous by default.
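
One test I've seen suggested, to check whether sync writes are actually the bottleneck before buying any SLOG hardware (dataset name is a placeholder, and sync=disabled risks losing in-flight data on power loss, so it's strictly for testing):

  # Confirm the dataset honors sync requests (NFS issues them by default)
  zfs get sync tank/media

  # Temporarily treat all writes as async, re-run the transfer,
  # and see whether the dips disappear
  zfs set sync=disabled tank/media

  # Revert when done -- don't leave this off for real data
  zfs set sync=standard tank/media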

My main questions:

  • Are these performance dips expected with just a single mirrored vdev? Will adding the other 3 mirrors (for a total of 4) smooth things out?
  • Would a SLOG help in this specific scenario? If not, what else might help optimize large file transfers over NFS? (I've sketched the SLOG command I'd expect to use right after this list.)
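
If a SLOG does turn out to be worth it, this is roughly what I'd expect to run (hypothetical device paths; ideally mirrored, power-loss-protected SSDs):

  # Attach a mirrored log vdev (SLOG) to the existing pool
  zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1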

Below are the network and disk I/O graphs during the transfer. Please let me know if more info is needed; any insight is helpful. Thanks in advance!

[Graph: Disk I/O during the transfer]

[Graph: Network activity during the transfer]
