
I have a few ZFS ZPOOLs configured:

```
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
backup   1.81T  1.41M  1.81T        -         -     0%     0%  1.00x    ONLINE  -
storage  21.8T  7.47T  14.3T        -         -     0%    34%  1.00x    ONLINE  -
tank     87.3T  19.2T  68.1T        -         -     0%    22%  1.00x    ONLINE  -
```

...but when I enter a directory ("cd /tank") it takes about two seconds or longer, and a directory listing is just as slow. However, once everything "warms up", it's all instant.

What might be causing this?

I'm using a Norco RPC-4220 enclosure with LSI/Avago 9211-8i cards in IT mode. The drives are WD Ultrastar DC HC550s (in one array) and WD Reds (in another array); they connect to the backplane over SATA, and the backplane connects to the cards over SAS (I think?).

I don't think the drives themselves are spinning down, because they don't seem to support it:

```
hdparm -B /dev/disk/by-id/ata-WDC_WD40EFZX-68AWUN0_WD-WX12DB0LN94R

/dev/disk/by-id/ata-WDC_WD40EFZX-68AWUN0_WD-WX12DB0LN94R:
 APM_level = not supported
```
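To rule out spin-down more directly, I believe hdparm can also report the drive's current power state (same device path as above; this is just a status check):

```
# Report the drive's current power mode: "active/idle" vs. "standby".
# If this says "standby" right before a slow listing, spin-up delay is the cause.
hdparm -C /dev/disk/by-id/ata-WDC_WD40EFZX-68AWUN0_WD-WX12DB0LN94R
```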

The only other thing I can think of is that my server's RAM is about 70% full, which might be causing problems?
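If it's not spin-up, my other guess is the ZFS ARC: with RAM that full, cached metadata may be getting evicted, so the first cd/ls has to go to disk. Here's how I was planning to check (assuming the OpenZFS userland tools, which ship arc_summary, are installed):

```
# Summarize ARC size, target, and hit rates.
arc_summary | less

# Or pull the raw counters: current ARC size vs. its configured maximum.
awk '$1 == "size" || $1 == "c_max"' /proc/spl/kstat/zfs/arcstats
```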

 

I've purchased some recertified drives (from serverpartdeals.com) and I was researching how to test them before using them. (I know that some people are OK with just using them as-is, but I'd like to run a test if possible, ideally one that isn't incredibly long.)

badblocks sounds like the standard approach, but Arch Linux doesn't recommend it, and I'm also unsure how long it would take on 16 TB drives. I saw a post where 8 TB took almost 4 days, so I'm guessing these would take over a week (assuming no crashes, etc.).
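For reference, this is the destructive run people usually mean (a sketch; /dev/sdX is a placeholder). It writes and verifies four patterns, which is why it takes days, and -b 4096 is needed on large drives to stay under badblocks' 32-bit block-count limit:

```
# DESTRUCTIVE write-mode test: four patterns (0xaa, 0x55, 0xff, 0x00),
# each written and then read back, i.e. roughly eight full passes over the disk.
badblocks -wsv -b 4096 /dev/sdX
```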

Instead, Arch Linux suggests encrypting the drive (cryptsetup), filling it with zeros, and then reading it back. Is this sufficient, or at least acceptable?
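If I'm reading the Arch wiki right, the procedure is roughly this (a sketch, not something I've run; /dev/sdX and the "checkdrive" mapping name are placeholders):

```
# Map the disk through a throwaway encryption layer; zeros written through
# it land on the platters as effectively unique ciphertext.
cryptsetup open --type plain --cipher aes-xts-plain64 \
    --key-file /dev/urandom /dev/sdX checkdrive

# Fill the mapped device with zeros (dd ends with "No space left on device").
dd if=/dev/zero of=/dev/mapper/checkdrive bs=1M status=progress conv=fsync

# Read it back: the only acceptable outcome is cmp reporting EOF on the device.
cmp -b /dev/zero /dev/mapper/checkdrive

cryptsetup close checkdrive
```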

There's also a thread here that talks about filling an encrypted drive with random bytes and then checking the SHA-256 hash on readback. That sounds better than the previous approach, since every byte written could be unique, but I'm not sure.
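Something like this is what I think that thread means (again a sketch; bash-specific because of the process substitution, /dev/sdX is a placeholder, and I've skipped the encryption layer since writing urandom directly seems equivalent if the data is already random):

```
# Device size in bytes, so the write and the readback hash exactly the same span.
SIZE=$(blockdev --getsize64 /dev/sdX)

# Write pseudorandom data, hashing the stream on its way to the disk.
head -c "$SIZE" /dev/urandom \
  | tee >(sha256sum > write.sha256) \
  | dd of=/dev/sdX bs=1M iflag=fullblock conv=fsync

# Read the whole disk back and compare hashes.
head -c "$SIZE" /dev/sdX | sha256sum > read.sha256
diff write.sha256 read.sha256 && echo "readback matches"
```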

 

I'm using an old Norco RPC-4220 enclosure with two LSI 9211-8i cards and I'd like to purchase some WD HC550 drives but the ones I'm looking at have the power disable "feature".

How can I use these drives in my enclosure?

I can't use a Molex-to-SATA adapter with the drive bays, and I definitely don't want to modify the drives themselves.

The backplanes are powered by Molex connectors, and Molex only carries 5V and 12V, so presumably the 3.3V PWDIS pin (pin 3 of the SATA power connector) never gets driven and the drives will just work as-is, but I'm not sure?

[Photos: drive bay and backplanes]

I guess I could cover the power-disable pin with Kapton tape (or similar), but I'm hoping they'll either work as-is or that I can easily (?) modify my backplane to support them.

 

I need some advice on replacing my existing 16 TB array (10x 2TB in RAIDZ2). I don't think I need a ton of storage, so I'm looking at roughly 3x what I have (~50-60 TB). Here's what I'm considering (serverpartdeals.com):

| Size | Model | Type | Price | Price/TB | No. drives | Total price | Usable storage (RAIDZ2) | Total price/TB |
|------|-------|------|-------|----------|------------|-------------|-------------------------|----------------|
| 16TB | HC530 | recertified | $157 | $9.8/TB | 5 | $785 | 48 TB | $16.3/TB |
| 18TB | X18 | refurbished | $161 | $8.9/TB | 5 | $805 | 54 TB | $14.9/TB |
| 18TB | X20 | recertified | $179 | $9.9/TB | 5 | $895 | 54 TB | $16.5/TB |
| ~~14TB~~ | ~~HC530~~ | ~~refurbished~~ | ~~$128~~ | ~~$9.1/TB~~ | ~~6~~ | ~~$768~~ | ~~56 TB~~ | ~~$13.7/TB~~ |
| 14TB | HC530 | recertified | $144 | $10.2/TB | 6 | $864 | 56 TB | $15.4/TB |
| 16TB | HC550 | recertified | $157 | $9.8/TB | 6 | $942 | 64 TB | $14.7/TB |

* The 14TB HC530 has limited availability in stock.
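(The usable-storage column is just the RAIDZ2 arithmetic, ignoring metadata/slop overhead; for example, the 6x 16TB row:)

```
# RAIDZ2 keeps data on (N - 2) of the N drives:
echo $(( (6 - 2) * 16 ))   # -> 64 (TB, before filesystem overhead)
```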

I'd like to continue using ZFS (RAIDZ2), and if I understand correctly, RAIDZ expansion is coming to OpenZFS soon, so I'm assuming I could grow the array later if needed.
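Assuming the feature ships the way I've seen it discussed (I haven't run this, and the pool/vdev names are hypothetical), expansion would mean attaching one new disk at a time to the existing raidz vdev:

```
# Grow an existing raidz2 vdev by one disk; "raidz2-0" is the vdev name
# shown by `zpool status`, and /dev/sdX is the new drive.
zpool attach mypool raidz2-0 /dev/sdX
```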

Can anybody give me any advice?

I've never used enterprise drives before, and I've also never purchased refurbished or recertified drives, but they're a lot cheaper than buying new and/or WD Red, etc.

I'm also unsure of the differences between Exos and Ultrastar, but I think the X18 is older, and so is the HC530. All the drives listed have a 2-year warranty, which I guess is sufficient.