this post was submitted on 08 Aug 2023
225 points (95.2% liked)

Selfhosted


I can't say for sure- but, there is a good chance I might have a problem.

The main picture attached to this post shows a pair of dual bifurcation cards, each with a pair of Samsung PM963 1T enterprise NVMes.

It is going into my r730XD. Which... is getting pretty full. This will fill up the last empty PCIe slots.

But, knock on wood, my r730XD supports bifurcation! LOTS of bifurcation.

As a result, it now has more HDDs and NVMes than I can count.

What's the problem, you ask? Well. That is just one of the many servers I have lying around here, all completely filled with NVMe and SATA SSDs....

Figured I would share. Seeing a bunch of SSDs is always a pretty sight.

And- as of two hours ago, my particular Lemmy instance was migrated to these new NVMes completely transparently too.

top 50 comments
[–] Decronym@lemmy.decronym.xyz 82 points 2 years ago* (last edited 2 years ago) (2 children)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters | More Letters
NVMe | Non-Volatile Memory Express interface for mass storage
PCIe | Peripheral Component Interconnect Express
SATA | Serial AT Attachment interface for mass storage
SSD | Solid State Drive mass storage

4 acronyms in this thread; the most compressed thread commented on today has 3 acronyms.

[Thread #13 for this sub, first seen 8th Aug 2023, 21:55] [FAQ] [Full list] [Contact] [Source code]

[–] steal_your_face@lemmy.ml 15 points 2 years ago (1 children)
[–] 21Cabbage@lemmynsfw.com 9 points 2 years ago

Fantastic bot, honestly.

[–] 21Cabbage@lemmynsfw.com 4 points 2 years ago
[–] infinitevalence@discuss.online 22 points 2 years ago (1 children)

I don't see any issues!

/me hides his 16x 4TB 12G SAS drives.....

[–] brygphilomena@lemmy.world 7 points 2 years ago (2 children)

I think I'm at 7x 18TB drives. I'm slowly replacing all the smaller 8TB disks in my server. Only 5 more to go. After that it's a new server with more bays and/or a JBOD shelf.

[–] infinitevalence@discuss.online 1 points 2 years ago

The SAS drives are all SSDs. I also have 8x 12TB in spinning rust, and an LTO robot, though it's not currently in service.

[–] maxprime@lemmy.ml 10 points 2 years ago (1 children)

If that’s a problem then I don’t want it to be solved.

[–] xtremeownage@lemmyonline.com 2 points 2 years ago (2 children)

It's only a problem when you get the electric bill! (Or the wife finds your eBay receipts.)

[–] I_Miss_Daniel@kbin.social 4 points 2 years ago (1 children)

I doubt these use much power compared to their spinning rust antecedents.

[–] xtremeownage@lemmyonline.com 4 points 2 years ago

I meant my general electric bill. My server room averages 500-700 watts.

[–] steeev@midwest.social 2 points 2 years ago (1 children)

Was curious how many watts this machine pulls? Also curious if you had ever filled it with spinning disks - would flash be less power hungry?

[–] xtremeownage@lemmyonline.com 2 points 2 years ago

This one averages around 220-250 watts.

It's completely full of spinning disks. Flash would use less power, but would end up being drastically more expensive.

[–] Millie@lemm.ee 9 points 2 years ago (2 children)

I dream of this kind of storage. I just added a second M.2 with a couple of TB on it, and the space is lovely, but I can already see I'll fill it sooner than I'd like.

[–] xtremeownage@lemmyonline.com 6 points 2 years ago (1 children)

I will say, it's nice not having to nickel and dime my storage.

But, the way I have things configured, redundancy takes up a huge chunk of the overall storage.

I have around 10x 1T NVMe and SATA SSDs in a Ceph cluster. 60% storage overhead there.

Four of those 8T disks are in a ZFS striped mirror / RAID 10. 50% storage overhead.

The 4x 970 EVO / EVO Plus drives are also in a striped mirror ZFS pool. 50% overhead.

But, still PLENTY of usable storage, and- highly available at that!
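If you want to sanity-check that, here's a rough back-of-the-envelope sketch (Python; the drive sizes are assumed from the pool descriptions above, and real usable space will vary with Ceph replication settings and ZFS metadata):

```python
# Back-of-the-envelope usable capacity for the pools described above.
# Sizes and overhead fractions come from the comment; they're estimates,
# not measured values.

pools = {
    "Ceph (10x 1T NVMe/SATA, ~60% overhead)": (10 * 1.0, 0.40),
    "ZFS striped mirror (4x 8T)":             (4 * 8.0, 0.50),
    "ZFS striped mirror (4x 1T 970 EVO)":     (4 * 1.0, 0.50),
}

for name, (raw_tb, usable_frac) in pools.items():
    print(f"{name}: {raw_tb:.0f}T raw -> ~{raw_tb * usable_frac:.1f}T usable")
```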

[–] krolden@lemmy.ml 1 points 2 years ago* (last edited 2 years ago) (1 children)

Any reason you went with a striped mirror instead of raidz1/z2 (RAID 5/6)?

[–] xtremeownage@lemmyonline.com 2 points 2 years ago

The two ZFS pools are only 4 devices. One pool is spinning rust, the other is all NVMe.

I don't use RAID 5 for large disks, and instead go for RAID 6 / z2. Given z2 and striped mirrors both have 50% overhead with only 4 disks, striped mirrors have the advantage of being much faster: double the IOPS, and faster rebuilds. For these particular pools, performance was more important than overall disk space.
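To put numbers on that trade-off, here's a quick sketch of the usable fraction at different vdev widths (a simplified model; it ignores ZFS padding, slop space, and metadata):

```python
# Usable fraction for raidz2 vs. striped mirrors at various vdev widths.
# Simplified model: ignores ZFS padding, slop space, and metadata.

def raidz2_usable(n_disks: int) -> float:
    # raidz2 spends two disks' worth of space on parity
    return (n_disks - 2) / n_disks

def striped_mirror_usable(n_disks: int) -> float:
    # every disk has exactly one mirror partner
    return 0.5

for n in (4, 6, 8, 10):
    print(f"{n} disks: raidz2 {raidz2_usable(n):.0%}, "
          f"striped mirrors {striped_mirror_usable(n):.0%}")
```

At 4 disks the capacity is a wash, so the mirrors' doubled IOPS and faster resilvers cost nothing; past 4 disks, raidz2 pulls ahead on space.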

However, before all of these disks were moved from TrueNAS to Unraid, there was an 8x8T Z2 pool, which worked exceptionally well.

Cripes, I was stoked I managed to upgrade from 4x 2TB to 4x 4TB recently.

[–] loudWaterEnjoyer@lemmy.dbzer0.com 6 points 2 years ago (1 children)

Is your problem that you are bragging about your drives?

[–] xtremeownage@lemmyonline.com 1 points 2 years ago (1 children)

I'm out of room to add more drives!

Every one of my servers is basically completely full on disks. I need more servers.

I need some drives

[–] joel@aussie.zone 4 points 2 years ago (1 children)

Love this. Apart from hosting an instance, what are you using it for? Self-cloud?

[–] xtremeownage@lemmyonline.com 5 points 2 years ago (1 children)

I host a few handfuls of websites, some Discord bots.

I hoard Linux ISOs. I use it for general purpose learning and experimentation.

There is also Kubernetes running, source control, and a bit of everything else.

[–] tinysalamander@lemmy.world 1 points 2 years ago (1 children)

Amateur data hoarder here; teach me your ways

[–] xtremeownage@lemmyonline.com 1 points 2 years ago

Backups backups backups.

Anything you don't want to lose, follow the 3-2-1 rule: three copies of your data, on two different types of media, with one copy off-site.

Snapshots / RAID are not backups.

Also, Unraid is fantastic for handling bulk media. ZFS is fantastic for keeping things safe (and fast).

And Ceph is great for squeezing 20k IOPS out of 6 million IOPS worth of enterprise SSDs!

[–] Heggico@lemmy.world 4 points 2 years ago (1 children)

I'm confused. Why do those cards have a heatsink? I needed a card like that because my motherboard did not support bifurcation, so I had to use a splitting card. The cards I know that require bifurcation don't even need a controller or heatsink; they are just wired pretty much directly to the PCIe bus.

[–] xtremeownage@lemmyonline.com 2 points 2 years ago (3 children)

I actually looked up the chip numbers, and it's a "splitter".

I, don't know WHY there is a splitter, as a splitter isn't needed, and these cards are advertised to only work on motherboards supporting bifurcation. However, there is indeed, a splitter.

The documentation is also, REALLY horribly translated.

Note: Without pcie splitter function in this host adapter (ASM1182E chip), so motherboard must support PCIe Bifurcation. Otherwise, only one M.2 PCIe SSD will be recognized. If you are not sure PCIe Bifurcation of your motherboard, please consult motherboard munufacture or contact us via amazon message

Here is the documentation for the chip itself: https://www.asmedia.com.tw/product/213yQcasx8gNAzS4/b7FyQBCxz2URbzg0

I, am not 100% certain how, where, or why it fits in there. Perhaps, it's for link power management? Or something.

But, I can confirm, these cards DO require bifurcation to be enabled. Without bifurcation, you only see the first drive.

[–] YonatanAvhar@programming.dev 4 points 2 years ago

This does seem like an issue, I can help you free up some PCIe slots if you'd like

[–] platysalty@kbin.social 4 points 2 years ago

I'll gladly take those problems out of your hands for free

[–] krolden@lemmy.ml 3 points 2 years ago (1 children)

Having a large flash pool really makes your life so much better.

Until you fill up all your space and have to buy more :p

[–] xtremeownage@lemmyonline.com 1 points 2 years ago

Hopefully that doesn't happen soon! I don't have too much room for more flash, lol.

But, I have quite a bit of available space, so there shouldn't be any concerns. Also, tomorrow, after a few adapters arrive, I'll be adding another 2x 1T flash drives to my OptiPlex 5060 SFF.

[–] webuge@lemmy.dbzer0.com 3 points 2 years ago (1 children)

Well, this seems to be a good problem to have hahah. If you need to get rid of some of those SSDs, count me in.

[–] xtremeownage@lemmyonline.com 4 points 2 years ago (1 children)

eBay! You can pick up these "used" enterprise NVMes and SSDs for CHEAP. All 10 arrived with less than 5% wear.

[–] webuge@lemmy.dbzer0.com 1 points 2 years ago

Good to know I will take a look thank you.

[–] SuperSecretThrowaway@lemmynsfw.com 3 points 2 years ago (1 children)

The only problem I see is using x8 slots instead of x16 slots for double the storage

[–] xtremeownage@lemmyonline.com 6 points 2 years ago* (last edited 2 years ago) (1 children)

What's the problem?

Each NVMe uses 4 lanes. For each of these x8 slots, they have two NVMes, for a total of 8 lanes.

The x16 slot already has 4x NVMe in it, lol. The other x16 slot has a GPU, which is located in that particular slot due to the lovely 3D-printed fan shroud.

One of the other full-height x8 slots also has a PLX switch, and is loaded with 4 more NVMes.
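For anyone tallying along, here's a quick lane-budget sketch (Python; the slot widths and drive counts are the ones described in this thread, and the 4 lanes per drive is the standard x4 NVMe link):

```python
# Quick lane-budget check for the slot loadout described above.
# Slot widths and drive counts are taken from the comments; the rest
# is illustrative.

LANES_PER_NVME = 4

slots = [
    # (slot, upstream lanes, NVMe drives)
    ("x8 bifurcation card #1", 8, 2),
    ("x8 bifurcation card #2", 8, 2),
    ("x16 quad-NVMe card",     16, 4),
    ("x8 slot + PLX switch",   8, 4),  # switch shares 8 lanes among 4 drives
]

for name, slot_lanes, drives in slots:
    need = drives * LANES_PER_NVME
    note = "fits" if need <= slot_lanes else "oversubscribed (PLX muxes it)"
    print(f"{name}: {drives} drives want {need} lanes, slot has {slot_lanes} -> {note}")
```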

[–] feitingen@lemmy.world 1 points 2 years ago (1 children)

Does the PLX introduce noticeable latency, and does it get hot?

I want to get a few, but I don't really have the airflow you do, so I'm a bit worried.

[–] xtremeownage@lemmyonline.com 1 points 2 years ago

I have not noticed any issues with it.

And- prior to Jan of this year, I used two of them in an r720XD because it didn't support bifurcation. Can't say I ran into any issues.

I also have not checked to see if it runs hot, though.

[–] Vake@lemmy.world 2 points 2 years ago (1 children)

Wondering what software you’re running to have all the storage managed and then your containers and things on top? Is it all on the 730XD?

[–] towerful@programming.dev 4 points 2 years ago

The picture of the GUI at the end is Proxmox.
Proxmox is really powerful and great for a few servers.

[–] Rollio@lemmy.ml 1 points 2 years ago (1 children)

I don’t see any problem here…

[–] xtremeownage@lemmyonline.com 1 points 2 years ago

There are no free PCIe slots left! That is a huge problem!
