A new build, some OTJ lessons I wish I had known, and looking to the H1110

TL;DR: 1. Be careful with consumer SSDs on LSI HBAs. 2. The Rosewill server chassis doesn’t have space for a random card if you’re using an ATX motherboard. 3. Consider the Scythe Kaze Flex for your fan wall.

I’m moving from NK 4 to a new build, and consolidating a lot of my devices along the way.

OS: I bought Unraid today. My backups were scattered but mirrored to a degree on Windows, Xpenology, Synology, and FreeNAS.
Chassis: Rosewill 15-bay (the old model, not the 2021 refresh)
Mobo: ASRock ATX board with two x16 slots that can split to x8/x8. This is rare in the consumer market because most buyers want a GPU at x16, so you see x16/x4 frequently. I wanted x8/x8 and got a deal on this board.
CPU: i5-8600
Cards: Dell H330 flashed to stock LSI firmware. This is SAS3. A ConnectX-3 sits in the other x16.
Network: The motherboard has 2.5GbE built in. I’m using a separate 2.5GbE USB dongle for a direct connection to my current backup box. The ConnectX-3 runs 10/40GbE to my switch.
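A direct backup link like that is usually just a tiny point-to-point subnet with no switch involved. A minimal sketch, assuming the USB dongle enumerates as eth1 on both machines (interface names and addresses here are hypothetical, adjust to your setup):

```shell
# Hypothetical point-to-point config for a direct 2.5GbE backup link.
# Assumes the USB NIC shows up as eth1 on each end; a /30 leaves room
# for exactly the two hosts.

# On the main server:
ip addr add 10.10.10.1/30 dev eth1
ip link set eth1 up

# On the backup box:
ip addr add 10.10.10.2/30 dev eth1
ip link set eth1 up

# Verify the link from the server:
ping -c 3 10.10.10.2
```

Since both ends are on the same /30, traffic to the backup box goes straight over the dongle without touching the main network.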

The dilemma is shoe-horning tech that I’ve already bought into the fray. I bought an A2U8X25S3DPDK. It’s a SAS3 8x 2.5" cage that fits into 2x 5.25" bays (kinda). Cooling it is an issue with the bitdeals 2.5" SAS spinners. In the Rosewill, I removed the center disk cage and set this in its place. It’s not at all flush; it’s just sitting there. I removed the Arctic 120mm right behind it because it doesn’t have enough static pressure to pull air, and replaced it with a Scythe Kaze Flex. I’d recommend swapping all three fans in the fan wall for these.

Now, a few “I wish I had known that” things.

First: LSI HBAs and consumer SSDs don’t play well for TRIM. I found this out last night. The SAS3008 chipset (like my Dell’s) is apparently better, but the SSD allegedly still needs specific on-device TRIM capabilities (DRAT, RZAT) for it to work. There isn’t really a way to know ahead of time whether a consumer drive has those features; Reddit threads only cover a handful of SSDs that do or don’t. Unraid may have its own TRIM issues as well, according to the forums.
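You can at least check a drive you already own: on ATA devices, `hdparm -I` reports the TRIM capability lines, and grepping them tells you whether the drive claims RZAT (read zeroes after TRIM) or only DRAT (deterministic read after TRIM). A sketch, using a captured sample of `hdparm -I` output so the parsing itself is demonstrable (on a live system you’d pipe `hdparm -I /dev/sdX` in as root instead):

```shell
# Sample excerpt of `hdparm -I` output for a SATA SSD (stand-in for the
# real command, which needs root and a physical disk):
sample='
	   *	Data Set Management TRIM supported (limit 8 blocks)
	   *	Deterministic read ZEROs after TRIM
'

# "Deterministic read ZEROs after TRIM"  -> RZAT (what the HBA wants)
# "Deterministic read data after TRIM"   -> DRAT only
if printf '%s\n' "$sample" | grep -q 'Deterministic read ZEROs after TRIM'; then
    echo "RZAT supported"
elif printf '%s\n' "$sample" | grep -q 'Deterministic read data after TRIM'; then
    echo "DRAT only"
else
    echo "no deterministic TRIM reported"
fi
```

For the sample above this prints `RZAT supported`. A drive that reports neither line is the kind that tends to misbehave behind an LSI HBA.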

Second: If I populate my motherboard with 2x NVMe, it leaves me with 4 SATA ports (out of 6). For our type of monster builds, getting SATA ports isn’t too much of a problem because we can add HBAs. I knew this going in.

The Rosewill holds 15 drives. I took out the center cage, which leaves me with 10 bays. I used a PCIe mining riser to hook my 9207 up to an x1 slot, and it works. But placing a card like that in the Rosewill is complicated. It can fit in a few spots, but it would block airflow. It could sit in front of or behind the fan wall, parallel to it, but then it just blocks that fan. Mounting the riser isn’t easy either, because the connection with the card is flimsy and they are designed for vertical placement.

Enter the H1110. I was eBaying for Adaptec cards and found the IBM H1110. It’s an older 4i SAS card, and a fraction of the size of a regular LSI card. For $35, I bought one in the hope that it can be placed in the Rosewill more easily. I’ll post pics once it arrives and I finish experimenting.

Meanwhile, I’ll likely cobble together a TrueNAS system as cold storage in the background, in an old Antec tower (circa 2002).

What I wish I had done differently: considered a tower build to make use of the 2x 5.25" cage. I still can, but I need to consolidate before adding more hardware to my room. I’m still hoping the new Rosewills come down in price, and then I’ll buy another one. I’d stick with 4U; the 2Us don’t look to have enough fans.

Skip the idea of adding “more flash” to a backup strategy. I didn’t realize the TRIM issue, and although I was only going to use a limited amount of SSD, it’s still a bummer. Now if I want longevity from SSDs, I’ll need to go with enterprise SSDs. Not a problem last year; this year the prices are horrific.

Last, I wish I had bought all this shit in February before prices of literally everything increased.

As a note, the info about TRIM and SSDs is very well documented in a variety of places, including several threads in these forums.

Yes, I see that now. But with everything there is to absorb in the course of a day, there’s only so much to retain.

100%. Just making sure you were aware. If you’re going all-flash, I would love to see some benchmarks.

No, not all flash. Considering 2x NVMe, and who knows what SATA SSDs. Hoping to score some SAS3 SSDs, but we’ll see.

I’m not doing anything beyond what we do here. Not like /r/homelab, where they run bizarre DBs at home and things like that.

Updates, as I’m finishing my build.

The H1110 works great in a PCIe x1 slot. Recommended if you have a niche use like mine.

I would not buy the Intel 8x 2.5" cage again. It’s not too useful beyond keeping 2.5" SSDs connected with three cables (2x SAS and 1x 4-pin power). If cable management is a problem, this fixes it, and that’s all it fixes.

Thanks to bitdeals.tech, I have 4x 1.8TB 2.5" drives. I didn’t bite on their latest SSD products, but I found an old 800GB SAS SSD on eBay (it’s slow) and a 200GB SAS SSD (a Hitachi rebadged as a Sun).

The 1.8TB spinners are an absolute steal in today’s market. I had also bought a pack of 300GB drives to test as cache, and in mirrored mode I could max out their speed (220MB/s in a Windows Explorer copy/paste). Going forward I’ve switched to NVMe and the SSDs for cache.

Attached is a printout of spinner benchmarks in Unraid. You can see the 1.8TBs performing pretty well for the bulk of the test.
