Seanho 36U rack

Hi folks! It feels so vulnerable to post my rack – particularly the back side! There’s always something that could be improved! It’s grown piecemeal over the last four years or so, and supports my data science / survey analysis work.

8 nodes currently, mostly HCI with Debian + k3s and Rook/Ceph. 10/40GbE all around, and most nodes have both SAS HDDs and enterprise NVMe. I’m particularly happy that I can bring any node down for maintenance without impacting storage availability. The whole rack pulls somewhere around 700W idle, I believe. The noise level is low enough to hold a conversation next to it easily. I wouldn’t want it in my office, but my ears are pretty sensitive, and I can work on the rack for extended periods without needing hearing protection.
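For anyone wondering what “bring any node down without impacting storage” looks like in practice, this is roughly the dance on a k3s + Rook/Ceph cluster. It’s a sketch, not my exact runbook: `node3` is a hypothetical node name, and the toolbox path assumes the default Rook toolbox deployment.

```shell
# Tell Ceph not to mark out / rebalance while the node is down
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd set noout

# Evict workloads and cordon the node ("node3" is hypothetical)
kubectl drain node3 --ignore-daemonsets --delete-emptydir-data

# ...do maintenance, reboot, etc...

# Bring it back into service
kubectl uncordon node3
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd unset noout
```

With `noout` set, the OSDs on the drained node show as down but Ceph doesn’t start shuffling data, so the node can come back without a big rebalance.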


  • 2x Hyve Zeus v1 (X9DRD) for compute (usually off)
  • CSE-825 with X10SDV (D-1521)
  • R4000 with X11SSL and 1240 v5
  • L4500 with X10DRH and 2680 v4
  • L4500 with GA-7PESH2 and 2670 v2
  • L4500 with GA-7PESH2 and 2670 v2
  • Unused TI-24x and ICX6610-24F, a small portion of my “switch graveyard”

I had to leave 1U open above each of the L4500s, since mine are slightly taller than 4U.


Power cables on the left, data on the right, roughly. SlimRun 30 AWG Cat6 and OS2 fiber, with enough slack to pull each server fully out on its sliding rails while still running and connected. I should make some cable management arms so the weight doesn’t pull on the fiber so much.

There are two 1500VA UPSes on the bottom of the rack, off frame.

The 825 doesn’t have a NIC for the fiber to connect to yet; I’m waiting for a ribbon cable to help me bifurcate the X10SDV’s one and only PCIe slot, so I can have both an HBA and a fiber NIC.

Rear Networking

This view shows the ToR switch (ICX6610-48) and the mess of cables. It used to be an even bigger mess; switching from 24 AWG to 30 AWG patch cables and from DAC to OS2 helped a lot. The two Hyve servers are fed by a 4x SFP+ breakout DAC from one of the QSFP ports in back. The switch has been fan-modded: a 555 timer fakes the tach signal for the PSU and fan tray, and two Arctic F14 fans sit on top.
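For anyone curious how the fake-tach trick works: PC fans typically report speed as two open-collector pulses per revolution, so the 555 in astable mode just has to free-run at a believable frequency. A back-of-envelope check with hypothetical component values (not necessarily what I used):

```shell
# 555 astable frequency: f = 1.44 / ((R1 + 2*R2) * C)
# Fans emit ~2 tach pulses per revolution, so reported RPM = f * 60 / 2.
# R1 = 10k, R2 = 9.1k, C = 1uF are illustrative values only.
awk 'BEGIN { R1 = 10000; R2 = 9100; C = 1e-6;
             f = 1.44 / ((R1 + 2 * R2) * C);
             printf "%.1f Hz ~ %.0f RPM\n", f, f * 60 / 2 }'
```

Anything in the ballpark of a real fan’s speed keeps the PSU and fan-tray monitoring happy.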

Beneath the switch is the backside of a nearly-stock HP290 used as a head-end unit and ingress node for the cluster. No 10GbE networking on that for now.

Beneath the 290 is a cheap 15A PDU. You may have noticed in the previous photo an ancient surge protector being used as a zero-U PDU, which is a big no-no. I’m in the process of moving stuff over to the PDU.

Off frame above the ICX6610 is an SX6018 I picked up as a cheap experiment in FDR InfiniBand, but I haven’t gotten around to fan-modding it yet.

The rack has a gigabit Cat5e line to a network closet elsewhere with the modem, router (M73 running OPNsense), PoE switch, and ATA.

As with all homelabs, it will continue to change over time, and there are a number of things I would have done differently in hindsight. Feel free to ask questions here or on the Discord!

Also see my 65" workstation setup!


Are you using this for work or purely for home?

Which boxes are storage boxes?

Mostly for work; I’m freelance.

All the 2U/4U boxes have HBAs and run Ceph OSDs (SAS HDDs plus NVMe flash), so I have five OSD nodes.
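Worth noting: surviving a whole node going down depends on CRUSH replicating across hosts rather than individual OSDs. A rough sketch of checking and setting that with the Ceph CLI (the rule and pool names here are hypothetical):

```shell
# Inspect existing CRUSH rules and their failure domains
ceph osd crush rule dump

# Create a replicated rule that spreads copies across hosts
# ("replicated_host" is a hypothetical rule name)
ceph osd crush rule create-replicated replicated_host default host

# Point a pool at it ("mypool" is hypothetical)
ceph osd pool set mypool crush_rule replicated_host
```

With a `host` failure domain and size 3, any one of the five OSD nodes can drop out and every placement group still has two live replicas.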