NK4 Build Log - Migrating from a Pi Cluster to a bona fide NAS

Hi everyone,

Welcome to my first-ever PC build, let alone a server/NAS-type box. I’ve been chatting with a bunch of you on Discord, asking question after question I could probably have solved with some Google-fu, and you’ve been amazing and helpful the whole way. Based on what I need from my server, I decided to build a NAS Killer 4.0, and almost all of the parts are here. In follow-up posts I’ll talk a bit about what this box needs to do and post updates as I put it together. So without further ado, let’s get started.


1. What I need this box to do (Server Wish List/Needs):

This box primarily needs to be my personal NAS, but it also has to serve a varied list of applications that need a ton of uptime. Almost all of these were running on my old Pi cluster and will now live here.

Since I’m finally building my own server, I want to make sure I have:

  • Near 100% uptime (my wife can only tolerate “sorry hun, the lights don’t work because the server is broken” so many times, and I’m out of times)
  • Solid kernel support (x86 solves this mostly)
  • Data integrity (I’ve been using ZFS on Ubuntu)
  • Ability to upgrade as we go (both in terms of drives and PCIe)
  • Minimum 1Gb Ethernet
  • Local network storage for my Surface Pro 7 (Samba access)
  • Ability to play nice with the Pis (for fun)

So here’s the plan:

OS - Ubuntu

  • Prefer Ubuntu because of previous experience
  • Supports ZFS-on-Root
  • Want to keep compatibility with my existing architecture (detailed later)
  • Must be Linux-based

Data Storage Backend - ZFS

  • Going to use ZFS (pools already created, and the data integrity features are a huge plus; migration sketch after this list)
  • Finally have access to ECC RAM, which complements ZFS’s on-disk checksumming by protecting data in memory as well
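
Since the pools already exist, the migration is mostly an export/import, plus a scrub to confirm everything survived the trip. A minimal sketch, assuming a pool named tank (the name is hypothetical; substitute your own):

```bash
# On the old machine: cleanly export the pool before pulling the drives
sudo zpool export tank

# On the new box: scan the attached disks and import the pool by name
sudo zpool import tank

# Kick off a scrub to verify checksums across the whole pool,
# then check the result
sudo zpool scrub tank
sudo zpool status tank
```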

File Sharing / Backup

  • Samba shares for direct access to files from Windows/Mac client machines (see the sketch after this list)
  • NFS shares for direct access to files from other nodes in Docker Swarm
  • TFTP for direct access to files from PXE Boot Devices
  • rclone for remote backup
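
For the Samba and rclone pieces, the setup is only a few lines each. A minimal sketch, assuming a dataset mounted at /tank/shares, a user named me, and an rclone remote named backup (all three are hypothetical placeholders):

```bash
# Append a share definition to smb.conf (paths/names are examples)
sudo tee -a /etc/samba/smb.conf <<'EOF'
[shares]
   path = /tank/shares
   browseable = yes
   read only = no
   valid users = me
EOF

# Give the user a Samba password and restart the daemon
sudo smbpasswd -a me
sudo systemctl restart smbd

# rclone side: push the same dataset to a configured remote
rclone sync /tank/shares backup:nas/shares
```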

Apps

  • Docker (Already using Docker Swarm)
  • Pi-Hole
  • Wireguard

Media Apps (Docker)

  • Plex - 2-5 streams
  • The Rs (Sonarr/Radarr/Lidarr)
  • Transmission
  • Tautulli
  • Ombi

“Cloud”-like Apps (Docker)

  • Nextcloud
  • WordPress
  • BitwardenRS

Home Automation Apps (Docker)

  • Home Assistant

System/Cluster Monitoring Apps (Docker)

  • Prometheus
  • Grafana
  • Alertmanager
  • cAdvisor
  • Node Exporter (global-mode sketch after this list)
  • Docker Daemon Exporter
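
The exporters are the only mildly interesting bit under Swarm: Node Exporter runs as a global service so every node (Pis included) exposes host metrics on :9100 for Prometheus to scrape. A rough sketch, with the overlay network name monitoring as a placeholder:

```bash
# Shared overlay network for Prometheus and the exporters
docker network create --driver overlay monitoring

# One Node Exporter instance per node, reading the host's /proc and /sys
docker service create --name node-exporter \
  --mode global \
  --network monitoring \
  --mount type=bind,src=/proc,dst=/host/proc,ro \
  --mount type=bind,src=/sys,dst=/host/sys,ro \
  prom/node-exporter \
  --path.procfs=/host/proc \
  --path.sysfs=/host/sys
```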

Docker Swarm “Backend”

  • Traefik (Reverse Proxy; deployment sketch after this list)
  • MariaDB (DB backend for Bitwarden, Home Assistant, Nextcloud, and WordPress)
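
For reference, standing Traefik up as the Swarm-wide reverse proxy looks roughly like this with the Traefik v2 CLI (the network name proxy and the version tag are placeholders, and I’m leaving out TLS and the dashboard for brevity):

```bash
# Overlay network shared by Traefik and everything it fronts
docker network create --driver overlay proxy

# Traefik on a manager node, discovering services via their labels
docker service create --name traefik \
  --network proxy \
  --publish 80:80 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  traefik:v2.4 \
  --providers.docker=true \
  --providers.docker.swarmMode=true \
  --entrypoints.web.address=:80
```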

2. My existing setup (and general architecture):

3. The parts list:

| Type | Part | Quantity | Total Cost (incl. tax) | Comments |
| --- | --- | --- | --- | --- |
| Case | Cooler Master Elite 350 | 1 | $81.65 | 7x 3.5" slots + 4x 5.25" slots |
| PSU | Cooler Master Elite V3 500W | 1 | $0 | 75% efficiency, included w/ case |
| 5.25" Adapter | Icy Dock 5.25" to 1x 3.5" + 2x 2.5" ExpressCage | 2 | $80.54 | 2x 2.5" slots are external & hot-swappable |
| Motherboard | Supermicro X9SCM-F | 1 | $54.38 | Used, missing I/O plate |
| CPU | Intel Xeon E3-1230v2 | 1 | $58.79 | Used |
| RAM | 8GB Kingston PC3-12800E DDR3-1600MHz ECC UDIMM | 2 | $97.98 | - |
| CPU Cooler | Thermaltake Gravity i2 | 1 | $13.05 | - |
| Case Fans | Arctic P12 120mm PWM Fan | 2 | $28.76 | Positioned at opposite ends of the case |
| HBA | LSI SAS9211-4i + full-height bracket | 1 | $31.49 | Used; x4 PCIe 2.0, so ~4Gbps per SATA drive |
| SATA Cables | Relper-Lineso 18" straight SATA3 cable | 6 | $8.70 | Yellow |
| SATA Cables | Relper-Lineso 18" right-angle SATA3 cable | 6 | $8.70 | Blue |
| SATA Power | Cable Matters 15-pin SATA to 4x SATA power splitter cable, 18" | 2 | $13.05 | - |
| SAS-to-SATA Cables | CableCreation Mini SAS 36-pin male to 4x SATA 7-pin female cable | 1 | $8.70 | - |
| Total Excluding Drives | - | - | $485.79 | - |
| 2.5" SSD | Kingston 240GB A400 SATA3 | 2 | $60.94 | $126.96/TB; 500MB/s read / 350MB/s write, 7mm height |
| 2.5" HDD | Seagate Barracuda 1TB | 2 | $90.09 | $45.05/TB; had from previous cluster; up to 140MB/s, SMR, 7mm height |
| 3.5" HDD | Seagate IronWolf 4TB | 4 | $440.90 | $27.56/TB; had from previous cluster; up to 180MB/s |
| Total Including Drives | - | - | $1077.72 | - |

You should change your HDDs to price/TB, that’s what we use here. 🙂

Subscribed, waiting for more updates!

Just changed it, and cool! I’m taking a break for dinner but should have some more stuff up later tonight 🙂


Ok time for…

4. Starting the Build!

So most of the parts are here, time to start this build for real.

First thing to show up was the case which I immediately started plugging drives into:

My first batch of SATA cables turned out to be the right-angle ones, though, so I rearranged things a bit.

I’ll be updating this topic with more pictures as I go, but depending on when this mobo gets here, I’ll either have pictures up today or on Monday night.


I’m curious, was your previous experience Unraid or FreeNAS? I use Unraid a lot and it’s really great.

Actually, believe it or not, I have no experience with either. Most of my Linux experience has been with my little Pi/ODROID cluster (which, I just remembered, I still need to fill in up above). On that I’ve been running mostly Raspbian Buster, some Debian Buster, and Ubuntu Focal/Groovy.

I don’t really have anything against Unraid or FreeNAS specifically; like you said, I’ve heard from many people that they’re great. I just really like the regular Linux experience, having spent 2-3 years learning how to build a small compute/Docker cluster on it.

I’ve also been hesitant about being locked into a proprietary platform (even one based on Debian). My biggest issue with the ODROID/Pi cluster was its mix of 32-bit/64-bit custom ARM architectures: if the device makers/community didn’t feel like supporting a package (like ZFS), I was forced to compile it on my own and deal with the headaches of maintaining it myself.

You should really give Unraid a shot, then. You can have dual parity, along with the ability to mix drive sizes and add drives one at a time.


Ok, first thing to go was the stock case fan; I replaced it and the other one with the Arctic P12s. I decided to skip the PWM power-sharing daisy-chain on these since they’re on opposite ends of the case.

Next, we add the Icy Dock to hold those 2.5" SSDs (the second one gets here tomorrow).


So far so good; pre-running the cabling is coming up next.

Oh, and here’s a picture of all the parts so far, minus that pesky X9SCM-F and the second Icy Dock.

Oh, and bonus puppy: his name is Whiskey and he’s very tired.


Pre-running cables now. I have 8 drives total, so I’m planning to use the LSI card plus 4 SATA cables. Until that second Icy Dock arrives, though, it’s best to hold 2 cables back.

Ok, back to building! I’m finally home, and that pesky motherboard got here at last! I went with the X9SCM-F for the port combination, plus Supermicro still seems to support this board all these years later. Here’s the board on its own, awaiting that CPU.

For the CPU we’re going with an Intel Xeon E3-1230v2 for some good performance at a slightly lower cost than the E3-1270v2. That’s going in next.

Now that our CPU is in, my next job is the CPU cooler. On price, I went with the Thermaltake Gravity i2; since this chip’s 69W TDP is well under the cooler’s rated 95W, we should be fine. Also, this was my first time installing a CPU cooler, so thanks @JDM_WAAAT for that NK4 build video, which I used for reference!


Ok, after way too long, I finally have the memory populated, the BIOS updated, and the LSI HBA flashed!

The process took longer because I had to figure out that the Supermicro X9 motherboards have limited option-ROM space, so sas2flsh.exe fails under FreeDOS. The solution was the UEFI flashing method, using the board’s built-in EFI shell (rough command sequence below). That worked like a charm, and we’re now fully flashed and back to building!
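
For anyone who hits the same wall, here’s roughly the sequence from the EFI shell. The firmware/BIOS filenames are what I recall from the 9211-4i’s P20 IT-mode package, so treat them as an example and double-check against your own download:

```
# From the board's built-in EFI shell, with sas2flash.efi and the
# firmware files on a FAT32 USB stick (assumed to show up as fs0:)
fs0:
sas2flash.efi -listall                      # confirm the HBA is detected
sas2flash.efi -o -e 6                       # erase the flash; do NOT reboot yet
sas2flash.efi -f 2114it.bin -b mptsas2.rom  # write IT firmware + boot ROM
sas2flash.efi -listall                      # verify the new firmware version
# If the erase wipes the SAS address, re-add it from the card's sticker:
# sas2flash.efi -o -sasadd 500605bxxxxxxxxx
```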

Ok, now that we’re back on track: second Icy Dock ExpressCage installed and front cover back on. Time to start cabling things up!

Woohoo! Took a bit, but all the cables are plugged in and we’re ready to go!

Cable management isn’t my strong suit, but I did my best.


Do you have plans for the 2 open 5.25" bays?