Welcome to my first time ever building a PC, let alone a server/NAS-type box. I’ve been chatting with a bunch of you on Discord and asking question after question that I could probably have solved with some Google-fu, but you’ve been amazing and so helpful. I decided to build a NAS Killer 4.0 based on my server’s needs, and all of the parts are almost here. In follow-up posts, I’ll talk a bit about what I need from this box and provide updates as I put it together. So without further ado, let’s get started.
1. What I need this box to do (Server Wish List/Needs):
This box needs to primarily be my own personal NAS, but also a server with a varied list of applications that need a ton of uptime. Almost all of these were running on my old Pi cluster and will now live on here.
Since I’m finally building my own server, I want to make sure I have:
Near 100% uptime (My wife can only tolerate “sorry hun, the lights don’t work because the server is broken” so many times - and I’m out of times)
Solid kernel support (x86 solves this mostly)
Data integrity (I’ve been using ZFS on Ubuntu)
Ability to upgrade as we go (both in terms of drives and PCIe)
Minimum 1Gb Ethernet
Local network storage for my Surface Pro 7 (Samba access)
Ability to play nice with the Pis (for fun)
So here’s the plan:
OS - Ubuntu
Prefer Ubuntu because of previous experience
Supports ZFS-on-Root
Want to keep compatibility with my existing architecture (detailed later)
Must be Linux-based
Data Storage Backend - ZFS
Going to use ZFS (pools already created, data integrity features are a huge plus)
Finally have access to ECC RAM to improve ZFS data integrity checking
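For the curious, here’s roughly what the ZFS side looks like on Ubuntu. The pool name, vdev layout, and disk IDs below are placeholders for illustration rather than my actual (already-built) pools:

```
# ZFS ships in Ubuntu's main repos
sudo apt install zfsutils-linux

# Example mirrored pool using stable /dev/disk/by-id paths
# ("tank" and the disk IDs are placeholders, not my real layout)
sudo zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-DISK_ONE /dev/disk/by-id/ata-DISK_TWO

# A couple of sane dataset defaults
sudo zfs set compression=lz4 tank
sudo zfs create tank/media

# Regular scrubs are where ZFS + ECC really earn their keep
sudo zpool scrub tank
sudo zpool status tank
```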
File Sharing / Backup
Samba shares for direct access to files from Windows/Mac client machines
NFS shares for direct access to files from other nodes in Docker Swarm
TFTP for direct access to files from PXE Boot Devices
rclone for remote backup
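Here’s a rough sketch of the sharing layer; the share name, paths, and subnet are placeholders, and the real options will get tuned as I go:

```
# Samba: minimal share stanza for the Surface Pro (added to /etc/samba/smb.conf)
#   [media]
#       path = /tank/media
#       read only = no
#       valid users = me
sudo smbpasswd -a me          # Samba users need their own password entry
sudo systemctl restart smbd

# NFS: export the same dataset to the Swarm nodes (line added to /etc/exports)
#   /tank/media 192.168.1.0/24(rw,sync,no_subtree_check)
sudo exportfs -ra

# rclone: push the important datasets offsite ("remote:" is whatever backend gets configured)
rclone sync /tank/important remote:nas-backup --progress
```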
Apps
Docker (Already using Docker Swarm)
Pi-Hole
Wireguard
Media Apps (Docker)
Plex - 2-5 streams
The Rs (Sonarr/Radarr/Lidarr)
Transmission
Tautulli
Ombi
“Cloud”-like Apps (Docker)
Nextcloud
Wordpress
BitwardenRS
Home Automation Apps (Docker)
Home Assistant
System/Cluster Monitoring Apps (Docker)
Prometheus
Grafana
Alertmanager
cAdvisor
Node Exporter
Docker Daemon Exporter
Docker Swarm “Backend”
Traefik (Reverse Proxy)
MariaDB (DB Backend for Bitwarden, Home Assistant, Nextcloud and Wordpress)
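To give a flavor of how the Swarm “backend” hangs together, here’s a stripped-down stack in the spirit of what I run. It assumes Traefik v2 in Swarm mode, and the domain, service, and network names are placeholders rather than my real config:

```
# One-time: initialize the swarm and an overlay network for Traefik-routed services
docker swarm init
docker network create --driver overlay traefik-public

# Minimal stack file (placeholder domain; the real stack adds TLS, MariaDB, the apps, etc.)
cat > stack.yml <<'EOF'
version: "3.8"
services:
  traefik:
    image: traefik:v2.4
    command:
      - --providers.docker.swarmMode=true
      - --providers.docker.exposedByDefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - traefik-public
    deploy:
      placement:
        constraints:
          - node.role == manager   # Traefik needs the manager's docker.sock
  whoami:
    image: traefik/whoami
    networks:
      - traefik-public
    deploy:
      labels:                      # in Swarm mode, Traefik reads deploy.labels
        - traefik.enable=true
        - traefik.http.routers.whoami.rule=Host(`whoami.example.lan`)
        - traefik.http.routers.whoami.entrypoints=web
        - traefik.http.services.whoami.loadbalancer.server.port=80
networks:
  traefik-public:
    external: true
EOF

docker stack deploy -c stack.yml demo
```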
Currently, my NK4 (finally built, so it makes it into the drawing) is the black PC on the bottom.
Working backwards from the network, here’s what we’re working with and how it’s set up:
My TP-Link Archer C7 v5 router is not pictured, but it’s on the other side of the apartment in the living room. It’s an AC1750 router, so we get pretty solid speeds, but we’re limited by our 330Mbps down / 30Mbps up internet plan. It’s cheap, but we’re bottlenecked on the upload side.
Working back from the network, we have a TP-Link range extender underneath my desk, which provides an AC1750 Wi-Fi uplink to the cluster of nonsense on the left. The extender has a single 1GbE port.
So that’s the network. Now for the cluster. This project originally started as just a BeagleBone Black, but eventually grew into a small SBC cluster. Here’s what we’ve got:
An ODROID-N2 w/ 4GB of RAM - pictured underneath the Ethernet switches on the left, with a single blue Ethernet port and 4x blue USB 3.0 ports
A BeagleBone Black Rev C - pictured on the right side, in the middle of the stack of Pis, with a single blue Ethernet port
3x Raspberry Pi 3B+ with the official PoE HAT - pictured on the right, at the bottom of the stack of Pis, with a single orange PoE Ethernet port + 4x grey USB 2.0 ports
1x Raspberry Pi 4 w/ 4GB of RAM with the official PoE HAT - pictured on the right in the stack of Pis, with a single orange PoE Ethernet port + 2x blue USB 3.0 ports + 2x grey USB 2.0 ports
1x Raspberry Pi 4 w/ 4GB of RAM with a ClusterHAT v2.3 - pictured on the right in the stack of Pis, with a single blue Ethernet port + 2x blue USB 3.0 ports + 2x grey USB 2.0 ports
4x Raspberry Pi Zero connected to the ClusterHAT and the top Pi 4 - pictured on top of the stack of Pis, arranged vertically
That’s a long list. The cluster was capable of hosting all of the Docker services above, plus a pair of Pi-Holes and WireGuard. I mainly had issues with storage, which is what this project needs to solve. A bona fide NAS will finally allow Plex and Nextcloud to run continuously.
The way these boxes ran before was as follows:
The ODROID-N2 was my “master” node and ran ZFS with a separate set of USB 3.0 disk enclosures - it ran my MariaDB instance + Plex and anything that couldn’t run over NFS exports
The BeagleBone and Pi Zeros barely run anything anymore, but they originally started it all. These days they’d run a WordPress container at most, plus maybe a Tautulli container; anything that’s basically just a webserver runs fine on these little guys. One of the Pi Zeros has a temp sensor HAT that I use to gather ambient temperature data around the cluster.
The Pi 3B+s and 4s combined to run most of the heavy stuff like Nextcloud, Home Assistant, Bitwarden, the Rs, etc. One Pi ran a Pi-Hole and the other Pi 4 ran a WireGuard VPN.
The cluster’s focus was high availability of these services, so I used GlusterFS + NFS shares to spread out data so that if a node failed, the services would degrade but at least keep running. This worked after a fashion, but the central location of the MariaDB instance + the file storage killed most services whenever the ODROID went down or the USB acted up (which happened frequently).
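For anyone curious, the GlusterFS side was basically a replicated volume across the nodes, roughly like this (the hostnames, brick paths, replica count, and volume name are placeholders, not my exact setup):

```
# On each node: create a brick directory (ideally on its own partition;
# gluster will want "force" if a brick sits on the root filesystem)
sudo mkdir -p /bricks/swarm-data

# From one node, peer the others into the trusted pool
sudo gluster peer probe pi-node2
sudo gluster peer probe pi-node3

# Replicated volume: every brick holds a full copy, so one node can drop out
sudo gluster volume create swarm-data replica 3 \
    pi-node1:/bricks/swarm-data pi-node2:/bricks/swarm-data pi-node3:/bricks/swarm-data
sudo gluster volume start swarm-data

# Each node mounts the volume, and the containers bind-mount their data from it
sudo mount -t glusterfs localhost:/swarm-data /mnt/swarm-data
```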
My first batch of SATA cables were right-angle, though, so I decided to rearrange them a bit.
I’ll be updating this topic with more pictures as I go, but depending on when this mobo gets here, I’ll either have pictures up today or on Monday night.
Actually, I have no experience with either, believe it or not. Most of my experience with Linux has been with my little Pi/ODROID cluster (which I just remembered I need to fill in info on up above). For that I’ve been running mostly Raspbian Buster, some Debian Buster, and Ubuntu Focal/Groovy.
I don’t really have anything against Unraid or FreeNAS specifically; like you said, I’ve heard from many people that they’re great. I just really like the regular Linux experience, having spent 2-3 years learning how to build a small compute/Docker cluster on it.
I’ve also been hesitant about being locked into a proprietary platform (even if it’s based on Debian Linux). My biggest issue with the ODROID/Pi cluster was that it was a mix of 32-bit and 64-bit custom ARM architectures, so if the device makers/community didn’t feel like supporting a package (like ZFS), I was forced to compile it on my own and deal with the headaches of supporting it myself.
OK, the first thing to go was the stock case fan; I replaced it and the other one with Arctic P12s. I decided to go without the PWM-sharing daisy chain on these since they’re on opposite ends of the case.
Pre-running cables now. I have 8 drives total, so I’m planning to use the LSI card + 4 SATA cables. Until that second Icy Dock arrives, though, it’s best to just hold 2 cables back.
OK, back to building! I’m finally home and that pesky motherboard finally got here! I decided to go with the X9SCM-F for its port combo, plus Supermicro still seems to be supporting this board all these years later. This is the board standalone, awaiting that CPU.
Now that our CPU is in, my next job is the CPU fan. For the price, I decided to go with the Thermaltake Gravity i2; since this chip doesn’t even reach its rated 95W TDP, we should be fine. Also, this was my first time installing a CPU fan, so thanks @JDM_WAAAT for that NK4 build video, which I used for reference!
The process took longer than expected because I had to figure out that Supermicro X9 motherboards have a limited option ROM, so sas2flsh.exe fails under FreeDOS. The solution was to flash via UEFI using the board’s built-in EFI shell. This worked like a charm, and we’re now fully flashed and back to building!
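For anyone who hits the same wall, the EFI shell route went roughly like this. Treat it as a sketch rather than a guide: the firmware/BIOS filenames are the ones from the common SAS2008 IT-mode package, so adjust for whichever card and firmware bundle you’re flashing:

```
# From the EFI shell, switch to the USB stick holding sas2flash.efi and the firmware files
fs0:

# Confirm the controller is visible before touching anything
sas2flash.efi -listall

# Erase the existing firmware/BIOS, then flash the IT-mode firmware + boot ROM
# (2118it.bin / mptsas2.rom are placeholder names from the typical SAS2008 IT package)
sas2flash.efi -o -e 6
sas2flash.efi -o -f 2118it.bin -b mptsas2.rom

# Verify the new firmware took
sas2flash.efi -listall
```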
Right now? Cable storage, but I do have some hazy plans for the future.
Right now I’ve got 4x 4TB (3.5") + 2x 1TB (2.5") + 2x 240GB (2.5" SSD), with capacity for 5 more 3.5" drives. Until I can get enough 3.5" drives to fill those spaces, I don’t really see any immediate need for the 5.25" bays.
I have thought about getting a 6x 2.5" adapter for one of them, though, and setting up a bank of SSDs. Otherwise I’d likely just add more of the same 1x 3.5" + 2x 2.5" adapters.