NSFW Anniversary Build Complete: Introducing Wintermute!

This was my first build and a fairly ambitious one, but it was an absolute blast; so much so that I've already resolved to build another in the near future. (Hence the name: with a high enough combined PassMark score, I'm fairly confident they'll find an alien AI. Otherwise, I'll experiment with failover clustering or something else ridiculously excessive for a home lab server.)

| Part | Model | Qty | Price | Subtotal |
| --- | --- | ---: | ---: | ---: |
| **Bones** | | | | |
| Motherboard | Gigabyte GA-7PESH2 | 1 | $200 | $200 |
| Processors | Xeon E5-2620 v2 | 2 | $13.50 | $27 |
| RAM | 64 GB Samsung 1Rx4 PC3-12800R (8 × 8 GB) | 4 | $24.50 | $98 |
| Power Supply | Corsair RM850x | 1 | on hand | $0 |
| Case | Rosewill L4500 | 1 | $140 | $140 |
| **Other Parts** | | | | |
| CPU Coolers | Arctic 33 | 2 | $30 | $60 |
| Case Fans | Arctic F8 PWM PST CO | 2 | $9 | $18 |
| Case Fans | Arctic P12 PWM PST | 6 | $6 | $36 |
| 2.5" to 3.5" drive adapter | random 2-pack | 1 | $6 | $6 |
| SATA Cables | random 2-pack | 1 | $10 | $10 |
| SAS Cable | SFF-8087 to SFF-8482 | 1 | $14 | $14 |
| Fan Extension Cables | 4-pack | 1 | $7 | $7 |
| **Storage** | | | | |
| SSD (cache) | Inland 240 GB | 1 | $25 | $25 |
| SSD (VMs + Dockers) | Inland 240 GB | 1 | $25 | $25 |
| SAS Drives (pool) | HGST 3 TB | 4 | $23 | $92 |
| USB Flash Drive | SanDisk Cruzer Fit 32 GB | 1 | $9 | $9 |
| **Luxury Add-Ons** | | | | |
| Fan Hub | Electop hub | 1 | $11 | $11 |
| Video Card | VisionTek HD 5450 | 1 | $35 | $35 |
| USB Card | Vantec 4-port USB 3.0 PCIe w/ internal 20-pin | 1 | $20 | $20 |
| **Total** | | | | **$833** |

Obligatory Glamour Shot:

Current setup:
Running Windows Server 2016 Datacenter for a lab environment. Four more 2016 DC VMs are running on it and it's not breaking a sweat. Seriously, I ran Prime95 and other CPU torture tests to break it in, and the CPUs didn't even touch 40 °C. I'm going to wipe the whole thing and install UnRAID in the next month, as soon as I can shut down the lab.

Also pictured are two Cisco 1941 routers, two Cisco 3560E switches, and an Ubuntu 18.04 server on an old laptop (tucked away in the space under the switches). Servers hang off one switch and clients off the other. When I tear down the lab in the next month, I'll cut down to one switch and one router and just run two separate VLANs via router-on-a-stick (RoaS). While the expansion slots on the right side of the switches are 10GbE, adapting them to 10GBASE-T for the PESH2 is crazy expensive, and I get perfectly fine throughput by configuring the PESH2's two onboard 1Gb NICs as a team and running them into an EtherChannel on the switch.
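For anyone curious, the switch/router side of that setup looks roughly like this. This is only a sketch: the interface names, VLAN IDs, addressing, and the LACP channel-group number are my assumptions for illustration, not the actual lab configs.

```
! 3560E: bundle two gig ports into an EtherChannel for the PESH2 NIC team
! (LACP shown; a static "mode on" channel also works with a static team)
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode access
 switchport access vlan 10
!
! 3560E: trunk up to the router for RoaS
interface GigabitEthernet0/24
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
! 1941: router-on-a-stick subinterfaces, one per VLAN
interface GigabitEthernet0/0.10
 encapsulation dot1Q 10
 ip address 10.0.10.1 255.255.255.0
!
interface GigabitEthernet0/0.20
 encapsulation dot1Q 20
 ip address 10.0.20.1 255.255.255.0
```

On the Windows side, Server 2016 can build the matching team with `New-NetLbfoTeam` using `-TeamingMode Lacp` (or `Static` if the switch side is `mode on`).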

I didn’t take any pictures of the inside of the build (it looks the same as everyone else’s)…

Future mods:

  1. Processor upgrades: I bought 2620v2s to keep the price down while I got it up and running and worked out any bugs. Next up are some more beastly processors.

  2. A large drive or two for the parity: future-proof pool expansion.

  3. More SAS drives for the pool: as needed/convenient.

  4. SATA3 Card + SSDs: unassigned drives for VMs.

  5. Liquid cooling: at current prices for used Asetek AIOs, there’s really no reason to not do it, especially if I decide to turn the processors up to eleven.

  6. A proper rack for all of it and my home theater gear, but that’s Craigslist-dependent and I’ll probably keep spending the rack set-aside money on server parts. The Lack Rack will perform fine for the time being (it’s braced with a lot of aluminum angle).

FINALLY…

I can’t thank @JDM_WAAAT and the rest of the ServerBuilds community enough. Buying a server with this sort of power and storage new would likely exceed $5000 and it wouldn’t be nearly as fun.



If you're looking to build another server and you run Plex, you should consider splitting it off entirely from your main server via this guide: [Guide] Hardware Transcoding: The JDM way! QuickSync and NVENC