Replacing 2 Servers with a single new server

I posted this on reddit and was told to come post here.

I am replacing my current setup that I built in late 2009.

Goals:

  • Reduce Power Consumption
  • Simplify Setup (Moving from 2 Servers to 1)
  • Increase compute power (can’t currently run Plex, etc.)
  • Roughly 10 year lifespan replacing components as they fail (like my last build)

Current Setup:

Node 1:

  • ESXi 6.5
  • Intel i7 860
  • 2 drives configured in RAID 1 used as the boot volume.
  • All VM storage is served over iSCSI from node 2.
  • 3 VMs:
    • Windows XP VM (mostly used to configure things that require Windows)
    • pfSense
    • Linux VM running openHAB (home automation) and 8 or so containers (Grafana, InfluxDB, SageTV, Mosquitto, UniFi, frontail).

The machine isn’t powerful enough to handle Plex streams, and I am running into memory pressure issues.

Node 2:

  • OmniOS
  • AMD Athlon II X4 630
  • 8GB of ECC RAM.
  • 20 bay hot swap case with 2 interior bays.
  • 6 × 1 TB Samsung spinning disks (7200 RPM) (raidz2 storage pool)
  • 2 × 640 GB Western Digital Black drives (zfs mirror) for VMs/iSCSI
  • 2 × Western Digital laptop drives (rpool zfs mirror)
  • 1 × 1 TB WD Green (dumping ground).

Proposed new setup:

  • Moving to Proxmox for VMs with ZFS for storage on a single machine, reusing my current OmniOS box, which has 20 hot-swap bays. I am not a data hoarder, so I am just doubling the size of my raidz2 tank (rough pool layout sketched below, after the parts list).

  • CPU: AMD Ryzen 7 3700X 3.6 GHz 8-Core Processor

  • Motherboard: ASRock Rack X470D4U

  • Storage Pool: 6 × WD Red 2 TB drives (raidz2)

  • Storage HBA: 2 LSI 9211-8i

  • System Pool: 2 Kingston A400 120 GB 2.5" Solid State Drives (zfs mirror)

  • Log/Var Pool: 2 WD 80GB Scorpio Blue

  • VM/Container Pool: 2 Samsung 860 Evo 500 GB 2.5" Solid State Drives (zfs mirror)

  • Surveillance Drive (New Requirement): 1 WD 8TB Purple drive (no mirror)

  • Power Supply: Corsair SF 450 W 80+ Platinum

  • RAM: 32GB ECC Memory (2x16 with room to upgrade)
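
For concreteness, here is the rough pool layout I have in mind. Pool and device names below are placeholders (I would use /dev/disk/by-id paths in practice), and the Proxmox installer would create the system pool on the Kingston mirror itself:

    # main tank: 6x 2TB WD Red in raidz2
    zpool create tank raidz2 sda sdb sdc sdd sde sdf

    # VM/container pool: the two 860 Evos mirrored
    zpool create vmpool mirror sdg sdh

    # log/var pool: the two Scorpio Blues mirrored, with /var/log moved
    # onto it (existing logs would need to be migrated first)
    zpool create varpool mirror sdi sdj
    zfs create -o mountpoint=/var/log varpool/log

    # surveillance drive: single 8TB Purple, no redundancy
    zpool create surv sdk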

Areas of concern:

  • It’s been a long time since I built a PC/server, so my skills are a bit rusty when it comes to making sure everything plays well together.
  • The storage HBAs are a bit “dated”, but anything newer gets expensive really quickly.
  • I will probably need to add another LAN card, as I had a lot of trouble doing macvlan with Docker on the host network adapter (example after this list). That will consume all my PCIe slots. I am assuming that if I need to expand again, 16-port HBAs will be cheaper and I can free a slot that way.
  • ZFS on non-enterprise SSDs… the internet is full of horror stories, and some people saying “nah, it’s fine.” I would like SSDs for the power consumption aspect, but enterprise-grade SSDs are really expensive.
  • Given those non-enterprise SSD concerns, should I go back to laptop spinners for my boot pool? I could also add two laptop spinners just for /var/log, since most of the concern seems to be that logging will wear out non-enterprise SSDs.
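
On the macvlan point above, this is the kind of setup that gave me trouble, plus the host-side shim workaround I have seen suggested in place of a second NIC (the interface name and addresses are examples, not my actual network):

    # typical macvlan network for containers
    docker network create -d macvlan \
      --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
      -o parent=enp4s0 lan_macvlan

    # by design the host can't reach macvlan containers through the parent
    # NIC; the workaround is a host-side macvlan "shim" with a route to the
    # range reserved for containers
    ip link add macvlan0 link enp4s0 type macvlan mode bridge
    ip addr add 192.168.1.250/32 dev macvlan0
    ip link set macvlan0 up
    ip route add 192.168.1.192/27 dev macvlan0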

Any feedback on the proposed build and comments on my areas of concern would be appreciated.

I think any build that requires DDR4 memory is going to throw your costs out of whack. Having done multiple builds here, I think if I were to build another I would just buy this:

https://www.theserverstore.com/SuperMicro-4U-CSE-846-24-Bay-SAS2-BP-w-X9DRi-F2x-6-Core-E5-2620-2Ghz-IT-MODE-W-Rails_p_925.html

throw some DDR3 RAM in it ($20 for 16GB sticks, $8 for 8GB sticks), add my drives, and be done with it. As is, this will run everything you have listed there, and if you ever need more, you can drop in upgraded processors for about $80 that would probably double the compute.

From a software standpoint I would use ESXi or Proxmox: add a storage VM via FreeNAS, a pfSense VM, and Windows/Linux VMs for your Docker containers, and be done.
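
If you go the FreeNAS storage VM route on Proxmox, the usual approach is to pass the HBA through to that VM so ZFS sees the raw disks. Something like this (the VM ID and PCI address are examples; check lspci, and IOMMU has to be enabled in the BIOS):

    # find the HBA's PCI address
    lspci | grep -i LSI

    # pass it through to the storage VM (VM ID 100 is an example)
    qm set 100 -hostpci0 01:00.0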

Unraid is an option, but I didn’t care for it.

I appreciate the suggestions. Unfortunately, I don’t think that meets my goal of reducing my current power consumption.

Moving to a dual LGA2011 v1/v2 Xeon build in a Rosewill RSV-L4500 chassis is going to be your best choice for consolidating down to a single machine that can run at lower power at idle. You can also save power by getting rid of your smaller HDDs and going with denser 8TB/10TB drives.

The new Ryzen 3000 series is a great chip, but the cost-to-value of a whole build isn’t going to beat what you’re going to get with a dual LGA2011 v1/v2 build.

Here are some references to look into:
https://blog.linuxserver.io/2019/07/16/perfect-media-server-2019/

In this video you’ll see the creator of ServerBuilds.net talk about the steps he’s taken to reduce power consumption and swapping out his smaller sized drives for 8TB/10TB.

What’s your target wattage? They have some low-power LGA2011 processors like the E5-2630L v2, which run about $25 each and sip only 60W TDP. Also, I think it’s important to distinguish that the Supermicro build has 1200W power supplies but uses a very small fraction of that. My 836 build, with the highest-TDP LGA2011 processors and all drive bays full, runs 400W and is never really idle. With much lower-TDP processors and fewer drives you would be looking at considerably less.

If you are thinking you are going to find a box that runs at 100W yet has enough cores to run 6–8 VMs, you are likely setting yourself up for failure.

Hi Raize1 -
My understanding of power supplies is that unless you are getting Platinum-or-better units, they are extremely inefficient when run well below their rated load. I would like to be around 100W. Here are some back-of-the-napkin calculations for what I have:

  • AMD Processor (65W TDP)
  • Motherboard (??? I can’t find any info)
  • Tank drives (27 Watts combined)
  • Surveillance Drive (5.3 watts)
  • VM/Container SSDs (5 watts combined)
  • Boot Drives (4 watts combined)
  • Log Drives (3.2 watts combined)
  • RAM (internet says average 4 watts per stick) (8 watts combined)

That gets me to roughly 117 watts (sum below). What parts am I not accounting for? I suppose fans, etc.?
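
Checking the math (same numbers as the list above; the motherboard is excluded since I couldn’t find a figure):

    # CPU + tank + surveillance + VM SSDs + boot + log + RAM
    echo '65 + 27 + 5.3 + 5 + 4 + 3.2 + 8' | bc
    # -> 117.5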

I don’t know if that efficiency breakdown applies the same to these server PSUs. They are all Gold at a minimum, and the SQ model that I actually upgraded to is Platinum. If you are concerned about the 1200W PSUs, you can get the non-SQ PSUs (700W–900W) for $15, or $55 for the SQ ones. I would also compare how many years’ worth of energy savings it will take to recoup the price difference between the Supermicro build and the Ryzen (rough illustration below). Best of luck whichever way you go.
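
To put rough numbers on the efficiency question (the efficiency figures here are illustrative assumptions, not measurements of these PSUs):

    # wall draw = DC load / efficiency, for a ~100W load
    echo 'scale=1; 100 / 0.70' | bc   # -> 142.8 W at the wall if only 70% efficient
    echo 'scale=1; 100 / 0.92' | bc   # -> 108.6 W at the wall at 92% efficient
    # the ~34W gap is what you'd weigh against the hardware price difference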

Also, I previously mentioned that my server used 400W; that’s actually the output from my UPS, which also includes a 24-drive shelf and 2 switches (1 PoE). The server by itself is at around 300W.

65W TDP is not the maximum power draw of the CPU. It’s a measure of heat output in watts. (AMD and Intel do not specify how this is measured.) That means peak power draw can be well above the TDP number.

A torture test on that Ryzen shows it running at 105W, and 12.7W at idle.