COVID Budget Anniversary 2.0 Build

I want to first say thank you for all the great information provided on the website that guided my component selection for my budget build.

Unfortunately, due to the current COVID pandemic, some components have become few and far between, especially newer items (the case and PSU specifically). The purpose of this build was to create a virtualization server to host either Hyper-V Server or Proxmox, as cheaply as possible, while still leaving room for future upgrades.

Type          Part                            Cost
CPU           2 x E5-2630 (SR0KV)             $16.22
CPU cooler    2 x Supermicro 2U Passive       $20.00
CPU fans      2 x Nidec UltraFlow 60mm        $13.82
Motherboard   Supermicro X9DRD-LF             $54.06
Case          DeepCool MATREXX 55             $49.18
RAM           2 x Samsung PC3L-10600R 16GB    $44.13
SSD           ADATA 240GB SATA 3 2.5"         $35.16
Power supply  Thermaltake 500W                $60.53
Fans          Arctic F12 PWM PST 5 Pack       $36.12
Cable         EPS 8-pin splitter              $6.69
Total                                         $335.91
  • Prices listed account for tax and shipping charges, if any

Also installed and not included in the cost, as I already had these items:

  • PCIe M.2 Adapter with 500GB Mushkin Helix-L NVMe
  • WD 500GB Blue
  • Arctic Thermal Paste

Some things I still need to take care of include buying an I/O shield and providing active cooling for the CPUs. I had thought the listing for the motherboard included the shield, but it did not; easy fix. As for the CPU cooling, the case I selected limits the intake of the 3 front-mounted case fans to just the sides of the front panel. Under a CPU stress test, CPU2 overheated after a couple of minutes, but if the front panel is removed the CPU temps stay manageable (70 °C). I plan on installing 60mm PWM fans in front of the CPU heatsinks as a cheaper alternative.

Update: 06/05/20
I have purchased and installed 2x 60mm fans onto the 2U heatsinks (prices listed above have been updated). After running a burn test for two hours, the CPU temps stayed steady at 58 °C (ambient 24 °C) with the CPU fans spinning at 3,700 rpm and the case fans at 880 rpm.
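For anyone wanting to reproduce a similar burn test on a Linux host, here is a minimal sketch, assuming `stress-ng` and `lm-sensors` are installed (package names vary by distro):

```shell
# Load every hardware thread for two hours while watching temperatures.
# Adjust --timeout to taste; Ctrl-C the watch when done.
stress-ng --cpu "$(nproc)" --timeout 2h &
watch -n 5 sensors    # refresh coretemp/IPMI readings every 5 seconds
```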


Looks like you already have a plan for my (minor) criticisms, so good on you for that!
I’m looking forward to seeing updates for your build in the future.

Oh and, two other things -

  1. I’d personally flip your top fan and use it as an exhaust; you already have 3 intakes, so 2 exhausts would be ideal.
  2. You should try making a cardboard air duct that reduces the available area and funnels the air over the CPUs and motherboard; you should be able to reduce CPU temperatures significantly without active cooling.

My initial configuration had the top fan set up as exhaust, but since CPU2 (top) is the one experiencing the overheating issue, I figured intake would be better for now.
And good suggestion on the cardboard ducting.

This is freakin’ AWESOME!! The prices you got on these parts are really good, especially the RAM. If you stick with 16GB sticks and fill this thing out, 128GB is great. The only issue I have with this board is the single PCIe slot. I do think it might support bifurcation, so maybe you could get a riser/extension card and add another NIC if you need it at some point.

For a hypervisor, I personally like Proxmox: free as in free, ZFS support, a “fancy” Debian base, and easy to use (imo). To be fair though, I haven’t used Hyper-V, and I haven’t used Xen/ESXi/etc. in years.

I am sure the prices could go lower, but I was limited to some selections (case and PSU specifically) due to the low stock caused by the pandemic.

The single PCIe slot was also one of my initial concerns, but since my intention for this build was a low-level VE test bench of sorts, the two onboard SATA 3 connections should suffice for now. For future expansion, my intention was to utilize the single slot for an 8x HBA to run some low-cost SATA SSDs in RAID 5 or 6.

I briefly checked for a proper bifurcation card, but the ones I found were expensive enough that it would be cheaper to upgrade the motherboard than to install the PCIe expander/riser cards.

As for the hypervisor, the only reason I started off with Hyper-V is that the office is a Windows shop and Hyper-V Server, which is also GUI-less for the most part, is provided for free by Microsoft. Microsoft also has its Admin Center software for web-based management, or you can manage the deployment from another Windows server through the Hyper-V console.

I think overall the prices you got for the main components (CPU/RAM/mobo) are pretty great. I haven’t found that CPU for that price, the motherboard I found was $60+, and the RAM was $40 a stick instead of for a pair. Price-wise, you hit the gold mine.

As a test machine, I would agree. Imo, the only reason to swap the NVMe adapter would be for more networking ports or, as you mentioned, an HBA.

For bifurcation I meant more so the PCIe slot on the motherboard. Some SM boards have a riser card that splits the x16 slot into two x8 slots. I haven’t looked fully into it, but I feel like there are extensions or something that would be cheaper and able to split it. Something like this. With the space at the bottom of the case, you should be able to slide the adapter in and then plug up two PCIe devices. I looked at the motherboard manual and it mentions bifurcation, but it’s an X9DRD-iF/LF manual according to their site, so I’m not sure if it’s specific to your board, both, or just the -iF. In the BIOS it’s under the North Bridge settings.

Ah, I misread your OP and thought it was more of a question, my bad. That makes sense. Sounds like a good setup. Never messed with Hyper-V, so I might have to try it out.

The RAM sticks I purchased were used; since they are ECC, I didn’t think purchasing new at twice the price was warranted.

As for the bifurcation, checking the BIOS, the board will support x4x4x4x4, but finding a card to break a single x16 into 4x x4 at a discount may be challenging. The one I found was a $100 pre-order.

My other thought was, since the X9DRD-LF shares the -iF mainboard and BIOS firmware, to see if I can solder on the PCIe slot and associated capacitors to populate the board. I am still researching whether anyone has done this before.

I also updated the original post with information about the additional CPU fans added to the build.


That is a really, really cool breakout board…

Hi @silverkorn

I have the -iF variant of this board, and I’m having trouble getting the NVMe card to work right. Specifically, it works, but only in PCIe Gen 1 mode (4 lanes). I tried what seems like every permutation of BIOS settings with no luck. The card looks similar to yours: 4 lanes with one M.2 port.

Did you get it to work correctly at full speed? If so, did you change anything in the bios? And what card are you using?

Any help would be appreciated.


I no longer have that card installed in the machine, but from what I remember I did not have any issue getting the card to provide access to the drive, and I started with default BIOS settings.

I have since swapped that card for an Asus Hyper M.2 x16 Card V2, which lets me install 4 M.2 drives into the single PCIe slot and access each drive separately via bifurcation. However, each drive only gets 4 of the 16 lanes. Here are my current BIOS settings:

I just ran a CrystalDiskMark test on a Windows VM located on one of the drives and got reads of 8,000 MB/s and writes of 7,400 MB/s, which seem correct for the Gen 3 slot.


That seems extraordinarily high; I bet the drive actually maxes out at around 3,000 MB/s. Windows has an issue reporting drive speeds in VMs; they are usually wildly incorrect.
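For reference, the theoretical one-direction ceiling of a PCIe link is transfer rate × encoding efficiency × lanes (Gen 1/2 use 8b/10b encoding, Gen 3 uses 128b/130b). A quick awk check for an x4 link:

```shell
# Theoretical one-way bandwidth of a PCIe x4 link per generation, in MB/s.
# Gen1: 2.5 GT/s * 8/10; Gen2: 5 GT/s * 8/10; Gen3: 8 GT/s * 128/130.
awk 'BEGIN {
    printf "Gen1 x4: %4.0f MB/s\n", 2.5 * 8/10    * 4 * 1000 / 8;
    printf "Gen2 x4: %4.0f MB/s\n", 5.0 * 8/10    * 4 * 1000 / 8;
    printf "Gen3 x4: %4.0f MB/s\n", 8.0 * 128/130 * 4 * 1000 / 8;
}'
```

A Gen 3 x4 link tops out just under 4 GB/s, so sustained 8,000 MB/s reads point at the guest’s cache rather than the disk itself.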

Thanks @silverkorn .

I still can’t get either of my 2 cheapo PCIe adapters to work in anything other than Gen 1 mode, with speeds to match (~900 MB/s). And the same adapter works in Gen 2 – twice as fast – on another motherboard (an Asus, I think; it belongs to a friend). So it can’t be the adapter, can it? Those Asus Hyper cards are apparently expensive and in short supply, and I would really like to confirm that my motherboard is capable of this before I buy one.

In any case, the single M.2 slot adapter looks very simple to me; there are no microchips on it, so why would it cause this degraded state anyway?

Also: do you mind checking what your BIOS version is?

To me, it sounds more like a problem with the motherboard/BIOS config than the adapter. Which slot are you using? Pics?
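If the machine runs Linux, one way to see what the slot actually negotiated, assuming `pciutils` is installed (the device address below is just an example; find yours with `lspci`):

```shell
# LnkCap is what the slot/device is capable of; LnkSta is what was negotiated.
# 2.5 GT/s = Gen 1, 5 GT/s = Gen 2, 8 GT/s = Gen 3. Root needed for full detail.
sudo lspci -vv -s 02:00.0 | grep -E 'LnkCap:|LnkSta:'
```

If LnkCap shows 8 GT/s but LnkSta shows 2.5 GT/s, the link trained down, which points at the slot/BIOS rather than the adapter.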