[Guide] NAS Killer 6.0 - DDR4 is finally cheap

Would that work?

Go to your Unraid Shares screen, click Compute All, and wait a while. Once it’s done, post a screenshot.
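If you’d rather check from a terminal, here is a minimal sketch that does roughly the same thing, assuming Unraid’s default /mnt/user mount point for user shares (plain `du -sh /mnt/user/*` also works):

```python
# Rough CLI equivalent of the Shares screen "Compute All" button,
# assuming user shares live at Unraid's default /mnt/user path.
import os

SHARE_ROOT = "/mnt/user"

def dir_size(path):
    """Sum file sizes under `path`, skipping anything unreadable."""
    total = 0
    for root, _dirs, files in os.walk(path, onerror=lambda e: None):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished or unreadable; skip it
    return total

for share in sorted(os.listdir(SHARE_ROOT)):
    path = os.path.join(SHARE_ROOT, share)
    if os.path.isdir(path):
        print(f"{share}: {dir_size(path) / 1e9:.1f} GB")
```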

It just crashed again and now it’s starting back up. It never finished the parity check or the Compute All.

Cancel the parity check and perform the actions that I suggested. It shouldn’t take that long.

What is that PCIe card?

Looks like a 10Gb card that can be removed for troubleshooting purposes.

Doing it now!

10Gb NIC - 10Gb SFP+

Here it is!

I am about a week out from receiving two Supermicro X11SCA-F motherboards from eBay. I ordered during Chinese New Year, so there was about a 20-day delay on shipping.

NICE!!! The board is amazing!! I am thinking of getting more to use as Proxmox servers and replacing what I have today!
What is going to be your setup?
It has its little quirks like every board out there, but overall it’s GREAT!
A piece of advice: if you are going to use Unraid, make sure you break out the network bond and separate the IPMI port from it. I just noticed yesterday that Unraid was using the IPMI port to access the system, so I split it out. It really surprised me that that actually worked; I thought that port was for IPMI only.

They are replacing two Proxmox servers with dual Xeon E5-2678v3 and E5-2680v4 processors. Each has a TrueNAS VM (primary/backup server). I’m thinking of going bare metal and just using TrueNAS Scale on both. I have migrated my other Proxmox VMs to an HP290 machine that I upgraded with 32GB of RAM and an i5-8500 CPU.

Can someone dumb down PCIe slots for me? I went with the X11SCA-F motherboard. I have one NVMe in the upper M.2 slot and an Adaptec card. I want to add a second NVMe and a 10Gb NIC. Would that still let me add a video card down the road if I needed it?

Any recommendations on a NIC?

Here are the slots on the motherboard:

  • 1 PCIe 3.0 x4
  • 1 PCIe 3.0 x1
  • 2 PCIe 3.0 x16 (run at x16/NA or x8/x8 when both are populated)
  • 1 PCI 32-bit 5V (legacy)

The PCIe x4 slot can run a 10Gb card. The two x16 slots, when both are populated, run at x8 speeds and can hold a video card and the Adaptec card. The x1 slot you could use for a USB or SATA controller card. You can ignore the 32-bit 5V PCI slot unless you need to run a legacy piece of hardware.

I run HP 560SFP+ and HP NC523SFP 10Gbps SFP+ network cards in my Proxmox servers.

Hi there,
If you look at my picture above, I have 2 NVMe drives and 1 10G card. I still have one x16 slot left over for a video card if needed in the future.
Something to keep in mind with this motherboard: the PCIe x4 slot shares lanes with the M.2 slot at the bottom of the board, so if you use one, the other gets disabled. Just keep that in mind.

No, you would run out of PCIe lanes. You need to choose between the second NVMe and the 10G card if you want to also have a video card.

On the X11SCA-F you have the following PCIe slots:
2x PCIe x16 slots (the two long metal-clad ones)
1x PCIe x4 slot (open-ended)
1x PCIe x1 slot (open-ended)

Additionally you have:
2x M.2 PCIe x4 connectors (NVMe SSD slots)
1x U.2 PCIe x4 connector

What you need to keep in mind is that each M.2 slot shares PCIe lanes with another connector, meaning you can use one or the other but not both at the same time.

The top M.2 slot shares lanes with the U.2 connector.
The lower M.2 slot shares lanes with the PCIe x4 slot.

That means if you want to use both M.2 slots for NVMe SSDs, then you will not be able to use the PCIe x4 slot.

You should be able to do everything you described: 2x NVMe SSDs, the HBA, a 10Gbps NIC, and even a future video card, without issue.

  1. Install the two NVMe SSDs in the two M.2 slots. This will disable the PCIe x4 slot.
  2. Install the HBA and the 10Gbps NIC in the two x16 slots.

This gets you everything you outlined in your initial build and leaves the PCIe x1 slot open.
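If it helps to see the lane-sharing rules written out, here is a tiny sketch that encodes them and checks a proposed build against them. The slot names are just illustrative labels I made up, not Supermicro’s designations:

```python
# The X11SCA-F lane-sharing pairs described above, made explicit.

SHARED_PAIRS = [
    ("m2_top", "u2"),         # top M.2 shares lanes with the U.2 port
    ("m2_lower", "pcie_x4"),  # lower M.2 shares lanes with the x4 slot
]

def conflicts(populated):
    """Return the shared pairs where both connectors are populated."""
    return [p for p in SHARED_PAIRS if p[0] in populated and p[1] in populated]

# The build suggested above: two NVMe drives, HBA + 10Gb NIC in the x16 slots.
build = {"m2_top", "m2_lower", "pcie_x16_1", "pcie_x16_2"}
print(conflicts(build))                  # [] -> no conflicts
print(conflicts(build | {"pcie_x4"}))    # [('m2_lower', 'pcie_x4')] -> invalid
```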

The x1 and x4 PCIe slots are “open-ended,” meaning they are open at the back, allowing you to install a physically larger x4, x8, or even x16 card. In the x1 slot the card will only get one PCIe lane, but for PCIe 3.0 that is still 8Gbps (quick math below), which is far more than the HBA will ever use. Technically it is a bottleneck for a 10Gbps NIC, but in real-world usage 8Gbps is still very fast.

That means if you do decide to add a video card, you can:

  1. Move the HBA to the x1 slot.
  2. Install the video card in the vacated x16 slot.
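For anyone who wants the math behind that 8Gbps figure, a quick back-of-the-envelope calc (approximate; real transfers lose a bit more to packet overhead):

```python
# PCIe 3.0 throughput per slot width: 8 GT/s per lane
# with 128b/130b line coding.

GT_PER_LANE = 8.0    # PCIe 3.0 signaling rate, GT/s
CODING = 128 / 130   # 128b/130b encoding efficiency

for lanes in (1, 4, 8, 16):
    gbps = GT_PER_LANE * CODING * lanes
    print(f"x{lanes:<2}: ~{gbps:5.1f} Gb/s (~{gbps / 8:.2f} GB/s)")

# x1 : ~ 7.9 Gb/s -> slight bottleneck for a 10Gb NIC, fine for an HBA
#                    feeding a handful of spinning disks
# x4 : ~31.5 Gb/s -> plenty of headroom for a 10Gb NIC
```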

Why do you need the video card? Is it just for transcoding? Or do you have another use case for it?

For the NIC, what switch are you using? Do you have the option to use SFP+, or are you already set up for 10Gbps over copper Ethernet cable?

I don’t know anything about your use case; do you even need 10Gbps?

I use a 2x 2.5Gbps PCIe x1 card and I never come close to saturating it in real-world use.

I don’t know what I don’t know here, so I’m very open to suggestions. My media server education is pretty much all from this thread and Space Invader’s videos.

My thought on the video card was for transcoding. I’m running Emby and always had issues with 4K content on my previous Windows 10-based system.

My current switch is a few-year-old Netgear that only runs at 1Gb anyway. Most of my devices stream over 5GHz Wi-Fi, so maybe 10Gb is overkill? I am slowly adding Cat6 around the house for hard-wired connections.

So long as your CPU has an iGPU, you don’t need a discrete GPU for transcoding. The Intel Quick Sync Video accelerator included in all their iGPUs is going to be the best hardware for video transcoding.
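If you want to sanity-check Quick Sync outside Emby, here is a minimal sketch, assuming a Linux host with the iGPU driver loaded, an ffmpeg build with QSV support, and placeholder file names:

```python
# Quick Sync smoke test: confirm the iGPU render node exists, then run
# a hardware-accelerated transcode through ffmpeg's QSV encoder.
import os
import subprocess

# The iGPU normally appears as a DRM render node under /dev/dri.
nodes = os.listdir("/dev/dri") if os.path.isdir("/dev/dri") else []
if not any(n.startswith("renderD") for n in nodes):
    raise SystemExit("No render node found - is the iGPU driver loaded?")

subprocess.run([
    "ffmpeg", "-hwaccel", "qsv",
    "-i", "input.mkv",      # placeholder source file
    "-c:v", "h264_qsv",     # Quick Sync H.264 encoder
    "-c:a", "copy",         # leave the audio untouched
    "output.mkv",           # placeholder destination
], check=True)
```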

In general, I would say that if you don’t have a specific use case that requires 10Gbps networking, it is not going to be worth doing an expensive network upgrade to 10Gbps.

Even if you do decide to go 10Gbps I would recommend going with SFP+ modules rather than RJ45 copper Ethernet. 10Gbps over copper is very expensive, takes a lot of power, generates a lot of heat, and tends to be pretty finicky about cabling and connectors. With SFP+ you can use fiber optics for long distances and DAC cables for short connections. Both are much cheaper and more power efficient than 10Gbps RJ45 Copper Ethernet.

For a home media server, Gigabit is typically plenty. If you want to upgrade your network or are starting out fresh, I would recommend going with 2.5Gbps; it is a cheap and easy upgrade that more than doubles your bandwidth. It works with your existing cabling; you just need a new switch and maybe a couple of NICs and you are good to go.
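To put rough numbers on what each tier buys you, assuming the network link is the bottleneck and the disks can keep up:

```python
# Idealized time to move a 50 GB file at each link speed,
# ignoring protocol overhead.

FILE_GB = 50  # example transfer size

for link_gbps in (1, 2.5, 10):
    minutes = FILE_GB * 8 / link_gbps / 60
    print(f"{link_gbps:>4} Gb/s: ~{minutes:.1f} min")

# ~6.7 min at 1 Gb/s, ~2.7 min at 2.5 Gb/s, ~0.7 min at 10 Gb/s
```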

Starting one of my NAS Killer builds. Waiting on my M.2 Intel drives to arrive.

I received my X11SCA-F motherboard and planned to install a cooler compatible with the LGA 1151 socket. However, I found that the board has an extra backplate obstructing the screw holes. I’m unsure how to proceed, since the backplate seems firmly attached and may require force to remove. I have included a screenshot for reference.

Did anybody encounter such an additional backplate and/or have an idea how to proceed?

I used the Thermalright Assassin X120 R SE cooler with the existing backplate. I used the LGA 1151 spacers that came with the cooler and attached the brackets using the included screws.