[Guide] NAS Killer 6.0 - DDR4 is finally cheap

Neither of those builds seems particularly related to NK6


The first motherboard recommendation in the guide is the Supermicro X11SCQ. However, keep in mind that it does not support ECC memory, if that is something that matters to you.

Intel Xeon, Celeron, Pentium, and Core i3 lines all support ECC memory.

Core i5, i7, and i9 CPUs do not support ECC memory.

The optimum intersection of QSV support, ECC support, and value is likely the Core i3. If you don’t care about ECC you can get a bit more CPU muscle for not much more money with a Core i5.


Thanks for the feedback. Sorry, I meant to write that I was planning on using 4x32GB = 128GB of RAM. The costs were for 4 DIMMs.

I’m located in the US. I simply did Amazon searches for the prices I listed.

The purpose of this guide is to build a value-centric home server using previous-generation hardware that is cheap and abundant on the second-hand market. Think eBay, not Amazon.

I would recommend clicking on the links in the guide for each item and looking there. You should be able to make an equivalent build for a fraction of the price you list in your post.

Buying new from Amazon kills any value proposition; since most of the parts listed are out of production the cost new on Amazon may be greater now than it was when the parts were still in production.

If you are going to buy new, you should probably stick to current generation hardware; you will get a lot more performance for the price. However, in the case of a typical home server / NAS you will likely not see much benefit, which is why the recommendation is to stick to older used hardware.

I appreciate the tips. Perhaps I went down the geek rabbit hole. Currently I’m just trying to build an over-the-top server / NAS (I know they’re not exactly the same thing) that I can afford, just for fun. The idea is to build something cool and then find a use for it. Currently my budget is <$5k. I actually found a used combo of a 7302P, a Supermicro H11SSL-i, and 128GB (4x32) of 3200 MHz RAM for $770 on eBay. I know this doesn’t exactly fit the NAS Killer concept, but NAS Killer was the initial inspiration that got out of hand quickly.

Both of these builds are complete overkill. You should start with an upgraded NAS Killer 6.0 and focus on learning how to use it, instead of just throwing money at it.

I was afraid you were going to be the voice of reason.

How do you find this setup? Looks like a good option.

What OS did you try and are you running Plex / similar and passing IGPU for transcoding?

Trying to summarize this great thread - is the X11SCA-F + i5-9500T the new NK6 best option?

Also, I’m getting confused - does the X11SCA-F + i5-9500T support ECC? If not, does the community suggest this mobo/CPU combo with non-ECC RAM, or another CPU option with ECC?

Would there be any recommended resources/guides for, say, actually setting up a RAID controller for someone who’s brand new to unRAID/this kind of hardware?

I’ve just built a new rig to NAS Killer 6.0 specs (the only difference being storage: one SSD, one SAS parity drive, and one SAS drive for the array until I can buy some more drives).

Everything seems to be hunky-dory EXCEPT the Adaptec card is doing its red LED Knight Rider/Cylon bouncing-light thing, and unRAID has no idea the SAS drives exist. From what I’ve dug up online, I at least now know that’s the card saying it has no I/O signal, but I haven’t the foggiest what to do about that.

The good news is the drives hum to life when the server’s on, so I feel like that rules out “the two adapter cables are bad” - but that just leaves the component I’ve never used before :sweat_smile:

UPDATE: While the problem isn’t fixed, I can confirm it’s not a 3.3v issue as I did the cable mod yesterday. Both drives still spin up, but no dice.

The good news is one of the two drives I bought from Rhino is SATA, so I’ve just started using my modded power cable and an old-fashioned SATA connector to start actually using the server. If I never figure out the HBA, oh well, no harm no foul. I’m quite a few paychecks away from justifying having more than four or five huge-ass drives anyway. Now to actually learn how to use unRAID/Linux :sweat_smile:

I found the setup easy and efficient. I moved over from an older, more power-hungry build, going from 550W to under 200W. I did keep my 1660 Ti in the new system just to help with transcoding (e.g. Tdarr and Plex), but it isn’t really needed if you don’t already have a GPU. I tried Intel Quick Sync and it works perfectly. As for the processor, there is minimal difference between the 8500 and 9500; I was just able to find a good deal on eBay for the 9500 (they are the same socket). As for ECC, I think the board supports it, but I just purchased normal DDR4 RAM and it’s running smoothly.

The only constraint I ran into on the board was the PCIe slot layout. My GPU is 3 slots wide, so I had to put it in the bottom x8 slot and my HBA in the top x16 slot (which operates at x8), as the GPU covered the bottom x8 slot when I put it in the top x16 slot. Although the GPU can technically run at x16, the difference in performance is minimal for this kind of use.
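For anyone wondering what the Quick Sync passthrough looks like in practice: it ends up being a single device mapping on the container. Here is a minimal sketch in docker-compose form (the image name, paths, and IDs are assumptions; unRAID’s Docker templates expose the same fields):

```yaml
# Hypothetical Plex container with the Intel iGPU passed through for
# Quick Sync; adjust image, paths, and IDs to your own setup.
services:
  plex:
    image: lscr.io/linuxserver/plex:latest
    network_mode: host
    devices:
      - /dev/dri:/dev/dri          # the iGPU render device for QSV
    volumes:
      - /mnt/user/appdata/plex:/config
      - /mnt/user/media:/media
    environment:
      - PUID=99                    # typical unRAID user/group IDs
      - PGID=100
```

Note that hardware transcoding still has to be enabled in Plex’s own transcoder settings (and requires Plex Pass).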

Hi, time to upgrade my NK4 so I just ordered the parts to build this out per the recommendations and I am looking forward to the build. Keeping everything that I can from my NK4 which includes the Rosewill 4u server case and EVGA 500W power supply, the Arctic case fans, and my storage array. I also have an LSI 9201-8i HBA that I am going to keep but wondering if I should just go ahead and get the Adaptec for $9?

My current storage array is a WD Red 10TB drive for parity, and then a collection of mostly WD Red drives in sizes from 10TB down to 4TB, two 1TB SSDs in RAID 1 for my cache pool, and one 1TB SSD for my Plex data. All the drives are SATA. I have ordered the NVMe drive per the recommendation, so I will replace my cache pool with the NVMe drive. Any issues with keeping my current storage solution? Should I work on upgrading the SATA drives to SAS drives over time?

My use case is mostly for plex (no more than 2 concurrent streams), pihole, and I run most of the arr’s. Will also be using tdarr to convert my movies to h265 to reduce the size of my media library. I backup my home pc’s and all my icloud and google photos on the server.
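Since Tdarr ultimately just drives ffmpeg, it can help to know roughly what an H.265 Quick Sync transcode looks like at the command line. A sketch (filenames and the quality value are placeholders, and it is guarded so it is a no-op on machines without ffmpeg or an iGPU):

```shell
#!/bin/sh
# Illustrative H.265 (HEVC) transcode using Intel Quick Sync; filenames
# are placeholders. -global_quality: lower = better quality, bigger file.
# Guarded so the sketch exits cleanly without ffmpeg or an iGPU present.
if command -v ffmpeg >/dev/null 2>&1 && [ -e /dev/dri/renderD128 ]; then
  ffmpeg -hwaccel qsv -i input.mkv \
    -c:v hevc_qsv -global_quality 24 \
    -c:a copy \
    output.mkv
fi
```

Tdarr’s community plugins build similar invocations for you; this is just what is happening under the hood.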

Thanks for any comments/cautions/suggestions on my build.

Thank you for this guide. It is awesome.

I am moving from an old Windows Home Server. For my new build I am leaning towards the X11SCA-F plus i5-9500T with 32GB non-ECC. I am steering this way as it seems to provide good enough performance with Quick Sync for transcoding, will work with Plex, has plenty of SATA / M.2, and is not a crazy budget. And it should be low power / heat, which is great.

Re storage - I have 2 x 12TB drives and 2 x 3TB drives (30TB total). I have a mixture of stuff I care about (documents, photos) and stuff I care less about (recorded TV, movies, CCTV). I would say the stuff I care about is c. 4-5TB. The stuff I care less about is 15+TB. I assume I would want parity on the stuff I care about only, and this is backed up elsewhere anyway.

I am looking at Unraid for the new OS. I would want to build this server once and not change it for another several years. I assume as of today I would need to use XFS, as I cannot use ECC with the i5 and it is recommended to use ECC with ZFS.

As ZFS is new to Unraid, it’s probably not a huge issue now. But it may require another rebuild if Unraid moves more towards ZFS, which I assume it will.

So, my questions are:

  1. is it better to get a processor that can handle ECC now and add ECC now?
  2. is XFS the better option anyway as I have a mixture of drive sizes?
  3. is it really just a myth that you need ECC for ZFS?

Hi gang. I’m upgrading my old NAS from the 4.0 guide because my motherboard finally crapped out. I purchased the recommended i5 chip along with the X11SCQ board. However, the board I got is busted, and I can’t find a replacement on ebay that doesn’t take more than a week to arrive.

I found an ASRock board on Amazon that I can get delivered tomorrow, though: the ASRock H370M-HDV (LGA1151, Intel H370, DDR4, SATA3 & USB 3.2, Micro ATX).

It’s cheap-ish and seems capable, with the only caveat being that it has just one PCIe x1 connector. Given that it has 4 SATA ports and I currently have 4 SATA drives + 1 SAS drive, I think it shouldn’t be an issue for a few years, so long as the recommended HBA works with the board.

Is there any reason why I shouldn’t get this board?

It appears to have an x16 slot and an x1 slot.
Relatively speaking, it’s fairly expensive for what you’re getting.

Yeah, I have a GPU that’ll go in the x16. I guess the main issue is that the HBA is x4, so this is a no-go. :frowning:

I’m a bit annoyed because my server has been dead for over a week and I was looking forward to getting it back up yesterday, but got a dead board. At the moment, for all the recommended boards here, either the lead time to delivery is over a week or they’re just not available, so I’m trying to see what I can get sooner.

Anyone have any examples of a build with enough PCIe lanes for 10Gb networking, an HBA, and at least 4x NVMe bifurcation? Would prefer PCIe 4.0, but I’m thinking budget-wise it would have to be 3.0.

Newegg has ASRock Rack boards in stock with the 1151 socket.

Is the HBA in IT mode and can you see the drives in the card’s firmware?

If not you may need to look up what the diagnostic lights are indicating.
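If you have console/SSH access to the box, a few quick checks can narrow down where the chain breaks. These are generic Linux commands (driver names vary by card, and the `|| true` just keeps the sketch from aborting when a grep finds nothing):

```shell
#!/bin/sh
# Quick visibility checks for an HBA and its drives under Linux/unRAID.
# Is the controller visible on the PCIe bus at all?
lspci | grep -iE 'lsi|adaptec|sas' || true
# Did the kernel bind a driver to it? (driver names vary by card)
dmesg | grep -iE 'mpt|aacraid|smartpqi' || true
# Do the drives show up as block devices?
ls -l /dev/disk/by-id/ 2>/dev/null | grep -i scsi || true
```

If the card shows up in `lspci` but no driver binds, that points at firmware/mode issues rather than cabling.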

That card expects a fairly significant amount of airflow over the heatsink to stay cool, if it isn’t getting enough airflow it could be overheating. You could try positioning a fan so it blows over the heatsink and see if that helps at all.

If you already have a working HBA there is no reason to get the Adaptec card, they will perform identically and the LSI will run cooler. Also no reason to swap out your SATA drives for SAS drives, the performance will be the same.

You should be able to move over your current array without issue, maybe take a quick screenshot that shows which drives are in which position in the array just in case, but other than the parity drive I’m not sure if it matters if they end up in the same positions.

Unless the SATA SSDs you are currently using for cache are nearing end of life, I’m not sure I would even replace them with the single NVMe drive. Sure, the NVMe drive will be faster, but unless you have a 10Gbps network the cache is not going to be your bottleneck. Plus, if you only have the one NVMe drive and it fails, you will lose all data currently in the cache. Tdarr might run faster writing to the NVMe drive, but I think the recommendation is to use a RAM disk anyway if performance or wear and tear on your drives is a concern.
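For reference, a RAM disk for Tdarr’s transcode cache is just a tmpfs mount; size and path here are assumptions, it needs root, and anything in it is lost on reboot:

```shell
#!/bin/sh
# Sketch: tmpfs "RAM disk" for transcode scratch space, so temp files
# never hit the SSD/NVMe. Needs root; contents are lost on reboot.
# The || true keeps the sketch from aborting where mounting isn't allowed.
mkdir -p /tmp/tdarr-cache
mount -t tmpfs -o size=16g tmpfs /tmp/tdarr-cache 2>/dev/null || true
```

You would then map that path into the Tdarr container as its cache directory. Alternatively, you can just point the container’s cache at /dev/shm, which is already a tmpfs on Linux.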


Does the case you are using only fit uATX or smaller? If you have space and PCIe slot availability is a concern your life will be much easier with an ATX board.

uATX boards don’t typically have a lot of real estate for PCIe slots. There are other boards that will have an extra 4x slot, but that and the fact that you are looking to get it in a hurry might end up being expensive.

Otherwise, if you are not using all your M.2 slots, you can get an M.2 to PCIe adapter, since an M.2 slot is just a PCIe x4 slot in a different format. I’m not really sure where you would mount the actual card, though. If there are additional mounting slots on your case that extend beyond the bottom of the motherboard, you could potentially use one of those, or if your case supports it you could mount your HBA in a vertical slot. Worst case, you could just zip-tie it into the case and call it a day; it might not be the prettiest setup, but it would work.