[Guide] NAS Killer 6.0 - DDR4 is finally cheap

Thanks! Are there alternative recommendations for the HBA? And for hard drives, is the recommendation to look for the best space-to-price ratio, regardless of manufacturer, starting at 8TB and up?

I would recommend just getting the largest drives in your budget from ServerPartDeals. The best space-to-price ratio is only a snapshot in time: larger drives will keep coming out, and you will keep using more data. You can build once and be happy for a while with larger drives, versus chasing the ideal space/price point, which might be 8TB drives when you could actually afford 14TB drives. If you go with 8TB (or 12TB, or whatever the ideal ratio is today), you could be doing drive upgrades within a year, which is a pain.
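To put some numbers on it, here is a rough sketch of the $/TB math (the prices are made-up placeholders, not actual ServerPartDeals listings):

```python
# Rough $/TB comparison. The prices below are hypothetical placeholders,
# not real listings -- plug in whatever you actually see for sale.
drives = {8: 140, 12: 180, 14: 200}  # capacity in TB -> price in USD (made up)

for tb, price in sorted(drives.items()):
    print(f"{tb} TB @ ${price}: ${price / tb:.2f}/TB")

# Even if the smaller drive "wins" on $/TB today, filling your bays with it
# means you hit the upgrade treadmill sooner than with the larger drives.
```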

If you have enough onboard SATA, you don’t need an HBA. If you need more SATA, an HBA is more stable and durable than PCIe SATA cards. The SATA cards are not built for reliability; HBAs are designed for (legacy at this point) enterprise use.

I am considering growing the storage on a monthly basis to stretch the spending across a few months :thinking: Are there recommendations on the number of disks needed for parity/performance reasons? Happy to be directed to a popular guide on the topic!

Only you know how much space you need. My friend is building a NAS for the first time and I recommended that he start with 1 parity drive and go with 12TB, because that is the best price per GB nowadays. I also recommended he buy the parity drive brand new, which is what I did; I guess I want that drive to last the longest, lol.

HBAs are not my thing. I decided on the Supermicro board because it has 8 SATA ports, plus 2 M.2 slots that can each be used to add 6 more SATA ports with an adapter, which gives you a total of 20 possible drives. Future proof in my opinion! Adding drives as you need them rather than all at once is the way to go, in my opinion; that way you are not wasting energy powering drives that are not being used.

Given that you have 8 SATA ports on your motherboard you probably won’t need an HBA unless you decide to get SAS drives.

LSI and Adaptec both make enterprise HBAs that are inexpensive used. Any HBA should do the trick; your bottleneck will be drive speed, not anything on the HBA.

Just get one that is actually branded LSI or Adaptec. The OEM HBAs branded Dell, IBM, HP, etc. cost a little less for the same hardware, but may have proprietary firmware or connectors that could cause issues.

Assuming you are planning on using Unraid for your OS (Unraid uses a JBOD-style array instead of traditional RAID), then yes, you can and probably should just start with the amount of storage you need now and expand as necessary.

You could start with just 2 drives: 1 drive for data and 1 drive for parity. The one rule you need to keep in mind, though, is that the parity drive must be the same size as or larger than any of the data drives. So if you are considering getting some larger drives down the line, you might want to start with at least 1 large-capacity drive as your parity so you don’t need to swap it out later.
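The rule is easy to sanity-check before you buy; here is a minimal sketch with example sizes (the capacities are just placeholders for whatever you end up ordering):

```python
# Sanity check of the Unraid parity rule: the parity drive must be at least
# as large as the largest data drive. Sizes below are example placeholders.
parity_tb = 16
data_tb = [8, 12, 14]

largest = max(data_tb)
if parity_tb >= largest:
    print(f"OK: {parity_tb} TB parity covers data drives up to {largest} TB")
else:
    print(f"Problem: a {largest} TB data drive exceeds the {parity_tb} TB parity")
```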

You will need to decide if you want to get new consumer drives or used enterprise drives.

New consumer drives will cost a lot more and have less endurance, but they will have a warranty and potentially a data recovery service, which you should not need if you are doing proper backups.

Used enterprise drives have better endurance, but by the time they hit eBay they have probably seen heavy use. They are, however, much less expensive, to the point where you should be able to buy an extra drive in case one of them does die and still save quite a bit of money.
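As a back-of-the-envelope example (all prices here are hypothetical, so check current listings before deciding):

```python
# Back-of-the-envelope: new consumer vs. used enterprise drives, including
# buying one extra used drive as a cold spare. All prices are hypothetical.
drives_needed = 4
new_each = 280    # hypothetical new consumer 14 TB price
used_each = 150   # hypothetical used enterprise 14 TB price

new_total = drives_needed * new_each
used_total = (drives_needed + 1) * used_each  # +1 cold spare

print(f"New:  {drives_needed} drives         = ${new_total}")
print(f"Used: {drives_needed} drives + spare = ${used_total}")
print(f"Savings even with the spare: ${new_total - used_total}")
```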

A good resource for used enterprise drives is Rino Technology Group on eBay. They are a member of Serverbuilds and will often extend deals to other Serverbuilds members. Unfortunately it looks like the deal thread has not been updated in a while, but if you reach out here or on eBay they can probably help you out.

Your replies are very much appreciated, grateful for your patience :pray:

I was considering an HBA because it would be more sensible to PCIe-passthrough the card and all connected hard disks from Proxmox to Unraid.

Unfortunately, I am from Europe and cannot benefit from the Rino Group offers.

We definitely recommend not virtualizing your storage, particularly not Unraid, within Proxmox.

I am aware of this and tried to avoid bringing it up :slight_smile:

Ah, cheap used drives may not be an option where you are, although if you do find a source please post on here as there are other folks in Europe who would be interested to know.

Makes sense about the HBA. I haven’t tried it, but it is possible that you can just pass the entire SATA controller on the motherboard through to a VM running Unraid. Otherwise you will likely need to use an HBA like you were planning.

In general, the hyper-converged NAS/hypervisors like Unraid and TrueNAS prefer to run on bare metal, but if you have experience with Proxmox and you can pass the controller through to Unraid, I don’t see any reason why it would be a problem. You probably don’t want to run VMs in Unraid under Proxmox, but you have Proxmox for that. If you want video transcoding in Unraid you will also need to pass the iGPU through; if that is an issue, you could always run Plex / Jellyfin in an LXC under Proxmox and just mount a share from Unraid for the libraries.

Basically you would be using Unraid mostly as a pure NAS and you would want to use Proxmox for your virtualization needs.
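I haven’t tested that exact setup, but before you commit, it is worth checking on the Proxmox host whether the onboard SATA controller sits in its own IOMMU group. Here is a rough sketch that just walks the standard sysfs layout (it assumes IOMMU/VT-d is enabled and pciutils is installed):

```python
# Rough sketch: list IOMMU groups on the Proxmox host so you can see whether
# the onboard SATA controller is isolated enough to pass through cleanly.
# Assumes IOMMU (VT-d / AMD-Vi) is enabled and the pciutils package is present.
import glob
import os
import subprocess

for group in sorted(glob.glob("/sys/kernel/iommu_groups/*"),
                    key=lambda p: int(os.path.basename(p))):
    print(f"IOMMU group {os.path.basename(group)}:")
    for dev in os.listdir(os.path.join(group, "devices")):
        # lspci -nn -s prints the device class and IDs, e.g. "SATA controller [0106]"
        desc = subprocess.run(["lspci", "-nn", "-s", dev],
                              capture_output=True, text=True).stdout.strip()
        print(f"  {desc or dev}")
```

If the SATA controller shares a group with devices you don’t want to hand to the VM, an HBA in its own slot is the simpler route.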


I noticed that I have 2x 16GB G.Skill Flare X DDR4 memory lying around. How crucial is it to have matching RAM installed? I planned to get 2x 32GB and was curious if I could mix in my existing RAM.

I wouldn’t really recommend mixing and matching different RAM brands and capacities.

Theoretically it should work as long as all the modules can run at the same timings, but it is hard to say how that will work out in practice.

You can always get started with your 16GB DIMMs now and get 32GB DIMMs later. Then you can try them all out together and find out if you are upgrading to 64GB or 96GB.

Even if it works it might be unstable long term. Your mileage may vary.

Hi Folks, I’m looking to downsize a dual LGA2011 (SNAFU) build to something NK-like to reduce idle power consumption, but I’m struggling with PCIe lanes and looking for guidance.

I have 2x PCIe x4 SSDs (DC P3605, HHHL card form factor, mirrored), an HBA, and a 10GbE NIC I want to fit in, so I’m looking for 4 PCIe slots: three that are at least x4 electrical (HBA and SSDs) and one that is at least x2 electrical (10GbE). My use case includes a lot of traffic over the 10GbE LAN to/from the SSDs, though not a lot of CPU load anymore.

It looks like the ASRock Z370/OEM, aka Z370 IB-R, is the only LGA1151 board out there with four greater-than-x1-electrical PCIe slots? In pictures of that board it looks like there is a quick switch chip between the middle two full-length PCIe slots. I suspect it’s arranged x16 (CPU), x4/x0 or x2/x2 (PCH), and x4 (PCH). I think that should work, but at least one device (SSD or HBA) is going to be limited in an x2 slot, and DMI bandwidth might be limiting with combined read/write/network transfers.

Another option might be to use M.2 to x4 PCIe adapters on a board like the X11SCA-F, so I’d have x8/x8 (CPU), x4 (shared with M.2), and (x4 adapter)->M.2 (shared with U.2). I can put the SSDs on the CPU slots and hang the NIC and HBA off the PCH. I’m not sure how practical and robust those adapters are though.

Lastly I’m looking at Z690 options, with more PCIe lanes to work with. I know LGA1700 is out of scope of NK6.0/this thread, but some of the DDR4 boards are in the $100-$120 range used. In particular, the ASUS Prime Z690-P D4 has an x16 (CPU), 3x x4 (PCH) layout with three non-contending M.2 slots (and of course much higher DMI bandwidth). The downside is the CPU isn’t as cost-effective: the i3-12100 is going for $95 used and the i5-12600K is currently on sale for $170 new, vs $50 for an i5-8500. The upside is ~50% higher single-thread performance on Alder Lake vs Coffee Lake, plus 4 more E-cores in the case of the 12600K, but I’m hesitant to spend extra when an i5-8500 would be enough CPU for me these days.
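To sanity-check my own lane math, here is the rough budget I have been working from (the slot layouts are my guesses from pictures and manuals, so treat them as assumptions rather than verified specs):

```python
# Rough lane-budget sketch. Slot layouts are my guesses from board pictures
# and manuals -- treat them as assumptions, not verified specs.
devices = {"SSD #1": 4, "SSD #2": 4, "HBA": 4, "10GbE NIC": 2}  # lanes wanted

boards = {
    "ASRock Z370 IB-R (guess)": [16, 2, 2, 4],
    "X11SCA-F + M.2 adapters":  [8, 8, 4, 4],
    "ASUS Prime Z690-P D4":     [16, 4, 4, 4],
}

for board, slots in boards.items():
    free = sorted(slots, reverse=True)
    shortfalls = []
    # Greedy fit: widest remaining slot goes to the hungriest remaining device.
    for name, want in sorted(devices.items(), key=lambda kv: -kv[1]):
        got = free.pop(0) if free else 0
        if got < want:
            shortfalls.append(f"{name} gets x{got}, wants x{want}")
    print(f"{board}: {'fits' if not shortfalls else '; '.join(shortfalls)}")
```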

Any thoughts on what direction I might take, or options I haven’t considered?

That is a tricky one, since you have add-in-card form factor PCIe SSDs and by this generation the board manufacturers were dedicating many of their PCIe lanes to M.2 slots.

There are 10Gb NICs that come in an M.2 form factor; replacing the NIC would be cheaper than replacing both SSDs, but it would still be an extra expense.

You could get an M.2 to PCIe x16 adapter to get back some of those lanes that are dedicated to M.2. I’m not sure exactly how you would mount the card in the case; maybe just zip-tie your HBA in somewhere. You could probably use a vertical GPU mounting solution and show off your 10Gb NIC or even your HBA in a case window.

Actually, check out the Gigabyte C246-WU4.

Four x16 physical slots that can provide up to x8/x8/x4/x4 electrical. Could be what you are looking for. It seems like it’s not super common, so finding a good deal on a used one may require some patience.

I tried downgrading but it didn’t help. Just put a new LSI card in and the problem is gone. Getting a full 200 MB/s on parity sync. Something about the Adaptec card didn’t jibe with my setup.

Converting my HP 290 into a NAS/Plex server. I was originally going to buy a Synology, but decided I would rather build my own and use Unraid. Based on what I’ve read, stability seems to be fine, and I’ll have a backup drive elsewhere for critical items.

Anyways, here is what I’ve landed on - am I missing anything?

  • CPU - i3-8100 (from HP 290)
  • Case - N400
  • RAM - Silicon Power Gaming 32GB (2x16, 3200)
  • Mobo - Supermicro X11SCA-F
  • M.2 Drive - Inland Premium 1TB
  • Drives - (2) Seagate Exos X18 ST16000NM000J 16TB, (1) shucked 8TB WD Elements (from HP 290 build)
  • Power Supply - EVGA 500 W1
  • Case Fans - Arctic P12

Not sure if I need additional cables or the HBA card at this point.

Looks good so far, seems like you don’t need the HBA or cables at the moment.
You can always add them later if you need the additional drive capacity.

Hi all,
Since I finished the build, I have been having random crashes, and each time it comes back up Unraid requires a parity check (12+ hours). This morning alone it crashed twice.
Has anyone experienced this with the setup I am replying to?
I enabled syslog this morning so I’ll have some logs to go on, but until the next crash I have no idea what is going on.
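When it does crash again, my plan is to scan the capture for the usual suspects, roughly like this (the log path is just a guess at where my syslog target lands, and the patterns are common kernel error markers, not an exhaustive list):

```python
# Quick triage of a captured syslog after a crash. LOG_PATH is a guess at
# where the remote syslog lands -- adjust it. Patterns are common kernel
# error markers (hardware faults, OOM kills, oopses, disk errors).
import re

LOG_PATH = "syslog.log"
PATTERNS = [
    r"mce:",            # machine check exceptions (hardware faults)
    r"Out of memory",   # OOM killer activity
    r"Call Trace",      # kernel oops / panic backtraces
    r"I/O error",       # failing disks, cables, or controllers
]

matcher = re.compile("|".join(PATTERNS), re.IGNORECASE)
with open(LOG_PATH, errors="replace") as log:
    for lineno, line in enumerate(log, 1):
        if matcher.search(line):
            print(f"{lineno}: {line.rstrip()}")
```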
Thanks

Take a picture of the inside