@adamsir2, @JDM_WAAAT
As an update to this old chat FYI, I ended up building the new system in July. The specs of what I purchased are listed here. Note that I haven’t yet purchased the listed additional custom SATA power leads, as I don’t have the case fully populated with drives yet, and I’ve replaced all of the case fans with the listed Arctics…
I built the new system running TrueNAS Core 13.0 U1 (the old NAS was running TrueNAS Core 12 U7). I had tried building it as TrueNAS Scale, but I found that Scale doesn’t currently support passthrough of a 12th Gen Intel CPU’s integrated GPU, even if I assign a separate PCIe GPU to the OS (hence the GT710 in the build). So I went back to a Core server for the time being, as it was easier to migrate to the new system that way; I might upgrade to Scale at a later date once the CPU/GPU is supported for passthrough.
I haven’t yet cleaned up the cabling etc. within the case, but I do plan to get around to it at some point by using something like this cable tie mount throughout the case.
Currently the 2 x 2.5" SSDs are just sitting loose beside the mobo against the case wall (I need to mount them as well). I’m also using a spare 1m (3 1/3 ft) SAS 8087-to-SATA breakout cable, as the 0.75m (2 1/2 ft) breakout cables I bought were too short for my liking to reach the furthest drive bay, where I’ve installed the 4 x 16TB drives I’m starting with.
The 2 drives in the middle cage (see the image below) contained data backed up from my old NAS, which I imported locally into the new NAS to speed up the transfer (about 17TB of data copied locally rather than over the network). I believe this saved me about 15 hours compared to copying over the 1Gb network.
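For anyone curious where that sort of saving comes from, here’s a rough back-of-envelope sketch. The throughput figures are my assumptions for illustration (not measured rates), so the exact numbers will vary with the actual sustained speeds:

```python
# Rough estimate of moving ~17 TB locally vs over a 1 Gb/s network.
# Both throughput figures are assumptions for illustration, not measurements.
DATA_TB = 17
GBE_MB_S = 110     # ~1Gb Ethernet after protocol overhead (assumed)
LOCAL_MB_S = 200   # sustained disk-to-disk copy inside the box (assumed)

def hours(mb_per_s: float) -> float:
    # 1 TB = 1e12 bytes, 1 MB = 1e6 bytes
    return DATA_TB * 1e12 / (mb_per_s * 1e6) / 3600

print(f"1Gb network copy: {hours(GBE_MB_S):.0f} h")    # ~43 h
print(f"Local copy:       {hours(LOCAL_MB_S):.0f} h")  # ~24 h
print(f"Difference:       {hours(GBE_MB_S) - hours(LOCAL_MB_S):.0f} h")
```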
I believe iXsystems has plans to bring in the ability to expand an existing RAIDZ vdev with individual disks, rather than growing a pool by spanning it over multiple vdevs, so in the near future I hope to add an extra 16TB drive (one of the 2 drives in the middle bay) to the existing 64TB RAIDZ I have set up (it’s just an additional drive plugged in but not being used at the moment).
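As a rough illustration of why single-disk expansion is attractive, here’s a quick capacity sketch. It assumes the current pool is a 4 x 16TB RAIDZ1 vdev and ignores ZFS metadata/slop space and the fact that data written before an expansion keeps its old parity ratio, so treat it as approximate only:

```python
# Approximate usable capacity of the RAIDZ vdev before/after adding one disk.
# Assumes a 4 x 16 TB RAIDZ1 vdev today (assumption); ignores ZFS overhead.
DISK_TB = 16
PARITY_DISKS = 1   # RAIDZ1 (assumption)

def usable_tb(total_disks: int) -> int:
    return (total_disks - PARITY_DISKS) * DISK_TB

print(usable_tb(4))  # 48 TB usable now (64 TB raw)
print(usable_tb(5))  # 64 TB usable after adding the spare 16 TB drive
```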
Pics of the build as it currently stands:
Overhead view of the open case (rack mounted, fully extended from the rack on the iStar rails). Yes, I know about the cable ties; I plan to fix that at a later date whilst trying to hide those fan power cables underneath the PSU if possible.
An issue I had with the mounting bolts for the iStar rails is that they don’t tighten fully to the rack, so there is a little play in the rails. This most likely contributes to an issue when fully extending/reinserting the server, where a rail gets stuck on the rack’s front vertical mount and I have to push the server case sideways slightly to get it to extend fully.
Close-up of the mobo compartment.
Close-up of the rear of the drive cages. The main drive volume is made up of the 16TB drives on the left. The 2 drives in the middle are a 3TB drive and the extra 16TB drive I used to copy the data to the server. You can see one of the fan hubs I used on the right, which I plan to mount properly soon as part of the cabling cleanup.
Front-on look at the case, fully inserted into my 18RU rack along with my old NAS (Fractal Design Node 804 case) and an HPE ProLiant DL360 Gen9 server I use as a lab for work purposes. The modules to the right of the Node 804 are fan controllers for the roof-mounted fans (2 x AC Infinity dual rack roof fans with controller). The white cables on the left running into the rack are dual temperature sensors with a remote display mounted on the side of the rack. The room can hit the low 40s °C in the middle of summer without the room A/C on, so I use them to monitor the internal rack temps and, when needed, turn on the A/C and the rack roof fans. Currently the room and rack sit around 20-25°C during the day as it’s winter here at the moment, so I don’t need the A/C or the fans on; the server’s CPU is only reading about 28°C under load at the warmest room temp, and the drives are running cooler than the CPU, which is a lot cooler than the Node 804’s drives ever ran.
Of interest regarding power usage, below is a capture of the rack’s power usage over the past month from a Meross smart plug the rack is plugged into. The lower horizontal on the left side was the Node 804 running with a TP-Link 24-port switch in the rack (the HPE ProLiant was unplugged). The spikes are from when I was running both the old and new NAS together (the really big spike also includes running the HPE ProLiant with them). The slightly higher horizontal in the middle is the RSV-L4500U on its own with the TP-Link switch (again with the HPE ProLiant unplugged and the Node 804 turned off). The rest of the line is the RSV-L4500U with the HPE ProLiant turned on a couple of times over the past week or two, plus my UniFi Dream Machine Pro added to the rack within the past week (rear mounted, so not visible in the rack pic above). Excluding the ProLiant’s power usage spikes, the new NAS is using only slightly more power than my previous NAS in its current state, which I’m pretty happy with, considering the old NAS had 8 spinning drives and 1 SSD, whereas the new box has 6 spinning drives, 2 SSDs and 2 x M.2 drives, with a similarly sized PSU but a more power-hungry CPU than the Node 804’s old i3.
Thank you to both of you for your advice and suggestions on the build.