I started by replacing the CPUs and adding the GAMMAXX 400s. They were a pain to install - between the spring trying to pull the screw in at an angle and the slightly elongated holes in the bracket, it was just difficult to get everything to line up. I’m glad I won’t be taking that apart often - once when I get the v2s to test on this motherboard, and possibly one more time after that to put the new procs in permanently if the v2s will boot.
After it worked, I managed an outside-the-case POST and posted it on Discord. Got several bits of good advice, such as ‘don’t test on the anti-static bag; test on cardboard’ and ‘don’t use the onboard SAS device, because it’s worthless’.
Moving from the PE2900, I discovered that the disk names were all different. Good advice from JDM had me snapshot the configuration, map the disks back to the locations they should have been in, and then re-run the parity check.
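For anyone doing a similar migration, the remapping step boils down to matching disks by serial number rather than trusting the new device names. A minimal sketch of the idea - the serials and device names below are made up for illustration, not my actual config:

```python
# Sketch: after a motherboard swap renames the block devices, match each
# array slot back to the same physical disk by serial number.
# All serials and /dev names here are hypothetical.

old_slots = {            # slot -> serial, snapshotted before the swap
    "disk1": "WD-ABC123",
    "disk2": "WD-DEF456",
    "parity": "ST-XYZ789",
}

new_devices = {          # serial -> device name as seen on the new board
    "WD-DEF456": "/dev/sdb",
    "ST-XYZ789": "/dev/sdc",
    "WD-ABC123": "/dev/sdd",
}

# Rebuild the slot assignment so each slot points at the same physical disk.
remapped = {slot: new_devices[serial] for slot, serial in old_slots.items()}

for slot, dev in sorted(remapped.items()):
    print(f"{slot}: {dev}")
```

On a real system you’d capture the serial-to-device mapping from something like `ls -l /dev/disk/by-id/` before and after the move; the dictionaries just stand in for those two snapshots.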
The parity check will have to wait until later; now it’s time to install the board into the case.
Because the case doesn’t have standoffs in all the right places, and has some in spots that don’t match any mounting hole, I had to identify the wrong ones. I traced the board’s mounting holes onto a cardboard template to duplicate the pattern, and am now removing the extra standoffs and putting in non-metallic standoffs wherever the board has a hole but no metal standoff will fit.
This build remains virtually silent, and airflow must be pretty good even with just the stock case fans and the two 120mm fans on the GAMMAXXes. IPMI reports most temps around 38 C, with only the PCH hitting 45-48 C. It’s also a drastic improvement over the former hardware. Among other improvements, the parity check mentioned above has hit 81% in about 9 running hours, where the PE2900 with the same drives ran for just over 3 days solid.
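Rough back-of-the-envelope on that speedup, assuming ‘just over 3 days’ works out to about 73 hours (an assumption on my part, not a logged figure):

```python
# Estimate the parity-check speedup of the new build over the PE2900.
# New box: 81% complete in ~9 hours. Old box: assume ~73 hours for a
# full pass ("just over 3 days" - an assumption, not a measurement).

new_rate = 0.81 / 9       # fraction of the check completed per hour
old_rate = 1.0 / 73

speedup = new_rate / old_rate       # how many times faster the new box is
est_new_total = 1.0 / new_rate      # projected hours for a full pass

print(f"speedup: ~{speedup:.1f}x, full check in ~{est_new_total:.1f} h")
```

So on these assumptions the new hardware is checking parity roughly six and a half times faster, with a full pass projected at around 11 hours.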
I believe rev 1.10 of that board supports only v1 CPUs - it’s one of the strange boards from SM that doesn’t support both v1 and v2. Most support v2 with a BIOS upgrade, but I don’t think this board revision does.
Update, in case anyone else is planning to use the same case: my build put the SSD behind the motherboard, where there is no cooling. Used as a cache disk, it would hit temps above 50 C. There are two other mount points for the SSD tray on the drive cage, and I have moved it there to see if I can get better thermal control.
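If anyone wants to keep an eye on this themselves, the drive temperature is easy to pull out of SMART attribute output and compare against a threshold. A small sketch - the sample line below is made up in the shape of `smartctl -A` output, not a capture from my drive:

```python
# Sketch: extract a drive temperature from smartctl-style SMART attribute
# output and flag it against a threshold. SAMPLE is illustrative; on a real
# system you would feed in the output of `smartctl -A /dev/sdX` instead.

SAMPLE = """\
190 Airflow_Temperature_Cel 0x0032   047   041   000    Old_age   Always       -       53
"""

def drive_temp_c(smart_output: str) -> int:
    """Return the raw temperature value from the first temperature attribute."""
    for line in smart_output.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1] in (
            "Temperature_Celsius",
            "Airflow_Temperature_Cel",
        ):
            return int(fields[-1])   # raw value is the last column
    raise ValueError("no temperature attribute found")

temp = drive_temp_c(SAMPLE)
print(f"{temp} C - {'too hot' if temp > 50 else 'ok'}")
```

Cron-ing something like this (against real smartctl output) would have flagged the drive well before I noticed it.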
Update: moving the SSD was not a sufficient fix in this case. After adding another HDD to use as cache for my file-ingest activities (until the paired drives arrive), I had three shares that still used the SSD for their cache pool - appdata, system, and one I somehow missed when I moved the rest.
Long story short - I backed up an external system to that share. I wrote 500G of data to the drive at gigabit transfer speeds and it reached 53C! I also completely filled the cache drive, which probably contributed to corrupting the Docker image. Fortunately, the procedure for fixing the latter is straightforward, and comparing the data to the original after it was moved to the array checked out, so it was mainly a nuisance.
I’ve now confirmed that the only active thing on that drive IS appdata (system is there, but I’m not using it), and all caching is going to a spinning disk. Once the shipment arrives I’ll replace the consumer-grade SSD with a U.2 SSD, and replace the single-disk cache with a mirrored pair.
I don’t know whether moving the SSD to a more actively cooled location would help. That location isn’t good for a drive seeing that much write activity - but then again, the drive probably can’t survive that much activity for long anyhow, so I’m changing the usage pattern instead.
OK. I should probably break the content of my media share out into several shares as well, but I read those posts after I’d already set up Plex using a guide elsewhere, and haven’t spent the time to fix them yet.
I don’t really understand what the functional difference will be between Prefer:Cache and Only:Cache when the only data using that drive is appdata and system. I probably need to go back and either re-watch your videos or re-read some posts. I don’t have an issue making the change; I just don’t understand the ‘why’ well enough to be sure that I won’t make a similar mistake later.
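For what it’s worth, here’s my current understanding of the difference, sketched as a tiny model of where a new write lands in each mode. This is a simplification and only as good as my understanding of Unraid’s cache modes - corrections welcome:

```python
# Simplified model of two Unraid per-share cache modes, as I understand them.
# "prefer": writes go to the cache when it has room, otherwise overflow to
#           the array; the mover later pulls array copies back to the cache.
# "only":   writes go only to the cache; a full cache means the write fails
#           rather than silently landing on the array.
# This is my mental model for illustration, not Unraid's actual code.

def write_destination(mode: str, cache_has_room: bool) -> str:
    if mode == "prefer":
        return "cache" if cache_has_room else "array (mover brings it back later)"
    if mode == "only":
        return "cache" if cache_has_room else "write fails"
    raise ValueError(f"unknown mode: {mode}")

for mode in ("prefer", "only"):
    for room in (True, False):
        print(f"{mode:6} cache_has_room={room}: {write_destination(mode, room)}")
```

If that model is right, the difference only shows up when the cache fills - which is exactly the situation I hit above - so the choice matters more than it first looks.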
That was a useful learning exercise, as I hadn’t clicked that ‘compute all’ button and didn’t realize what it could show me until now.
I think I know this answer, but: is there any reason to be concerned that most of my shares seem to concentrate on disk 1? I don’t think so, because they are all protected by parity - even losing disk 1 is OK unless I also lose the parity drive. As I expand the array I plan on moving to dual parity as well, and one disk is already capable of saturating my network, so there’s no real performance gain to be had from trying to shuffle the data.