[Guide] "Lego" 8-bay, dual Xeon, ultra-quiet build (2019 updated)

It’s fine, we’ve recommended them for a while.

How about a dual Molex to EPS adapter?

No, I wouldn’t use that. I’ve never recommended it.

Any reason why you don't recommend it? I had the impression that splitting an EPS would put more strain/heat on a cable not designed for that, in a worst-case scenario.

@JDM_WAAAT
Hello,

I want to build a FreeNAS server that will just do file serving; namely Office documents, Emby and FLAC.

I followed your guides, but since I have an idle IBM M3 server (LGA1366) sitting in the locker room, I think it would be the fastest way to start. Since this thread was started in June 2019, I just wanted to check if it is still a good choice.

Parts I have are:

  • 2x Xeon L5630 with their heatsinks
  • 8x 2GB ECC RAM sticks (the server has 18 slots)
  • an IBM/LSI SAS card
  • 2x redundant 650W PSUs (I don't think I can use those in an ATX case).

So if I go the ATX route, I would only be able to install 12 GB of RAM (6 slots x 2 GB) unless I buy some other RAM.
If I go the eATX route, I would be able to use all 8 of my sticks (16 GB), but those cases are BIG and expensive.

Q1: Would FreeNAS make use of the two CPUs since I have them, or should I just install one?
Only one CPU means I will have to buy new RAM sticks.

Q2: What about only 12 GB of RAM?

Q3: Overall, is LGA1366 still a good path or not?

Thanks for your advice.

LGA1366 is OK if you're on a budget. You can always get more RAM; it's pretty cheap. 16GB sticks are around $17 each. I'd ditch all of those 2GB sticks; they are nearly worthless nowadays.

  1. Yes, it can use both.
  2. 12GB is plenty.
  3. It’s OK.

Question for you: have you considered Unraid?

Thank you @JDM_WAAAT,

The thing is, I have had this server sitting in my downstairs locker doing nothing since 2018, because it was a mistake buying it in the first place…

It's true that going from 18 RAM slots to only 6 will call for new memory sticks, given those 2 GB sticks.
The L5630s are almost worthless on eBay but are 40W TDP, which I like.
The 2U IBM x3650 M3 is meant for 2.5" disks, which are expensive, while I already have 4 WD Red 3.5" drives.

I was considering FreeNAS since I have a friend running it, so that's the first one I heard of.

The HBA in the M3 is an LSI-01141(B), for which I can't find much information, and I think I read in one of your builds that it's better to buy an LSI-branded card than an OEM-branded one…

Are you aware of any cooler made for LGA115x that will fit on LGA1366 using your washers trick?

My L5630s are only 40W TDP, so I'd go with a small cooler, but the Arctic Freezer is currently unavailable…

Could you post a link to the seller who has the same set you bought, please?

@JDM_WAAAT I want to start this build, but it seems the Arctic Freezer 12 is no longer available. Could you recommend another good cooling option that would fit this setup?

@JJjuby Today, I learned the hard way. As the Freezer 12 was unavailable, I got an Alpine 12. Mistake. Don’t do that.

LGA115x and LGA1366 motherboards have different hole spacing for mounting the heatsink. The Alpine is a tool-less push-through-pin design, so it does not fit on an LGA1366 motherboard.

If you want a CPU cooler that you can mod with metal washers (like the Freezer 12), it needs to be multi-socket and have screw mounts, not push-pin mounts.

I just ordered a Cooler Master Hyper 212 EVO. More expensive at 35 C$, but since I already bought the mobo, I am still moving forward.

@JJjuby Do you already own any parts for this build? If not, let me give you some advice from what I learned over the last two weeks.

[EDIT]
I went on eBay, searched for "LGA 1366 passive heatsink", and found 2U units for 10 US$ each.

As I have 2x L5630 at 40W TDP, I ordered two units and cancelled the two Cooler Master 212 EVOs. I won't be able to fit those heatsinks on anything other than LGA1366, but I am pretty confident they will do, and I just saved 50 C$.

If you want a reference for future heatsink compatibility…

Geez! I wish I had browsed the whole site before and found that…

Thanks!

@NinthWave I see what you mean… I actually changed completely from this build to this setup:

  • X9DRH-iTF motherboard
  • dual E5-2667 v2 CPUs
  • 8 sticks of 16GB DDR3 ECC RAM
  • LSI 9211-8i PCIe card
  • two Asetek AIO CPU water coolers

a bit more robust setup!

Ask yourself if you really need 2 CPUs. I had a blade server with 2 CPUs, so I wanted to reuse the parts I had. I went on eBay to find an LGA1366 board. The cheapest I found was this X8DT3-LN4F. It's quite a nice board: 4 LAN, 6 SATA, 8 SAS, IPMI (this I like very much).

Once the board was paid for, I realized that eATX (SSI-EEB) is damn huge and that finding a case to hold it would be cumbersome. I found a Cooler Master Cosmos 1000 for 75 C$. This is an enormous box that can natively hold 11 HDDs, and since there are 5x external 5.25" bays, a slight mod could bring it to an even higher capacity. :slight_smile:

And then yesterday, the setback with the CPU cooler that won't fit… This is annoying, but I have to remind myself that I decided to build this as a hobby instead of going with a second-hand turnkey solution like a Dell server tower…

So, if you have not paid for the parts yet, just ask yourself if two CPUs are required. If not, a single-CPU mobo is way less complicated for just NAS purposes.

I must also admit that what misled me was how quickly Xeon prices depreciate. Honestly, it was late in the process that I realized that the 2x L5630 I had in my blade server could be acquired for a mere $12 on eBay (for the pair!!!).

The reason I wanted to get rid of the blade is that, as a 2U rackmount, it can only take 2.5" HDDs; so after playing with it for 2 years, it's another big piece of hardware I don't need if I have to pay a premium for 2.5" HDDs.

And then last night, I read the thread about DAS and realized that with an external SAS card and a very cheap case, I could have kept the blade and solved my 2.5" HDD problem.

So for me, it’s been a hard learning curve.

If I am lucky, the Supermicro X8DT3 will last a decade and I will just have to worry about replacing worn-out HDDs once in a while.

Honestly, I can understand that sockets change over time as technology evolves and things from the southbridge or northbridge get moved into the CPU… so the pins on the CPU change accordingly, but the hole spacing for the freaking CPU heatsink, this I don't get.

Were those 2-3 mm of difference between 115x, 1366 and 2011 really, really necessary… Good grief.

In recent years, AMD has done a good job of keeping its sockets as "stable" as possible. Until EPYC and Threadripper, but then again, those should not be consumer products except for those who just have unlimited funds and "can".

For NAS purposes that makes total sense; however, in my case, I will mainly use this system for virtualization of some networking software appliances and also a penetration-testing environment. I need CPU horsepower. Storage is something I will also consider, with FreeNAS as a guest VM, but not that big a volume: maybe 3 or 4 10TB SAS drives, and a few SSDs for Proxmox boot and the VMs. Thinking ahead, this system should last a good long time.

Out of curiosity, are you in Vancouver?