3.5″ drives on PCIe x1, SSDs on SATA

I’m on vacation, so I can’t test this.

Theory: 3.5″ SATA shucks probably run at 250 MB/s or so, well below the SATA 3 (6 Gbps) max. SATA SSDs run much higher. An HBA in an x16/x8 slot with 3.5″ drives is still capped at the drive speed.

For the otherwise-useless x1 slots, why not get an adapter to power the HBA and run your 3.5″ shucks off that? Keep your onboard SATA for SSDs, and save your x16/x8/x4 slots for whatever else (10GbE, PCIe SSDs, etc.).

Is there a downside to running your server like that?

PCIe 3.0 x1 max bandwidth is around 985 MB/s.
The problem is that it’s pretty hard to find a SATA card that’s PCIe 3.0 x1; most are PCIe 2.0 x1.

PCIe 2.0 x1 max bandwidth is around 490 MB/s.

It’s not a problem in most cases, just something to be aware of.
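Those per-lane figures fall straight out of the published link rate and line encoding for each PCIe generation; a quick sketch (the raw encoding math gives 500 MB/s for gen 2, and the ~490 MB/s quoted above additionally folds in packet/protocol overhead):

```python
# Effective per-lane PCIe bandwidth from transfer rate and line encoding.
# Values are the published PCIe spec parameters: (GT/s, payload bits, total bits)
PCIE_GENS = {
    "2.0": (5.0, 8, 10),     # 8b/10b encoding: 20% overhead
    "3.0": (8.0, 128, 130),  # 128b/130b encoding: ~1.5% overhead
}

def lane_bandwidth_mb_s(gen: str) -> float:
    """Usable bandwidth of a single PCIe lane, in MB/s, before protocol overhead."""
    gt_s, payload, total = PCIE_GENS[gen]
    bits_per_s = gt_s * 1e9 * payload / total
    return bits_per_s / 8 / 1e6

print(f"PCIe 2.0 x1: {lane_bandwidth_mb_s('2.0'):.0f} MB/s")  # 500
print(f"PCIe 3.0 x1: {lane_bandwidth_mb_s('3.0'):.0f} MB/s")  # 985
```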

The GPU mining extenders are x1 at the board and x16 at the card, so for $10 you can get the HBA onto the x1 slot.

Yes, that’s still x1 bandwidth. The amount of data that needs to flow for GPU mining is minimal.

I get that, but how much bandwidth can a 3.5” easy shuck use at maximum read/write? Not 6GBps. Tests I’ve seen online are 250MBps. Running an hba off the x1 lane for a media server seems like it wouldn’t be problematic, and preserves x16 for the things that could really use it (10gbe, ssd).

To stress the differences between B (Bytes) and b (bits), I prefer to use B/s and bps.

First of all, SATA 3 peak throughput is 6 Gbps, not GB/s. I agree with you: most hard drives will top out at 250 MB/s (around 2 Gbps). However, if you’re running two hard drives at full crank on a PCIe 2.0 x1 lane, you’re already past the 490 MB/s bandwidth limit. It’s not necessarily a problem, but it’s certainly something to consider, especially in the case of SATA SSDs, where a single one can easily max it out.
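To put numbers on that, here’s a small sketch using the per-drive figures above (250 MB/s for a 3.5″ shuck; ~550 MB/s for a SATA SSD is my assumption for a typical drive) to count how many simultaneous full-speed streams it takes to exceed an x1 link:

```python
import math

PCIE2_X1_MB_S = 490  # effective PCIe 2.0 x1 bandwidth
HDD_MB_S = 250       # typical sequential speed of a shucked 3.5" drive
SSD_MB_S = 550       # assumed typical SATA SSD sequential speed

def drives_to_saturate(link_mb_s: float, drive_mb_s: float) -> int:
    """Smallest number of drives whose combined streams exceed the link."""
    return math.floor(link_mb_s / drive_mb_s) + 1

print(drives_to_saturate(PCIE2_X1_MB_S, HDD_MB_S))  # 2 HDDs exceed PCIe 2.0 x1
print(drives_to_saturate(PCIE2_X1_MB_S, SSD_MB_S))  # 1 SATA SSD already maxes it out
```

So for a media server reading one or two HDDs at a time the x1 link is mostly invisible; it only bites during parallel full-speed workloads like scrubs, rebuilds, or parity checks.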

10 Gbps Ethernet doesn’t need x16 PCIe; most 10Gb cards don’t even have an x16 connector. Most are PCIe 3.0 x4 or similar.
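The arithmetic backs that up: a 10 Gbps line rate is at most 1.25 GB/s of traffic, which fits within two PCIe 3.0 lanes, let alone four (lane bandwidth figure taken from the earlier posts):

```python
TEN_GBE_MB_S = 10e9 / 8 / 1e6  # 10 Gbps line rate in MB/s -> 1250
PCIE3_LANE_MB_S = 985          # effective PCIe 3.0 per-lane bandwidth

lanes_needed = TEN_GBE_MB_S / PCIE3_LANE_MB_S
print(f"10GbE needs {TEN_GBE_MB_S:.0f} MB/s, about {lanes_needed:.2f} PCIe 3.0 lanes")
```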

I didn’t say it was “problematic” but you will definitely run into throughput restrictions. Obviously the use-case and specific details matter.


That’s the rub: most (consumer) boards don’t have the configurations needed for what we do here. It’s an x16 plus an x4, or something with tons of x1s (for what use, I don’t know; I do see a Realtek x1 2.5GbE NIC on Amazon).

Anything to preserve the higher-lane slots for the cool stuff, like the enterprise cards (40GbE, SSDs) we get. I’ll start testing this out and report back over time.