I am an avid PC builder. My first build was a 100 MHz single core. I am in need of some advice. I understand that to run one PCIe 3.0 card at x16 I need at least 16 PCIe lanes on the CPU, so for 13 PCIe 3.0 cards to all run at x16 I need 208 lanes. Probably from multiple CPUs? Any suggestions on the most economical way to achieve this? I also need it to be DDR4. I run a small mining farm and am looking to up my game. All help and suggestions are greatly appreciated.
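FWIW, the lane math in the post checks out; a quick Python sketch (the 128-lanes-per-socket figure is EPYC's published spec, used here just for scale):

```python
import math

# Lane math from the post above: 13 GPUs, each wanting a full x16 link.
gpus = 13
lanes_per_gpu = 16
total_lanes = gpus * lanes_per_gpu
print(total_lanes)  # 208

# For scale: a single EPYC socket exposes 128 PCIe lanes, so you would
# need more than one socket's worth of lanes for full x16 everywhere.
print(math.ceil(total_lanes / 128))  # 2
```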
For mining, x1 is good enough.
Not something I have first-hand experience with, but my understanding is that they use bifurcation to split single slots, and also PCIe switches to share a set of lanes across multiple cards.
Some level of motherboard support is required to use one or both of those solutions.
Are you sure you really need to run all of those PCIe lanes? Graphics cards rarely need all of them, and for mining even less so.
But if you need a boatload of PCIe, the way to do it is dual EPYC CPUs. Dual Xeons are second best.
Some boards use PCIe switches (PLX chips) to let you attach more devices, but ultimately some bandwidth gets shared before it gets back to the CPU.
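To make the sharing concrete, here's a hypothetical sketch of a PLX-style switch feeding two x16 slots from a single x16 uplink; the ~985 MB/s-per-lane figure is PCIe 3.0 throughput after encoding overhead:

```python
# Hypothetical PLX-style switch: two downstream x16 slots share one x16 uplink.
PCIE3_GBPS_PER_LANE = 0.985  # ~985 MB/s per PCIe 3.0 lane, per direction

upstream = 16 * PCIE3_GBPS_PER_LANE   # bandwidth back to the CPU (~15.76 GB/s)
downstream_slots = 2                  # both slots still *link* at x16

# Each card sees a full x16 link, but if both transfer at once they
# split the uplink, so effective per-card bandwidth halves.
per_gpu_when_saturated = upstream / downstream_slots
print(round(per_gpu_when_saturated, 2))  # 7.88
```

That's why a switch looks fine on the spec sheet but hurts workloads that hammer host memory from every card simultaneously.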
Thanks for the reply. The project I am working on does require high bandwidth between the GPU and system RAM. You are right that most mining, like ETH, only requires an x1 slot. I run a bunch of Asus 250 boards with 13 GPUs and 6 Nvidia mining cards on G3900 CPUs with no issue. I have done some testing and x8 is working; I know x4 is too slow though.
I have read about the switches. I think that would be too detrimental.
Is there a way to know if a motherboard has the switches? I guess if it advertises five PCIe x16 slots on a Xeon with only 40 lanes, that would be a dead giveaway.
Something like this:
The PLX 8747 is the switch
They’re also used on PCIe host boards for sharing slots.
If you’re mining, you do not need an x16 connection. Most bulk GPU miners use x1 connections with mining risers.
Thanks. I do know that mining Ethereum only requires an x1 slot. Hell, I run x1-to-four-x1 splitters. Unfortunately, I am not mining Ethereum in this instance; I require 6 GB/s of bandwidth per GPU. PCIe 2.0 x16 would also work, but then I would need a CPU with twice the lanes of one running gen 3. I do appreciate the help, and I totally understand why you think I need only x1. Just sadly not in my case. I did get 4 slots working on a Gigabyte Gaming 5P with an i7-5930K, and I also have the Xeon for that board I’ll be testing. I ordered the Asus X99-WS with 7 PCIe slots that can do x16/x8/x8/x8/x8/x8/x8. I am pretty sure that uses a switch to achieve that, IDK. I can run either one single-CPU board with 7 GPUs or a dual-CPU board with 13. I have never had a dual-CPU board before. Since posting I have done a bit of research and have a better understanding of how to achieve high bandwidth on the maximum number of slots.
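For anyone following along, a rough table of per-direction throughput against the 6 GB/s requirement (per-lane figures are the commonly quoted ~500 MB/s for gen 2 and ~985 MB/s for gen 3 after encoding overhead; exact numbers depend on protocol overhead):

```python
# Rough per-direction PCIe throughput (GB/s per lane, after encoding overhead).
GBPS_PER_LANE = {"2.0": 0.5, "3.0": 0.985}
NEED = 6.0  # GB/s per GPU, per the workload described above

for gen, per_lane in GBPS_PER_LANE.items():
    for width in (1, 4, 8, 16):
        bw = per_lane * width
        verdict = "ok" if bw >= NEED else "too slow"
        print(f"PCIe {gen} x{width}: {bw:.2f} GB/s ({verdict})")
```

This lines up with the testing above: gen 3 x8 (~7.88 GB/s) clears 6 GB/s, gen 3 x4 (~3.94 GB/s) doesn't, and gen 2 needs x16 (~8 GB/s) to match.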
If money is no object, look at AMD EPYC motherboards: the ASRock Rack ROMED8-2T has 7 PCIe 4.0 x16 slots.