Dirt cheap 10GbE stack via the largely unknown 10GbE-CX4 (<$200)

Note: Passive CX4 cables are limited to 15m; they’re basically DACs that were used for stacking switches in datacenters.

These listings will get you a 10G network up and running for an extremely low cost compared to new hardware, and cheaper still than our usual routes for cheap 10G.

Super cheap, but this stuff is definitely not in large supply, so if you want it, secure it quick.

It’s not a particularly necessary route to take, just an interesting setup that’s different and inexpensive. I’m posting it because I kinda want the full stack but am restraining myself, so here’s the outlet.

I’d primarily suggest going this route just for the cheapo direct connect solution (unless you’re feeling adventurous).

Drivers for the Chelsio cards are here if you run into any trouble.

Want to direct connect your NAS and your main server?

Super easy direct connect solution at a low cost.

Have some more machines in your rack you want to connect with 10G? Pick up a switch.

Need to connect your 1GbE devices to your 10GbE devices?


Sample stack for 3x 10GbE machines on a 10GbE switch connected to a 48-port GbE switch:

Cost to add another 10GbE machine to your new stack:

Sub-$200 for 10GbE is insane.

10GBASE-CX4 is dead. There’s no reason to bring it back. You can get more recent SFP+ hardware at the same price level as what you listed.

You are aware that the entire premise of this website is repurposing old gear?

Show me your >4 port SFP+ switch for $35

Just a couple of weeks ago I paid $90 shipped for an Aruba S2500-48P - 48x POE 1Gb + 4x SFP+.
This switch pretty much replaces the E6400-6XG + GS648-POE combo in your config. I paid $5 extra for a switch that is more power-efficient, quieter, and takes less space.

48-port switches with 4 SFP+ ports mostly go for $50-70; you overpaid. And it doesn’t replace the combo at all. 4 uplink ports =/= a dedicated switch with 6 ports.

A switch with >4 10GbE ports is expensive. Well over $100. This one is $35.

I guess I don’t see the problem with a cheap solution that serves its intended purpose.

You only included 3 clients in your original sample stack. My point was that it makes no sense to use CX4 gear for such a config.

As for overpaying for my switch - I am happy to pay $20 extra for a switch that I can sit in the same room with.

It’s a sample config. The idea is that when it costs $13 to add a client, a dedicated switch with greater than 4 ports that’s also dirt cheap makes expansion easier. If I changed the sample stack to 5 devices and it still cost barely over $200, would that make it easier for you?
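For reference, a quick back-of-envelope on the expansion cost (a sketch using the approximate figures from this thread — ~$200 for the 3-client sample stack and ~$13 per added client):

```python
# Back-of-envelope expansion cost for the CX4 sample stack.
# Assumed figures from this thread: ~$200 for the 3-client sample
# stack, ~$13 per additional 10GbE client (used NIC + CX4 cable).
base_stack = 200   # approximate 3-client sample stack cost, USD
per_client = 13    # approximate cost per extra client, USD

five_client_total = base_stack + 2 * per_client  # add clients 4 and 5
print(five_client_total)  # 226
```

So even a 5-device stack only barely clears $200.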

This is a far cry from “it’s dead, spend your money on something else.”
It doesn’t “make no sense.” CX4 does 10Gb at the same speed as SFP+, and dedicated switches with >4 ports are cheaper, as I’ve said and as you’ve avoided responding to. It’s for a home lab (for lack of a better term), it’s cheaper, and it works. That makes plenty of sense.
Just because something isn’t deployed in production anymore doesn’t make it useless.

CX4 makes a little bit of sense in your very narrow use case of a “completely broke ass homelab enthusiast” but still zero sense for everybody else.

99% of home networks have fewer than 5 10GbE devices (i.e. a cheap 4x SFP+ switch would be more than enough), and the 1% of enthusiasts who have 5+ systems worth connecting at 10GbE surely can afford to pay an extra ~$150 for a decent switch ($200 will get you something like a Brocade ICX7250-24P - 24x 1GbE POE + 8x 10GbE SFP+). It would take well under 2 years to make those $150 back in electric bill savings vs your combo (i.e. 50W vs 200W+ before POE budget, running 24/7).
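That payback estimate is easy to sanity-check. A rough sketch, assuming the ~150W draw difference quoted above and an assumed electricity price of $0.12/kWh (the price is my assumption, not from any listing):

```python
# Rough payback time on the ~$150 switch premium, given a
# ~150 W idle-draw difference (50 W vs 200 W+) running 24/7.
# $0.12/kWh is an assumed electricity price.
watts_saved = 150
price_per_kwh = 0.12

yearly_kwh = watts_saved * 24 * 365 / 1000   # ~1314 kWh per year
yearly_savings = yearly_kwh * price_per_kwh  # ~$157.68 per year
payback_years = 150 / yearly_savings
print(round(payback_years, 2))  # 0.95
```

At those assumed rates the premium pays for itself in about a year; pricier electricity shortens it further.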

You like to tinker with old crap. I get it. But pretending that it is a viable solution for other people? Just does not make sense.

Lots of people here are “pretty broke”. Options are never a bad thing. Sure it’s outdated, but it surely could be useful to a few people here, even if it’s not the latest and greatest. That’s why it’s in #technology:ebay-finds-research, not #marketplace:tech-deals.

I have nothing against broke people. My point was that being broke and having access to very cheap (free?) electricity is a pretty unique situation. For the rest of us, the proposed config may seem cheap to acquire, but the reality is that it is more expensive to run day to day than other, less outdated options.

Thanks a ton for putting the effort into this post. I’ve been kicking around the idea of installing 10G between my workstation and my Unraid box. I didn’t even know CX4 was a thing and will take a close look at your suggested options.

I might need a full 15m cable run between my office and my basement rack. Looks like the 15m cables are as expensive as two of the cards.

At any rate, thanks for taking the time.


good post.

I ended up getting a Brocade ICX6450 switch (4 SFP+ cages plus either 24 or 48 gigabit RJ45 ports, POE or non-POE) for $100, and bought HP 649281-B21 cards, which are OEM Mellanox ConnectX-3 dual-port 10G/40G/56G InfiniBand/Ethernet NICs (flashable to the latest non-OEM Mellanox firmware), for $26 each. I then had to get QSFP 40G-to-SFP+ 10G adapters ($15), SR fiber transceivers ($8 each), and OM4 LC-LC fiber patch cables ($4 each). My total ended up around $260 for connecting 3 machines to the network via 10G.

Not planning on going to 40G QSFP, but my NICs are ready for it and the Brocade ICX6610 switches are cheap enough.