SAS/SATA breakout cable lengths for Rosewill RSV-L4500U case

Just purchased a Rosewill RSV-L4500U case (currently in transit, won't receive it till next week).
As I cannot locate any specific details on other people's builds using this case, can anyone advise on the SAS/SATA breakout cable lengths required to reach the 3 x drive cages from a PCIe slot in the case?

I found one link referencing the use of 3ft/1m cables, but I would rather not have excess cable length coiled up in the case where possible.

0.5m cables can be tight, but generally they will work.

There's little to no downside to having extra cable length in your case; there's plenty of room for it.

Hmm, if the 0.5m cables are tight, I might consider going with the mid-length (0.75m) cables instead for the furthest and middle cages, so I have sufficient length without being too excessive… I also have a spare 0.5m cable from my existing NAS I might use to test on the closest cage, otherwise I'll go with a 0.75m for it too once the case arrives.

It's all about tidy cable management and the look of it… plus not having big bunches of cables lying around would help with airflow inside the case.

I have 1m cables and I'm not a huge fan. Like jdm said, plenty of room and they're not tight. But I've got so much extra cable and there's no way to hide it; it looks like a rat's nest. Plus the fact I've got a fan controller and fan cables to hide/tuck. It's rough. Haha. I could probably do better, but in the long run no one will see it so I can't really complain.

When you get the build done I'd be curious to know how it goes. I've got the 4500 and was looking at another one but only see the 4500U. Curious what the difference is, if any.

I'm thinking of probably going with 0.75m for the furthest and middle bays, and probably a 0.5m cable for the closest bay (the case I've ordered hasn't been delivered yet, so I haven't been able to determine my actual required cable lengths)… Basing it off your L4000 case setup, would you say the 0.75m cables would also be excessive (they would be about 1 foot shorter than your existing cables)?
Happy to share some photos of my setup once I've got it, but I expect to be building it over a couple of months as I slowly gather the required parts, since I'm looking at a cost upwards of AU$4,000 for the base system.
In case you're interested, this is the setup I currently plan to initially install in the case - https://au.pcpartpicker.com/list/pXq2Vw.
The quoted custom parts and non-Amazon prices are all in AU$, whereas Amazon prices are auto-converted to your local currency.

Just took delivery of the case… as it's my first Rosewill case I don't have anything to compare against to identify the differences, but I'm assuming you may be able to…
From what I have read up about it, I suspect the main (noticeable) difference is the front panel, which has a USB 3 port (I believe the older cases only had a USB 2 port?)

I also noted that the mobo sits on what appears to be a removable tray, but it would be pointless to remove the tray and mount the drive to it before installing the mobo in the case, as the screw holes mounting the tray to the case would be underneath the mobo.

Anyhow, here's some pics of the case, freshly removed from the box, as I figure you might be able to spot any differences between the L4000 & L4500 from them. I'll also note that the middle fan frame has a gap between it and the case floor underneath it, so you can freely run cables from the mobo to any of the drive bays in a direct line.

Front Panel

Front-Top view with door open

Front-Top view with door closed

Rear-Top view

Close up of the drive bays from behind

Thanks for the info! It seems to be identical to the 4500 other than the USB ports being upgraded to USB 3. Guess it's time to add one to the wishlist.

Just curious to know, in your previous cases which PSU did you go with? As per the PCPartPicker link I provided earlier, I'm considering going with the Corsair HX 750W Platinum PSU as it comes with 16 SATA power connectors, but I'm curious to know what PSUs others have used to power the bays and fans in case there is a better PSU for the job available.

I have a couple of different Rosewill 4U cases (gaming PC, NAS, VM host, NVR). Depending on the use case I have either a Seasonic or Corsair semi-modular or fully modular PSU. My NAS has a Seasonic 80 Plus Gold, as does my backup server. The gaming PC and VM host use Corsair CX series. The only machine that has more than 6-8 drives is the NAS. For the SATA power connectors I'm using SATA extension cables; in another thread jdm recommended these adapters. They get the job done and somewhat help with cable management. In all honesty, I'd stick with the HX 750. It's only $30 more than the Seasonics I have, and once you add the SATA cable extensions the Seasonic total gets closer to the $145 the HX is going for on Amazon. Not to mention it's Platinum instead of Gold.

To power the fans I have two fan controllers connected to the onboard fan headers of my motherboard. I've replaced the stock fans with either P12s from Arctic or Noctua fans. I only went Noctua for one case because at the time that's all that was available with good performance. Actually just got a 5-pack of P12 fans for my desktop and need to place another order to replace the Noctuas.

Hmm, I was actually thinking of burning a little more money so I have direct power leads rather than using extenders, by getting custom SATA power cables made for the drive bays (refer Configurator – CableMod Global Store). I'm considering one 4-connector SATA power lead for each drive bay, plus a separate SATA power lead long enough to power the 5th drive in each bay (so 4 x custom SATA power leads in total). I would then use one of the factory-supplied Molex leads to power the case fans, plus a Molex to SATA adapter to power some SSDs I'll mount separately in the mobo area.
I'm yet to decide if I'll run 1 or 2 fan controllers to manage the case fans, but I suspect I'll use 2 (1 for the CPU cooler and rear 80mm fans, the 2nd to run the 6 x 120s).

As I only have the case at this point, I am open to suggestions on how to configure the internal cabling/power, although I've specced out on PCPartPicker what I'm intending to install in it component-wise initially, with the eventual plan of filling all the drive bays with 3.5" drives over time.

Sounds like a good plan. One thing I would suggest is to use the 1-to-4 SATA adapters for each cage and plug them in from the bottom up. If you have the connector that plugs into the SATA power from the PSU at the bottom, it'll make management a bit easier. Then use a direct SATA power connector for the top drive. There is surprisingly not a whole lot of room behind the drive cage with drives installed and the fan wall in place, even with the wall flipped, which I second doing.

If you use the adapters for the drive cages you should have enough spare SATA power connectors that you shouldn't need a Molex to SATA adapter. But if you do go that route, I would recommend these. Looking at the Corsair page for the HX750, you should be able to get away with using two of the SATA cables to power the 5th drive in each cage along with the SATA adapters, with the third power cable routed to the SSD by the motherboard.

The Noctua fans will work just fine, but I would suggest going with these Arctic fans. I'm not entirely sure if I got the pricing right (used AU PartPicker), but for a few dollars more than one Noctua you get 5 fans that slightly outperform the Noctua. The Arctics have more static pressure and airflow compared to the Noctua on your list. The only downside is the Arctics are 1800rpm fans instead of 1850 like the Noctua. Not really a big deal, but to some people that matters (see Reddit). I'd also suggest the Arctic P8 for the rear 80mm fans.

Looking at the spec sheet for your motherboard, it has 3 onboard fan headers, so you'll need 2 fan controllers/adapters for the cage and wall fans, or one controller if you get something like Corsair has (Corsair Commander). The fan hub I use has 4 ports: one for the "main" fan and 3 that it controls. Since there are 6 x 120mm fans you'd need two of those hubs to power them all. I have one hub connected to the cage fans and one hub connected to the wall fans. The 2 rear 80mm fans can plug into each other and use the last motherboard fan header.

One side note: I see the motherboard has a Realtek NIC. They aren't terrible, but if you're running a Linux or BSD based operating system it's highly recommended to go with an Intel NIC, which is far more widely supported than Realtek. If you're going with a Windows operating system then you should be fine.
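
If you do end up on Linux and want to see which driver a given NIC lands on, a quick check like the rough sketch below works. It's Linux-only and just reads the standard /sys/class/net layout; the interface names and drivers it prints will obviously differ per board (a Realtek port typically shows r8169, an Intel one something like igb or e1000e):

```python
#!/usr/bin/env python3
# Rough sketch: list each network interface and the kernel driver bound to it.
# Linux only; assumes the usual /sys/class/net layout.
import os

for iface in sorted(os.listdir("/sys/class/net")):
    driver_link = f"/sys/class/net/{iface}/device/driver"
    if os.path.islink(driver_link):
        # The symlink points at the driver directory, e.g. .../drivers/r8169
        driver = os.path.basename(os.readlink(driver_link))
    else:
        driver = "n/a (virtual or no bound driver)"
    print(f"{iface}: {driver}")
```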

For reference, my NAS consists of a Supermicro mATX motherboard with an HP 10Gb NIC, 2 LSI controllers, 6 Arctic P12 PWM PST fans, 2 x 80mm Arctic P8 fans, the Deepcool fan hubs, the SATA extension cables and SATA breakout cables. Been running it for a couple of years now and have not had any issues. Granted, it's a NAS so not much heat going through it, but still ole reliable. My VM host has the same fans and hubs installed, and that system has a Xeon v3 that gets toasty; temps are fine even when I stressed it before deployment.

Trying to picture/understand your description here. I've drawn below how I was thinking of powering the drives… Black boxes are the drive cages, light blue boxes are the individual drives, and the yellow, red, green & purple boxes represent the separate power leads/connectors to each of the drives, with the trailing lead going back to the PSU (I would end up with an unused/spare SATA connector on the purple lead)…
I'm uncertain if this is what you mean, or whether to use a SATA power Y-adapter or multi-splitter (that would effectively give me at least 5 SATA power connectors per lead) to run all drives in each cage from a single source lead, so I would only be running 3 leads total to the drive cages (one per cage)? My only concern in doing this would be putting too much power draw on the power rail and frying the drives.
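
(As a rough sanity check on that power concern, here's the back-of-the-envelope maths I'm going off; the per-drive figures are assumed typical 3.5" values, not specs for my actual drives:)

```python
# Ballpark check of the load on one SATA power lead feeding a 5-drive cage.
# Per-drive figures are assumptions (typical 3.5" HDD), not datasheet values.
drives_per_lead = 5
spinup_amps_12v = 2.0   # assumed per-drive spin-up current on the 12V rail
active_watts = 8.0      # assumed per-drive draw once spinning

spinup_watts = drives_per_lead * spinup_amps_12v * 12
running_watts = drives_per_lead * active_watts
print(f"spin-up (all drives at once): ~{spinup_watts:.0f} W on the one lead")
print(f"steady state: ~{running_watts:.0f} W on the one lead")
```

If those numbers are in the right ballpark, the steady-state draw per cage is modest and it's really only the simultaneous spin-up that briefly loads up a single lead and its connectors.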

The comment about the Molex to SATA was due to me thinking of not (at least initially) replacing the factory fans (which are all powered by Molex adapters), meaning I'd need a Molex power lead for them. But if I replace them with Noctuas (or your recommended Arctics) then power should be supplied via a fan controller, and I would not have any need for Molex at all. I would therefore just run a SATA power lead to the SSDs mounted in the mobo area, plus be able to power the fan controllers using individual SATA power extenders so the fan hub/controller can be located closer to the respective fans.

I'm actually tossing up whether to build the box as TrueNAS Scale (it will be replacing my existing TrueNAS Core box I've been running for the past 8 years, since originally building it as FreeNAS v8 and updating it over time) or as a Wintel server (all clients that would be accessing it as a file share are Wintel systems). To be honest I hadn't considered the NIC on that motherboard, so good catch there (if I go TrueNAS Scale on the build that's definitely something I need to consider with it being Linux based, as I may have to look at an alternative board instead).

With the multi-splitter that you linked to, that's what I would recommend using. It's actually a 1-to-4 adapter, meaning that one of the plugs takes the SATA power from the PSU and the other 4 plugs connect directly to the hard drives; in a sense it's a SATA power daisy chain. I actually miscounted the PSU cables on Corsair's site: the PSU comes with 4 SATA power cables with 4 plugs per cable. So you can use a single SATA power cable from the PSU per drive cage, and the fourth SATA power PSU cable can power the SSD by the motherboard. The picture you provided is actually a better idea than what I was talking about, cleaner cable management. However, I wouldn't attach the purple cables to only the bottom drives; I would use one SATA power cable per cage, with one SATA splitter per SATA PSU cable per cage. I've modified your picture to kind of help (hopefully).

I'm with you now on the Molex adapter. I use and recommend this hub. It gets to, or really close to, the fans and can be tucked along the edge of the case. Two are all that's needed since they are only for the 120mm fans.

Never heard of Wintel, had to look that one up. Personally I like Linux/BSD because it's more powerful than Windows and uses fewer resources out of the box, IMO. I wouldn't suggest using TrueNAS Scale just yet. It's come a long way since its announcement and initial release, but a few weeks ago I had a go at it and it failed doing updates for no apparent reason. There are also people having issues with importing ZFS pools and such. If you want an all-in-one build (VM host/NAS) that has ZFS support, I'd suggest Proxmox. It's a hypervisor built on Debian. Perfect Media Server is a site dedicated to just such a setup. You can run a VM for pretty much anything you want and there is container support (LXC), or spin up a VM and install Docker on that (currently have this for a Homer Docker container). You have ZFS underneath for all its greatness, and then VM/container support for anything you want. My VM host runs Proxmox and I have a couple of VMs for various things (Shinobi, Minecraft, website hosting, ad blocking, Docker testing, Jellyfin, NAS [Ubuntu Samba share]) and, to be honest, it's a great setup, at least for me. I have Proxmox send snapshots and backups to my TrueNAS box just in case. I know on the forums here it's recommended to use Unraid, which is pretty good software, but it's not for me.
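
The snapshot/backup side of that isn't anything fancy; conceptually it's just zfs send piped into zfs recv on the TrueNAS box, roughly like the sketch below run on a schedule. The dataset names, snapshot label and SSH target are placeholders, and a real job would do incremental sends plus some pruning rather than a full send every time:

```python
#!/usr/bin/env python3
# Rough sketch of pushing a ZFS snapshot from the Proxmox host to a TrueNAS box.
# Dataset names, snapshot label and SSH target are placeholders; a real job
# would use incremental sends (zfs send -i) and proper error handling.
import subprocess
from datetime import datetime

SOURCE_DATASET = "rpool/data"          # hypothetical dataset on the Proxmox host
SSH_TARGET = "backup@truenas"          # hypothetical SSH user/host for the TrueNAS box
DEST_DATASET = "tank/proxmox-backups"  # hypothetical destination dataset

# Take a timestamped snapshot on the source side.
snapshot = f"{SOURCE_DATASET}@auto-{datetime.now():%Y%m%d-%H%M}"
subprocess.run(["zfs", "snapshot", snapshot], check=True)

# Pipe `zfs send` on this host into `zfs recv` on the remote box over SSH.
send = subprocess.Popen(["zfs", "send", snapshot], stdout=subprocess.PIPE)
subprocess.run(["ssh", SSH_TARGET, "zfs", "recv", "-F", DEST_DATASET],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```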

For the power supply, rather than going with a 4-way splitter for the drive cages, I'm thinking of just going with a 2-way, so I have no excess SATA connectors/cable hanging around, while still only needing to run 3 x power leads to the front of the case.
I'll use one of the factory-supplied 4 x SATA leads to power the SSDs and fan controller (thinking of getting this controller to power all the 120mm fans).
However, I managed to get my hands on a 10Gb SFP switch, so now I'm looking for a decent (but reasonably cheap) set of 10Gb NICs: one for my existing TrueNAS Core server, one for the new (to be built) TrueNAS Scale server, and a 3rd to install in my Windows PC… Clearly I'll need a spare PCIe x8 slot in each system for the cards, so I'm trying to work out both an appropriate card to purchase that is compatible with both versions of TrueNAS, and a motherboard for the new server with enough PCIe slots for 2 x SAS/SATA controllers and the 10Gb NIC (so having an onboard Realtek NIC is not important)… Any suggestions?

@adamsir2, @JDM_WAAAT
As an update to this old chat, FYI I ended up building the new system in July. The specs of what I purchased are listed here. Note that I haven't purchased the listed additional custom SATA power leads as I don't have the case fully populated with drives yet, and I've replaced all case fans with the listed Arctics…

I built the new system running TrueNAS Core 13.0 U1 (the old NAS was running TrueNAS Core 12 U7). I had tried building it as TrueNAS Scale, but I found out that Scale doesn't currently support passthrough of the built-in GPU on 12th Gen Intel CPUs, even if I assign a separately installed PCIe GPU to the OS (hence the existence of the GT710 GPU in the build), so I just went back to a Core server for the time being as it was easier to migrate to the new system that way (I might upgrade it to Scale at a later date once the CPU/GPU is supported for passthrough).

I haven't yet cleaned up the cabling etc. within the case, but I do plan to get around to it at some point using something like this cable tie mount throughout the case.
Currently I have the 2 x 2.5" SSDs just sitting loose beside the mobo against the wall (I need to mount them as well), and I'm using a spare 1m (3 1/3ft) SAS 8087-to-SATA breakout cable, as I found the 0.75m (2 1/2ft) breakout cables I bought were too short for my liking to reach the furthest drive bay, where I have installed the 4 x 16TB drives I've started with.
The 2 drives in the middle cage (refer image below) contained data backed up from my old NAS that I imported locally into the new NAS to speed up the file transfer (about 17TB of data copied locally rather than over the network). Doing this saved me, I believe, about 15 hours compared to copying over a 1Gb network.
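
(Rough numbers behind that estimate, assuming ~112 MB/s is the practical ceiling for 1GbE and ~180 MB/s for the local HDD copy; both rates are assumptions rather than anything I measured, but the maths lands close to that 15-hour saving:)

```python
# Back-of-the-envelope comparison of copying ~17TB locally vs over gigabit.
# The sustained rates are assumptions, not measured figures from my setup.
data_tb = 17
gbe_rate_mb_s = 112      # assumed practical throughput of 1GbE
local_rate_mb_s = 180    # assumed sustained local HDD-to-pool rate

data_mb = data_tb * 1_000_000
net_hours = data_mb / gbe_rate_mb_s / 3600
local_hours = data_mb / local_rate_mb_s / 3600
print(f"over 1GbE : ~{net_hours:.0f} h")    # ~42 h
print(f"local copy: ~{local_hours:.0f} h")  # ~26 h
print(f"time saved: ~{net_hours - local_hours:.0f} h")
```
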
I believe that iX Systems has plans to bring in the ability to expand an existing RAIDZ vDev with individual disks rather than spanning the pool over multiple vDevs, so I hope to add an extra 16TB drive (one of the 2 drives in the middle bay) to the existing 64TB RAIDZ I have set up in the near future (it's just an additional drive plugged in but not being used at the moment).

Pics of the build currently:
Overhead view of the case open (rack mounted, fully extended from the rack on the iStar rails). Yes, I know about the cable ties; I plan to fix that at a later date while trying to hide those fan power cables underneath the PSU if possible.
An issue I had with the mounting bolts for the iStar rails is that they don't tighten fully to the rack, so there is a little play in the rails (most likely contributing to an issue I have when fully extending/reinserting the server, where the rail gets stuck on the rack's front vertical mount and I have to push the server case sideways slightly to make it fully extend).

Close up looking at the mobo compartment

Close up looking at the rear of the drive cages. the main drive volume are the 16TB drives on the left. The 2 drives in the middle are a 3TB & extra 16TB drive I used to copy the data to the server. You can see one of the fan hubs I used to the right, which I plan to mount properly soon as part of the cabling cleanup

Front-on look at the case, fully inserted into my 18RU rack along with my old NAS in a Fractal Design Node 804 case and an HPE ProLiant DL360 Gen9 server I use as a lab for work purposes. The modules to the right of the Node 804 are fan controllers for the roof-mounted fans (2 x AC Infinity dual rack roof fans with controller). The white cables on the left running into the rack are dual temperature sensors with a remote display mounted on the side of the rack; the room can hit the low 40s °C in the middle of summer without the room A/C on, so I use them to monitor the internal rack temps and, when needed, turn on the A/C and the rack roof fans. Currently the room and rack are around 20-25°C during the day as it's winter here at the moment, so I don't need the A/C or the fans on; the server's CPU is only reading about 28°C under load at the warmest room temp, and the drives are running cooler than the CPU, which is a lot cooler than the Node 804's drives ever ran.

Of interest regarding power usage, below is a capture of the rack's power usage over the past month from a Meross smart plug the rack is plugged into. The lower horizontal section on the left was the Node 804 running with a TP-Link 24-port switch in the rack (the HPE ProLiant was unplugged). The spikes are from when I was running both the old and new NAS together (the really big spike also includes running the HPE ProLiant with them). The slightly higher horizontal section in the middle is the RSV-L4500U on its own with the TP-Link switch (again with the HPE ProLiant unplugged and the Node 804 turned off). The rest of the line is the RSV-L4500U with the HPE ProLiant turned on a couple of times over the past week or two, plus my UniFi Dream Machine Pro added to the rack (rear mounted, so not visible in the rack pic above) within the past week. Excluding the ProLiant's power usage spikes, the new NAS is using only slightly more power than my previous NAS in its current state, which I'm pretty happy with, considering the old NAS had 8 spinning drives and 1 SSD, whereas the new box has 6 spinning drives, 2 SSDs and 2 x M.2 drives, with a similar sized PSU but a more power-hungry CPU than the Node 804's old i3.

Thank you to both of you for your advice and suggestions on the build
