Help setting up Adaptec 71605

This is my first time setting up any kind of external RAID card, so please bear with me and my questions. I have the card installed in an x16 slot on my ASRock B654M Pro4 MOBO. From what I have read on here, when booting up the system I need to press Ctrl-A to get into some sort of config menu, but at no point while booting into my Ubuntu setup am I seeing a Ctrl-A prompt show up. I also read someplace about pressing Ctrl-C, but outside of my normal BIOS and boot-option prompts I am not seeing either of these.

Once loaded into Ubuntu I ran lspci, but I am not seeing the 71605 card listed. Does this mean that the card is not being seen by the system, or is it just not showing up because I have not enabled some function on the card? The card does have power, because the LED lights on it are lit up. I also looked in my BIOS to see if the card shows up there, but I am not finding it anyplace. Then again, I may not be looking in the correct location, since as I mentioned at the start this is my first controller card install.
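For reference, here is roughly how I was checking (the grep pattern is just a guess at how the card identifies itself):

lspci | grep -i -e adaptec -e raid
lspci -k    # also shows which kernel driver, if any, has claimed each device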

Well, this is interesting. I updated the BIOS to the latest version, and once that was done and my system rebooted, an alarm went off. It was a constant buzz, almost like a smoke detector going off. From what I have researched, this is an over-temperature alarm? I do have a 40mm fan zip-tied to the heat sink, blowing outward. I shut the system down for about 10 minutes and then booted it back up without the alarm going off.

Once booted into the Ubuntu OS, when I ran lspci this time I was able to see the card installed.

It appears that maxView Storage Manager, which my controller card's documentation describes, does not support the latest version of Ubuntu. How is everyone else setting up this controller in Ubuntu, or is everyone pretty much using UnRAID?

Can you add some pics of what the physical setup looks like?

Thanks for the reply, but I was able to get the RAID 5 initialized through Ubuntu's mdadm. However, I now have another issue: it appears that the card is seeing the drives as 10.7TB instead of 12TB. Because of this, my RAID 5 setup is only 43TB in size, when it should be closer to 52-55TB according to the online RAID calculator I used.

Any suggestions as to why I am seeing this and not 12TB drives?

sda 8:0 1 10.7T 0 disk
└─md0 9:0 0 42.8T 0 raid6 /newarray
sdb 8:16 1 10.7T 0 disk
└─md0 9:0 0 42.8T 0 raid6 /newarray
sdc 8:32 1 10.7T 0 disk
└─md0 9:0 0 42.8T 0 raid6 /newarray
sdd 8:48 1 10.7T 0 disk
└─md0 9:0 0 42.8T 0 raid6 /newarray
sde 8:64 1 10.7T 0 disk
└─md0 9:0 0 42.8T 0 raid6 /newarray
sdf 8:80 1 10.7T 0 disk
└─md0 9:0 0 42.8T 0 raid6 /newarray

What are the drive models? Also, it appears to say RAID 6, not RAID 5, in your last post.

Oops! That was a mistake. I will see if redoing the setup in RAID 5 resolves my issue.
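Here is roughly the sequence I plan to use to tear it down and recreate it as RAID 5 (a sketch, with device names taken from my lsblk output above; I will double-check the names before running anything destructive):

sudo umount /newarray                         # unmount the filesystem first
sudo mdadm --stop /dev/md0                    # stop the RAID 6 array
sudo mdadm --zero-superblock /dev/sd[a-f]     # wipe the old md metadata from each drive
sudo mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[a-f]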

I am using 6 Western Digital 12TB 3.5" Ultrastar DC HC520 SAS drives in this setup.

So what have you ended up with so far? It seems like you never actually got into the HBA configuration, but since Ubuntu sees the individual disks, maybe the card is already passing them through. And now you're defining a RAID array in software on Ubuntu, and then what?

How can I check my HBA configuration, and what should I be looking for? As I mentioned in my initial post, I am not seeing any Ctrl-A option appear at boot. I assume there is some BIOS setting that needs to be enabled, but I'll be darned if I can find it.

Try looking in your BIOS settings for something like "Enable Option ROMs"; that would need to be enabled for the HBA firmware (and its Ctrl-A prompt) to load at boot.

Another thing you could try is disabling "Quick Boot" or "Fast Boot" or similar, which might skip Option ROMs even when they are enabled.
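You might also be able to talk to the card from within Ubuntu using Adaptec's arcconf CLI, even without maxView, assuming the aacraid driver has claimed the card (the controller number 1 below is an assumption):

sudo arcconf list             # show controllers arcconf can see
sudo arcconf getconfig 1 ad   # dump adapter-level info (mode, firmware, status)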

I'm not sure, but the fact that you see all the individual disks in the OS might mean that your card is flashed to IT mode instead of functioning as a hardware RAID controller. IMHO this is actually a better setup; these days software RAID tends to be better than hardware RAID in just about every way, both in terms of performance and especially features.

Finally, I think it is correct that the 12TB drive shows up as roughly 10.7TB. It is likely that the drive is labeled in terabytes (TB) but the OS is reporting tebibytes (TiB). This is common: it is the same amount of space either way, but drive manufacturers use the unit that gives the bigger number for marketing purposes.
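You can check the conversion yourself with a quick one-liner (plain Python 3, which ships with Ubuntu):

python3 -c "print(12e12 / 2**40)"   # 12 TB expressed in TiB -> about 10.91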

If that is not the case and the OS really is reporting terabytes, another possibility is that the drive has 4K physical sectors but is formatted as 512e, using 512-byte logical sectors. That can leave some amount of disk space unavailable to the OS. You might be able to reformat the drives to change that, but the fix is involved and could brick your drives.
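A quick way to see the sector layout is lsblk's sector-size columns (these exist in stock Ubuntu; /dev/sda is just an example):

lsblk -o NAME,SIZE,PHY-SEC,LOG-SEC /dev/sda   # physical vs. logical sector sizes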

Now I am getting confused. You are telling me that even though the manufacturer lists a drive as 12TB in size, I am actually getting a 10.7TB drive because the OS cannot see all of the drive?

My current setup is six 4TB drives in a RAID 5 with a total storage capacity of 18.2TB. Now with these new drives, which are supposed to be 12TB in size, I am barely getting over double the space, at 42.8, when the advertised math works out to three times the space. To add even more confusion, the online RAID 5 calculators I have looked at estimate somewhere in the range of 52-55TB.
Here is the output for both RAID setups right now. As I mentioned, I want to redo the RAID 6 array as RAID 5, since that was a mistake. Before doing that, I want to make sure my card is set up correctly so that I can use hardware RAID. I do not want to put any extra stress on my CPU, since it is a low-end Intel T processor.

sda 8:0 1 10.7T 0 disk
└─md0 9:0 0 42.8T 0 raid6 /newarray
sdb 8:16 1 10.7T 0 disk
└─md0 9:0 0 42.8T 0 raid6 /newarray
sdc 8:32 1 10.7T 0 disk
└─md0 9:0 0 42.8T 0 raid6 /newarray
sdd 8:48 1 10.7T 0 disk
└─md0 9:0 0 42.8T 0 raid6 /newarray
sde 8:64 1 10.7T 0 disk
└─md0 9:0 0 42.8T 0 raid6 /newarray
sdf 8:80 1 10.7T 0 disk
└─md0 9:0 0 42.8T 0 raid6 /newarray
sdg 8:96 0 3.6T 0 disk
└─sdg1 8:97 0 3.6T 0 part
└─md127 9:127 0 18.2T 0 raid5 /md0
sdh 8:112 0 3.6T 0 disk
└─sdh1 8:113 0 3.6T 0 part
└─md127 9:127 0 18.2T 0 raid5 /md0
sdi 8:128 0 3.6T 0 disk
└─sdi1 8:129 0 3.6T 0 part
└─md127 9:127 0 18.2T 0 raid5 /md0
sdj 8:144 0 3.6T 0 disk
└─sdj1 8:145 0 3.6T 0 part
└─md127 9:127 0 18.2T 0 raid5 /md0
sdk 8:160 0 3.6T 0 disk
└─sdk1 8:161 0 3.6T 0 part
└─md127 9:127 0 18.2T 0 raid5 /md0
sdl 8:176 0 3.6T 0 disk
└─sdl1 8:177 0 3.6T 0 part
└─md127 9:127 0 18.2T 0 raid5 /md0

I believe I know where the calculators are getting their estimates from, and that is the conversion of TB to TiB.

Something still seems off, since I am only seeing 42.8. To me it appears that I am missing about 10TB of space in my RAID setup. Will this change once I redo the RAID as 5 instead of 6?

You'll notice that your 4TB-labeled drives are also showing up as less than an even 4 (3.6, it seems). That's not uncommon at all.
I assume it's basically the same thing happening to the 12TB drives, except we haven't confirmed that they aren't in a special format that eats into the space further, either 520-byte sectors or protection information bits, or both (or those may be the same thing; I'm not sure). There's a quick check sketched below.
Also, as you've noted, you're still on RAID 6. Switching to RAID 5 should at least change the amount of usable capacity, though probably still not to what you seem to be expecting.
And a T-series CPU isn't a "low end" CPU; it's just power-capped, so it has a lower performance ceiling. Still more than sufficient for a NAS.
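Something like this should show the sector sizes and any protection formatting on a SAS drive, assuming smartmontools is installed (the device name is an example):

sudo apt install smartmontools
sudo smartctl -i /dev/sda   # look for logical/physical block sizes and any "Formatted with type N protection" line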

You are telling me that even though the manufacturer lists a drive as 12TB in size, I am actually getting a 10.7TB drive because the OS cannot see all of the drive?

I'm not saying that. I'm saying that, depending on the software, what is being reported as "T" might be TiB instead of TB, and 12TB ≈ 10.9TiB.

If that is the case for you, then your 72TB would be about 65.5TiB, which is pretty close to the 64.2T (6 × 10.7T per disk) you are seeing.

The reported 42.8T is the capacity of your array after parity is accounted for: you have 6 × 10.7T = 64.2T, minus 2 × 10.7T for RAID 6 parity, which leaves 42.8T usable. RAID 5 would net you 53.5T usable.
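You can sanity-check that arithmetic with a one-liner (plain Python 3):

python3 -c "s=10.7; print(round(6*s-2*s,1), round(6*s-s,1))"   # RAID 6 vs RAID 5 usable -> 42.8 53.5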

In any case, hopefully you can get into the firmware for the controller card so you can set up hardware RAID. Let us know if you have any luck with the "Enable Option ROM" and "Disable Quick Boot" BIOS options.

Wanted to give an update on this and thank everyone for their help and suggestions. I redid the RAID setup, moving from 6 to 5, and once that was done I ended up with 53.5TB, which is what I was expecting. Since RAID 6 can tolerate up to two failed drives, it has to use two drives' worth of space for parity instead of one. So it would seem my issue was simply that I had accidentally set up the array as RAID 6 instead of the RAID 5 I intended.
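For anyone following along, the level and usable size of an md array can be confirmed with (array path taken from my earlier output):

sudo mdadm --detail /dev/md0   # shows RAID level, array size, and member disks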

I did some research into the whole TB vs TiB thing, and from what I can tell it is actually Windows that reports sizes using binary math (while still labeling them GB/TB). Mac fixed this issue and reports decimal TB, and Linux is mixed; lsblk, for example, still reports binary units with a bare "T" suffix, which is why my 12TB drives showed as 10.7T.

Can I stop you for a second and recommend that you use a software-based RAID system such as TrueNAS or Unraid? There's really not a lot of reason to be running hardware-based RAID nowadays.

Put the RAID controller in HBA mode and pass the disks through to the OS directly.
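I believe the controller mode on this card is normally changed in the Ctrl-A BIOS utility, but you should be able to check the current mode from Linux with arcconf (the controller number 1 is an assumption):

sudo arcconf getconfig 1 ad | grep -i mode   # shows the current controller mode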

It’s not exactly an issue. It’s just how they’ve chosen to report size.

Thanks for the suggestion, but this is not a new setup, so I would have to install a new OS and then re-set up everything that is running on my system. Since it took me several hours to set it up the first time, I really do not want to go through that hassle again. The setup appears to be working properly now that I redid the RAID from 6 to 5. The system has been running off hardware RAID since it was built, because that is what I am familiar with, and it has been working without issue. If something happens to my OS drive, then I may consider swapping to another OS as you all recommend.