Sun Oracle F80 800GB PCIe Flash Accelerator 7069200 - Fixing Incorrect Drive Capacity

I am also getting the mmap bar1: operation not permitted error.

Maybe you should post the command and the output you have used/gotten.
“Operation not permitted” sounds like you are not running the command as root (superuser).

The command I entered is
sudo ./lsirec 0000:01:00.0 readsbr sbr_backup.bin (with the correct identifier for my case)

result is mmap bar1: operation not permitted

I followed the steps to a tee:
all files in a folder, open a terminal and go to that folder,
sudo chmod +x ddcli lsiutil lsirec
sudo lsiutil -e
options (1 46 5) file name f80-bkup.bin
exit back out (0 0 0)

lspci -Dvvnn | grep LSI (returns 0000:01:00.0)

sudo ./lsirec 0000:01:00.0 readsbr sbr_backup.bin

   result is mmap bar1: operation not permitted

Did you ever figure this out? I’m in the same position. :confused:

edit: Also, does anyone have a backup of the SBR files that are generated? I of course was an idiot and did it via a USB boot and no longer have them. I meant to back them up, but brain farted.

thanks.

To answer my own question and the one above:

  1. I had 2 other devices and they seem to have the exact same firmware (or at least the sbr_backup was the same for all of them)

  2. I had a terrible time getting this to work. I couldn’t format after flashing to IR mode. What I ended up doing was this (a condensed command-line sketch follows the list):

a) erase steps
b) upload NWD.split.bin
c) load sbr_new.bin
d) reboot
e) ddcli updatepkg NWD
f) reboot
g) erase steps
h) upload ELP.split.bin
i) writesbr sbr_backup.bin
j) reboot
k) ddcli updatepkg ELP
l) reboot

  • at this point I saw 4x 200GB drives in Linux
    and now repeat the IR instructions
    m) erase
    n) upload NWD.split.bin
    o) lsirec writesbr sbr_new.bin
    p) reboot
    q) ddcli updatepkg NWD…
    r) reboot
    s) and now, finally, ddcli formatting works without a hitch and an 800GB drive appears in Linux
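
In other words, one leg of that sequence (the IT-mode pass, steps g-l above) looks roughly like the following on the command line. Treat it as a sketch only: it assumes the “erase” and “upload” steps are lsirec’s unbind/halt and hostboot commands from the OP’s guide, and the PCI address and file names are examples, so substitute your own.

sudo ./lsirec 0000:01:00.0 unbind                   # release the card from its kernel driver
sudo ./lsirec 0000:01:00.0 halt                     # "erase steps": halt the running firmware
sudo ./lsirec 0000:01:00.0 hostboot ELP.split.bin   # "upload": boot the chip from the IT firmware image
sudo ./lsirec 0000:01:00.0 writesbr sbr_backup.bin  # write the backed-up SBR for IT mode
sudo reboot
# then ddcli updatepkg with the matching ELP package, and reboot once more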

I have done this for 3 drives now with the same SBR files (will get to a fourth soon).

Do I need to do all those steps? No clue. But besides the annoyance of reboot times (an R720xd can take 5 minutes to reboot, though generally it seemed under 3), it worked reliably, so I didn’t rock the boat.

Apologies for the late reply. I think it was bad memory; in my case DEV4 was starting to fail, which prevented the drive from being formatted in IR mode (as one drive). It could still be used in IT mode, as the devices are split into 4 individual drives, but I decided to send the item back because if one SSD of the 4 is failing, I suspect the other 3 would have followed suit.

A lot of people have been buying these drives after a few YouTube videos were made singing their praises as cheap storage with industrial-grade memory, but unfortunately many of these F40/F80s are turning up from hardware suppliers via decommissioned server farms etc., where they have been hammered.

I bought mine from Bargain Hardware UK, which had loads of them for a reasonable price, and they all sold out quite quickly. I can’t fault Bargain Hardware; they took the item back and refunded me, no quibble, but I wonder how many others have ticking time bombs and haven’t realised.

I know, I watch Craft Computing too (though in my case I was actually shopping for them as his video came out; I had 4 in my cart, then they disappeared, and I was wondering why). I ended up getting 4 of them later that week for $70 shipped; they had lots of hours of use, but in practice very little written to them (effectively 100% life left).

Anyways, I ended up figuring out my issue: flashed to IR, then to IT, then back to IR, and was able to format them to one big 800GB disk.

I couldn’t find them anywhere, and am very much not an expert, but I managed to cobble this together: NWD-F40-SPLIT.bin

You also need to change the value in sbr.cfg from SubsysPID = 0x0504 (as in OP’s post) to SubsysPID = 0x0581.
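
If it helps, here is roughly how that edit fits into the SBR round-trip, assuming you are using the sbrtool.py script from the lsirec repository that the OP’s guide uses; adjust the PCI address and file names to your own.

python3 sbrtool.py parse sbr_backup.bin sbr.cfg      # dump the card's SBR to an editable text file
# edit sbr.cfg: change SubsysPID = 0x0504 to SubsysPID = 0x0581 for the F40
python3 sbrtool.py build sbr.cfg sbr_new.bin         # rebuild the binary SBR from the edited config
sudo ./lsirec 0000:01:00.0 writesbr sbr_new.bin      # write the modified SBR back to the card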


Thank you, I’ll give it a try!

as a total aside, in ddcli’s health listing, what does “trim count” mean?

I signed up for an account just to answer this question. Took a bit of Google-fu to get past the mmap errors on Debian.

What solved this issue for me was to edit grub to include iomem=relaxed in the kernel parameters. Then the lsirec commands started working without mmap errors.
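
On a stock Debian/Ubuntu install, that edit looks roughly like this (the existing contents of GRUB_CMDLINE_LINUX_DEFAULT on your machine may differ):

sudo nano /etc/default/grub
# append iomem=relaxed to the existing kernel parameters, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet iomem=relaxed"
sudo update-grub      # regenerate the GRUB configuration
sudo reboot
cat /proc/cmdline     # confirm iomem=relaxed now appears on the running kernel command line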

I apologize for the long post; hopefully anyone else with problems might find an answer for their F80 in the future.

Got an F80 working as an 800GB drive last weekend in Ubuntu 20.04. Even programmed the SAS address from the back of the card to get rid of the Seagate BIOS boot error at POST. Mounted at 799.9 GB, speed test AOK, happy dance!

Didn’t do anything other than “sync; shutdown now”. Came back today, and the BIOS reports (on first boot)

“ERROR! Adapter Malfunctioning!” MPT BIOS Fault 05h encountered at adapter PCI(02h,00h,00h)
Updating Adapter List!

all other reboots report:

WarpDrive Storage Failure Detected

then the normal PCI SLOT, Vendor Name, etc. with nothing listed. I know before the reflash it listed 4x 200GB drives, but I don’t remember what it was after flashing to IR.

ddcli reports “VPD data is not valid” using the health command
and lspci says (short version):

02:00.0 Serial Attached SCSI controller: Broadcom / LSI SSS6200 PCI-Express Flash SSD (rev 03)
Subsystem: Broadcom / LSI Nytro NWD-BLP4-800

Capabilities: [d0] Vital Product Data
Not readable
pcilib: sysfs_read_vpd: read failed: Input/output error

After reading about the config file to set the VPD, I decided to stop because I’d surely mess it up if I tried to make one from scratch…

The LEDs on the back are: Life: green; Status: red, steady; Activity: green, blinking.
All LEDs on the flash modules are green. ddcli says the max temp is 44C on any module, and it also says “Backup Rail Monitor : GOOD”.
The manual says to run ddcli -list. Mine gives:
WarpDrive Selected is NWD-BLP4-800

WarpDrive Information

WarpDrive ID : 1
PCI Address : 00:02:00:00
PCI Slot Number : 0x03
PCI SubSystem DeviceId : 0x504
PCI SubSystem VendorId : 0x1000
SAS Address : 500605B 00663FE60
Package Version : 13.00.08.00
Firmware Version : 113.00.00.00
Legacy BIOS Version : 110.00.01.00
UEFI BSD Version : 07.18.06.00
Chip Name : Nytro WarpDrive
Board Name : NWD-BLP4-800
Board Assembly Number : N/A
Board Tracer Number : N/A
NUMA : Enabled
RAID Support : YES

ddcli -health gives a long output that looks OK, with one line being “Seagate WarpDrive Management Utility: WarpDrive is in inactive state.”

I got this F80 from the eBay seller listed above. Has anyone gotten a bum card from them in the last few weeks?

I did an F40 before the F80 and it’s working fine, so I think I’ve gotten the hang of flashing things correctly.

Any suggestions?

I flashed the F80 back to IT mode and it works fine. The status LED is now green. Reflashing to IR mode always ends up with a red status light. In IR mode, when the BIOS boots, it finds the F80 in slot (3) (which is correct). Then, after the ‘Initializing…’ command, it says the card in slot (255) has an error.

The VPD is not programmed. I found this page ( Show the Vital Product Data Command - Sun Flash Accelerator F80 PCIe Card User's Guide ) with a dump of the F80 VPD, and I’d like to know the format of the file that the writevpd command wants in order to reprogram the VPD.

I’m no expert on this, but the sbr.cfg created by the Python script seems to have mostly the same fields as the VPD log on that web page. Are they the same area in the card’s flash?

I got rid of this error by editing the SASAddr in sbr.cfg to an arbitrary hex number after the required “500605B” prefix. I repeated the steps mentioned above starting from “Modifying device IDs”.
After that I had to go into lsiutil -e and paste the same SASAddr into option 18 (Change SAS WWID). After the reboot the card works as expected. I don’t think the VPD is necessarily needed to operate this card.
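
For anyone repeating this, the two places that have to agree look roughly like the following; the 16-digit address below is a made-up example, not one to copy.

# in sbr.cfg, before rebuilding and writing the SBR back:
#   SASAddr = 0x500605b001234567    <- made-up example; keep the 500605B prefix
sudo ./lsiutil -e
# in the menus: select the WarpDrive (1), pick option 18 "Change SAS WWID",
# paste the exact same address, then back out with 0 and reboot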

I’m now going to test the card by moving my Proxmox installation from my previous SATA SSD to the F80.

Edit 1: Formatting
Edit 2: Everything works as planned.

Turns out I got a bum card. The backup rail monitor started showing intermittently bad, and it won’t show 1x 800GB in IR mode at BIOS startup most of the time. Burned it back to IT mode and it works, but it was terribly slow. Come to find out, Windows disables write caching when it sees the backup system is bad, as it should, but that leaves the F80 as just a slow PCIe 2.0 SSD. Already sent it back, but hoping to find another one soon.

In case someone is wondering, the card is not supported anymore in Windows 11.
If you update and your drive just does not show up, it’s likely the reason.

You can still use it if you load the corresponding drivers for your OS.
The Sun Oracle F80 is basically also a Seagate Nytro WarpDrive Accelerator Card.
You can use those drivers:
https://www.seagate.com/support/solid-state-flash-storage/accelerator-cards/nytro-warpdrive-accelerator-card/downloads/
I used “Nytro Windows Driver 2012 (2.10.72.01)” for Win11 and it works so far.
In case you forgot how to install drivers manually:
Go to your System → Advanced System Settings → Hardware → Device Manager.
Select the first of the unidentified devices, right click → Update drivers → Browse my computer → select the file/folder you downloaded.
It should now show up as a “LSI WarpDrive Solid State Storage” in the Storage controllers section.
I guess the other 5 (or at least 4) unidentified devices are the individual SSDs. I don’t know what to do with those, but if the OS can talk to the LSI Logic controller, the RAID volume should show up again.


I still can’t get any Debian to work with this. I’ve tried iomem=relaxed, rebuilding the kernel with the memory-restriction options turned off, binding and unbinding the driver, running a certain command because I have an Intel chip, and I even tried installing an old Ubuntu distro with Linux 3.14.
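
In case it helps with diagnosis, these are the kinds of checks that show whether those settings actually took effect (the config file path assumes a stock Debian/Ubuntu kernel package):

cat /proc/cmdline                                              # iomem=relaxed should show up here after the grub edit
grep -E 'CONFIG(_IO)?_STRICT_DEVMEM' /boot/config-$(uname -r)  # the /dev/mem restriction options in the running kernel
sudo ./lsirec 0000:01:00.0 info                                # lsirec's read-only info command, to see whether mmap works at all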

Does anyone have any suggestions of another distro I can run to get this working, or should I try to find a cheap used AMD machine and try there?

Hello, did you ever get the F40 working?