Overview
I’ll admit up front: this is a bit of a niche guide, and it probably won’t apply to the vast majority of Unraid users. But I wanted to put it out there anyway, just in case it helps someone.
A little background first: I bought a few new 12TB white label drives on Amazon and eBay, because at the time they were the cheapest price per terabyte for 10TB+ drives. When I installed them, only one drive showed up in the array drop-down list in Unraid. After a bit of digging, I discovered that the serial number in each drive’s firmware wasn’t set to the value on the label; they were all set to 00000000. And since Unraid uses this serial number to access the drives, it was only showing the first drive.
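You can check what serial a drive actually reports with udevadm. The device name and property values below are hypothetical, but the output shape is what you would see for an affected drive:

```shell
# On the server you would inspect a drive's reported properties with
# (hypothetical device name, adjust to your system):
#   udevadm info --query=property --name=/dev/sdb | grep ID_SERIAL
# Sample output for an affected white label drive looks like this:
props="ID_SERIAL=WDC_WD120EMAZ-11BLFA0_00000000
ID_SERIAL_SHORT=00000000"

# The short serial is the zeroed value the firmware actually reports
echo "$props" | grep '^ID_SERIAL_SHORT=' | cut -d= -f2
```

If ID_SERIAL_SHORT comes back as 00000000 on more than one drive, you have the problem this guide addresses.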
Unraid, like most other modern Linux distros, uses udev to identify and label the devices in your system: everything from USB ports to drives to PCIe devices and more. Udev creates the folder structure /dev/disk/by-*, which contains links to the actual drive device nodes based on their serial number, UUID, label and physical path. These links are created at boot and updated as devices are inserted and removed, based on a set of pre-defined rules. Like most Linux configuration, the rules are stored as a series of text files to make them easy to customize, as long as you know what you’re doing. To make my drives show up separately, I needed to modify one of these files.
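As a concrete sketch of how those links are named, here is the by-id naming scheme in plain shell, using hypothetical property values like the ones udevadm reports for a SATA disk:

```shell
# Hypothetical udev property values for a healthy SATA disk; on a live system
# you would query them with: udevadm info --query=property --name=/dev/sdX
ID_BUS="ata"
ID_SERIAL="WDC_WD120EMAZ-11BLFA0_5PABCDEF"

# The default persistent-storage rules compose the by-id link as bus-serial
link="disk/by-id/${ID_BUS}-${ID_SERIAL}"
echo "$link"    # disk/by-id/ata-WDC_WD120EMAZ-11BLFA0_5PABCDEF
```

With every white label drive reporting the same zeroed serial, all of them resolve to the same link name, which is why only one showed up.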
Procedure
CAUTION: You will be modifying a few core system files as you follow this guide. Please triple-check your work before you save and test these modifications.
With a Linux distro that is installed persistently, you could modify the files in place once and be done with it, but since Unraid creates the root filesystem from scratch on every boot, the process is a bit different. We will be putting a few extra files on the USB stick and modifying the Unraid go file to copy the updated files into their proper locations during boot.
First things first, we need to create a folder on the USB stick. You’ll want to run these first few steps directly on your Unraid server. Open a terminal and run:
mkdir -p /boot/config/rules.d
Remember, all files, folders and commands in Linux are CaSe SeNsItIvE. Next, we need to add the rules file to this folder. You have two options:
The clean option
With this option, we will override the default persistent-storage rules file with our updated copy. The advantage is that the white label drives will only show up in the drop-down lists once. The disadvantage is that if the default udev rules are ever updated, you’ll need to merge the updates into your copy. I can’t say how often that will happen, but considering what the file contains, I’d imagine it will only really change if and when the next new drive technology comes out. Even then, you wouldn’t need to update your copy unless you’re adding one of those new drives to your server. If this works for you, run the following command in the terminal:
cp /lib/udev/rules.d/60-persistent-storage.rules /boot/config/rules.d
Next, open the file you copied in your favorite text editor. Scroll down until you see # SCSI devices, around line 39. The section should look like this by default:
# SCSI devices
KERNEL=="sd*[!0-9]|sr*", ENV{ID_SERIAL}!="?*", IMPORT{program}="scsi_id --export --whitelisted -d $devnode", ENV{ID_BUS}="scsi"
KERNEL=="cciss*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}!="?*", IMPORT{program}="scsi_id --export --whitelisted -d $devnode", ENV{ID_BUS}="cciss"
KERNEL=="sd*|sr*|cciss*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="?*", SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_SERIAL}"
KERNEL=="sd*|cciss*", ENV{DEVTYPE}=="partition", ENV{ID_SERIAL}=="?*", SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_SERIAL}-part%n"
You need to change the section to the following:
# SCSI devices
KERNEL=="sd*[!0-9]|sr*", ENV{ID_SERIAL}!="?*", IMPORT{program}="scsi_id --export --whitelisted -d $devnode", ENV{ID_BUS}="scsi"
KERNEL=="cciss*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}!="?*", IMPORT{program}="scsi_id --export --whitelisted -d $devnode", ENV{ID_BUS}="cciss"
KERNEL=="sd*|sr*|cciss*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL_SHORT}!="00000000", SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_SERIAL}"
KERNEL=="sd*|cciss*", ENV{DEVTYPE}=="partition", ENV{ID_SERIAL_SHORT}!="00000000", SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_SERIAL}-part%n"
KERNEL=="sd*|sr*|cciss*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL_SHORT}=="00000000", SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_MODEL}-$env{ID_WWN}"
KERNEL=="sd*|cciss*", ENV{DEVTYPE}=="partition", ENV{ID_SERIAL_SHORT}=="00000000", SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_MODEL}-$env{ID_WWN}-part%n"
You’ll notice the first three lines are unchanged, but the last two rules are duplicated and updated. They work like this: if the drive has a proper serial number, it shows up like it always did, identified by the drive’s model number and serial number. But if the drive’s serial number exactly matches 00000000, the rules switch to using the WWN, or World Wide Name. The WWN has (so far) been unique on every drive I’ve installed in my system.
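The decision those rules make can be sketched in plain shell, with hypothetical property values standing in for what udev would import for a white label drive:

```shell
# Hypothetical property values for a white label drive with a zeroed serial
ID_BUS="ata"
ID_MODEL="WDC_WD120EMAZ"
ID_SERIAL="WDC_WD120EMAZ-11BLFA0_00000000"
ID_SERIAL_SHORT="00000000"
ID_WWN="0x5000cca267abcdef"

# Mirror the modified rules: normal drives keep the bus-serial link, while a
# drive whose short serial is exactly 00000000 falls back to model + WWN
if [ "$ID_SERIAL_SHORT" = "00000000" ]; then
    link="disk/by-id/${ID_BUS}-${ID_MODEL}-${ID_WWN}"
else
    link="disk/by-id/${ID_BUS}-${ID_SERIAL}"
fi
echo "$link"    # disk/by-id/ata-WDC_WD120EMAZ-0x5000cca267abcdef
```

Because each drive has a distinct WWN, each one now gets its own unique link.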
Now that you’ve changed the rules file, you need to update the go file to copy it into place on boot and apply the changes. Open /boot/config/go in your favorite text editor and add the new commands to the beginning of the file, so it looks like this (the #!/bin/bash and emhttp lines are the stock contents of a default go file):
#!/bin/bash
# Copy and apply udev rules for white label drives
cp /boot/config/rules.d/60-persistent-storage.rules /etc/udev/rules.d/
chmod 644 /etc/udev/rules.d/60-persistent-storage.rules
udevadm control --reload-rules
udevadm trigger --attr-match=subsystem=block
# Start the Management Utility
/usr/local/sbin/emhttp &
I would recommend keeping these commands at the top of the go file, even if you’ve made other modifications.
The safe option
With the safe option, you won’t replace the entire rules file, just add a small override of your own. The advantage is that this is a one-and-done solution: you shouldn’t ever need to modify it again. The disadvantage is that each white label drive is going to show up in the drop-down twice, once under its new model-and-WWN name and once under the generic zeroed-serial name. This won’t cause any issues unless you try to mount the drive using both labels. Unraid will also count this as two storage devices, so if you aren’t running the Pro license you’ll lose one available drive slot. As long as you’re OK with that, here’s what to do:
Open your favorite text editor and paste the following text into it:
ACTION=="remove", GOTO="whitelabel_end"
ENV{UDEV_DISABLE_whitelabel_RULES_FLAG}=="1", GOTO="whitelabel_end"
SUBSYSTEM!="block", GOTO="whitelabel_end"
KERNEL!="sd*|sr*|cciss*", GOTO="whitelabel_end"
# for partitions import parent information
ENV{DEVTYPE}=="partition", IMPORT{parent}="ID_*"
# SCSI devices
KERNEL=="sd*|sr*|cciss*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL_SHORT}=="00000000", SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_MODEL}-$env{ID_WWN}"
KERNEL=="sd*|cciss*", ENV{DEVTYPE}=="partition", ENV{ID_SERIAL_SHORT}=="00000000", SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_MODEL}-$env{ID_WWN}-part%n"
LABEL="whitelabel_end"
Save this file on your flash drive in the /boot/config/rules.d folder, with the name 60-whitelabel.rules.
Now that you’ve created the new rules file, you need to update the go file to copy it into place on boot and apply the changes. Open /boot/config/go in your favorite text editor and add the new commands to the beginning of the file, so it looks like this (the #!/bin/bash and emhttp lines are the stock contents of a default go file):
#!/bin/bash
# Copy and apply udev rules for white label drives
cp /boot/config/rules.d/60-whitelabel.rules /etc/udev/rules.d/
chmod 644 /etc/udev/rules.d/60-whitelabel.rules
udevadm control --reload-rules
udevadm trigger --attr-match=subsystem=block
# Start the Management Utility
/usr/local/sbin/emhttp &
I would recommend keeping these commands at the top of the go file, even if you’ve made other modifications.
Testing
Once you’ve modified the rules and go files, you just need to reboot to test your work. Before you reboot, temporarily disable auto-start of the array (under Settings > Disk Settings), just as a precaution. You can re-enable it once you’ve verified the drives show up properly.

One quick note regarding the web UI: whenever you reboot your system, the web UI will be inaccessible for about 2 minutes after the boot process completes. You can still log in locally or via SSH, but the web UI won’t respond to your requests. This is because when the udev rules are reloaded, there’s a 2-minute timeout period while udev goes through the devices, parses the rules, and creates all the links in the /dev folder.
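Once the server is back up, you can sanity-check the result from a terminal. The real check is simply listing /dev/disk/by-id; the snippet below filters a sample listing (hypothetical entry names) to show what to look for:

```shell
# On the server itself you would run:
#   ls /dev/disk/by-id/
# Sample listing: one normal drive plus two white label drives that fell
# back to their model + WWN names
sample="ata-ST12000VN0008_ZV701234
ata-WDC_WD120EMAZ-0x5000cca26711aaaa
ata-WDC_WD120EMAZ-0x5000cca26711bbbb"

# The white label drives are the entries carrying a 0x... WWN instead of a
# serial number; count them to confirm each drive got its own link
echo "$sample" | grep -c '0x'    # 2
```

If every white label drive appears with a distinct WWN-based entry, the rules are working and you can re-enable array auto-start.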