[Guide] Auto-Mounting Filesystems in Linux

well shit. fingers faster than brain

Don’t worry about it. Anybody that has spent more than 5 minutes on a computer has done the same thing.

I’m running into stale file handle errors on my setup. See attached.

I do have hard links turned off, and I have rebooted both the Unraid and the Plex boxes.


This seems to happen when I restart the Unraid box (or stop and start the array) while the Plex box is on.
Contents of /etc/auto.master:

+auto.master
/mnt/nfs /etc/auto.nfsdb --timeout=0 --browse

Contents of /etc/auto.nfsdb:

movies -fstype=nfs,ro,timeo=100,noatime 192.168.2.10:/mnt/user/movies
tv -fstype=nfs,ro,timeo=100,noatime 192.168.2.10:/mnt/user/tv

Did I miss something?

Hello everyone. I am new to Linux and am setting up a new Plex server running Ubuntu. I have Ubuntu installed and have made it to the end of the guide, however nothing is working. I am thinking I am not doing the auto.nfsdb path to my QNAP NAS right, but I cannot figure out what the path should be. When I do ls /mnt/nfs it shows movies tv,
however when I try to access /mnt/nfs/tv it says it does not exist. What am I doing wrong?

I have been trying for a week to figure this out. I have a Plex server (Ubuntu 20.04.2) and an Unraid machine. On the Unraid machine I have a MEDIA folder, and within that are 3 folders (4kmovies, movies, tv). On the Ubuntu machine I can see the folders in the Files app (+ Other Locations), but I can’t find the same directory path when I try to add the library in Plex. I tried to go through the automount steps above but I can’t get through it.

Can you share how you are mounting them?

I finally got around to implementing autofs on a client Ubuntu VM running 20.04.
Unlike some around here, I have multiple Ubuntu VMs (I am not using Unraid).
In my case the client would not see the NFS share when invoked by autofs, even though I could mount the share from the command line (so the client could talk NFS fine; the issue was autofs).

In my case I needed to stop the service:

systemctl stop autofs.service

And then run automount from the command line with debug and verbose flags so I could see what was going on.

automount -f -v -d

In my case, after invoking the automount, it didn’t show much to start. I used another terminal session to try to access the NFS share, then switched back to the first session to read the output and google the status messages. It wasn’t very obvious what was broken, but I ran into an older bugzilla post (LINK). This gave me the idea to add another flag to auto.nfsdb:

-fstype=nfs,port=2049,timeo=100,noatime

Explicitly calling out the port fixed my issue. The client is running NFS v4.
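For context, a complete auto.nfsdb entry with that option looks roughly like the line below; the share name, server IP, and export path are only placeholders (borrowed from an earlier post in this thread), not my actual setup:

movies -fstype=nfs,port=2049,timeo=100,noatime 192.168.2.10:/mnt/user/movies

After editing the map, restart autofs (systemctl restart autofs.service) so the change gets picked up.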

In my case I run a strict firewall restricting incoming connections on the NFS server. The symptoms were exactly like what would happen if I had failed to add a firewall rule to allow NFS traffic from the client. But the firewall rule had been added, confirmed when I could successfully mount from the command-line.


Hey, this has been happening to me a lot as well for the last few months. I haven’t been able to find anything either.

I am able to mount my NFS shares via the autofs guide you have kindly posted. I am, however, having issues when trying to mount a second drive that is on my network. This one is a mergerfs mount, so I believe it should be mounted with the “fuse” parameter.

This is what I am trying:

mergerfs -fstype=fuse,ro,timeo=100,noatime :192.168.1.10/mnt/user/mount_mergerfs#:192.168.1.10:/mnt/user/mount_mergerfs

Did you, or anyone else, figure this out? I’m also trying the same thing and haven’t figured out how to make it work yet. I keep banging my head against the wall.

Mergerfs is a local filesystem. If you want to mount it on another system it needs to be exported via NFS or SMB like any other folder. The readme from the official mergerfs Github repo describes the specific options you need to use when exporting the mergerfs folder via NFS. Once exported, you just add the folder to your auto.nfsdb or auto.smbdb as appropriate and mount it like any other remote filesystem.
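A rough sketch of what that looks like end to end, with placeholder paths and IPs; the actual export options to use are the ones the mergerfs README calls out, so treat this as an outline rather than something to copy verbatim. On a generic Linux server the export would be a line in /etc/exports along the lines of

/mnt/user/mount_mergerfs 192.168.1.0/24(ro,fsid=1,no_subtree_check)

applied with exportfs -ra. On the client, auto.nfsdb then just gets an ordinary NFS entry:

mergerfs -fstype=nfs,ro,timeo=100,noatime 192.168.1.10:/mnt/user/mount_mergerfs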

I’m running into stale file handle errors on my setup, daily, sometimes more than once a day.
I do have hard links turned off, and rebooting my transcode box fixes the issue temporarily.

Contents of /etc/auto.master
+dir:/etc/auto.master.d
+auto.master
/mnt/user/Media /etc/auto.nfsdb --timeout=0 --browse

Contents of /etc/auto.nfsdb
4k -fstype=nfs,ro,timeo=100,noatime 192.168.29.228:/mnt/user/Media/4k
Movies -fstype=nfs,ro,timeo=100,noatime 192.168.29.228:/mnt/user/Media/Movies
TVShows -fstype=nfs,ro,timeo=100,noatime 192.168.29.228:/mnt/user/Media/TVShows

This happens after an unknown period of time, at least a couple of hours. It seems that my TV share doesn’t go stale, but that is also the most accessed share.
I’m wondering if there is something I missed, or what the best way would be to investigate what might be causing the issue?

Any assistance is appreciated. Thanks!

My solution to this is entirely not recommended, and it still happens occasionally, but I turned my media shares to cache = no on Unraid, and now the only time I get stale file handles is when I stop the array and restart it without reconnecting the shares on the transcode box.

It’s something I plan on sitting down and figuring out eventually, but for now this helps a lot. Again, not recommended: writing directly to the array is rarely a good idea.

You might want to change the Tunable (fuse_remember) option in NFS Settings, which defaults to 330 seconds to help eliminate excess network traffic. The built-in help for the option describes it in more detail:

When NFS is enabled, this Tunable may be used to alleviate or solve instances of “NFS Stale File Handles” you might encounter with your NFS client.

In essence, (fuse_remember) tells an internal subsystem (named “fuse”) how long to “remember” or “cache” file and directory information associated with user shares. When an NFS client attempts to access a file (or directory) on the server, and that file (or directory) name is not cached, then you could encounter “stale file handle”.

The numeric value of this tunable is the number of seconds to cache file/directory name entries, where the default value of 330 indicates 5 1/2 minutes. There are two special values you may also set this to:

0, which means do not cache file/directory names at all, and
-1, which means cache file/directory names forever (or until the array is stopped).

A value of 0 would be appropriate if you are enabling NFS but only plan to use it for disk shares, not user shares.

A value of -1 would be appropriate if no other timeout seems to solve the “stale file handle” on your client. Be aware that setting a value of -1 will cause the memory footprint to grow by approximately 108 bytes per file/directory name cached. Depending on how much RAM is installed in your server and how many files/directories you access via NFS, this may or may not lead to out-of-memory conditions.
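To put that 108 bytes per entry into perspective: with fuse_remember set to -1, accessing roughly 1 million files and directories over NFS would grow the cache by about 1,000,000 × 108 bytes, so around 108 MB, which is small for most servers but worth keeping in mind on a low-RAM box.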

You could try changing it to 0, so it will pull file and folder names every time they are accessed instead of caching them.


Hey all,
So I built my new Plex server using the guide (minus Docker), and I got my 2 NASes automounted using autofs, and it seems like everything is working just fine. Thank you @JDM_WAAAT, et al.!

The problem I’m having is that those mounts cannot be written to by either Sonarr OR Radarr. I’m guessing that there is a permissions switch I’m missing somewhere, but for the life of me, I can’t figure it out.

This is what I have in auto.nfsdb: auto.fs - Pastebin.com

and this is how it looks when I “ls -l” it:

magic-box4@Magic-Box4:/mnt/nfs$ ls -l 
total 0
drwxrwxrwx 1 root            root   36 Mar 21 01:34 movies
drwxrwxrwx 1 systemd-resolve   99 7152 May  5 13:33 tv

I’ll lock it down more when I get this figured out, but even wide open, the *darrs can’t write to the folders.

Any advice you can provide would be appreciated.

Just so I’m clear: radarr/sonarr are running on your Plex server, not your NAS?

If they are, you may need to look at your NAS permission settings. In Unraid I had to add my Plex transcode box as a user.
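One quick way to confirm it’s the NAS side rather than autofs (the path below is taken from the ls -l output you posted, so adjust as needed): from the Plex box, try something like

touch /mnt/nfs/movies/write-test && rm /mnt/nfs/movies/write-test

If that fails with “Permission denied”, the export/share permissions on the NAS are the place to look (read/write access for this client, and no squashing of the writing user to an account that lacks permission). If it fails with “Read-only file system”, the map entry itself is probably mounting the share with ro.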

Yes, they are running on my Plex server.
I have one movies NAS (TerraMaster), one shows NAS (ReadyNAS), and I’m running Ubuntu 20 on my Plex server [a repurposed Dell OptiPlex].

The NASes are storage only; neither is powerful enough for transcoding, hence the OptiPlex.

I’ve tried setting Tunable (fuse_remember) to 0 and -1, and have seen no change in stale file handle errors. Thank you for the info, but for now I’ve gone back to using fstab.
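For anyone curious, an fstab equivalent of one of the autofs entries above looks roughly like the line below (the mount point is just an example and has to exist before mounting):

192.168.29.228:/mnt/user/Media/Movies /mnt/media/Movies nfs ro,timeo=100,noatime 0 0

The main difference is that fstab mounts at boot and stays mounted, whereas autofs mounts on demand and can expire idle mounts.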

If you are having trouble auto-mounting from the command line, try using the GNOME utility Disks. When I can’t get the command line working, I go in there and set it up. Works like a charm. Hope this helps some of you.

After going back to fstab for mounting network shares, I was still experiencing stale file handles. As a test I started from scratch, installing Ubuntu fresh and following the instructions to install the OS, Plex, and autofs. My 4k share had become stale before the Movie share fully loaded.
It seems like it’s a setting issue with Unraid, but beyond making sure that Tunable (support Hard Links) is set to No, I don’t know what else to check at this point.

So I’m using the autofs guide to mount my media from my Unraid box to my Plex box, and it’s fine for TV and movies, but when I mount my recordings share using

recordings -fstype=nfs,rw,timeo=100,noatime unraidserv.home:/mnt/user/media/recordings

it will time out / get a stale file handle after a while and recordings fail. Is there something that I can change so that it never unmounts? The auto.master file is set to never time out:

/mnt/nfs /etc/auto.nfsdb --timeout=0 --browse

I don’t record a lot, so I don’t always remember to check that everything is mounted. Tunable (support Hard Links) is set to No in Unraid. It’s only this share; all the others are fine.