NFS


NFS (Network File System) is a distributed filesystem protocol originally developed by Sun Microsystems. As the name implies, it allows a remote filesystem to be mounted locally over the network via TCP or UDP.

Setting up NFS

FreeBSD

To set up a FreeBSD machine as an NFS server, add the following lines to /etc/rc.conf:

mountd_enable="YES"
nfs_server_enable="YES"
rpcbind_enable="YES"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
mountd_flags="-r"

Define your exports in /etc/exports. For exports that need different permissions for different remote hosts, you may list the same export multiple times, each with a different network and settings. For example:

/storage/downloads /storage/backups/esxi -ro -network 10.1.1.0 -mask 255.255.255.0
/storage/downloads /storage/backups/esxi -maproot=root -network 10.1.1.2 -mask 255.255.255.255

To apply your changes, restart mountd.

# service mountd restart

showmount should show both exports:

# showmount -e
Exports list on localhost:
/storage/downloads                 10.1.1.0 10.1.1.2
/storage/backups/esxi              10.1.1.0 10.1.1.2
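
From a client, you should then be able to mount one of these exports to verify. A quick check, assuming the server is reachable at 10.1.1.10 (a placeholder address):

# mount -t nfs 10.1.1.10:/storage/downloads /mnt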

Linux

Depending on the flavour of Linux you're using, you will need to install different packages.

Distro             Installation & Packages
Debian             apt install nfs-kernel-server (nfs-common is enough for clients)
CentOS, Red Hat    yum -y install nfs-utils
Gentoo             emerge net-fs/nfs-utils

These major distros ship NFS support as a kernel module. If you are building a custom Linux kernel, you will need to enable Dnotify support and the NFS options under File Systems -> Network File Systems.
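
If you are configuring the kernel by hand, these are the relevant configuration symbols (a sketch against the mainline kernel; verify the exact names for your kernel version):

CONFIG_DNOTIFY=y    # File systems -> Dnotify support
CONFIG_NFS_FS=m     # NFS client support
CONFIG_NFSD=m       # NFS server support
CONFIG_NFSD_V4=y    # NFSv4 server support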

Firewall settings

Ensure the firewall is configured so that NFS, mountd, and RPC Bind can communicate.

# firewall-cmd --permanent --add-service=nfs
# firewall-cmd --permanent --add-service=mountd
# firewall-cmd --permanent --add-service=rpc-bind
# firewall-cmd --reload
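
To confirm the services are now permitted:

# firewall-cmd --list-services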

Defining your exports

Filesystem exports are defined in /etc/exports. This file specifies which hosts have access to each exported filesystem.

An example exports entry looks like this:

/mnt       serverA(rw,no_root_squash) serverB(rw,no_root_squash)
  ^           ^
  |           `- remote hosts and export options
  `- mount point being exported

Common mount options are listed below.

rw, ro
    Read/write or read-only. Regardless of how the client mounts the filesystem (rw/ro), the server will deny writes if the export is set to ro.

no_root_squash
    Squashing is the act of the NFS server mapping the remote root user to nobody on the NFS server. Essentially, it converts UID and GID 0 to 65534, which typically maps to nobody or nfsnobody. This prevents a remote system from having root-level permissions on the filesystem; operations that change file permissions will fail with 'Operation not permitted'. The default is to squash.

    By setting no_root_squash, files written by root on the remote system appear as root on the server, and remote systems are able to change file ownership.

no_subtree_check
    When a sub-directory of a volume is exported, the NFS server normally performs additional checks to ensure that a file being served is indeed under the exported sub-directory. This option disables those checks.

    Use this option if the entire volume is exported.

sync, async
    sync tells the client that a file write is complete only when the data has been written to storage on the NFS server. This prevents data loss in case the server reboots or suddenly goes offline. async acknowledges writes before they reach stable storage, which is faster but risks losing data.

    sync is the default in releases of nfs-utils after 1.0.0; async was the default before that.
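
Putting these options together, a hypothetical /etc/exports entry might look like the following (the paths and subnet are placeholders):

/export/home     10.0.0.0/24(rw,sync,no_subtree_check)
/export/archive  10.0.0.0/24(ro,sync,no_subtree_check)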

Starting the NFS server

On most recent systems using systemd, start the NFS service by running:

# systemctl start nfs-server.service

For the older SysV init:

# /etc/init.d/nfs start
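
On systemd systems, you can also have the server start automatically at boot:

# systemctl enable nfs-server.service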

If everything's working, you should see nfsd running. There are also some other daemons that you should be aware of:

  • nfsd is the NFS service
  • lockd and statd handle file locking
  • mountd handles the initial mount requests
  • rquotad handles filesystem quotas on exported volumes.
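
You can verify that these daemons have registered with the portmapper by querying rpcbind:

# rpcinfo -p localhost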

Modifying exports after the NFS server has started

If you modify the exports file after the NFS server has started, you will need to re-export the filesystems. Do so by running:

# exportfs -ra

The -r option removes any exports that are no longer defined, and -a exports all directories defined in /etc/exports.
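
To list everything that is currently exported along with the active export options:

# exportfs -v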

Advanced features

Here are some notes covering more advanced and niche NFS features. These are specific to the Red Hat flavour of Linux.

NFSv4 ID Mapping

NFSv4 introduces an ID mapping feature that solves the problem of users having different UIDs/GIDs on different systems. On NFSv2/v3 systems using the AUTH_SYS/AUTH_UNIX (sec=sys) security mechanism, security was implemented based on matching UIDs/GIDs between the server and the client. With NFSv4, the RPC ID mapper is able to use user principal names rather than numeric identifiers.

To use ID Mapping with NFS v4, you must either:

  1. Use sec=krb5, which involves using Kerberos on both the server and the client, or
  2. Enforce ID mapping with AUTH_SYS/AUTH_UNIX by setting an NFS module parameter on both the server and the client.

The simpler approach is the second option, which can be done with the steps below.

  1. On the server:
    # echo "N" > /sys/module/nfsd/parameters/nfs4_disable_idmapping

  2. On the client:
    # echo "N" > /sys/module/nfs/parameters/nfs4_disable_idmapping

  3. Ensure the ID mapper is running on the server. You may also want to set the Domain in the /etc/idmapd.conf file.
  4. Configure the client ID mapper. Set the Method to include static if that's what you're using. Additional static translations can be added in the [Static] section:
    [Static]
    lleung@REMOTESITE.COM = leo
  5. Mount the NFS export on the client using NFSv4. A fuller idmapd.conf example follows this list.
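
For reference, a minimal /etc/idmapd.conf on the client might look like the following (the domain and the static mapping shown are placeholders):

[General]
Domain = example.com

[Translation]
Method = static,nsswitch

[Static]
lleung@REMOTESITE.COM = leo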

Note that by changing the nfs4_disable_idmapping value on the NFS server, any other clients mounting from this server over NFSv4 that were relying on matching UIDs/GIDs will be affected. You can either have those clients mount using NFSv3 (which does not use ID mapping) or ensure that all clients are using idmapd.


Host access and deny policy

Use the /etc/hosts.allow and /etc/hosts.deny files to control which computers have access to services on the server.
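
As a sketch, entries like the following would restrict the NFS-related daemons to a local subnet (the subnet is a placeholder, and daemon names can vary by distribution, e.g. mountd vs. rpc.mountd):

# /etc/hosts.allow
mountd:  10.1.1.0/255.255.255.0
rpcbind: 10.1.1.0/255.255.255.0
statd:   10.1.1.0/255.255.255.0

# /etc/hosts.deny
mountd:  ALL
rpcbind: ALL
statd:   ALL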

Automounting

See also: Autofs

You can have filesystems mounted automatically on demand when they are accessed. This is useful if you wish to reduce the number of NFS mounts on a system, especially when most of the mounts aren't needed all the time (e.g. if each user's home directory is a separate NFS export). This is accomplished using the autofs automounter.
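
A minimal sketch of the autofs maps for on-demand home directories (the server name fileserver and the export path are placeholders):

# /etc/auto.master
/home   /etc/auto.home

# /etc/auto.home: mounts fileserver:/export/home/<user> at /home/<user> on access
*   -fstype=nfs,rw   fileserver:/export/home/&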

Troubleshooting

If you run into issues, the following commands might help you diagnose the issue.

Command                 Description
showmount -e $Server    Run on a client to show all the NFS exports from $Server.
nfsstat -s              Run on a server to show NFS server statistics (mostly just counters).
nfsstat -m              Run on a client to show all NFS mounts in use. You can also use df.

writing fd to kernel failed: errno 111

If you get kernel errors similar to:

rpc.nfsd: writing fd to kernel failed: errno 111 (Connection refused)

You need to start the rpcbind service.

# /etc/init.d/rpcbind start
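
On systemd systems, the equivalent is:

# systemctl start rpcbind.service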

fcntl() failed - No locks available

While attempting to get Dovecot with LDAP authentication working, I ran into the following error:

dovecot: Feb 22 22:43:02 Error: IMAP(leo): fcntl() failed with file /home/leo/Maildir/dovecot.index.log: No locks available
dovecot: Feb 22 22:43:02 Error: IMAP(leo): mail_index_wait_lock_fd() failed with file /home/leo/Maildir/dovecot.index.log: No locks available

The /home directory is an automounted NFS share from a remote server.

To resolve this issue, ensure that nfslock is running on both the server and client machine.

# service nfslock start
## or 
# systemctl start nfs-lock

reason given by server: No such file or directory

When trying to mount an export, I get this error:

mount.nfs: mounting bnas:surveillance/events/v2 failed, reason given by server: No such file or directory

/surveillance/events is a ZFS dataset. On the server, /etc/exports has:

/surveillance             10.1.2.0/24(rw,no_subtree_check,no_root_squash,async,crossmnt)

When I try to mount this export, I get:

# mount -t nfs -o vers=4 bnas:surveillance/events/v2  /mnt
mount.nfs: mounting bnas:surveillance/events/v2 failed, reason given by server: No such file or directory

However, the parent directory works:

# mount -t nfs -o vers=4 bnas:surveillance/events  /mnt

Solution: It appears that if the directory exists in the parent ZFS dataset, the NFS server will serve that directory instead of the child ZFS mount. I fixed this by deleting the empty directory at /surveillance/events and then remounting the child ZFS dataset. The NFS mounts worked properly afterwards.
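
A sketch of the fix, assuming the child dataset is named surveillance/events:

# zfs unmount surveillance/events    # unmount the child dataset
# rmdir /surveillance/events         # remove the stale empty directory left behind in the parent
# zfs mount surveillance/events      # remount the dataset over the now-clean mountpoint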

Cannot change permission/ownership on file in nfs mounted directory: Operation not permitted

As either root or a normal user on a system with an NFS-mounted filesystem, changing file permissions or ownership fails.

# chmod 755 test_file
chmod: changing permissions of `test_file`: Operation not permitted

Solution: Add no_root_squash as an export option on the NFS server. After adding the option to /etc/exports, re-export on the NFS server by running:

# exportfs -ra
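
The resulting export might look like this (the path and subnet are placeholders):

/export/data    10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)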

For security reasons, the default behavior is to squash on the server side. This is done so that, for example, untrusted systems aren't able to upload programs with the setuid bit set and have them run with root privileges on a trusted system.