Nfs-utils
Network File System (NFS) is a file system protocol that allows client machines to access network-attached filesystems (called exports) from a host system. NFS is supported by the Linux kernel; the accompanying userspace daemons and utilities are provided by the net-fs/nfs-utils package.
Installation
Kernel
NFS server support is not required for NFS clients; conversely, NFS client support is not required for NFS servers. Inotify support is only required for NFSv4. NFSv3 is only required for compatibility with legacy clients, e.g. the BusyBox mount command, which does not support NFSv4.
Client support
Client kernel support must be enabled on each system connecting to the host running the NFS exports.
File systems --->
[*] Inotify support for userspace
[*] Network File Systems --->
<*> NFS client support
< > NFS client support for NFS version 2
<*> NFS client support for NFS version 3
[ ] NFS client support for the NFSv3 ACL protocol extension (NEW)
<*> NFS client support for NFS version 4
[ ] Provide swap over NFS support
[ ] NFS client support for NFSv4.1
[ ] Use the legacy NFS DNS resolver
[ ] NFS: Disable NFS UDP protocol support
Server support
Server kernel support is only necessary on the system hosting the NFS exports. For local testing purposes, it can be helpful to also enable client support on the server, as described in the previous section.
File systems --->
[*] Inotify support for userspace
[*] Network File Systems --->
<*> NFS server support
-*- NFS server support for NFS version 3
[ ] NFS server support for the NFSv3 ACL protocol extension (NEW)
[*] NFS server support for NFS version 4
[ ] NFSv4.1 server support for pNFS block layouts (NEW)
[ ] NFSv4.1 server support for pNFS SCSI layouts (NEW)
[ ] NFSv4.1 server support for pNFS Flex File layouts (NEW)
[ ] Provide Security Label support for NFSv4 server (NEW)
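Whether the running kernel was built with the required options can be verified on either machine, assuming the kernel exposes its configuration through /proc/config.gz (CONFIG_IKCONFIG_PROC); otherwise, inspect the .config file in the kernel source tree:
root #
zgrep -E 'CONFIG_NFS_FS=|CONFIG_NFS_V4=|CONFIG_NFSD=' /proc/config.gz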
USE flags
USE flags for net-fs/nfs-utils NFS client and server daemons
USE flag | Description |
---|---|
+libmount | Link mount.nfs with libmount |
+nfsv3 | Enable support for NFSv2 and NFSv3 |
+nfsv4 | Enable support for NFSv4 (includes NFSv4.1 and NFSv4.2) |
+uuid | Support UUID lookups in rpc.mountd |
caps | Use Linux capabilities library to control privilege |
junction | Enable NFS junction support in nfsref |
kerberos | Add kerberos support |
ldap | Add LDAP support |
sasl | Add support for the Simple Authentication and Security Layer |
selinux | !!internal use only!! Security Enhanced Linux support, this must be set by the selinux profile or breakage will occur |
tcpd | Add support for TCP wrappers |
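USE flags that are not enabled by default can be set per package before emerging. As an illustration, Kerberos support could be enabled with an entry in /etc/portage/package.use (the file name under package.use is arbitrary):
net-fs/nfs-utils kerberos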
Emerge
Install net-fs/nfs-utils:
root #
emerge --ask net-fs/nfs-utils
Configuration
Server
The following table describes the filesystems that will be exported by the server:
Device | Mount directory | Description |
---|---|---|
/dev/sdb1 | /home | Filesystem containing user home directories. |
/dev/sdc1 | /data | Filesystem containing user data. |
Virtual root
While this article demonstrates a best-practice NFSv4 deployment using a virtual root, it is also possible to export the required directories directly. If that is desired, this section can be skipped and the exports file populated as follows instead:
/home 192.0.2.0/24(insecure,rw,sync,no_subtree_check)
The filesystems to be exported can be made available under a single directory. This directory is known as the virtual root directory:
root #
mkdir /export
The /export directory is used throughout this article as the virtual root directory, although any directory can be used, e.g. /nfs or /srv/nfs.
Create directories in the virtual root directory for the filesystems (e.g. /home and /data) that are to be exported:
root #
mkdir /export/home
root #
mkdir /export/data
The filesystems to be exported need to be made available under their respective directories in the virtual root directory. This is accomplished with the --bind option of the mount command (to also include filesystems mounted below the source directory, use --rbind instead):
root #
mount --bind /home /export/home
root #
mount --bind /data /export/data
To make the above mounts persistent, add the following to /etc/fstab:
/home /export/home none bind 0 0
/data /export/data none bind 0 0
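The bind mounts can be verified with the findmnt command from sys-apps/util-linux:
root #
findmnt /export/home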
Exports
The filesystems to be made accessible to clients are specified in /etc/exports. This file consists of the directories to be exported, the clients allowed to access those directories, and a list of options for each client. Refer to man exports for more information about the NFS export configuration options.
The following table briefly describes the server options used in the configuration below:
Option | Description |
---|---|
insecure | The server will accept client requests that originate on unprivileged ports (those above 1024). This option is required when mounting exported directories from OS X or by the nfs:/ kioslave in KDE. The default is to require that requests originate on privileged ports. |
rw | The client will have read and write access to the exported directory. The default is to allow read-only access. |
sync | The server must wait until filesystem changes are committed to storage before responding to further client requests. This is the default. |
no_subtree_check | The server will not verify that a file requested by a client is in the appropriate filesystem and exported tree. This is the default. |
crossmnt | The server will reveal filesystems that are mounted under the virtual root directory that would otherwise be hidden when a client mounts the virtual root directory. |
fsid=0 | This option is required to uniquely identify the virtual root directory. |
If changes are made to /etc/exports after the NFS server has started, issue the following command to propagate the changes to clients:
root #
exportfs -rv
IPv4
The following configuration grants clients in the 192.0.2.0/24 IP network access to the exported shares. Client access can also be specified as a single host (IP address or fully qualified domain name), a NIS netgroup, or a single * character, which grants access to all clients.
/export 192.0.2.0/24(insecure,rw,sync,no_subtree_check,crossmnt,fsid=0)
/export/home 192.0.2.0/24(insecure,rw,sync,no_subtree_check)
/export/data 192.0.2.0/24(insecure,rw,sync,no_subtree_check)
IPv6
The following IPv6-only configuration grants the 2001:db8:1::/64 network access to the exported directories on the NFS server:
/export 2001:db8:1::/64(insecure,rw,sync,no_subtree_check,crossmnt,fsid=0)
/export/home 2001:db8:1::/64(insecure,rw,sync,no_subtree_check)
/export/data 2001:db8:1::/64(insecure,rw,sync,no_subtree_check)
Dual stack configuration
In a dual-stack configuration, the allowed IPv6 prefixes are listed after the already configured IPv4 networks. Here both 192.0.2.0/24 and 2001:db8:1::/64 are granted access to the exported shares:
/export 192.0.2.0/24(insecure,rw,sync,no_subtree_check,crossmnt,fsid=0) 2001:db8:1::/64(insecure,rw,sync,no_subtree_check,crossmnt,fsid=0)
/export/home 192.0.2.0/24(insecure,rw,sync,no_subtree_check) 2001:db8:1::/64(insecure,rw,sync,no_subtree_check)
/export/data 192.0.2.0/24(insecure,rw,sync,no_subtree_check) 2001:db8:1::/64(insecure,rw,sync,no_subtree_check)
Daemon
OpenRC
The NFS daemon on OpenRC is configured via the OPTS_RPC_NFSD variable in /etc/conf.d/nfs:
OPTS_RPC_NFSD="8 -V 3 -V 4 -V 4.1"
Here 8 is the number of NFS server threads to start, and each -V flag enables an NFS version; refer to man rpc.nfsd for details.
systemd
The NFS daemon on systemd is configured via the /etc/nfs.conf config file:
[nfsd]
threads=4
vers3=on
vers4=on
vers4.1=on
The option threads=4 sets the number of NFS server threads to start; 8 threads are started by default. The options vers3=on, vers4=on, and vers4.1=on enable NFS versions 3, 4, and 4.1. Refer to man nfsd for more information about the NFS daemon configuration options. Technical differences between the major NFS versions are explained in the Wikipedia article.
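On systemd systems, the effective settings can be double-checked with the nfsconf utility shipped with recent nfs-utils versions:
root #
nfsconf --dump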
Service
OpenRC
To start the NFS server:
root #
rc-service nfs start
 * Starting rpcbind ...                    [ ok ]
 * Starting NFS statd ...                  [ ok ]
 * Starting idmapd ...                     [ ok ]
 * Exporting NFS directories ...           [ ok ]
 * Starting NFS mountd ...                 [ ok ]
 * Starting NFS daemon ...                 [ ok ]
 * Starting NFS smnotify ...               [ ok ]
The above output shows that many other services are also started along with the nfs service. To stop all NFS services, stop the rpcbind service:
root #
rc-service rpcbind stop
To start the NFS server at boot:
root #
rc-update add nfs default
systemd
To start the NFS server:
root #
systemctl start rpcbind nfs-server
To start the NFS server at boot:
root #
systemctl enable rpcbind nfs-server
Client
Service
OpenRC
To be able to mount exported directories, start the NFS client:
root #
rc-service nfsclient start
 * Starting rpcbind                        [ ok ]
 * Starting NFS statd                      [ ok ]
 * Starting NFS sm-notify                  [ ok ]
To start the NFS client at boot:
root #
rc-update add nfsclient default
systemd
The nfs-client service will be started automatically when systemd detects that exported directories are being mounted.
Mounting exports
The commands and configuration files below use the IPv4 address 192.0.2.1 and the IPv6 address 2001:db8:1::1 to represent the NFS server.
Mount the exported directories:
root #
mount 192.0.2.1:/home /home
root #
mount 192.0.2.1:/data /data
To make the above mounts persistent, add the following to /etc/fstab:
192.0.2.1:/home /home nfs rw,_netdev 0 0
192.0.2.1:/data /data nfs rw,_netdev 0 0
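Active NFS mounts on the client can be listed with findmnt:
root #
findmnt -t nfs,nfs4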
The virtual root directory can be mounted instead of each individual exported directory. This will make all exported directories available to the client:
root #
mount 192.0.2.1:/ /mnt
To make the above mount persistent, add the following to /etc/fstab:
192.0.2.1:/ /mnt nfs rw,_netdev 0 0
When using /etc/fstab to mount the exported directories, add the netmount service to the default runlevel:
root #
rc-update add netmount default
It will probably be necessary to specify the network management dependencies in /etc/conf.d/netmount.
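As a sketch, assuming the interface is managed by OpenRC's net.eth0 service (the interface name will vary), /etc/conf.d/netmount could contain:
rc_need="net.eth0"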
If the NFS server or client support NFSv3 only, the full path to the exported directory (e.g. /export/home or /export/data) needs to be specified when mounting:
root #
mount 192.0.2.1:/export/home /home
root #
mount 192.0.2.1:/export/data /data
The same applies when mounting the virtual root directory:
root #
mount 192.0.2.1:/export /mnt
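If the exact export paths are unknown, they can be listed from the client with showmount, which queries the server's mount daemon:
root #
showmount -e 192.0.2.1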
When mounting exported directories on an IPv6 network, enclose the IPv6 NFS server address in square brackets:
root #
mount [2001:db8:1::1]:/home /home
root #
mount [2001:db8:1::1]:/data /data
When mounting a link-local IPv6 address, the outgoing local network interface must also be specified:
root #
mount [fe80::215:c5ff:fb3e:e2b1%eth0]:/home /home
root #
mount [fe80::215:c5ff:fb3e:e2b1%eth0]:/data /data
With NFSv4, the virtual root directory can be effectively invisible depending on the server configuration; a path relative to the virtual root may then be needed:
root #
mount -t nfs4 192.0.2.1:home /home
root #
mount -t nfs4 192.0.2.1:data /data
I/O on large files over NFSv4 can be significantly improved by the following options, which increase the maximum read and write sizes to 1048576 bytes (1 MiB):
root #
mount 192.0.2.1:/home /home -o rsize=1048576,wsize=1048576,vers=4
For persistence:
192.0.2.1:/data /data nfs4 _netdev,rw,rsize=1048576,wsize=1048576,vers=4
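The read and write sizes actually negotiated with the server can be verified on the client with nfsstat from net-fs/nfs-utils:
root #
nfsstat -m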
Kerberos
It is possible to authenticate NFS clients using Kerberos GSS. This requires a few modifications. In the following instructions, it is assumed that Kerberos is already installed on the same host as the NFS server (whose hostname is server.domain.tld) and that the client (client.domain.tld) is able to kinit against it. The Kerberos default realm is DOMAIN_REALM.TLD.
First, enable the following kernel option (CONFIG_RPCSEC_GSS_KRB5) for both the server and the client. Note that this option may not appear if its cryptographic dependencies are not selected. See kernel option dependencies for more information:
File systems --->
[*] Network File Systems --->
<*> Secure RPC: Kerberos V mechanism
Then, create principals for the NFS service for both the server and the client. On the server, execute:
root #
kadmin.local add_principal -randkey nfs/server.domain.tld
root #
kadmin.local add_principal -randkey nfs/client.domain.tld
Each machine must have its key saved in a local keytab. The easiest way to do this is, on the server:
root #
kadmin.local ktadd nfs/server.domain.tld
root #
kadmin.local ktadd -k /root/krb5.keytab nfs/client.domain.tld
and then transfer /root/krb5.keytab to the client, saving it as /etc/krb5.keytab. Note that the file should be owned by root with mode 0600.
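As a sketch, assuming root logins over SSH are permitted on the client, the transfer could be done with scp:
root #
scp /root/krb5.keytab root@client.domain.tld:/etc/krb5.keytab
root #
ssh root@client.domain.tld chmod 600 /etc/krb5.keytab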
The rpc.gssd service must run on the client side. The following line must appear in /etc/conf.d/nfsclient on the client:
rc_need="!rpc.statd rpc.gssd"
The rpc.idmapd and rpc.svcgssd services must run on the server side. The following line must appear in /etc/conf.d/nfs on the server:
NFS_NEEDED_SERVICES="rpc.idmapd rpc.svcgssd"
The rpc.idmapd service must be correctly configured on the server, in /etc/idmapd.conf:
[General]
Domain = domain.tld
Local-Realms = DOMAIN_REALM.TLD
Add sec=krb5 to the export options:
/home 192.0.2.0/24(insecure,rw,sync,no_subtree_check,sec=krb5)
It is also possible to increase security with sec=krb5i (user authentication and integrity checking) or even sec=krb5p (user authentication, integrity checking, and NFS traffic encryption). Higher security levels require more resources.
The same option must be added to the mount command on the client side.
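For example, assuming the NFSv4 virtual root layout used earlier in this article, such a mount might look like:
root #
mount -t nfs4 -o sec=krb5 server.domain.tld:/home /home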
Troubleshooting
- Verify that the NFS server is running and listening for connections:
root #
ss -tulpn | grep rpc
udp UNCONN 0 0       0.0.0.0:111    0.0.0.0:*  users:(("rpcbind",pid=4020,fd=6))
udp UNCONN 0 0       0.0.0.0:49657  0.0.0.0:*  users:(("rpc.mountd",pid=4149,fd=4))
udp UNCONN 0 0       0.0.0.0:58082  0.0.0.0:*  users:(("rpc.mountd",pid=4149,fd=12))
udp UNCONN 0 0     127.0.0.1:834    0.0.0.0:*  users:(("rpc.statd",pid=4050,fd=5))
udp UNCONN 0 0       0.0.0.0:34042  0.0.0.0:*  users:(("rpc.statd",pid=4050,fd=8))
udp UNCONN 0 0       0.0.0.0:35152  0.0.0.0:*  users:(("rpc.mountd",pid=4149,fd=8))
udp UNCONN 0 0             *:111          *:*  users:(("rpcbind",pid=4020,fd=8))
udp UNCONN 0 0             *:49463        *:*  users:(("rpc.mountd",pid=4149,fd=14))
udp UNCONN 0 0             *:43316        *:*  users:(("rpc.mountd",pid=4149,fd=10))
udp UNCONN 0 0             *:44048        *:*  users:(("rpc.mountd",pid=4149,fd=6))
udp UNCONN 0 0             *:44332        *:*  users:(("rpc.statd",pid=4050,fd=10))
tcp LISTEN 0 0       0.0.0.0:52271  0.0.0.0:*  users:(("rpc.mountd",pid=4149,fd=5))
tcp LISTEN 0 0       0.0.0.0:41965  0.0.0.0:*  users:(("rpc.mountd",pid=4149,fd=9))
tcp LISTEN 0 0       0.0.0.0:111    0.0.0.0:*  users:(("rpcbind",pid=4020,fd=7))
tcp LISTEN 0 0       0.0.0.0:48527  0.0.0.0:*  users:(("rpc.mountd",pid=4149,fd=13))
tcp LISTEN 0 0       0.0.0.0:53559  0.0.0.0:*  users:(("rpc.statd",pid=4050,fd=9))
tcp LISTEN 0 0             *:52293        *:*  users:(("rpc.mountd",pid=4149,fd=7))
tcp LISTEN 0 0             *:43983        *:*  users:(("rpc.mountd",pid=4149,fd=15))
tcp LISTEN 0 0             *:111          *:*  users:(("rpcbind",pid=4020,fd=9))
tcp LISTEN 0 0             *:40105        *:*  users:(("rpc.statd",pid=4050,fd=11))
tcp LISTEN 0 0             *:38481        *:*  users:(("rpc.mountd",pid=4149,fd=11))
- Verify which NFS daemons are running:
root #
rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  57655  status
    100024    1   tcp  34950  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100021    1   udp  44208  nlockmgr
    100021    3   udp  44208  nlockmgr
    100021    4   udp  44208  nlockmgr
    100021    1   tcp  44043  nlockmgr
    100021    3   tcp  44043  nlockmgr
    100021    4   tcp  44043  nlockmgr
- List the exported directories from the NFS server:
root #
exportfs -v
/export       192.0.2.0/24(rw,wdelay,crossmnt,insecure,root_squash,no_subtree_check,fsid=0,sec=sys,no_all_squash)
/export/home  192.0.2.0/24(rw,wdelay,insecure,root_squash,no_subtree_check,sec=sys,no_all_squash)
/export/data  192.0.2.0/24(rw,wdelay,insecure,root_squash,no_subtree_check,sec=sys,no_all_squash)
- List the current open connections to the NFS server:
user $
ss -tun|grep -E 'Sta|2049'
Netid State Recv-Q Send-Q Local Address:Port  Peer Address:Port Process
tcp   ESTAB 0      0         192.0.2.1:2049      192.0.2.10:1012
- Verify that the exported directories are mounted by the NFS client:
user $
ss -tun|grep -E 'Sta|2049'
Netid State Recv-Q Send-Q Local Address:Port  Peer Address:Port Process
tcp   ESTAB 0      0        192.0.2.10:1012      192.0.2.1:2049
Unresponsiveness of the system
The system may become unresponsive during shutdown when the NFS client attempts to unmount exported directories after udev has stopped. To prevent this, a local.d script can be used to forcibly unmount the exported directories during shutdown.
Create the file /etc/local.d/nfs.stop:
#!/bin/sh
/bin/umount -a -f -t nfs,nfs4
Make the script executable:
root #
chmod a+x /etc/local.d/nfs.stop
See also
- Samba — a re-implementation of the SMB/CIFS networking protocol, a Microsoft Windows alternative to Network File System (NFS).
External resources
- NFSv2, v3 and v4.x versions and variations
- Ubuntu Wiki - NFSv4Howto
- Funtoo Wiki - NFS
- Linux NFS - General troubleshooting recommendations
- Linux NFS - HOWTO Troubleshooting
- RFC 1094 - Network File System (NFS) version 2 Protocol
- RFC 1813 - Network File System (NFS) version 3 Protocol
- RFC 7530 - Network File System (NFS) version 4 Protocol
- RFC 8881 - Network File System (NFS) version 4.1 Protocol
- RFC 7862 - Network File System (NFS) version 4.2 Protocol