nfs-utils
Network File System (NFS) is a file system protocol that allows client machines to access network attached filesystems (called exports) from a host system. NFS is supported by the Linux kernel and userspace daemons and utilities are found in the net-fs/nfs-utils package.
Installation
Kernel
NFS server support is not required for NFS clients; conversely, NFS client support is not required for NFS servers. Inotify support is only required for NFSv4. NFSv3 is only required for compatibility with legacy clients, e.g. the BusyBox mount command, which does not support NFSv4.
Client support
Client kernel support must be enabled on each system connecting to the host running the NFS exports.
File systems --->
[*] Inotify support for userspace
[*] Network File Systems --->
<*> NFS client support
< > NFS client support for NFS version 2
<*> NFS client support for NFS version 3
[ ] NFS client support for the NFSv3 ACL protocol extension (NEW)
<*> NFS client support for NFS version 4
[ ] Provide swap over NFS support
[ ] NFS client support for NFSv4.1
[ ] Use the legacy NFS DNS resolver
[ ] NFS: Disable NFS UDP protocol support
Server support
Server kernel support is only necessary on the system hosting the NFS exports. For local testing purposes, it can be helpful to enable client support, as described in the previous section, on the server as well.
File systems --->
[*] Inotify support for userspace
[*] Network File Systems --->
<*> NFS server support
-*- NFS server support for NFS version 3
[ ] NFS server support for the NFSv3 ACL protocol extension (NEW)
[*] NFS server support for NFS version 4
[ ] NFSv4.1 server support for pNFS block layouts (NEW)
[ ] NFSv4.1 server support for pNFS SCSI layouts (NEW)
[ ] NFSv4.1 server support for pNFS Flex File layouts (NEW)
[ ] Provide Security Label support for NFSv4 server (NEW)
USE flags
USE flags for net-fs/nfs-utils NFS client and server daemons
Flag | Description |
---|---|
+libmount | Link mount.nfs with libmount |
+nfsv3 | Enable support for NFSv2 and NFSv3 |
+nfsv4 | Enable support for NFSv4 (includes NFSv4.1 and NFSv4.2) |
+uuid | Support UUID lookups in rpc.mountd |
caps | Use Linux capabilities library to control privilege |
junction | Enable NFS junction support in nfsref |
kerberos | Add kerberos support |
ldap | Add LDAP support |
sasl | Add support for the Simple Authentication and Security Layer |
selinux | !!internal use only!! Security Enhanced Linux support, this must be set by the selinux profile or breakage will occur |
tcpd | Add support for TCP wrappers |
Emerge
Install net-fs/nfs-utils:
root #
emerge --ask net-fs/nfs-utils
Configuration
Server
The following table describes the filesystems that will be exported by the server:
Device | Mount directory | Description |
---|---|---|
/dev/sdb1 | /home | Filesystem containing user home directories. |
/dev/sdc1 | /data | Filesystem containing user data. |
Virtual root
While this article demonstrates a best-practice NFSv4 deployment using a virtual root, it is possible to directly export the required directories without using one. If that is desired, this section can be skipped and the exports file populated as follows instead:
/etc/exports
/home 192.0.2.0/24(insecure,rw,sync,no_subtree_check)
The filesystems to be exported can be made available under a single directory. This directory is known as the virtual root directory:
root #
mkdir /export
The /export directory is used throughout this article as the virtual root directory, although any directory can be used, e.g. /nfs or /srv/nfs.
Create directories in the virtual root directory for the filesystems (e.g. /home and /data) that are to be exported:
root #
mkdir /export/home
root #
mkdir /export/data
The filesystems to be exported need to be made available under their respective directories in the virtual root directory. This is accomplished with the --bind option of the mount command (if filesystems mounted below them need to be included as well, use --rbind instead):
root #
mount --bind /home /export/home
root #
mount --bind /data /export/data
To make the above mounts persistent, add the following to /etc/fstab:
/etc/fstab
/home /export/home none bind 0 0
/data /export/data none bind 0 0
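The bind mounts can be verified with findmnt from sys-apps/util-linux, one invocation per directory, for example:
root #
findmnt /export/home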
Exports
The filesystems to be made accessible for clients are specified in /etc/exports. This file consists of the directories to be exported, the clients allowed to access those directories, and a list of options for each client. Refer to man exports for more information about the NFS export configuration options.
The following table briefly describes the server options used in the configuration below:
Option | Description |
---|---|
insecure | The server will allow client requests to originate from unprivileged ports (1024 and above). This option is required when mounting exported directories from OS X or by the nfs:/ kioslave in KDE. The default is to require privileged ports. |
rw | The client will have read and write access to the exported directory. The default is to allow read-only access. |
sync | The server must wait until filesystem changes are committed to storage before responding to further client requests. This is the default. |
no_subtree_check | The server will not verify that a file requested by a client is in the appropriate filesystem and exported tree. This is the default. |
crossmnt | The server will reveal filesystems that are mounted under the virtual root directory that would otherwise be hidden when a client mounts the virtual root directory. |
fsid=0 | This option is required to uniquely identify the virtual root directory. |
If changes are made to /etc/exports after the NFS server has started, issue the following command to propagate the changes to clients:
root #
exportfs -rv
IPv4
The following configuration grants access to the exported directories to clients in the 192.0.2.0/24 IP network. Client access can also be specified as a single host (IP address or fully qualified domain name), an NIS netgroup, or with a single * character, which grants access to all clients; an example of these alternative forms follows the network-based configuration below.
/etc/exports
/export 192.0.2.0/24(insecure,rw,sync,no_subtree_check,crossmnt,fsid=0)
/export/home 192.0.2.0/24(insecure,rw,sync,no_subtree_check)
/export/data 192.0.2.0/24(insecure,rw,sync,no_subtree_check)
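For illustration only, the following sketch grants a single host read-write access to /export/data and all clients read-only access to /export/home (the hostname here is an example, not part of the configuration above):
/etc/exports
/export/data client1.example.com(insecure,rw,sync,no_subtree_check)
/export/home *(insecure,ro,sync,no_subtree_check)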
IPv6
The following IPv6-only configuration grants access to the exported directories to clients in the 2001:db8:1::/64 network. When combining IPv4 and IPv6, the allowed IPv6 prefixes are simply appended after the IPv4 networks, as shown in the dual stack configuration further below:
/etc/exports
/export 2001:db8:1::/64(insecure,rw,sync,no_subtree_check,crossmnt,fsid=0)
/export/home 2001:db8:1::/64(insecure,rw,sync,no_subtree_check)
/export/data 2001:db8:1::/64(insecure,rw,sync,no_subtree_check)
Dual stack configuration
The following configuration allows access from both an IPv4 and an IPv6 network, here 192.0.2.0/24 and 2001:db8:1::/64:
/etc/exports
/export 192.0.2.0/24(insecure,rw,sync,no_subtree_check,crossmnt,fsid=0) 2001:db8:1::/64(insecure,rw,sync,no_subtree_check,crossmnt,fsid=0)
/export/home 192.0.2.0/24(insecure,rw,sync,no_subtree_check) 2001:db8:1::/64(insecure,rw,sync,no_subtree_check)
/export/data 192.0.2.0/24(insecure,rw,sync,no_subtree_check) 2001:db8:1::/64(insecure,rw,sync,no_subtree_check)
Daemon
OpenRC
The NFS daemon on OpenRC is configured via the OPTS_RPC_NFSD variable; the leading number is the number of server threads to start and each -V flag enables an NFS protocol version:
/etc/conf.d/nfs
OPTS_RPC_NFSD="8 -V 3 -V 4 -V 4.1"
systemd
The NFS daemon on systemd is configured via the /etc/nfs.conf config file:
/etc/nfs.conf
[nfsd]
threads=4
vers3=on
vers4=on
vers4.1=on
The option threads=4 sets the number of NFS server threads to start (8 threads are started by default). The options vers3=on, vers4=on, and vers4.1=on enable NFS versions 3, 4, and 4.1. Refer to man nfsd for more information about the NFS daemon configuration options. Technical differences between the major NFS versions are explained in the Wikipedia article.
Service
OpenRC
To start the NFS server:
root #
rc-service nfs start
* Starting rpcbind ... [ ok ] * Starting NFS statd ... [ ok ] * Starting idmapd ... [ ok ] * Exporting NFS directories ... [ ok ] * Starting NFS mountd ... [ ok ] * Starting NFS daemon ... [ ok ] * Starting NFS smnotify ... [ ok ]
The above output shows that many other services are also started along with the nfs service. To stop all NFS services, stop the rpcbind service:
root #
rc-service rpcbind stop
To start the NFS server at boot:
root #
rc-update add nfs default
systemd
To start the NFS server:
root #
systemctl start rpcbind nfs-server
To start the NFS server at boot:
root #
systemctl enable rpcbind nfs-server
Client
Service
OpenRC
To be able to mount exported directories, start the NFS client:
root #
rc-service nfsclient start
* Starting rpcbind [ ok ] * Starting NFS statd [ ok ] * Starting NFS sm-notify [ ok ]
To start the NFS client at boot:
root #
rc-update add nfsclient default
systemd
The nfs-client service will be started automatically when systemd detects that exported directories are being mounted.
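If on-demand mounting is preferred, systemd can create an automount unit directly from /etc/fstab via the x-systemd.automount option. A minimal sketch, using the example server address and export from the next section:
/etc/fstab
192.0.2.1:/home /home nfs _netdev,noauto,x-systemd.automount 0 0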
Mounting exports
The commands and configuration files below use the IPv4 address 192.0.2.1 and the IPv6 address 2001:db8:1::1 to represent the NFS server.
Mount the exported directories:
root #
mount 192.0.2.1:/home /home
root #
mount 192.0.2.1:/data /data
To make the above mounts persistent, add the following to /etc/fstab:
/etc/fstab
192.0.2.1:/home /home nfs rw,_netdev 0 0
192.0.2.1:/data /data nfs rw,_netdev 0 0
NFS mount options such as the protocol version and transfer sizes can also be specified on the command line:
root #
mount 192.0.2.1:/home /home -t nfs4 -o _netdev,rsize=1048576,wsize=1048576,vers=4
The virtual root directory can be mounted instead of each individual exported directory. This will make all exported directories available to the client:
root #
mount 192.0.2.1:/ /mnt
To make the above mount persistent, add the following to /etc/fstab:
/etc/fstab
192.0.2.1:/ /mnt nfs rw,_netdev 0 0
When using /etc/fstab to mount the exported directories, add the netmount service to the default runlevel:
root #
rc-update add netmount default
It will probably be necessary to specify the network management dependencies in /etc/conf.d/netmount.
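For example, to make netmount wait for a specific network interface, a sketch assuming the interface is managed by the net.eth0 init script (adjust to the actual interface name):
/etc/conf.d/netmount
rc_need="net.eth0"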
If the NFS server or client supports only NFSv3, the full path to the exported directory (e.g. /export/home or /export/data) needs to be specified when mounting:
root #
mount 192.0.2.1:/export/home /home
root #
mount 192.0.2.1:/export/data /data
The same applies when mounting the virtual root directory:
root #
mount 192.0.2.1:/export /mnt
When mounting exported directories on an IPv6 network, enclose the IPv6 NFS server address in square brackets:
root #
mount [2001:db8:1::1]:/home /home
root #
mount [2001:db8:1::1]:/data /data
When mounting a link-local IPv6 address, the outgoing local network interface must also be specified:
root #
mount [fe80::215:c5ff:fb3e:e2b1%eth0]:/home /home
root #
mount [fe80::215:c5ff:fb3e:e2b1%eth0]:/data /data
With NFSv4, the virtual root directory can be effectively invisible depending on the server configuration; it may be necessary to use a path relative to the virtual root:
root #
mount -t nfs4 192.0.2.1:home /home
root #
mount -t nfs4 192.0.2.1:data /data
I/O on large files over NFSv4 can be significantly improved by the following, which increases the maximum read and write sizes to 1048576 bytes (1 MiB).
root #
mount 192.0.2.1:/home /home -o rsize=1048576,wsize=1048576,vers=4
For persistence:
/etc/fstab
192.0.2.1:/data /data nfs4 _netdev,rw,rsize=1048576,wsize=1048576,vers=4 0 0
Kerberos
NFS security is a complex subject for NFSv3 and especially NFSv4. This section focuses on configuration directly applied to Kerberos and NFS. The complete picture, including which encryption types are acceptable, involves the GSS RPC API and even lower-level protocols such as IPsec. Do not be astonished if you follow these guidelines and still have interoperability problems. For a more complete view consult the External resources noted below, especially RFCs 7530, 5403, and 2203, and the ols2004v1 paper.
It is possible to authenticate NFS clients using Kerberos GSS. This requires a few modifications. In the following instructions, it is assumed that Kerberos is already installed on the same server as NFS (whose hostname is server.domain.tld) and that the client (client.domain.tld) is able to kinit to it. The Kerberos default realm is DOMAIN_REALM.TLD.
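As a quick check that Kerberos itself is working, obtain a ticket on the client and inspect its encryption type (assuming a hypothetical user principal larry@DOMAIN_REALM.TLD already exists):
user $
kinit larry@DOMAIN_REALM.TLD
user $
klist -e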
First, enable the following kernel option (CONFIG_RPCSEC_GSS_KRB5) for both server and client. Note that this option will not appear unless its cryptographic dependencies are selected. See kernel option dependencies for more information:
File systems --->
[*] Network File Systems --->
<*> Secure RPC: Kerberos V mechanism
Then, create principals for the NFS service for both the server and the client. On the server, execute:
root #
kadmin.local add_principal -randkey nfs/server.domain.tld
root #
kadmin.local add_principal -randkey nfs/client.domain.tld
Each computer must have its key saved in a local keytab. The easiest way to do this is, on the server:
root #
kadmin.local ktadd nfs/server.domain.tld
root #
kadmin.local ktadd -k /root/krb5.keytab nfs/client.domain.tld
Then transfer /root/krb5.keytab to the client as /etc/krb5.keytab. Note that the file should be owned by root with mode 0600.
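The keytab contents, including the encryption types of the stored keys, can be verified on either machine with klist:
root #
klist -ke /etc/krb5.keytab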
The rpc.gssd service must run on the client side. The following line must appear in /etc/conf.d/nfsclient on the client:
/etc/conf.d/nfsclient
rc_need="!rpc.statd rpc.gssd"
The rpc.idmapd and rpc.svcgssd services must run on the server side. The following line must appear in /etc/conf.d/nfs on the server:
/etc/conf.d/nfs
NFS_NEEDED_SERVICES="rpc.idmapd rpc.svcgssd"
The rpc.idmapd service must be correctly configured (on the server):
/etc/idmapd.conf
[General]
Domain = domain.tld
Local-Realms = DOMAIN_REALM.TLD
Add sec=krb5 to the export options.
/etc/exports
/home 192.0.2.0/24(insecure,rw,sync,no_subtree_check,sec=krb5)
It is also possible to increase security with sec=krb5i (user authentication and integrity checking) or even sec=krb5p (user authentication, integrity checking, and NFS traffic encryption). Higher security levels require more resources. The same option must be added to the mount command on the client side, as in the sketch below.
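For example, a Kerberos-authenticated mount using the hostnames assumed above might look like this (a sketch, not a required form):
root #
mount -t nfs4 -o sec=krb5 server.domain.tld:/home /home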
Encryption
For the three security modes krb5, krb5i, and krb5p, the client and server must agree upon the encryption scheme to use. For Linux clients and servers the scheme used is determined by four things: which schemes are enabled in the client's kernel, which are enabled in the server's kernel, and which are enabled in the client's and the server's Kerberos configurations.
Kernel Configuration
First, enable the basic building blocks that will be used for the schemes. Stick to strong algorithms: as of early 2025, AES + SHA-2 is a decent choice, and Camellia + CMAC is probably OK. Avoid DES. Avoid SHA-1 unless forced to use it for compatibility reasons.
[*] Cryptographic API --->
Block ciphers --->
<*> AES (Advanced Encryption Standard)
<*> AES (Advanced Encryption Standard) (fixed time)
<*> Camellia
Length-preserving ciphers and modes --->
<*> CBC (Cipher Block Chaining)
<*> CTS (Cipher Text Stealing)
Hashes, digests, and MACs --->
<*> CMAC (Cipher-based MAC)
<*> HMAC (Keyed-Hash MAC)
<*> SHA-224 and SHA-256
<*> SHA-384 and SHA-512
File systems --->
[*] Network File Systems --->
[*] Secure RPC: Kerberos V mechanism
<*> Enable Kerberos encryption types based on Camellia and CMAC
<*> Enable Kerberos enctypes based on AES and SHA-2
There are a lot of acceleration options. Those available to you will depend upon your architecture and processor type. A rule of thumb is "if in doubt, turn it on."
Cryptographic API --->
Accelerated Cryptographic Algorithms for CPU (x86) ->
<*> Ciphers: AES, modes: ECB, CBC, CTS, CTR, XCTR, XTS, GCM (AES-NI/VAES)
<*> Ciphers: Blowfish, modes: ECB, CBC
<*> Ciphers: Camellia with modes: ECB, CBC
<*> Ciphers: Camellia with modes: ECB, CBC (AES-NI/AVX)
<*> Ciphers: Camellia with modes: ECB, CBC (AES-NI/AVX2)
<*> Hash functions: SHA-224 and SHA-256 (SSSE3/AVX/AVX2/SHA-NI)
<*> Hash functions: SHA-384 and SHA-512 (SSSE3/AVX/AVX2)
Cryptographic API --->
Accelerated Cryptographic Algorithms for CPU (arm64) --->
<*> Ciphers: AES, modes: ECB/CBC/CTR/XTS (ARMv8 Crypto Extensions)
Kerberos Configuration
Kerberos must also be configured with the permitted and preferred encryption types. This goes into the /etc/krb5.conf file. An example:
/etc/krb5.conf
[libdefaults]
default_realm = EXAMPLE.COM
...
# Don't do this unless you're forced to and understand the risks.
allow_weak_crypto = true
# Included in this list is one encryption type deemed "weak" (des-cbc-crc), and others
# which are not officially weak but you should probably avoid (sha1). The list is in
# descending order of preference.
permitted_enctypes = aes256-cts-hmac-sha384-192 aes128-cts-hmac-sha256-128 camellia256-cts-cmac aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96 des-cbc-crc
...
It is also possible to allow encryption types per-service ("principal"); see the kadmin documentation under set_string, sketched below. Also, the kdc.conf supported_enctypes parameter can control, on a per-realm basis, the encryption types that kadmind will use when generating long-term keys.
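For instance, the session key encryption types for the NFS service principal alone could be restricted with MIT Kerberos' set_string string attribute (a sketch; the enctype list shown here is only an example):
root #
kadmin.local set_string nfs/server.domain.tld session_enctypes aes256-cts-hmac-sha384-192,aes128-cts-hmac-sha256-128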
Which encryption types are "weak" is hard-coded into the Kerberos libraries; see the MIT Kerberos Consortium Encryption Types documentation for a list of supported and weak types.
The kernel and Kerberos configurations do not need to be identical, and your only indication they differ will be that things are mysteriously not working.
See the krb5.conf page for a fuller explanation of the configuration options. See the "Troubleshooting" section, below, for encryption type mismatch errors.
Troubleshooting
- Verify that the NFS server is running and listening for connections:
root #
ss -tulpn | grep rpc
udp UNCONN 0 0 0.0.0.0:111 0.0.0.0:* users:(("rpcbind",pid=4020,fd=6)) udp UNCONN 0 0 0.0.0.0:49657 0.0.0.0:* users:(("rpc.mountd",pid=4149,fd=4)) udp UNCONN 0 0 0.0.0.0:58082 0.0.0.0:* users:(("rpc.mountd",pid=4149,fd=12)) udp UNCONN 0 0 127.0.0.1:834 0.0.0.0:* users:(("rpc.statd",pid=4050,fd=5)) udp UNCONN 0 0 0.0.0.0:34042 0.0.0.0:* users:(("rpc.statd",pid=4050,fd=8)) udp UNCONN 0 0 0.0.0.0:35152 0.0.0.0:* users:(("rpc.mountd",pid=4149,fd=8)) udp UNCONN 0 0 *:111 *:* users:(("rpcbind",pid=4020,fd=8)) udp UNCONN 0 0 *:49463 *:* users:(("rpc.mountd",pid=4149,fd=14)) udp UNCONN 0 0 *:43316 *:* users:(("rpc.mountd",pid=4149,fd=10)) udp UNCONN 0 0 *:44048 *:* users:(("rpc.mountd",pid=4149,fd=6)) udp UNCONN 0 0 *:44332 *:* users:(("rpc.statd",pid=4050,fd=10)) tcp LISTEN 0 0 0.0.0.0:52271 0.0.0.0:* users:(("rpc.mountd",pid=4149,fd=5)) tcp LISTEN 0 0 0.0.0.0:41965 0.0.0.0:* users:(("rpc.mountd",pid=4149,fd=9)) tcp LISTEN 0 0 0.0.0.0:111 0.0.0.0:* users:(("rpcbind",pid=4020,fd=7)) tcp LISTEN 0 0 0.0.0.0:48527 0.0.0.0:* users:(("rpc.mountd",pid=4149,fd=13)) tcp LISTEN 0 0 0.0.0.0:53559 0.0.0.0:* users:(("rpc.statd",pid=4050,fd=9)) tcp LISTEN 0 0 *:52293 *:* users:(("rpc.mountd",pid=4149,fd=7)) tcp LISTEN 0 0 *:43983 *:* users:(("rpc.mountd",pid=4149,fd=15)) tcp LISTEN 0 0 *:111 *:* users:(("rpcbind",pid=4020,fd=9)) tcp LISTEN 0 0 *:40105 *:* users:(("rpc.statd",pid=4050,fd=11)) tcp LISTEN 0 0 *:38481 *:* users:(("rpc.mountd",pid=4149,fd=11))
- Verify which NFS daemons are running:
root #
rpcinfo -p
program vers proto port service 100000 4 tcp 111 portmapper 100000 3 tcp 111 portmapper 100000 2 tcp 111 portmapper 100000 4 udp 111 portmapper 100000 3 udp 111 portmapper 100000 2 udp 111 portmapper 100024 1 udp 57655 status 100024 1 tcp 34950 status 100003 2 tcp 2049 nfs 100003 3 tcp 2049 nfs 100003 4 tcp 2049 nfs 100003 2 udp 2049 nfs 100003 3 udp 2049 nfs 100003 4 udp 2049 nfs 100021 1 udp 44208 nlockmgr 100021 3 udp 44208 nlockmgr 100021 4 udp 44208 nlockmgr 100021 1 tcp 44043 nlockmgr 100021 3 tcp 44043 nlockmgr 100021 4 tcp 44043 nlockmgr
- List the exported directories from the NFS server:
root #
exportfs -v
/export 192.0.2.0/24(rw,wdelay,crossmnt,insecure,root_squash,no_subtree_check,fsid=0,sec=sys,no_all_squash) /export/home 192.0.2.0/24(rw,wdelay,insecure,root_squash,no_subtree_check,sec=sys,no_all_squash) /export/data 192.0.2.0/24(rw,wdelay,insecure,root_squash,no_subtree_check,sec=sys,no_all_squash)
- List the current open connections to the NFS server:
user $
ss -tun|grep -E 'Sta|2049'
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process tcp ESTAB 0 0 192.0.2.1:2049 192.0.2.10:1012
- Verify that the exported directories are mounted by the NFS client:
user $
ss -tun|grep -E 'Sta|2049'
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process tcp ESTAB 0 0 192.0.2.10:1012 192.0.2.1:2049
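- Optionally, show the mount options actually negotiated by the NFS client (nfsstat ships with nfs-utils):
user $
nfsstat -m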
Unresponsiveness of the system
The system may become unresponsive during shutdown when the NFS client attempts to unmount exported directories after udev has stopped. To prevent this a local.d script can be used to forcibly unmount the exported directories during shutdown.
Create the file nfs.stop:
/etc/local.d/nfs.stop
/bin/umount -a -f -t nfs,nfs4
Make the script executable:
root #
chmod a+x /etc/local.d/nfs.stop
Client refuses to mount NFS filesystem
If you do this: mount myserver:/somefilesystem, and the client responds with mount.nfs: Operation not permitted for myserver:/somefilesystem on /the_fstab/mount_point, and the server log says
/var/log/messages
Jan 6 02:29:06 myserver rpc.svcgssd[1958]: ERROR: GSS-API: error in handle_nullreq: gss_accept_sec_context(): GSS_S_FAILURE (Unspecified GSS failure. Minor code may provide more information) - Encryption type aes256-cts-hmac-sha384-192 not permitted
then you have an encryption type mismatch. RPC debug won't help much; you'll just see stuff like
/var/log/messages
Jan 6 05:04:11 myserver rpc.svcgssd[23215]: svcgssd_limit_krb5_enctypes: Calling gss_set_allowable_enctypes with 2 enctypes from the kernel
but it won't tell you _which_ two types the kernel wants.
Here's something that should help:
root #
cat /proc/net/rpc/gss_krb5_enctypes
18,17
Not very impressive, I admit. However, there is a secret decoding ring:
root #
cat /usr/include/krb5/krb5.h
/* per Kerberos v5 protocol spec */ ... #define ENCTYPE_NULL 0x0000 #define ENCTYPE_DES_CBC_CRC 0x0001 /**< @deprecated no longer supported */ #define ENCTYPE_DES_CBC_MD4 0x0002 /**< @deprecated no longer supported */ #define ENCTYPE_DES3_CBC_ENV 0x000f /**< DES-3 cbc mode, CMS enveloped data */ ... #define ENCTYPE_DES3_CBC_SHA1 0x0010 #define ENCTYPE_AES128_CTS_HMAC_SHA1_96 0x0011 /**< RFC 3962 */ #define ENCTYPE_AES256_CTS_HMAC_SHA1_96 0x0012 /**< RFC 3962 */ #define ENCTYPE_AES128_CTS_HMAC_SHA256_128 0x0013 /**< RFC 8009 */ #define ENCTYPE_AES256_CTS_HMAC_SHA384_192 0x0014 /**< RFC 8009 */
Translating the 18 & 17 to hexadecimal, we end up with 0x0012 (ENCTYPE_AES256_CTS_HMAC_SHA1_96) and 0x0011 (ENCTYPE_AES128_CTS_HMAC_SHA1_96). So two SHA-1 based algorithms. If you've eliminated them from the client's krb5.conf, then that's your problem.
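The decimal-to-hexadecimal translation can also be done with a quick shell one-liner (printf repeats the format for each argument):
user $
printf '0x%04x\n' 18 17
0x0012
0x0011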
See also
- Samba — a re-implementation of the SMB/CIFS networking protocol, a Microsoft Windows alternative to Network File System (NFS).
External resources
- NFSv2, v3 and v4.x versions and variations
- Ubuntu Wiki - NFSv4Howto
- Funtoo Wiki - NFS
- Linux NFS - General troubleshooting recommendations
- Linux NFS - HOWTO Troubleshooting
- Kernel.org - NFSv4 and rpcsec_gss for linux
- RFC 1094 - Network File System (NFS) version 2 Protocol
- RFC 1813 - Network File System (NFS) version 3 Protocol
- RFC 2203 - RPCSEC_GSS Protocol Specification
- RFC 5403 - RPCSEC_GSS Version 2
- RFC 7530 - Network File System (NFS) version 4 Protocol
- RFC 8881 - Network File System (NFS) version 4.1 Protocol
- RFC 7862 - Network File System (NFS) version 4.2 Protocol