User:Ali3nx/Installing Gentoo Linux EFISTUB On ZFS
Install Gentoo Linux on OpenZFS using EFIStub Boot
Author: Michael Crawford (ali3nx)
Contact: mcrawford@eliteitminds.com
Preface
This guide will show you how to install Gentoo Linux on AMD64 with:
* UEFI-GPT (EFI System Partition) - This will be on a FAT32 unencrypted partition as per the UEFI spec
* /, /home/username on segregated ZFS datasets
* /home, /usr, /var, /var/lib ZFS dataset containers created for pool dataset structure
* RAID 1 (mirrored) disk configuration
* swap on a regular partition
* OpenZFS 2.0.6+
* efistub boot without GRUB
* dracut initramfs (optionally genkernel)
* systemd or openrc
* Gentoo Stable (amd64)
Why efistub boot!? grub works for everyone!
- UEFI firmware has been the default on virtually all computer hardware since around 2013, entirely deprecating legacy BIOS.
- The wide availability of UEFI motherboards has removed the mandatory requirement for a software bootloader such as GRUB.
- When UEFI booted, GRUB itself uses efistub to boot both itself and the Linux install. This additional layer is unnecessary for booting Linux.
- Intel has publicly stated that legacy BIOS CSM compatibility support will be entirely removed from new hardware manufactured after 2020, forcing the use of true UEFI boot modes.
Why not use grub with zfs!?
- The zfsroot wiki guides from zfsonlinux and many distros advise using the GRUB bootloader. That can work, but GRUB does not fully support the newest zfs pool feature flags, so using it adds risk and complication that can be entirely avoided by booting your zfs root pool directly with a UEFI efistub configuration.
- The risk of using GRUB with zfs arises from its lack of modern pool feature support, which requires the administrator to tread carefully and ensure that a global zpool upgrade is never run. Otherwise the zfsroot configuration becomes unbootable because the legacy zfs pool feature flags GRUB depends on have been upgraded. Such an upgrade cannot be undone, and recovery would require major surgery from a livecd.
- Building a new install around a legacy configuration means accepting additional ongoing maintenance to keep that legacy configuration working.
- zfs rootfs dataset encryption is easier to configure utilizing efistub boot.
Required Tools
Download the Gentoo admincd iso from the official Gentoo mirrors
You will need to download the Gentoo admincd, as it includes ZFS support.
LiveUSB Creation
We will assume for this example the device will be /dev/sdg but this may vary for your system.
root #
dd if=admincd-amd64-<this filename will vary>.iso of=/dev/sdg bs=1M status=progress
And that's it! You now have a Bootable UEFI USB.
Windows
Etcher is the USB utility I recommend on Windows for writing the Gentoo admincd-amd64.iso. Etcher can be downloaded from its official website.
- Start Etcher
- Select your USB Device from the Device drop down.
- Select your ISO by clicking SELECT.
- Click START.
This should be all that's necessary to have a Bootable UEFI USB.
Assumptions
- Gentoo is only being installed on two disks, /dev/sda and /dev/sdb (or /dev/nvme0n1 and /dev/nvme1n1).
- Gentoo admincd-amd64.iso is being used.
- dracut is being used as your initramfs.
- gentoo-kernel-bin is being used as your kernel.
Boot your system into the zfs LiveUSB
Since this is highly computer dependent, you will need to figure out how to boot your USB on your system and get to the live environment. You may need to disable Secure Boot if that causes your USB to be rejected. Make sure your system BIOS/UEFI is set up to boot UEFI devices, rather than BIOS devices (Legacy).
Confirm that you booted in UEFI Mode
After you have booted into the live environment, make sure that you booted into UEFI mode by typing the following:
root #
ls /sys/firmware/efi
If the above directory is empty or doesn't exist, you are not in UEFI mode. Reboot and boot into UEFI mode.
Continuing the installation without being in UEFI mode will most likely yield an unbootable system. If you want to install in BIOS mode, you will need a different setup.
Partition
We will now partition the drive and aim to create the following layout:
/dev/sda1 | 512 MB       | EFI System Partition | /boot
/dev/sda2 | 32768 MB     | swap                 | swap
/dev/sda3 | Rest of Disk | ZFS                  | /, /home/username ...
The above partition table must be repeated identically on both disks if using a mirror configuration (a shortcut using sgdisk is sketched after the /dev/sdb table below).
/dev/sdb1 | 512 MB       | EFI System Partition |
/dev/sdb2 | 32768 MB     | swap                 | swap
/dev/sdb3 | Rest of Disk | ZFS                  | /, /home/username ...
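Rather than re-entering the partitions by hand, the layout can be copied from the first disk to the second with sgdisk from sys-apps/gptfdisk, assuming that tool is available in the live environment; this is a convenience sketch, not a required step. Here /dev/sda is the source and /dev/sdb the destination, and the second command randomizes the partition GUIDs so the two disks do not share identical identifiers.
root #
sgdisk --replicate=/dev/sdb /dev/sda
root #
sgdisk --randomize-guids /dev/sdb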
Some UEFI motherboard firmwares are extremely buggy. We use a 512 MiB FAT32 ESP to maximize compatibility, and the extra space is also useful should an optional ~250 MB genkernel initramfs ever become desirable or required.
Open up your drive in GNU parted and tell it to use optimal geometry alignment:
root #
parted -a optimal /dev/sda
Complete the commands below for /dev/sdb or /dev/vdb if you intend to create a mirrored disk configuration. Keep in mind that all of the following operations will affect the disk immediately. GNU parted does not stage changes like fdisk or gdisk.
Create GPT partition layout
This will delete all partitions and create a new GPT table.
A larger swap will accommodate hibernation should that be desired, and using swap with zfs is highly advised. 32 GB of swap is used in the example below to accommodate many different hardware configurations.
(parted)
mklabel gpt
Create and label your partitions
(parted)
mkpart esp fat32 0% 513
(parted)
mkpart swap linux-swap 513 33280
(parted)
mkpart rootfs btrfs 33280 100%
parted does not offer a zfs filesystem type, so btrfs is used as a placeholder. The filesystem type hint is only used for autodetection and becomes irrelevant after zpool creation.
Set the bootable flag on the ESP partition
(parted)
set 1 boot on
Final View
(parted)
print
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name    Flags
 1      1049kB  513MB   512MB   fat32           esp     boot, esp
 2      513MB   33.3GB  32.8GB  linux-swap(v1)  swap
 3      33.3GB  500GB   467GB                   rootfs
If using mirror disk configuration
(parted)
print
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name    Flags
 1      1049kB  513MB   512MB   fat32           esp     boot, esp
 2      513MB   33.3GB  32.8GB  linux-swap(v1)  swap
 3      33.3GB  500GB   467GB                   rootfs
Exit the application
(parted)
quit
Format your drives
Format your uefi esp partition
root #
mkfs.vfat -F32 /dev/vda1
This partition needs to be FAT32, as required by the UEFI specification. If it isn't, your system will not boot!
Create your swap
root #
mkswap -f /dev/vda2
root #
swapon /dev/vda2
* Do not put your swap inside a zvol. System lockups are possible when RAM is exhausted and the system starts swapping onto ZFS. There is a long-standing unresolved OpenZFS bug regarding this, so it is best avoided. Swap under memory pressure does not hang when the swap is on a regular partition.
https://github.com/openzfs/zfs/issues/7734
Determine disk/by-id identifier
Using traditional block device identifiers such as /dev/sda or /dev/nvme0n1 with zfs can work, but it is undesirable because a block device name can change. Something as simple as connecting a usb storage device can cause this to occur.
Should this ever happen, zfs is unaware of the change, which can render a pool inoperable. Using device-specific disk identifiers, which include the disk serial number, is therefore more desirable with zfs. It also makes identifying a faulty disk in larger pools much easier.
To determine the disk identifiers, type the following:
root #
ls -l /dev/disk/by-id
lrwxrwxrwx 1 root root 9 Mar 2 11:28 ata-Samsung_SSD_860_EVO_500GB_serialnum -> ../../sda
lrwxrwxrwx 1 root root 10 Mar 2 11:28 ata-Samsung_SSD_860_EVO_500GB_serialnum-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Mar 2 11:28 ata-Samsung_SSD_860_EVO_500GB_serialnum-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Mar 2 11:28 ata-Samsung_SSD_860_EVO_500GB_serialnum-part3 -> ../../sda3
NVMe storage devices would resemble this example:
root #
ls -l /dev/disk/by-id
lrwxrwxrwx 1 root root 13 Mar 2 11:28 nvme-Samsung_SSD_960_PRO_512GB_serialnum -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Mar 2 11:28 nvme-Samsung_SSD_960_PRO_512GB_serialnum-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Mar 2 11:28 nvme-Samsung_SSD_960_PRO_512GB_serialnum-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 Mar 2 11:28 nvme-Samsung_SSD_960_PRO_512GB_serialnum-part3 -> ../../nvme0n1p3
Generally, using /dev/disk/by-id/ata-* or /dev/disk/by-id/nvme-* identifiers is preferable because they are specific to the disk.
There may also be /dev/disk/by-id/wwn-* or /dev/disk/by-id/nvme-eui.* identifiers, shown in the example below.
Avoid using those identifiers with this guide if possible.
root #
ls -l /dev/disk/by-id/wwn*
lrwxrwxrwx 1 root root 10 Mar 2 11:28 wwn-0x5002538e40aba28d-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Mar 2 11:28 wwn-0x5002538e40aba28d-part2 -> ../../sda2
root #
ls -l /dev/disk/by-id/nvme*
lrwxrwxrwx 1 root root 13 Mar 2 11:28 nvme-eui.0025385971b064dd -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Mar 2 11:28 nvme-eui.0025385971b064dd-part1 -> ../../nvme0n1p1
Create your zpool
Create your zpool which will contain your drives and datasets:
xattr and posixacl are enabled to provide support for modern filesystem security features. Relative atime updates, which are the default in ext4, are enabled as well.
xattr support is necessary for proper functionality of systemd-journald.
It is important to create a valid zfs /etc/hostid file before creating the first zfs pool, so that a valid hostid is referenced later by the initramfs during initial system boot. If the hostid stored in the root pool and the hostid in the initramfs mismatch, pool import can fail until a new hostid and zpool.cache file are regenerated from the initramfs rescue shell.
To remove any existing zfs hostid file and generate a new one:
root #
rm -f /etc/hostid && zgenhostid
To create the zfs root pool in a mirror configuration
Substitute nvme-disk1-part3 for ata-disk1-part3 and nvme-disk2-part3 for ata-disk2-part3 if you have NVMe SSDs.
root #
zpool create -f -o ashift=12 -o cachefile=/etc/zfs/zpool.cache -O compression=lz4 -O xattr=sa -O relatime=on -O acltype=posixacl -O dedup=off -m none -R /mnt/gentoo rpool mirror /dev/disk/by-id/ata-disk1-part3 /dev/disk/by-id/ata-disk2-part3
To create the zfs root pool on a single disk
Use this only for testing or for data already secured on redundant storage. zfs cannot self-heal corrupted data on single disk pools. Not production reliable!
Substitute nvme-disk1-part3 for ata-disk1-part3 if you have an NVMe SSD.
root #
zpool create -f -o ashift=12 -o cachefile=/etc/zfs/zpool.cache -O compression=lz4 -O xattr=sa -O relatime=on -O acltype=posixacl -O dedup=off -m none -R /mnt/gentoo rpool /dev/disk/by-id/ata-disk1-part3
Create your rootfs zfs datasets
Create the dataset container structure and dataset necessary for /.
root #
zfs create -o mountpoint=none -o canmount=off rpool/ROOT
root #
zfs create -o mountpoint=/ rpool/ROOT/gentoo
Set the boot flag for zfs root dataset
root #
zpool set bootfs=rpool/ROOT/gentoo rpool
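To confirm the property was applied, zpool get can optionally be queried; the VALUE column should report rpool/ROOT/gentoo.
root #
zpool get bootfs rpool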
Create /usr, /var, /var/lib and /home zfs dataset containers
Creating several unmounted dataset containers provides structure for the zfs pool and segregates datasets correctly. Creating these containers after the install is complete is disruptive and involved, so it is best done before filesystem contents are written to disk to ensure the system will boot.
Dataset containers for /usr and /var especially benefit from being created in advance.
The /var/lib dataset container allows easy creation of /var/lib/foo datasets for system or network services at a later date (a hypothetical example follows the commands below).
The rpool/home dataset container segregates user home directories from the rootfs dataset. This keeps rootfs incremental snapshots small and ensures they do not fill the available pool storage space.
Additional accommodation must be made when using systemd with zfs: the /home dataset container must not be configured with a mountpoint, because systemd may create a new /home directory at boot, causing the user home directory datasets to fail to mount due to a mountpoint conflict at pool import.
Creating the rpool/home dataset container with canmount=off and no directory mountpoint makes this complication unlikely to occur.
root #
zfs create -o canmount=off rpool/usr
root #
zfs create -o canmount=off rpool/var
root #
zfs create -o canmount=off rpool/var/lib
root #
zfs create -o canmount=off rpool/home
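As an illustration of the /var/lib container mentioned above, a dedicated dataset for a service could later be created as sketched here. The service name postgresql is purely a hypothetical example; substitute whatever service you actually run.
root #
zfs create -o mountpoint=/var/lib/postgresql rpool/var/lib/postgresql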
Create user home directory dataset
Replace username with the desired user name
root #
zfs create -o mountpoint=/home/username rpool/home/username
Verify everything looks good
You can verify that all of these things worked by running the following:
root #
zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vda3    ONLINE       0     0     0
            vdb3    ONLINE       0     0     0

errors: No known data errors
I created a qemu VM to produce the zpool status output shown above. qemu and the livecd I used did not provide /dev/disk/by-id entries for qemu virtual disks. If installing on bare metal hardware, this should not be a complication.
root #
zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
rpool                1.20M   418G    96K  none
rpool/ROOT            192K   418G    96K  none
rpool/ROOT/gentoo      96K   418G    96K  /mnt/gentoo
rpool/home            192K   418G    96K  none
rpool/home/username    96K   418G    96K  /mnt/gentoo/home/username
rpool/usr              96K   418G    96K  none
rpool/var             192K   418G    96K  none
rpool/var/lib          96K   418G    96K  none
Now we are ready to install Gentoo!
Installing Gentoo
Set your date and time
We use ntpdate to set an accurate time, date and hardware clock, mitigating clock skew that can cause software compilation to malfunction.
root #
ntpdate -u pool.ntp.org
2 Mar 19:32:19 ntpdate[12777]: adjust time server 216.232.132.31 offset 0.454897 sec
Preparing to chroot
First let's mount our efi boot partition in our chroot directory:
root #
cd /mnt/gentoo
root #
mkdir boot
root #
mount /dev/sda1 boot
We'll use the Oregon State University Gentoo Linux mirror.
If desired, use a different regional mirror from the official Gentoo Linux mirror list.
Download the systemd amd64 stage3 system archive and extract it
root #
wget <file>
root #
tar xJpvf stage3-*.tar.xz --xattrs-include='*.*' --numeric-owner
Copy zpool cache
root #
mkdir etc/zfs
root #
cp /etc/zfs/zpool.cache etc/zfs
Copy network settings
root #
cp --dereference /etc/resolv.conf /mnt/gentoo/etc/
Mounting the necessary filesystems
root #
mount --types proc /proc /mnt/gentoo/proc
root #
mount --rbind /sys /mnt/gentoo/sys
root #
mount --make-rslave /mnt/gentoo/sys
root #
mount --rbind /dev /mnt/gentoo/dev
root #
mount --make-rslave /mnt/gentoo/dev
root #
mount --bind /run /mnt/gentoo/run
root #
mount --make-slave /mnt/gentoo/run
Entering the new environment
root #
chroot /mnt/gentoo /bin/bash
root #
source /etc/profile
root #
export PS1="(chroot) ${PS1}"
Inside the chroot
Edit fstab
Using disk UUIDs to denote block device entries in fstab has become the preferred default; it ensures an unexpected block device name change never renders a filesystem unmountable because fstab has become inaccurate.
Something as simple as connecting a usb storage device to a booted system has been known to cause this to occur.
The blkid command reveals the identifiers available for partitions created on gpt disk labels.
Even though partition names were created, disk UUIDs are more specific.
root #
blkid
/dev/loop0: TYPE="squashfs"
/dev/vda1: UUID="9E40-2218" TYPE="vfat" PARTLABEL="esp" PARTUUID="ce3ca4f8-bf90-42ae-9ed3-fbd34a718fd9"
/dev/vda2: UUID="fac87c68-50ef-424b-9673-dfd0a9890aff" TYPE="swap" PARTLABEL="swap" PARTUUID="5475ac59-f72a-40eb-80f1-7a634bc04f5c"
/dev/vda3: LABEL="rpool" UUID="3195477004188779862" UUID_SUB="13330732843625778565" TYPE="zfs_member" PARTLABEL="rootfs" PARTUUID="7997947d-1530-4c4e-be93-c76b6c966822"
/dev/sr0: UUID="2019-09-27-14-03-43-10" LABEL="Gentoo amd64 latest" TYPE="iso9660" PTUUID="2db7a891" PTTYPE="dos"
Everything is on zfs, so we don't need anything in fstab except the boot and swap entries. It should resemble the following example; substitute the UUIDs reported by your blkid command:
root #
nano /etc/fstab
UUID=9E40-2218                              /boot   vfat    defaults    1 2
UUID=fac87c68-50ef-424b-9673-dfd0a9890aff   none    swap    sw          0 0
Modify make.conf
Let's modify our /etc/portage/make.conf so we can start installing stuff with a good base (Change it to what you need):
root #
nano /etc/portage/make.conf
USE="caps" # This should be a realistic number reflecting cpu thermal limits and potential ram usage. MAKEOPTS="-j4" EMERGE_DEFAULT_OPTS="--with-bdeps y --complete-graph y" # knight rider rides again! FEATURES="candy" ACCEPT_LICENSE="*"
Get the portage tree
Copy the default example portage config
root #
mkdir /etc/portage/repos.conf
root #
cp /usr/share/portage/config/repos.conf /etc/portage/repos.conf/gentoo.conf
root #
emerge-webrsync
Install required applications
Now install the initial apps:
root #
emerge dracut bash-completion eix dev-vcs/git eselect-repository gentoolkit efibootmgr dosfstools gentoo-kernel-bin linux-firmware cronie intel-microcode parted
Kernel Configuration for custom kernel builders (Optional)
Reviewing the current gentoo-sources Linux kernel version
Gentoo provides eselect to manage many core system environment variables including the active /usr/src/linux symlink.
root #
eselect kernel list
Available kernel symlink targets:
[1] linux-6.6.30-gentoo-dist *
The command result of eselect should match the active linux kernel symlink
root #
ls -l /usr/src/
total 9
lrwxrwxrwx 1 root root 20 Mar 3 00:20 linux -> linux-6.6.30-gentoo-dist
drwxr-xr-x 26 root root 39 Mar 3 00:20 linux-6.6.30-gentoo-dist
Necessary kernel configuration features for custom kernel builders
efistub boot relies on a key Linux kernel configuration feature to function
Processor type and features --->
[*] EFI runtime service support
sys-fs/zfs requires Zlib kernel support (module or builtin).
General Architecture Dependent Options --->
GCC plugins --->
[ ] Randomize layout of sensitive kernel structures
Cryptographic API --->
<*> Deflate compression algorithm
Security options --->
[ ] Harden common str/mem functions against buffer overflows
sys-apps/systemd relies on the following menu options provided by sys-kernel/gentoo-sources
Gentoo Linux --->
[*] Gentoo Linux support
[*] Linux dynamic and persistent device naming (userspace devfs) support
[*] Select options required by Portage features
Support for init systems, system and service managers --->
[*] systemd
[*] openrc
The Linux kernel provides a console based configuration menu. Select the required configuration features in addition to necessary configuration features for your hardware.
root #
cd /usr/src/linux
root #
make menuconfig
Compile the Linux kernel
root #
cd /usr/src/linux
root #
make && make modules_install install
Install zfs software and kernel module
sys-fs/zfs and sys-fs/zfs-kmod must be installed after kernel configuration is complete
Install ZFS software
root #
emerge sys-fs/zfs-kmod sys-fs/zfs
Enable zfs systemd services - systemd only
root #
systemctl enable zfs.target
root #
systemctl enable zfs-import-cache
root #
systemctl enable zfs-mount
root #
systemctl enable zfs-import.target
Enable zfs openrc services - Openrc Only
root #
rc-update add zfs-import boot
root #
rc-update add zfs-mount boot
root #
rc-update add zfs-share default
root #
rc-update add zfs-zed default
Using the Gentoo prebuilt binary distro kernel
If you chose to use the Gentoo distro kernel, adding the dist-kernel USE flag to /etc/portage/make.conf automates initramfs management using dracut, often without any additional administrator intervention. If the initramfs ever requires updating, simply run emerge --config gentoo-kernel-bin.
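A minimal sketch of enabling the flag globally, assuming the USE variable is simply extended in /etc/portage/make.conf; rebuilding with --changed-use then applies it to the affected packages.
root #
echo 'USE="${USE} dist-kernel"' >> /etc/portage/make.conf
root #
emerge --ask --update --changed-use --deep @world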
Using gentoo-kernel-bin initramfs
root #
emerge --config gentoo-kernel-bin
Using dracut or Genkernel for custom kernel builders
A genkernel initramfs works well for most configurations and provides an alternative initramfs creation and management option where dracut has difficulty importing zfs pools at system boot. I've experienced this with some configurations using fast ssd storage pools: dracut processed the initramfs so quickly that the zfs kernel module had not finished loading, causing pool import to fail at boot. Reproducing this behavior is hit or miss, as it depends on whether modprobe latency occurs during initramfs processing.
To introduce additional processing latency, one workaround is to include the entire linux-firmware contents in the genkernel initramfs. This worked very well for many months, however the purposefully bloated initramfs is very large when uncompressed and may not work with some common home PC motherboards. My server, seen below, is an older Supermicro enterprise server board and has no complaints about being force-fed a 600 MB uncompressed initramfs image at boot.
If dracut works for you, use dracut. If you prefer genkernel, you can omit the --firmware option to create a sensibly sized initramfs.
My server has been using a dracut initramfs with a dual vdev ssd mirror pool for many years with no pool import failures, but when such failures do occur, switching initramfs has resolved them. A dracut configuration tweak that can also help is sketched below.
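When dracut is affected by the module-load race described above, forcing early inclusion and loading of the zfs module can help. This is a hedged sketch using standard dracut.conf directives; it assumes the zfs dracut module shipped by sys-fs/zfs is installed. Regenerate the initramfs afterwards.
root #
nano /etc/dracut.conf.d/zfs.conf
# ensure the zfs dracut module is included in the image
add_dracutmodules+=" zfs "
# force the zfs kernel module to be loaded early during initramfs processing
force_drivers+=" zfs "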
Using a dracut initramfs
root #
dracut --hostonly -k /lib/modules/6.6.30-gentoo/ --kver 6.6.30-gentoo -f /boot/initramfs-6.6.30-gentoo.img
Using genkernel initramfs
root #
genkernel initramfs --zfs --firmware --compress-initramfs --microcode-initramfs --kernel-config=/usr/src/linux/.config
Installing the bootloader onto your drive
We will need to configure the bootloader entry in uefi firmware to direct boot the linux kernel and initramfs.
The following command will install the uefi bootloader entry in uefi firmware referencing the kernel and initramfs located at /boot
Edit the Linux kernel version to the desired current version used.
root #
efibootmgr --disk /dev/sda --part 1 --create --label "Gentoo ZFS 6.6.30" --loader "vmlinuz-6.6.30-gentoo-dist" --unicode 'root=ZFS=rpool/ROOT/gentoo ro initrd=\initramfs-6.6.30-gentoo-dist.img'
On success, efibootmgr will print the uefi firmware boot entries, also revealing the updated boot order.
root #
efibootmgr
BootCurrent: 0001
Timeout: 0 seconds
BootOrder: 0003,0001,0000,0002
Boot0000* UiApp
Boot0001* UEFI QEMU DVD-ROM QM00001
Boot0002* EFI Internal Shell
Boot0003* Gentoo ZFS 6.6.30
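If the new entry is not first in BootOrder, it can be promoted with the --bootorder option. The entry numbers below match this example output and will differ on your system.
root #
efibootmgr --bootorder 0003,0001,0000,0002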
Final steps before reboot
root #
passwd
root #
exit
root #
reboot
After you reboot
Take a snapshot of your new system
Since we now have a working system, we will snapshot it in case we ever want to go back or recover files:
root #
zfs snapshot rpool/ROOT/gentoo@2022-12-26-0000-01-INSTALL
root #
zfs snapshot rpool/home/username@2022-12-26-0000-01-INSTALL
You can view the status of these snapshots using the zfs command
root #
zfs list -t snapshot
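Snapshots can be used in two ways should recovery ever be needed. Individual files can be copied out of the hidden .zfs/snapshot directory of a mounted dataset, or the whole dataset can be rolled back with zfs rollback, which discards every change made after the snapshot, so use it with care.
root #
ls /.zfs/snapshot/2022-12-26-0000-01-INSTALL/
root #
zfs rollback rpool/ROOT/gentoo@2022-12-26-0000-01-INSTALL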
ZFS dataset snapshot automation
There are two common options available for zfs snapshot automation.
- sys-fs/zfs-auto-snapshot, available from Gentoo's main repo
- sys-fs/sanoid, a superior and more feature-rich zfs snapshot manager that also provides syncoid
Sanoid is available from a gentoo overlay I maintain named sensible-overlay. Directions to configure the overlay are provided on the github page.
Configuring Sanoid
root #
emerge sys-fs/sanoid
A simplified configuration for sanoid is provided below to configure /etc/sanoid/sanoid.conf to automate snapshots of rpool/ROOT/gentoo and rpool/home/username
####################
# sanoid.conf file #
####################

[rpool/ROOT/gentoo]
use_template = production

[rpool/home/username]
use_template = production

#############################
# templates below this line #
#############################

[template_production]
# store hourly snapshots 36h
hourly = 36
# store 30 days of daily snaps
daily = 30
# store back 6 months of monthly
monthly = 6
# store back 3 yearly (remove manually if too large)
yearly = 3
# create new snapshots
autosnap = yes
# clean old snapshots
autoprune = yes
Configuring zfs-auto-snapshot (optional)
Configure daily and weekly snapshot generation for rpool/ROOT/gentoo
root #
zfs set com.sun:auto-snapshot:daily=true rpool/ROOT/gentoo
root #
zfs set com.sun:auto-snapshot:weekly=true rpool/ROOT/gentoo
Installing required cron daemon
root #
emerge sys-process/cronie
Enable the system service and start cronie cron daemon as required for functionality of sys-fs/sanoid or zfs-auto-snapshot.
root #
systemctl enable cronie.service
root #
systemctl start cronie.service
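If the sanoid ebuild does not install its own systemd timer or cron file, sanoid must be invoked periodically for the policy above to take effect. A hedged example root crontab entry follows; the installed binary path may differ depending on the ebuild.
root #
crontab -e
# run the sanoid snapshot/prune policy every 15 minutes
*/15 * * * * /usr/bin/sanoid --cron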
Limiting the ARC size
If you want to cap the ZFS ARC from growing past a certain point, you can put the number of bytes inside the /etc/modprobe.d/zfs.conf file, and then remake your initramfs. When the system starts up, and the module is loaded, these options will be passed to the zfs kernel module.
ARC memory usage will vary depending on zfs pool sizes. I've had a 50 TB single vdev raidz2 pool consume 24 GB of memory at system idle when unlimited; however, zfs will generally default to using 50% of available system memory for the ARC.
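To see how large the ARC currently is, the size counter in /proc/spl/kstat/zfs/arcstats can be inspected (value in bytes), and the active limit is visible in the module parameter, where 0 means the built-in default is in use.
root #
awk '/^size/ {print $3}' /proc/spl/kstat/zfs/arcstats
root #
cat /sys/module/zfs/parameters/zfs_arc_max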
(Temporary) Change the ARC max for the running system to 4 GB
root #
echo 4294967296 >> /sys/module/zfs/parameters/zfs_arc_max
(Permanent) Save the 4 GB ARC cap as a loadable kernel parameter
root #
echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf
Once the above file is created, regenerate the initramfs. Both dracut and genkernel will detect that this file exists and copy it into the initramfs. When you reboot your machine, the zfs kernel module will be loaded with the parameters found in the file.
root #
dracut --hostonly -k /lib/modules/6.6.30-gentoo/ --kver 6.6.30-gentoo -f /boot/initramfs-6.6.30-gentoo.img
Limiting maximum trim I/Os active to each device. ( Optional )
Some disk controllers or ssd disks may exhibit controller resets when zpool trim <poolname> is run, because either the controller or the disk cannot process multiple simultaneous trim commands issued to a disk.
A known workaround is to reduce zfs_vdev_trim_max_active from its default value of 2 to 1 using a zfs module parameter in the /etc/modprobe.d/zfs.conf file, and then remake your initramfs. When the system starts up and the module is loaded, these options will be passed to the zfs kernel module.
I've had this symptom occur using an LSI 9305-16i HBA controller, which relies on the mpt3sas kernel driver, with Samsung 860 EVO ssds.
There is an open bug on openzfs git discussing this issue.
If this symptom occurs and a sysadmin has zpool trim scheduled from a crontab, a pool scrub may be required; pool desync or, at the very worst, data corruption may occur. zfs has always detected the controller reset as the pool or a disk within it having suffered an unrecoverable error, prompting the use of zpool replace or zpool clear to clear the error state.
(Temporary) Change maximum trim I/Os active to each device.
root #
echo 1 > /sys/module/zfs/parameters/zfs_vdev_trim_max_active
(Permanent) Save the maximum trim I/Os active to each device as a loadable kernel parameter
root #
echo "options zfs zfs_vdev_trim_max_active=1" >> /etc/modprobe.d/zfs.conf
Once the above file is created, regenerate the initramfs. Both dracut and genkernel will detect this file and copy it into the initramfs. When you reboot, the zfs kernel module will be loaded with the parameters found in the file.
root #
dracut --hostonly -k /lib/modules/6.6.30-gentoo/ --kver 6.6.30-gentoo -f /boot/initramfs-6.6.30-gentoo.img
Successful Installations
- My custom gentoo zfs HTPC nas server
- Gentoo HTPC zfs NAS Neofetch
- TdDF Gentoo zfs nas server - Austin Texas USA. Installed remotely 12/2019. i9-9900k, 32GB DDR4, 7x10TB WD Red's raidz2, Adata SSD root mirror pool.
Credit and Thanks
- Fearedbliss, Richard Yao and Georgy Yakovlev - zfs and Gentoo wouldn't be what they have become without their generous dedication and contributions.
- Fallendusk for generously contributing to the gentoo reddit community.
- Everyone that helped me learn in 17 years using gentoo. I promise to pay it forward.
- Kerframil for the Low latency coffee! Go Kerf :)