Backing up ZFS datasets from Linux to FreeNAS (TrueNAS Core)

A FreeNAS box (okay, TrueNAS Core as we should now call it) is an ideal place to perform set-and-forget, reliable backups of my Linux laptop, given that both devices now use the OpenZFS 2.x filesystem. This howto is not intended to be a ZFS primer – I am assuming the reader already understands ZFS terminology – so suffice it to say we will take scheduled snapshots of the entire current laptop disk state, then copy those snapshots to a location on the TrueNAS filesystem, available in a format ready for off-site backup using Duplicacy. For reasons of security and isolation I want to use a FreeBSD 12 jail as the target, rather than the FreeNAS operating system itself.

This howto explains how I achieved this aim, leveraging zfs-autobackup and FreeBSD’s excellent periodic utility. zfs-autobackup uses ZFS’s snapshotting and send/receive features to perform backup and archiving from one machine to another over ssh, optimized for speed, and also handles aging-out older snapshots over time. periodic is a feature-complete task scheduling tool. Note that I will use the names TrueNAS and FreeNAS interchangeably in this article, since I have written it on the cusp of the re-branding of FreeNAS to TrueNAS Core.

My starting point is the assumption that you’ve already created a standard FreeNAS jail that is going to perform your backups. In my case, this jail also handles offsite backups to Backblaze B2 using the wonderfully robust duplicacy tool. This is relevant only because my approach is that anything under /mnt is in scope for offsite backup, hence I desire that my laptop datasets appear mounted somewhere under /mnt once the backup has completed.

    1. Upgrade ZFS in the jail
      N.B. This step will become unnecessary once TrueNAS Core ships with FreeBSD 13.x.
      At time of writing, whilst TrueNAS Core 12 supports OpenZFS 2.0, the FreeBSD version on which it is based (12.2) does not. This means that jails do not include the OpenZFS 2.0 userland tools. To avoid weird version incompatibility errors it is strongly advised to copy the zfs binary (and dependent libraries) from the base TrueNAS OS into the jail. This situation is well explained by this TrueNAS support ticket, which also includes a handy script to perform the copying.
    2. Install zfs-autobackup
      First of all, install Python into the jail (pkg install python37), then follow the zfs-autobackup installation howto. This will guide you to install the software on the backup jail, set up the jail’s ability to ssh to the machine to be backed up, and apply the necessary custom zfs property to the datasets on the machine to be backed up. I would also highly recommend following the Performance Tips as part of installation, since they can make a noticeable difference to how long backups take. Remember, the first one will theoretically be the slowest one of all, so best to get it tuned from the start!
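As a reference point, the custom property step of the installation howto boils down to commands like the following on the machine to be backed up. The backup name ‘remote’ matches the command used later in this howto; the pool names here are assumptions for illustration:

```shell
# Run on the source machine. 'remote' is the backup name; 'rpool' and
# 'rpool/swap' are placeholder dataset names for this example.
zfs set autobackup:remote=true rpool        # back up this pool and all children
zfs set autobackup:remote=false rpool/swap  # optionally exclude a child dataset
```

Child datasets inherit the property, so one `zfs set` at the pool level is usually all that’s needed.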
    3. Create destination dataset
      zfs-autobackup copies zfs datasets to zfs datasets, so create a zfs dataset on the FreeNAS host system pool for the backups to go to. Since this dataset is being used explicitly to store backups of snapshots, it might make sense during creation of the dataset to set the ‘Snapshot directory’ property to Visible. There is also the option to set the dataset read-only to avoid accidental alterations that could cause zfs-autobackup to throw ‘target altered’ errors on subsequent backups. Read-only datasets can still zfs recv data (which is what zfs-autobackup leverages) and alterations to dataset properties (zfs get/set) are allowed, but at the filesystem level no other process can alter their content.
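If you prefer the command line to the web UI, the equivalent dataset creation might look like this (the dataset name matches the target used later in this howto; adjust to your pool):

```shell
# Create the destination dataset with visible snapshots and read-only content.
# 'med/carbon-backup' is the target name used elsewhere in this howto.
zfs create -o snapdir=visible -o readonly=on med/carbon-backup
```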
    4. Assign dataset to jail
      So far so good, but by default jails cannot ‘see’ the FreeNAS system datasets – try a ‘zfs list’ to confirm – so we need to leverage a FreeNAS ZFS feature to change that. With the jail shut down, go into Edit.. Jail Properties, and tick allow_mount and allow_mount_zfs to allow root to mount and unmount zfs datasets the jail has assigned to it. Next we need to assign the destination dataset to the jail. In the jail’s Edit.. Custom Properties, tick jail_zfs and in the jail_zfs_dataset field enter the dataset name _without_ the pool name prepended. In the jail_zfs_mountpoint field enter the dataset name prepended by a / (the folder in which mounts are mounted – /mnt by default – is implicit and should not be included here). Now save the jail settings and start it. Keep in mind that once assigned to a jail, the dataset cannot be mounted by the FreeNAS host. It effectively now ‘belongs’ to this jail alone. You can immediately zfs list from within the jail and find it now available.
    5. Run the initial full backup
      The following command should now be run from within the jail to perform the initial backup of your source machine to the dataset mounted to the jail. Remember that some of the performance tips listed in the zfs_autobackup installation article are not part of the backup command, so perhaps review those again at this point. The following command took ~12 hours to back up ~550GB over my 1Gbps LAN, achieving an average speed of 10MB/s:

      zfs-autobackup --ssh-source carbon.local remote med/carbon-backup --progress --verbose --clear-mountpoint --clear-refreservation --zfs-compressed --no-holds


      • --clear-mountpoint disables automatic mounting for the datasets being backed up to the jail, allowing them to be mounted in a more logical layout for a backup under /mnt on the target (see next step of this howto)
      • --clear-refreservation prevents the backed-up datasets reserving extra space for growth on the target system.
      • --zfs-compressed copies compressed datasets as-is and will speed up the transfer. No good if the target dataset uses a different compression algorithm to the source.
      • --no-holds does not enable the ‘hold’ flag on the target snapshots. Apparently this speeds up transfer, but be aware that it will allow you to delete the target datasets without the enforced ‘Are you sure?’ of having to run a zfs command to release the held snapshots first.
      • --progress --verbose outputs verbose progress info. --no-progress disables output of percentage progress and time remaining, which is preferable when run in an automated manner from a script. Note that --progress is the cause of a performance issue introduced in ZFS 2.0.1 and not yet resolved at time of writing.
      • there is a --debug switch that provides very helpful (but not excessive) output if you run into any errors.
    6. Modify mount points on destination
      The backed-up datasets bring with them their mountpoints relative to the source filesystem, which will be relative to /mnt on the backup jail. If your source machine runs multiple datasets mounted in a filesystem hierarchy, such as /, boot, root and home, these would end up mounted directly into /mnt. For neatness and clarity it would be preferable for them to appear in a subdirectory under /mnt, named for the source machine, hence the reason for disabling their automated mounting during the backup using --clear-mountpoint. With the initial backup complete, we can use zfs set mountpoint to one-time modify the mount points for all the destination datasets, then mount them. It is important to note that the mount points must not be below the mountpoint of the ‘parent’ dataset on the destination, nor can they be sub-directories of one another, else next time you run a backup you will get ‘destination has been modified since most recent snapshot’ errors. Here then is the script I used to modify the mount points and mount the backed-up datasets:

      zfs set mountpoint=/carbon-mounts/boot med/carbon-backup/boot
      zfs set mountpoint=/carbon-mounts/rpool med/carbon-backup/rpool
      zfs set mountpoint=/carbon-mounts/home med/carbon-backup/rpool/HOME
      zfs set mountpoint=/carbon-mounts/root med/carbon-backup/rpool/ROOT
      zfs set mountpoint=/carbon-mounts/gentoo-root med/carbon-backup/rpool/ROOT/gentoo
      zfs mount med/carbon-backup/boot
      zfs mount med/carbon-backup/rpool
      zfs mount med/carbon-backup/rpool/HOME
      zfs mount med/carbon-backup/rpool/ROOT
      zfs mount med/carbon-backup/rpool/ROOT/gentoo

      The destination is now ready for subsequent incremental backups, and will retain these new mount points across backups.

    7. Set up scheduled backups
      The backup is initiated from the server side, meaning we can leverage FreeBSD’s excellent periodic utility to set up a daily execution of zfs-autobackup. Setting up a schedule is as simple as adding a shell script in /etc/periodic/daily that runs the same zfs-autobackup command as used for the initial backup, but adding the --no-progress option since we’re not watching the output.
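As a sketch, the daily script might look like the following. The filename is my own choice, not a requirement – periodic simply runs every executable file it finds in the directory:

```shell
#!/bin/sh
# /etc/periodic/daily/500.zfs-autobackup -- hypothetical name; periodic runs
# every executable script in this directory once a day.
zfs-autobackup --ssh-source carbon.local remote med/carbon-backup \
    --clear-mountpoint --clear-refreservation --zfs-compressed \
    --no-holds --no-progress
```

Remember to `chmod +x` the script, or periodic will skip it.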

Offsite backup with TrueNAS Core, Duplicacy and Backblaze B2

Just recording this here as a reminder to myself. I tried out a couple of backup tools to run offsite backup of my TrueNAS Core ZFS system dataset pool and settled on Duplicacy, since it seemed reliable and works with Backblaze B2. Setting up Duplicacy was as simple as downloading the command-line executable then executing it from /mnt/<system dataset pool name>, after which I followed this guide to configure B2 and two scheduled executions of Duplicacy to perform the backup and prune aged-out snapshots. The switches that dis/enable the backup are in /etc/defaults/periodic.conf and are named weekly_backup_enabled and monthly_backup_enabled.
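For the record, the two scheduled jobs boil down to something like the following. The pool name and the retention values are illustrative only, not what the guide prescribes:

```shell
# Weekly: back up everything under the repository root to B2.
# '/mnt/tank' is a placeholder for your system dataset pool mount.
cd /mnt/tank && duplicacy backup -stats

# Monthly: age out old revisions. -keep n:m means "keep one revision every
# n days for revisions older than m days" -- values here are illustrative.
cd /mnt/tank && duplicacy prune -keep 30:360 -keep 7:30 -keep 1:7
```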


OpenVPN watchdog (and Private Internet Access port forwarding) script for FreeNAS

When you run a VPN, you tend to want to be sure it stays running, or else you lose access to what’s at the other end of the pipe. One way to ensure this is to have a script run periodically that checks that the VPN connection is up, restarting it if it is not. Running OpenVPN in a FreeNAS jail is a relatively common use case; this howto explains how to put such a script in place.

Whilst Private Internet Access do provide some Linux and Unix scripts for interacting with their service, the one best suited for use with FreeBSD (and therefore FreeNAS) is this one. It supports requesting a port forward from the remote end of the tunnel, which can be a useful advanced feature.

  1. The script can be copied to a suitable location within the relevant jail’s file system.
  2. Log into the jail and install the bash shell FreeBSD package (FreeBSD default is csh but this script expects bash): sudo pkg install bash
  3. Back out in the FreeNAS main menu, go to Tasks.. Cron Jobs (this is the location at least on FreeNAS 11.3)
  4. Set up a new cron job to run the script as an account that has the relevant rights over the jail in question and can use iocage (e.g. root). The script can be run every 15 minutes (a Custom Schedule, set with 0,15,30,45 in the Minutes field), using the following command line: iocage exec Name_of_Jail /usr/local/sbin/

By default, this cron job will email the address associated with the account as which it is executed if the script outputs anything to stderr (i.e. if there is any error from it), so you can be notified if the script has had to restart OpenVPN.
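For illustration, the core of such a watchdog can be as small as the sketch below – the PIA script linked above does considerably more, including the port-forward request. The interface name tun0 and the rc service name are assumptions:

```shell
#!/bin/sh
# Minimal watchdog sketch: if the tunnel interface is gone, restart OpenVPN.
# Writing to stderr is what triggers cron's notification email.
if ! ifconfig tun0 >/dev/null 2>&1; then
    echo "tun0 is down; restarting OpenVPN" >&2
    service openvpn restart
fi
```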

Per the script’s documentation, it will need to be able to read a file named /usr/local/etc/openvpn/pass.txt, which contains the login credentials for your PIA account.

With this watchdog/port mapping script in place, score some extra privacy points by setting up your firewall to disable internet connectivity to any route but the VPN tunnel. Follow Step 7 of this OpenVPN installation tutorial.

Lenovo Thinkpad X1 Carbon HDMI audio output under Linux

If the Lenovo forums are anything to go by, getting audio to come out of the TV when connecting a Lenovo laptop via HDMI is non-obvious for many folks, regardless of the operating system in use. Under KDE Plasma with PulseAudio, the volume control applet does not add an output device for HDMI audio as it does when connecting a USB audio device, so there is no obvious GUI method to choose which output device to use. However, the applet’s Configuration pane can be used to switch the Built-in Audio profile from ‘Analog Stereo Duplex’ to ‘Digital Stereo (HDMI) Output’. That’s it, that’s all – it doesn’t auto-switch, you have to do it for yourself. The rest of this article might be of interest to command-line adherents, plus it has a couple of useful kernel config tips to ensure your HDMI support is compiled in a usable manner.

Thanks to Thorsten’s 2017 response on this AskUbuntu post, the solution (at least for my Lenovo X1 Carbon 5th generation) is to use the pactl command line tool. This command lists the cards available on the system in order to obtain the correct card #:

pactl list

With the card # identified, the following command can be used to switch to the HDMI output (the numeric value is the card #):

pactl set-card-profile 1 output:hdmi-stereo+input:analog-stereo

The following command will switch back to the built-in speakers, or else I found that it did automatically switch back when I unplugged the HDMI cable:

pactl set-card-profile 1 output:analog-stereo+input:analog-stereo

As a final point, all this will only work if you have enabled HD Audio in your kernel settings and compiled the HD audio codecs as modules. Why? Well:

  1. The X1 Carbon uses an Intel HD audio solution, which is part of the Intel integrated video chipset.
  2. The Intel integrated video is supported by the i915 driver. Since this depends on a firmware blob it must be built (or, rather, it is wiser to build it) as a module rather than compiled into the kernel. As a consequence, support for the audio parts of this chipset must also be built as modules; otherwise, though the kernel will compile without errors, dmesg will throw “Unable to bind the codec” errors and you’ll have no HDMI audio support at all.
  3. Intel have used a range of codecs from various manufacturers in the various chipsets supported by the i915 driver, so it is simplest to compile all the codecs as modules and let udev pick the right one for you at boot time.
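For reference, the kernel configuration that results from the reasoning above looks something like this fragment – i915 and the HD Audio pieces all as modules (the HDMI codec shown explicitly; the other codec options follow the same pattern):

```
# Intel integrated graphics as a module (it needs its firmware blob)
CONFIG_DRM_I915=m
# Intel HD Audio core and the codec modules, including the HDMI codec
CONFIG_SND_HDA_INTEL=m
CONFIG_SND_HDA_CODEC_HDMI=m
```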

Keeping a Gentoo house neat and tidy

Semi-automating emerge @world update

tl;dr Use this interactive script on a weekly basis to keep your Gentoo installation up to date and avoid intractable library version conflicts.

Gentoo has a fantastic source-based package management solution, however there are a number of housekeeping details involved in keeping an installation tidy, to avoid increasing issues with conflicts when upgrading large volumes of packages at once. The so-called @world update (upgrading all installed packages at once) is one such case, and it ought to be done regularly to avoid accumulating too long a list of pending updates, which increases the risk of conflicts between the differing version requirements of the packages involved.

The set of steps that need to be run for a clean, housekept @world update are as follows:

  1. Update the local repository of package versions.
  2. Update all packages with updates in the @world set.
  3. Resolve conflicts that prevent packages updating, repeat steps 2 & 3 until all conflicts are resolved.
  4. Uninstall (unmerge in Gentoo parlance) packages that are no longer referenced as dependencies for any package in @world.
  5. Double-check whether a package has been broken due to one or more dependencies having been updated.
  6. Recompile any packages that use a library that has been updated (step 2 already does this to an extent; it is unclear to me if this step is still required).
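Strung together, the steps above map onto commands roughly as follows. This is a condensed sketch only (revdep-rebuild comes from app-portage/gentoolkit); my actual script adds logging, prompts and pauses:

```shell
#!/bin/sh -e
# Condensed sketch of the @world housekeeping steps above.
emaint --auto sync                                        # 1. refresh the Portage tree
emerge --update --deep --changed-use --keep-going @world  # 2/3. update the @world set
emerge --depclean                                         # 4. unmerge orphaned dependencies
revdep-rebuild                                            # 5/6. rebuild against updated libraries
```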

There seem to be various reasons why Gentoo doesn’t have a combined tool that performs all of the above steps, not least among which is that the approach and individual tools for each step have evolved and been replaced over time. My take is that Gentoo is not for the Linux neophyte, and those that use the distro would prefer to have the flexibility of keeping these tools separate and to properly understand what each of them does.

I tend to do an @world update once a week, so I have found it useful to have a script that runs all the various commands in order, logging all the output for reference. I’ve found it always necessary to review this output, both because many packages output important messages that need to be followed up, and because it can contain useful clues when one does run into blocking conflicts that need to be unpicked. The script I’ve developed is also a dumping ground for the various tips and tricks I have picked up from across the web for resolving the various forms of conflict that can occur.

The script can be found here at my Github repository; it is well-commented to explain the purpose of each of the command switches, and offers break-out pauses after each command is run just in case there’s the need to review what’s been done so far or indeed stop processing if something significant isn’t right.

Installing the EQ10Q equalizer on Gentoo

EQ10Q appears to be the absolute best quality equalizer available for Linux. I have tried the one that comes with PulseEffects, but found that it introduced occasional audio glitches; no good for audiophile use.

EQ10Q is an LV2 plugin, so I start from the assumption that you already have JACK installed (doing so is trivial so I will not include instructions here). The following set of steps are what I used to get EQ10Q working:

  1. Emerge dev-cpp/gtkmm, since eq10q depends upon it for its UI.
  2. Download and install eq10q (do not use the ebuild available in the darkelf repo).
  3. Attach the audio-overlay repo (layman -a audio-overlay).
  4. Emerge media-sound/carla (requires accepting the live ebuild in package.use, plus enable the gtk2 flag because the eq10q UI uses GTK2).
  5. Run carla and add /usr/local/lib/lv2 to where carla looks for LV2 plugins.

Sync music playlists to Android


MTP sucks hard, and none of the free Linux music players seem to have a reliable method for syncing a library subset (such as a playlist) to another device. rsync plus SSH plus ADB (and a little light shell scripting) to the rescue!

the problem

I have music in my library that is not and will never be available on streaming services (well, not until it is cost-effective to stream Plex from the home that is ;o). Consequently I need to have that music copied onto my phone. The set of files is a subset of my entire library and is around 25GB. Wireless syncing is all well and good, but I don’t need the sync to be wireless and wired would be way faster.

the solution

Write a shell script that takes the path of your music library and a playlist file as input, then rsyncs the files listed in the playlist over SSH to a localhost port. How in hell is that useful? Well, rather conveniently, ADB allows you to map a local port to one on your connected phone:

adb forward tcp:2222 tcp:2222

So as long as you have an SSH server running on your phone on that port (I use SSHelper) then the files will end up in SDCard/Music on the phone (this path is hard-coded in the script but obvious so can be changed). I get roughly 50MB/s transfer rate, which seems none too shabby; the whole 25GB playlist in around 11 minutes.
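The core of the script is nothing more than turning the playlist into a file list that rsync can consume. A minimal sketch of that part is below; the function name and the demo paths are mine for illustration, not the actual script’s:

```shell
#!/bin/sh
# M3U playlists are one path per line, with comment/metadata lines starting '#'.
# Stripping the comments yields exactly the list rsync's --files-from expects.
playlist_to_list() {
    grep -v '^#' "$1"
}

# Demo: a tiny playlist...
cat > /tmp/demo.m3u <<'EOF'
#EXTM3U
#EXTINF:123,First track
Artist/Album/01 Track.flac
#EXTINF:456,Second track
Artist/Album/02 Track.flac
EOF

# ...becomes a plain file list:
playlist_to_list /tmp/demo.m3u
# The real script then feeds this to something like:
#   rsync -av --files-from=- -e 'ssh -p 2222' "$LIBRARY" localhost:Music/
```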

Scanning on Gentoo using a Lexmark MX310dn

SANE, the de facto Linux scanning solution, has only one open source driver (backend, in SANE parlance) that supports Lexmark scanners, and only a small number of USB models at that. Lexmark have released a closed-source SANE backend, available at their support site if you search for downloads related to the product you have. Unfortunately they do not offer a portage ebuild, only RPM or DEB packages, so it takes a little more effort to get it installed and, at least in my case, needed a few further tweaks to get it operating.

Installing the Lexmark SANE backend

  1. Visit the Lexmark support site and search for your scanner.
  2. On the resulting page, in the Downloads tab, select the most recent Debian release.
  3. Download the offered .deb file (and any firmware update available).
  4. Put the .deb in a folder by itself, then follow ‘Option 2 : manual installation’ of the Arch Linux instructions to install it.


I now found that Xsane would launch and then crash, and running the simplest way into SANE:

scanimage -L

would throw a segmentation fault. Great. In my case the fix was to simply disable USB scanner support (no great loss), as follows:

  1. Edit the Lexmark backend configuration file:

sudo nano /etc/sane.d/lexmark_nscan.conf

  2. Find the following variable and set it as below:

  3. I also found it useful to make the following setting, else the scanner showed a second entry in the list for the Lexmark:

Scanner not supported??

So huzzah, now Xsane will open and offer the Lexmark scanner in the list of those detected. When I tried to scan however, after a pause I was getting a window saying that my scanner model wasn’t supported. This seemed highly unlikely. Trying a scan from the command line:

scanimage -d lexmark_nscan > ~/test.jpg

showed me the problem; the scanner just wasn’t listening: Connection refused

In my case, it turned out I had two problems; the printer’s firmware needed an update (I reckon it was around 2017 vintage beforehand), and I needed to enable the HTTP server on the printer (dig your way down into the printer’s settings page in the TCP/IP settings). I also enabled HTTPS support (further down the list in the same menu).

It is a shame that the HTTP server is required, given the increased attack surface (printers are hardly known for their strong security after all!) but on the other hand it did provide the most convenient method to update the firmware.

With these two things done I was off to the races. The method of operation is distinctly bizarre (you’ll see), but if you follow the steps it works fine. I also found that if you use the auto document feeder, when you click Scan it runs all the pages through at once but only records the first sheet in a multi-document project. Subsequent clicks of the Scan button however would bring in each successive page.


Mounting NFS shares when using dhcpcd as a network manager under Gentoo


If you use NFS shares and/or want your workstation to have a static LAN IP and want seamless wired<->wireless LAN roaming under Gentoo/OpenRC, then dhcpcd is your go-to network manager. But not without a bit of manual tweaking..

Use Case

You have the following requirements:

  • your workstation is a laptop that moves between wired and wireless connections within your LAN
  • you use NFS shares that you would like auto-reconnected when roaming between connections, and auto-connected on boot regardless of the connection available at that time (at the very least, do not hang on boot due to unavailable shares)
  • you require your workstation to have a static LAN IP so as to be able to reliably expose services it hosts outside the LAN (i.e. so as to forward ports from your router to it)


I don’t know if it’s just me, but network roaming under Gentoo using the default netifrc + ifplugd system is just not reliable. Maybe it’s that ifplugd isn’t oriented towards wireless connections, maybe it’s that my usual wired ethernet port is on a USB-C dock, maybe it’s the NFS shares I use, but I was continually finding myself without a path to my LAN when switching from one form of connection to another or have it lock up trying to remount shares. Having bashed my head against this for some weeks I re-read the Gentoo network management article for the millionth time, once again decided against migrating to Network Manager (and its associated systemd leanings), but for the first time noticed that it mentions in passing that the simplest network management solution is to use dhcpcd in Master mode.

So then, I uninstalled ifplugd and the netifrc services in favour of dhcpcd. The latter is admirably install-and-forget, but will not assign a static address to any interface by default. Whilst it can support static IP assignment, in order to use it one needs to specify the configuration for every local interface, else confusing badness will transpire [1]. The approach I chose instead is to use DHCP and apply IP reservation at the DHCP server to maintain a predictable address. Explanation of how to do that is beyond the scope of this article (I don’t know what DHCP server you’re using).

Okay, so now we have a system that will pick up an IP address as it roams from wired to wireless connections and back. Mount-at-boot NFS mounts will not handle this cleanly at all though, so the second part of the magic is to do away with static mounts in /etc/fstab entirely, in favour of using autofs to mount them on demand. Installation and configuration is documented admirably at the Gentoo site, so I will not duplicate that here.
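By way of example, an on-demand NFS map can be as simple as this pair of files. The server name, export path and mount paths here are hypothetical:

```
# /etc/autofs/auto.master
/mnt/nfs  /etc/autofs/auto.nfs  --timeout=60

# /etc/autofs/auto.nfs -- 'nas.local' is a placeholder server name
media  -fstype=nfs4,soft  nas.local:/mnt/tank/media
```

With this in place, /mnt/nfs/media is mounted the first time anything touches it and unmounted again after 60 seconds idle, which is exactly the behaviour a roaming laptop wants.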

That’s all there is to it! It seems trivial in retrospect, but I put up with all manner of half-working solutions on the way here, so I thought it worth documenting anyway.



  1. If you have multiple local ethernet ports it’ll assign the same address to all of them regardless of whether they’re even up, resulting in bizarre routing problems. The worst of these issues is that down interfaces receive a metric of 0, that is to say the most preferred, meaning that LAN routing fails because it tries to use interfaces that aren’t up!

Upgrading the Gentoo kernel when running ZFS on an NVME SSD

Gentoo user FearedBliss wrote the de facto guide for installing Gentoo to boot up from a ZFS partition, alongside a tool to build an init ramdisk that supports ZFS partitions. Unfortunately their init ramdisk does not support ZFS partitions on NVMe SSD drives (something to do with how they are represented in /dev/disk/by-id). Happily, top dude Guy Robot wrote a pair of excellent Gentoo/ZFS/NVMe SSD installation articles that incorporate the extra steps required to patch the init ramdisk to fix this.

Time passes, and the time comes to upgrade my kernel, meaning an init ramdisk rebuild is also required. Blending together the Gentoo Kernel Upgrade Guide with Guy Robot’s articles I have synthesized the following set of kernel upgrade steps. Please note this is an upgrade article only and assumes you already have a working system that boots from a ZFS partition. If you’re starting from scratch please follow Guy Robot’s articles linked above.
If you’re already there and genuinely only want to upgrade your kernel then please read on..

  1. Obviously enough, start with updating the relevant packages (sys-kernel/linux-firmware, sys-kernel/gentoo-sources, sys-fs/zfs, sys-fs/zfs-kmod; note that since v0.8 ZFS no longer relies upon sys-kernel/spl so this can be unmerged). In my case I just went for a complete emerge update @world:

    emaint -a sync
    emerge --update --deep --changed-use --keep-going @world

  2. Ensure that the file /etc/portage/savedconfig/sys-kernel/linux-firmware points to (or is) a file containing the list of only the firmware blobs to compile into the kernel (the remainder should be commented out or deleted). If you are not doing so already, I recommend setting the savedconfig USE variable for the sys-kernel/linux-firmware package so that future emerges retain your config; your set of required firmware is not likely to change from one kernel build to the next.
  3. The /usr/src/linux symlink points to the kernel source package directory. This needs to be updated to point to the location we wish to use for this build. Emerging the gentoo-sources package with USE=symlink would do this for us automatically, but there are other packages out there that need to know where to find our current source, so it’s best that we manage this manually. The Gentoo way is to use the eselect command:

    eselect kernel list
    eselect kernel set <the index number of the new source dir>

  4. We’ll now do some work in the kernel dir. Change into it explicitly to ensure picking the new location of the symlink:

    cd /usr/src/linux

  5. Copy the currently-running kernel’s config to the new kernel’s source dir to use it as a basis:

    cp /boot/kernels/<currently-running kernel dir>/config /usr/src/linux/.config

  6. We already have the NVME support enabled from the original install (see Guy Robot’s article for details), so next we need to update the config to match the new kernel version (i.e. bring in options that are new since the last time we built it). In the past I have used make olddefconfig to do this, which supposedly keeps your existing settings and selects sane defaults for any new settings but I got burned badly by it once, so now I use:

    make oldconfig

    which instead prompts you to select values for any new settings that exist in the new kernel version and not in your old one; with a bit of searching it’s possible to work out what the right option should be for your hardware.

  7. Processor manufacturers periodically release microcode patches to fix bugs. Gentoo includes several methods to identify which ones apply to your processor and to build them into the kernel. The generic instructions are here and the Intel-specific procedure is here, the final steps of which involve using…
  8. …the graphical kernel config tool – maybe in browsing through it you’ll additionally find options you now want to enable, or perhaps disable:

    make menuconfig

    And do remember to save your changes before quitting the tool.

  9. Right, we’re ready to build the kernel. The time this takes is very variable: it seems that small changes in config mean small build times; I have had it take between a couple of minutes and around an hour on my Intel i7-7600U. Set the -j parameter to the number of processor cores you have plus one:

    make -j5

  10. I’m honestly not certain that it’s necessary to rebuild ZFS against the new kernel sources, but since we are about to rebuild the ZFS kernel module it seems prudent:

    emerge sys-fs/zfs

  11. Rebuild the kernel modules (zfs-kmod and any others you’re using) against the new sources:

    emerge @module-rebuild

  12. Install the kernel and modules (they aren’t actually used until we reconfigure grub to know about them, so not to worry):

    make modules_install
    make install

  13. The modules get installed to /lib/modules/<kernel version>, and the kernel and its ancillary files will be installed to /boot. For neatness and the ability to roll back in case of problems we will keep the existing kernel alongside the new, so create subdirs in the form /boot/kernels/<kernel version dir> and move/rename vmlinuz, config and System.map from /boot to there.
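In concrete terms this step is just the following, using the same version string as in the grub example later in this howto (make install names the files vmlinuz-<version>, config-<version> and System.map-<version>):

```shell
# Adjust KVER to your actual kernel version string.
KVER=4.19.52-gentoo-FC.01
mkdir -p /boot/kernels/$KVER
mv /boot/vmlinuz-$KVER    /boot/kernels/$KVER/vmlinuz
mv /boot/config-$KVER     /boot/kernels/$KVER/config
mv /boot/System.map-$KVER /boot/kernels/$KVER/System.map
```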
  14. Now we need to build an init ramdisk that’ll import our ZFS pools at boot time. First step here is to use Fearedbliss’ tool to create one that is Gentoo/ZFS-friendly (N.B. I’m assuming you’ve already added the necessary Portage overlay to your repos.conf, as explained in Guy Robot’s article)

    emerge bliss-initramfs

    Since v8.1.0, this tool is configured via command-line options plus a settings file at /etc/bliss-initramfs/settings.json. If you are using the systemd init then you can go right ahead, but if you’re using OpenRC then edit the settings to change udevProvider to /sbin/udevd. The command line is as follows in either case:

    bliss-initramfs -k <new kernel version>

    When it has finished, copy the resulting initrd-<kernel version> file from /usr/src/linux to /boot/kernels/<kernel version dir>/initrd

  15. We now need to edit the configuration within the init ramdisk to have it mount our zpools explicitly. Follow the instructions in Guy Robot’s article starting from “We now need to modify the initramfs.” and up until “Now you have a custom initramfs…” :o)
    N.B. If you boot from a ZFS-formatted volume, you will need to repeat steps 14 to 18 every time the ZFS kernel module package (sys-fs/zfs-kmod) is updated, since this module is baked into the initramfs. If you do not do so, the userland tools (sys-fs/zfs) will track ZFS updates, but the kernel API will stay at whatever ZFS version was current when you last rebuilt your kernel.
  16. Next we need to let grub know about our new kernel so we can select it at boot. Edit /boot/grub/grub.cfg to set the timeout to 3 (temporarily until we’re happy with the new kernel) and add a menu item like the below, updating the kernel version number appropriately:

     menuentry "Gentoo - 4.19.52-gentoo-FC.01" {
         linux /@/kernels/4.19.52-gentoo-FC.01/vmlinuz root=rpool/ROOT/gentoo resume=/dev/nvme0n1p3 by=id elevator=noop quiet logo.nologo refresh
         initrd /@/kernels/4.19.52-gentoo-FC.01/initrd
     }

    A few notes about the kernel command line:

    • root= and by= are for the Bliss initramfs, your root value might differ
    • resume= is the hibernation partion
    • refresh is a Bliss parameter to instruct it to flush the ZFS cache at the next reboot


  17. Take a deep breath and test your work:

     reboot

  18. We do not want the ZFS cache flushed at every boot, so go back in and edit grub.cfg to remove the ‘refresh’  option.
  19. Once you are happy with your new kernel, you will want to ensure its source code sticks around because some other packages depend on it when building. To prevent it being unmerged when new versions are released, preserve the gentoo-sources package version by adding it to @world:

    emerge --noreplace sys-kernel/gentoo-sources:<version>

  20. You can then do the opposite for the ‘outgoing’ kernel, so that at the next emerge @world it will be removed:

    emerge --deselect =sys-kernel/gentoo-sources-<version>

  21. You’ll need to edit grub.cfg one more time to remove the menu entry for the old kernel version and set timeout back to 0 so that you don’t see the menu on booting.
  22. Finally, delete the old kernel’s subfolders under /boot/kernels and /usr/src. You’re done! Enjoy new kernel goodness :)