Setting a bandwidth limit for a BSD SSH connection

sshd has no feature to limit the bandwidth used by a given ssh connection, so scp or sftp transfers can consume as much bandwidth as the underlying connection can support. If this connection is shared, that can be undesirable – one transfer will always be trying to saturate the pipe, forcing other communications to contend for bandwidth. Conveniently, the FreeBSD ipfw firewall includes a traffic shaper named dummynet, which can be used to simulate a wide variety of network conditions, including artificially limiting the bandwidth for traffic matching a given ipfw rule. This can be leveraged to limit bandwidth for external ssh connections.

I ran into two issues when trying to get this working:

  1. Confusingly, the ipfw manpage still suggests that the ipfw command can be used to configure dummynet. In reality, all it can now do is create a dummynet pipe; dnctl is used to configure the pipe once created.
  2. The dummynet kernel module is neither compiled into the kernel nor loaded by default.

The script below incorporates solutions to both of the above challenges, defines the rules necessary to base-configure ipfw, and adds a 10Mbit/s bandwidth limit to outgoing traffic (which covers the server side of external ssh connections; as written the rule shapes all outgoing traffic, so narrow it to port 22 if only ssh should be limited):

ipfw -q -f flush
# since we have dynamic rules later in the set (those with keep-state), adding this here reduces rule-scanning once a dynamic rule is created
ipfw -q add 00100 check-state
# must always allow unfettered comms on the loopback address
ipfw -q add 00001 allow all from any to any via lo0
# Load the dummynet kernel module
kldload dummynet
# Use dummynet to create a virtual pipe between any local address and any external address, applied only for outgoing traffic
ipfw -q add 00102 pipe 1 ip from me to any out
# Set the maximum bandwidth for the pipe just defined
dnctl pipe 1 config bw 10Mbit/s

Note that the above only limits bandwidth. If you have needs to limit access by IP (ranges), then further ipfw rules will be required.
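As a sketch of how that might look (the subnet here is a placeholder – substitute your own range), the pipe rule can be narrowed so that only ssh replies to one address range are shaped, leaving all other outgoing traffic alone:

```shell
# Sketch only: shape traffic from the local ssh port (22) destined for
# one subnet. 192.0.2.0/24 is a placeholder address range.
ipfw -q add 00102 pipe 1 tcp from me 22 to 192.0.2.0/24 out
dnctl pipe 1 config bw 10Mbit/s
```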

With the above put into a file named, say, /usr/local/etc/ and set executable by root, the following two lines need to be added to /etc/rc.conf to start the firewall at boot time and install the rules:
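The two lines are FreeBSD's standard rc.conf firewall knobs; the script path shown here is an assumption, so substitute wherever you actually saved the rules script:

```shell
firewall_enable="YES"                       # start ipfw at boot
firewall_script="/usr/local/etc/ipfw.rules" # path is an assumption
```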


Setting up custom buttons for Kensington trackballs under Linux

The Kensington Expert Mouse and Slimblade trackballs have four buttons and a scrollwheel, and I have my personal preference for their layout. There is a utility program available for Windows and macOS to customise the button mapping, but no equivalent program exists for Linux. Fortunately it doesn’t need to, because under Linux there are multiple ways to alter the button-click events sent by input devices; I tried the not-so-new-any-more hotness that is udev before settling on a simple X Windows xorg configuration file.

Xorg approach

X Windows includes the Xorg configuration file system, usually located at /etc/xorg.conf.d. Following the somewhat standard Linux approach, filenames in this directory should be prefixed with a number between 0 and 100, which denotes the order in which they are processed. Cutting to the chase, I created a file named 50-trackball.conf, with the following content:

Section "InputClass"
    Identifier "libinput Kensington trackball"
    MatchDevicePath "/dev/input/event*"
    MatchVendor "047d"
    MatchProduct "Kensington Slimblade Trackball"
    Driver "libinput"
    Option "ButtonMapping" "3 2 1"
    Option "DragLockButtons" "2 3"
    Option "NaturalScrolling" "1"
EndSection

It is necessary to restart X Windows to have it re-read the contents of /etc/xorg.conf.d. The simplest way to achieve this is to log out and back in. Here’s what each line does:

  • Identifier "libinput Kensington trackball"
    This is simply a label for the entry; it can be anything.
  • MatchDevicePath "/dev/input/event*"
    Which /dev input event streams to monitor. I’ve configured this to ‘all of them’ to save me the grief of working out which one the trackball appears at. It might even change each time the trackball is reconnected, I don’t know.
  • MatchVendor "047d"
  • MatchProduct "Kensington Slimblade Trackball"
    These are used to decide whether the event data should be filtered. The vendor id can be determined using lsusb – it’s the first of the two hex IDs on the row for the Kensington trackball in that command’s output.
  • Driver "libinput"
    Which device driver I would like to have process events for this stream.
  • Option "ButtonMapping" "3 2 1"
    Okay, finally we’re getting to the meat of this. The syntax here is “which physical button should send left button events, which sends middle button events, and which sends right button events”. “3 2 1” flips the front-left (1) and front-right (3) buttons of the trackball to get left-handed behaviour, whilst assigning the back-left button (2) as middle button.
  • Option "DragLockButtons" "2 3"
    When using a trackball it is not comfortable to hold a button down whilst dragging, so this syntax means that a single click of the back-left button starts a drag at the current cursor position. Front-right terminates the drag at the new current cursor position.
  • Option "NaturalScrolling" "1"
    I detest the name of this feature. As if the direction in which we’ve moved our pointing device in order to scroll has been unnatural for decades?? Being left-handed, for me anti-clockwise scrolling to descend feels “right”; apparently that corresponds with Natural so far as the Slimblade xinput parameter name reporting is concerned.

The above configuration is fully set-and-forget; I use a KVM switch and yet the configuration is applied faithfully whenever the trackball is assigned to the Linux laptop.
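To verify that the configuration took effect, libinput's view of the device can be inspected with xinput (the device name here matches the MatchProduct string above):

```shell
# List input devices to find the trackball's id
xinput list
# Dump the properties derived for the device, including the button map
xinput list-props "Kensington Slimblade Trackball"
```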

Udev approach

This is what I tried before settling upon the Xorg configuration above (or rather, before this post set me straight). Udev sends events when new devices are plugged in, and has a built-in system to write rules to trigger actions based on those events. As mentioned above, I use a KVM switch and therefore thought it’d be necessary to reconfigure the trackball setup every time I switch it to the Linux laptop. I therefore wrote a udev rule that detected the connection and ran a script that would perform the configuration. Here is the rule (written to a file at /etc/udev/rules.d/80-trackball.rules):

ATTRS{idVendor}=="047d", ATTRS{idProduct}=="2041", OWNER="<your username>", ACTION=="bind", RUN+="<full path to configuration script>"

Once again, idVendor and idProduct can be determined using lsusb; OWNER is the user as which to perform actions. The script I ran is here; the gist is that it runs xinput and parses the output to find the dynamically-assigned usbId, then again uses xinput to apply the desired button configuration. In testing, the udev rule would fire reliably when the trackball was connected, but xinput would for some reason not list the trackball device. The exact same script worked fine when run interactively. It’s not a timing issue – it worked no better when I put a 5-second delay in the script to allow everything to notionally ‘settle’. I include this failed approach more as a reminder to myself what not to try when I move from X Windows to Wayland and no longer have Xorg input device configuration as an option :->

Backing up ZFS datasets from Linux to FreeNAS (TrueNAS Core)

A FreeNAS box (okay, TrueNAS Core as we should now call it) is an ideal place to perform set-and-forget, reliable backups of my Linux laptop, given that both devices now use the OpenZFS 2.x filesystem. This howto is not intended to be a ZFS primer – I am assuming the reader already understands ZFS terminology – so suffice it to say we will take scheduled snapshots of the entire current laptop disk state, then copy those snapshots to a location on the TrueNAS filesystem, available in a format ready for off-site backup using Duplicacy. For reasons of security and isolation I want to use a FreeBSD 12 jail as the target, rather than the FreeNAS operating system itself.

This howto explains how I achieved this aim, leveraging zfs_autobackup and FreeBSD’s excellent periodic utility. zfs_autobackup uses ZFS’ snapshotting and copying features to perform backup and archiving from one machine to another over ssh, optimized for speed, and also handles aging-out older snapshots over time. periodic is a feature-complete task scheduling tool. Note that I will use the names TrueNAS and FreeNAS interchangeably in this article, since I have written it on the cusp of the re-branding of FreeNAS to TrueNAS Core.

My starting point is the assumption that you’ve already created a standard FreeNAS jail that is going to perform your backups. In my case, this jail also handles offsite backups to Backblaze B2 using the wonderfully robust duplicacy tool. This is relevant only because my approach is that anything under /mnt is in scope for offsite backup, hence I desire that my laptop datasets appear mounted somewhere under /mnt once the backup has completed.

    1. Upgrade ZFS in the jail
      N.B. This step will become unnecessary once TrueNAS Core ships with FreeBSD 13.x.
      At time of writing, whilst TrueNAS Core 12 supports OpenZFS 2.0, the FreeBSD version on which it is based (12.2) does not. This means that jails do not include the OpenZFS 2.0 userland tools. To avoid weird version incompatibility errors it is strongly advised to copy the zfs binary (and dependent libraries) from the base TrueNAS OS into the jail. This situation is well explained by this TrueNAS support ticket, which also includes a handy script to perform the copying.
    2. Install zfs-autobackup
      First of all, install Python into the jail (pkg install python37), then follow the zfs-autobackup installation howto. This will guide you to install the software on the backup jail, set up the jail’s ability to ssh to the machine to be backed up, and apply the necessary custom zfs property to the datasets on the machine to be backed up. I would also highly recommend following the Performance Tips as part of installation, since they can make a noticeable difference to how long backups take. Remember, the first one will theoretically be the slowest one of all, so best to get it tuned from the start!
    3. Create destination dataset
      zfs-autobackup copies zfs datasets to zfs datasets, so create a zfs dataset on the FreeNAS host system pool for the backups to go to. Since this dataset is being used explicitly to store backups of snapshots, it might make sense during creation of the dataset to set the ‘Snapshot directory’ property to Visible. There is also the option to set the dataset read-only to avoid accidental alterations that could cause zfs-autobackup to throw ‘target altered’ errors on subsequent backups. Read-only datasets can still zfs recv data (which is what zfs-autobackup leverages) and alterations to dataset properties (zfs get/set) are allowed, but at the filesystem level no other process can alter its content.
    4. Assign dataset to jail
      So far so good, but by default jails cannot ‘see’ the FreeNAS system datasets – try a ‘zfs list’ to confirm – so we need to leverage a FreeNAS ZFS feature to change that. With the jail shut down, go into Edit.. Jail Properties, and tick allow_mount and allow_mount_zfs to allow root to mount and unmount zfs datasets the jail has assigned to it. Next we need to assign the destination dataset to the jail. In the jail’s Edit.. Custom Properties, tick jail_zfs and in the jail_zfs_dataset field enter the dataset name _without_ the pool name prepended. In the jail_zfs_mountpoint field enter the dataset name prepended by a / (the folder in which mounts are mounted – /mnt by default – is implicit and should not be included here). Now save the jail settings and start it. Keep in mind that once assigned to a jail, the dataset cannot be mounted by the FreeNAS host. It effectively now ‘belongs’ to this jail alone. You can immediately zfs list from within the jail and find it now available.
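As a concrete illustration (a sketch, using the example dataset med/carbon-backup from the backup command in the next step, where ‘med’ is the pool name), the jail properties would be:

```shell
# Jail Custom Properties (sketch, for dataset med/carbon-backup
# on pool 'med'):
jail_zfs=on
jail_zfs_dataset=carbon-backup      # pool name NOT prepended
jail_zfs_mountpoint=/carbon-backup  # the /mnt prefix is implicit
```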
    5. Run the initial full backup
      The following command should now be run from within the jail to perform the initial backup of your source machine to the dataset mounted to the jail. Remember that some of the performance tips listed in the zfs_autobackup installation article are not part of the backup command, so perhaps review those again at this point. The following command took ~12 hours to back up ~550GB over my 1Gbps LAN, achieving an average speed of 10MB/s:

      zfs-autobackup --ssh-source carbon.local remote med/carbon-backup --progress --verbose --clear-mountpoint --clear-refreservation --zfs-compressed --no-holds


      • --clear-mountpoint disables automatic mounting for the datasets being backed up to the jail, allowing them to be mounted in a more logical layout for a backup under /mnt on the target (see next step of this howto)
      • --clear-refreservation prevents the backed-up datasets reserving extra space for growth on the target system.
      • --zfs-compressed copies compressed datasets as-is and will speed up the transfer. No good if the target dataset uses a different compression algorithm to the source.
      • --no-holds does not enable the ‘hold’ flag on the target snapshots. Apparently this speeds up transfer, but be aware that it will allow you to delete the target datasets without the enforced ‘Are you sure?’ of having to run a zfs command to release the held snapshots first.
      • --progress --verbose outputs verbose progress info. --no-progress disables output of percentage progress and time remaining, which is preferable when run in an automated manner from a script. Note that --progress is the cause of a performance issue introduced in ZFS 2.0.1 and not yet resolved at time of writing.
      • there is a --debug switch that provides very helpful (but not excessive) output if you run into any errors.
      • by default zfs-autobackup will retain the last 10 snapshots, but will also retain each day for a week, each week for a month, and each month for a year. The end result is that one does not regain the disk space from deleting files until a year later! I have therefore also added the following parameter to my regular execution to only keep the last week’s worth of snapshots: --keep-source=7,1d1w
    6. Modify mount points on destination
      The backed-up datasets bring with them their mountpoints relative to the source filesystem, which will be relative to /mnt on the backup jail. If your source machine runs multiple datasets mounted in a filesystem hierarchy – such as /, boot, root and home, for example – these would end up mounted directly into /mnt. For neatness and clarity it would be preferable for them to appear in a subdirectory under /mnt, named for the source machine, hence the reason for disabling their automated mounting during the backup using --clear-mountpoint. With the initial backup complete, we can use zfs set mountpoint to one-time modify the mount points for all the destination datasets, then mount them. It is important to note that the mount points must not be below the mountpoint of the ‘parent’ dataset on the destination, nor can they be sub-directories of one another, else next time you run a backup you will get ‘destination has been modified since most recent snapshot’ errors. Here then is the script I used to modify the mount points and mount the backed-up datasets:

      zfs set mountpoint=/carbon-mounts/boot med/carbon-backup/boot
      zfs set mountpoint=/carbon-mounts/rpool med/carbon-backup/rpool
      zfs set mountpoint=/carbon-mounts/home med/carbon-backup/rpool/HOME
      zfs set mountpoint=/carbon-mounts/root med/carbon-backup/rpool/ROOT
      zfs set mountpoint=/carbon-mounts/gentoo-root med/carbon-backup/rpool/ROOT/gentoo
      zfs mount med/carbon-backup/boot
      zfs mount med/carbon-backup/rpool
      zfs mount med/carbon-backup/rpool/HOME
      zfs mount med/carbon-backup/rpool/ROOT
      zfs mount med/carbon-backup/rpool/ROOT/gentoo

      The destination is now ready for subsequent incremental backups, and will retain these new mount points across backups.

    7. Set up scheduled backups
      The backup is initiated from the server side, meaning we can leverage FreeBSD’s excellent periodic utility to set up a daily execution of zfs_autobackup. Setting up a schedule is as simple as adding a shell script in /etc/periodic/daily that runs the same zfs_autobackup command as used for the initial backup, but adding the --no-progress option since we’re not watching the output.
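A minimal sketch of such a periodic script follows; the filename is an assumption, and the command simply mirrors the initial backup run from step 5 with its display options swapped out:

```shell
#!/bin/sh
# /etc/periodic/daily/500.zfs-autobackup (filename is an assumption)
# Same backup as the initial run, minus interactive output, plus the
# shorter snapshot-retention schedule discussed in step 5.
zfs-autobackup --ssh-source carbon.local remote med/carbon-backup \
    --clear-mountpoint --clear-refreservation \
    --zfs-compressed --no-holds --no-progress \
    --keep-source=7,1d1w
```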
    8. Work around reading-snapshots-in-a-jail bug
      For reasons as yet unexplained, attempts to ls the content of .zfs/snapshot on datasets from within a jail return ‘Operation not permitted’. This is a problem for me since I assume my file-based off-site backup tooling will run into this bug. I don’t know whose bug it is (OpenZFS, FreeBSD or TrueNAS), but fortunately someone discovered a simple workaround – list the directory contents from the base TrueNAS shell and thereafter it’ll work from within the jail. I’ve set up a dead simple cron job in TrueNAS Core that runs every hour to ls the content of all the jailed datasets.
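The hourly job can be as simple as a loop over the jailed datasets’ snapshot directories. A sketch, where the path to the jail’s /mnt as seen from the TrueNAS host is an assumption (adjust to your own pool and jail name):

```shell
#!/bin/sh
# List every jailed dataset's .zfs/snapshot directory from the host so
# that subsequent listings work from inside the jail.
# JAIL_MNT (the jail's /mnt as seen from the host) is an assumption.
JAIL_MNT="${JAIL_MNT:-/mnt/med/iocage/jails/backup/root/mnt}"
for snapdir in "$JAIL_MNT"/*/.zfs/snapshot; do
    [ -d "$snapdir" ] && ls "$snapdir" > /dev/null
done
```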

Offsite backup with TrueNAS Core, Duplicacy and Backblaze B2

Just recording this here as a reminder to myself. I tried out a couple of backup tools to run offsite backup of my TrueNAS Core ZFS system dataset pool and settled on Duplicacy, since it seemed reliable and works with Backblaze B2. Setting up Duplicacy was as simple as downloading the command-line executable and executing it from /mnt/<system dataset pool name>, after which I followed this guide to configure B2 and two scheduled executions of Duplicacy: one to perform the backup and one to prune aged-out snapshots. The switches that dis/enable the backup are in /etc/defaults/periodic.conf and are named weekly_backup_enabled and monthly_backup_enabled.


OpenVPN watchdog (and Private Internet Access port forwarding) script for FreeNAS

When a VPN is running you tend to want to be sure it stays running, or else you lose access to what’s at the other end of the pipe. One way to ensure this is to have a script run periodically that checks that the VPN connection is up, restarting it if it is not. Running OpenVPN in a FreeNAS jail is a relatively common use case; this howto explains how to put such a script in place.

Whilst Private Internet Access do provide some Linux and Unix scripts for interacting with their service, the one best suited for use with FreeBSD (and therefore FreeNAS) is this one. It supports requesting a port forward from the remote end of the tunnel, which can be a useful advanced feature.

  1. The script can be copied to a suitable location within the relevant jail’s file system.
  2. Log into the jail and install the bash shell FreeBSD package (FreeBSD default is csh but this script expects bash): sudo pkg install bash
  3. Back out in the FreeNAS main menu, go for Task.. Cron Jobs (this is the location at least on FreeNAS 11.3)
  4. Set up a new cron job to run the script as an account that has the relevant rights over the jail in question and can use iocage (e.g. root). The script can be run every 15 minutes (a Custom Schedule, set with 0,15,30,45 in the Minutes field), using the following command line: iocage exec Name_of_Jail /usr/local/sbin/

By default, this cron job will email the address associated with the account as which it is executed if the script outputs anything to stderr (i.e. if there is any error from it), so you can be notified if the script has had to restart OpenVPN.

Per the script’s documentation, it will need to be able to read a file named /usr/local/etc/openvpn/pass.txt, which contains the login credentials for your PIA account.

With this watchdog/port mapping script in place, score some extra privacy points by setting up your firewall to disable internet connectivity to any route but the VPN tunnel. Follow Step 7 of this OpenVPN installation tutorial.

Lenovo Thinkpad X1 Carbon HDMI audio output under Linux

If the Lenovo forums are anything to go by, getting audio to come out of the TV when connecting a Lenovo laptop via HDMI is non-obvious for many folks, regardless of the operating system in use. Under KDE Plasma with PulseAudio, the volume control applet does not add an output device for HDMI audio as it does when connecting a USB audio device, so there is no GUI method to choose which output device to switch to. Instead, the applet’s Configuration pane can be used to switch the Built-in Audio profile from ‘Analog Stereo Duplex’ to ‘Digital Stereo (HDMI) Output’. That’s it, that’s all – it doesn’t auto-switch, you have to do it for yourself. The rest of this article might be of interest to command-line adherents, plus it has a couple of useful kernel config tips to ensure your HDMI support is compiled in in a usable manner.

Thanks to Thorsten’s 2017 response on this AskUbuntu post, the solution (at least for my Lenovo X1 Carbon 5th generation) is to use the pactl command line tool. This command lists the cards available on the system in order to obtain the correct card #:

pactl list
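The full listing is verbose; the card # is easier to spot with the short variant, which prints one tab-separated line per card:

```shell
pactl list short cards
```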

With the card # identified, the following command can be used to switch to the HDMI output (the numeric value is the card #):

pactl set-card-profile 1 output:hdmi-stereo+input:analog-stereo

The following command switches back to the built-in speakers, though I found it switched back automatically when I unplugged the HDMI cable:

pactl set-card-profile 1 output:analog-stereo+input:analog-stereo

As a final point, all this will only work if you have enabled HD Audio in your kernel settings and compiled the HD audio codecs as modules. Why? Well:

  1. The X1 Carbon uses an Intel HD audio solution, which is part of the Intel integrated video chipset.
  2. The Intel integrated video is supported by the i915 driver. Since this depends on a firmware blob it must be built (or rather, it is wiser to build it) as a module rather than compiled into the kernel. As a consequence, support for the audio parts of this chipset must also be built as modules; otherwise, though the kernel will compile without errors, dmesg will throw “Unable to bind the codec” errors and you’ll have no HDMI audio support at all.
  3. Intel have used a range of codecs from various manufacturers in the various chipsets supported by the i915 driver, so it is simplest to compile all the codecs as modules and let udev pick the right one for you at boot time.
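In kernel-config terms, that boils down to something like the following (a sketch; symbol names per recent mainline kernels, with the codec list abbreviated – enable all the HDA codec options as modules):

```shell
# i915 as a module, with HD-audio and its codecs also modular
CONFIG_DRM_I915=m
CONFIG_SND_HDA_INTEL=m
CONFIG_SND_HDA_CODEC_HDMI=m
CONFIG_SND_HDA_CODEC_REALTEK=m
```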

Keeping a Gentoo house neat and tidy

Semi-automating emerge @world update

tl;dr Use this interactive script on a weekly basis to keep your Gentoo installation up to date and avoid intractable library version conflicts.

Gentoo has a fantastic source-based package management solution; however, there are a number of housekeeping details involved in keeping an installation tidy, to avoid increasing issues with conflicts when upgrading large volumes of packages at once. The so-called @world update (upgrading all installed packages at once) is one such case, and it ought to be done regularly to avoid building up too long a list of pending updates, which itself increases the risk of conflicts between the differing version requirements of each package needing an upgrade.

The set of steps that need to be run for a clean, housekept @world update are as follows:

  1. Update the local repository of package versions.
  2. Update all packages with updates in the @world set.
  3. Resolve conflicts that prevent packages updating, repeat steps 2 & 3 until all conflicts are resolved.
  4. Uninstall (unmerge in Gentoo parlance) packages that are no longer referenced as dependencies for any package in @world.
  5. Double-check whether a package has been broken due to one or more dependencies having been updated.
  6. Recompile any packages that use a library that has been updated (step 2 already does this to an extent; it is unclear to me if this step is still required).
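Mapped to the usual tools, the steps above correspond roughly to the following sequence (a sketch; revdep-rebuild comes from the gentoolkit package, and each tool's manpage covers further switches):

```shell
emerge --sync                                  # 1. update the package tree
emerge --ask --update --deep --newuse @world   # 2. update the @world set
# 3. resolve any reported conflicts, then re-run step 2
emerge --ask --depclean                        # 4. unmerge orphaned dependencies
revdep-rebuild                                 # 5. check for packages broken by updated deps
emerge --ask @preserved-rebuild                # 6. rebuild users of updated libraries
```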

There seem to be various reasons why Gentoo doesn’t have a combined tool that performs all of the above steps, not least among which is that the approach and individual tools for each step have evolved and been replaced over time. My take is that Gentoo is not for the Linux neophyte, and that those who use the distro prefer the flexibility of keeping these tools separate and properly understanding what each of them does.

I tend to do an @world update once a week, so I have found it useful to have a script that runs all the various commands in order, logging all the output for reference. I’ve found it always necessary to review this output, both because many packages output important messages that need to be followed up, and because it can contain useful clues when one does run into blocking conflicts that need to be unpicked. The script I’ve developed is also a dumping ground for the various tips and tricks I have picked up from across the web for resolving the various forms of conflict that can occur.

The script can be found here at my Github repository; it is well-commented to explain the purpose of each of the command switches, and offers break-out pauses after each command is run just in case there’s the need to review what’s been done so far or indeed stop processing if something significant isn’t right.

Installing the EQ10Q equalizer on Gentoo

EQ10Q appears to be the absolute best quality equalizer available for Linux. I have tried the one that comes with PulseEffects, but found that it introduced occasional audio glitches; no good for audiophile use.

EQ10Q is an LV2 plugin, so I start from the assumption that you already have JACK installed (doing so is trivial so I will not include instructions here). The following set of steps are what I used to get EQ10Q working:

  1. Emerge dev-cpp/gtkmm, since eq10q depends upon it for its UI.
  2. Download and install eq10q (do not use the ebuild available in the darkelf repo).
  3. Attach the audio-overlay repo (layman -a audio-overlay).
  4. Emerge media-sound/carla (requires accepting the live ebuild in package.use, plus enable the gtk2 flag because the eq10q UI uses GTK2).
  5. Run carla and add /usr/local/lib/lv2 to where carla looks for LV2 plugins.

Sync music playlists to Android


MTP sucks hard, and none of the free linux music players seem to have a reliable method for syncing a library subset (such as a playlist) to another device. rsync plus SSH plus ADB (and a little light shell scripting) to the rescue!

the problem

I have music in my library that is not and will never be available on streaming services (well, not until it is cost-effective to stream Plex from the home that is ;o). Consequently I need to have that music copied onto my phone. The set of files is a subset of my entire library and is around 25GB. Wireless syncing is all well and good, but I don’t need the sync to be wireless and wired would be way faster.

the solution

write a shell script that takes the path of your music library and a playlist file as input, then rsync’s the files listed in the playlist over SSH to a localhost port. How in hell is that useful? Well, rather conveniently, ADB allows you to map a local port to one on your connected phone:

adb forward tcp:2222 tcp:2222

So as long as you have an SSH server running on your phone on that port (I use SSHelper) then the files will end up in SDCard/Music on the phone (this path is hard-coded in the script but obvious so can be changed). I get roughly 50MB/s transfer rate, which seems none too shabby; the whole 25GB playlist in around 11 minutes.
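The core of the script is little more than turning the playlist into an rsync file list. A minimal sketch (the script name, temp path and destination layout are assumptions; it presumes the playlist contains paths relative to the library root, and that the port is forwarded as above):

```shell
#!/bin/sh
# Usage: sync-playlist.sh <library-root> <playlist.m3u>
# Strip m3u comment/metadata lines (those starting with '#'), then feed
# the remaining relative paths to rsync over the ADB-forwarded ssh port.
LIBRARY="$1"
PLAYLIST="$2"
grep -v '^#' "$PLAYLIST" > /tmp/playlist-files.txt
rsync -av --files-from=/tmp/playlist-files.txt \
    -e "ssh -p 2222" \
    "$LIBRARY" localhost:SDCard/Music
```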

Scanning on Gentoo using a Lexmark MX310dn

SANE, the de facto Linux scanning solution, has only one open-source driver (backend, in SANE parlance) that supports Lexmark scanners, and only a small number of USB models at that. Lexmark have released a closed-source SANE backend, available at their support site if you search for downloads related to the product you have. Unfortunately they do not offer a portage ebuild, only RPM or DEB packages, so it takes a little more effort to get it installed and, at least in my case, needed a few further tweaks to get it operating.

Installing the Lexmark SANE backend

  1. Visit the Lexmark support site and search for your scanner.
  2. On the resulting page, in the Downloads tab, select the most recent Debian release.
  3. Download the offered .deb file (and any firmware update available).
  4. Put the .deb in a folder by itself, then follow ‘Option 2 : manual installation’ of the Arch Linux instructions to install it.


I then found that Xsane would launch and promptly crash, and running the simplest SANE command:

scanimage -L

would throw a segmentation fault. Great. In my case the fix was to simply disable USB scanner support (no great loss), as follows:

  1. Edit the Lexmark backend configuration file:

sudo nano /etc/sane.d/lexmark_nscan.conf

  2. Find the following variable and set it as below:


  3. I also found it useful to make the following setting, else the scanner showed a second entry in the list for the Lexmark:


Scanner not supported??

So huzzah, now Xsane will open and offer the Lexmark scanner in the list of those detected. When I tried to scan, however, after a pause I got a window saying that my scanner model wasn’t supported. This seemed highly unlikely. Trying a scan from the command line:

scanimage -d lexmark_nscan > ~/test.jpg

showed me the problem – the scanner just wasn’t listening: ‘Connection refused’.

In my case, it turned out I had two problems; the printer’s firmware needed an update (I reckon it was around 2017 vintage beforehand), and I needed to enable the HTTP server on the printer (dig your way down into the printer’s settings page in the TCP/IP settings). I also enabled HTTPS support (further down the list in the same menu).

It is a shame that the HTTP server is required, given the increased attack surface (printers are hardly known for their strong security after all!) but on the other hand it did provide the most convenient method to update the firmware.

With these two things done I was off to the races. The method of operation is distinctly bizarre (you’ll see), but if you follow the steps it works fine. I also found that if you use the auto document feeder, when you click Scan it runs all the pages through at once but only records the first sheet in a multi-document project. Subsequent clicks of the Scan button however would bring in each successive page.