December 27, 2019

Logs Mirror, Cache, & Snapshots in OpenZFS Filesystem on Linux

Another big advantage of running ZoL/OpenZFS on Linux is that, as the sysadmin, you can attach special-purpose devices to a pool: a mirror of two SCSI drives to hold the pool's log, and a cache made up of fast SSD drives. The dedicated log device takes synchronous writes off the main pool, which helps to balance the load on the ZFS pool and ensures that committed synchronous writes survive a crash or power failure. The cache devices extend the ARC (Adaptive Replacement Cache): frequently read blocks that no longer fit in RAM spill over onto them, speeding up reads. To keep the terminology straight: the read cache is the L2ARC (Level 2 Adaptive Replacement Cache), the synchronous write log is the ZIL (ZFS Intent Log), and a dedicated device that holds the ZIL is the SLOG (Separate Log Device).

To prepare for the creation of the logs mirror, I added two additional 10G SCSI drives to the machine; these are designated /dev/sde and /dev/sdf, respectively. Next, I ran the following command in the Terminal to add the logs mirror to the pool (named zfspool, matching the transcripts below):
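# zpool add zfspool log mirror /dev/sde /dev/sdf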

Mirroring the log device ensures that the ZIL survives the failure of either log drive, so no committed synchronous write is lost. Creating the logs mirror is just the first step: it establishes the SLOG, a fast persistent landing area for ZFS writes on their way to disk. Now that we have created the logs mirror, the second step is to create the ZFS cache, the devices that make up the L2ARC (Level 2 Adaptive Replacement Cache). To create the cache, I added two high-speed SSD drives, /dev/sdg and /dev/sdh, and then ran the following command:

# zpool add zfspool cache /dev/sdg /dev/sdh

To check the status of the zpool at this point, we can rerun the zpool status command. The data vdevs (elided below) will reflect however the pool was originally built, and labels such as mirror-1 may differ on your system; the important parts are the new logs and cache sections:
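root@debian-10-desktop-vm:/# zpool status zfspool
  pool: zfspool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        zfspool     ONLINE       0     0     0
          ...
        logs
          mirror-1  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
        cache
          sdg       ONLINE       0     0     0
          sdh       ONLINE       0     0     0

errors: No known data errors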

The sizes of the log and cache devices should be determined by the workload on your system rather than by any fixed rule. The SLOG only has to absorb a few seconds' worth of synchronous writes (ZFS flushes transaction groups to the main disks roughly every five seconds), so it can be fairly small; the L2ARC is most useful when it is large enough to hold your read working set. Monitoring the pool over time will give you a better indication of the sizes these devices actually need to be. Use high-speed SSDs or NVMe M.2 drives rather than traditional SCSI drives, since the log and cache sit directly in the write and read paths and slow devices there quickly become a performance bottleneck.
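A convenient way to do that monitoring is zpool iostat with the -v flag, which breaks the read and write statistics out per device, including the log and cache devices; the trailing number makes it refresh every five seconds:

root@debian-10-desktop-vm:/# zpool iostat -v zfspool 5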

With the ZIL housed on the mirrored SLOG devices in the configuration above, synchronous writes no longer compete with regular pool I/O, and that is what balances the load on your system under OpenZFS.

ZFS snapshots are point-in-time images of the ZFS filesystem, taken across the entire pool or for individual datasets within it. Snapshots are read-only and immutable, which makes them great for backups: you can send a snapshot to another host (or to another pool on the same host) rather than copying files one by one. Note, however, that a snapshot kept on the same host is not by itself a backup. On Linux, snapshots appear as hidden directories under .zfs/snapshot inside the mountpoint of the dataset they were created in. A dataset can be rolled back to a snapshot, undoing changes made after the snapshot was taken, as I will demonstrate below. Let's create a snapshot of the zfspool/apps dataset:
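# zfs snapshot zfspool/apps@271220191653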

The snapshot name 271220191653 encodes the moment of creation, so this snapshot represents the state of the zfspool/apps dataset at 16:53 on 27 December 2019.
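Because snapshots live under the hidden .zfs directory, you can browse a snapshot's contents read-only at any time. With the default snapdir=hidden setting, the directory does not show up in an ls of the mountpoint, but you can enter it directly:

root@debian-10-desktop-vm:/zfspool/apps# ls /zfspool/apps/.zfs/snapshot
271220191653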

After creating this snapshot, I used vim to create a file called file1 in the /zfspool/apps directory, the mountpoint of the dataset I just snapshotted:

root@debian-10-desktop-vm:/zfspool/apps# ls -lh
total 1.0K
-rw-r--r-- 1 root root 15 Dec 27 16:55 file1

Now, if I roll back to the snapshot using the zfs rollback command, this file should no longer exist, since it was created after the snapshot zfspool/apps@271220191653 was taken:

root@debian-10-desktop-vm:/zfspool/apps# zfs rollback zfspool/apps@271220191653
root@debian-10-desktop-vm:/zfspool/apps# ls -lh
total 0
root@debian-10-desktop-vm:/zfspool/apps#

And, indeed, the rollback has eliminated the file that was created earlier. Snapshots allow you to create a mark in time for your system. If you wish to list the snapshots that exist in the ZFS filesystem, you can do so with the zfs list command (the AVAIL and REFER figures below are illustrative and will vary):
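root@debian-10-desktop-vm:/zfspool/apps# zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
zfspool/apps@271220191653  17.3K      -    24K  -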

Snapshots do take up space on your system, but only for blocks that have changed since the snapshot was taken; as you can see here, this snapshot is using only 17.3K.

And, finally, if you want to move a zpool to another machine, you first need to step out of any directory inside the pool's mountpoint (the export will fail while the filesystem is busy), and then export the pool:

root@debian-10-desktop-vm:/zfspool/apps# pwd
/zfspool/apps
root@debian-10-desktop-vm:/zfspool/apps# cd ../../
root@debian-10-desktop-vm:/# pwd
/
root@debian-10-desktop-vm:/# zpool export zfspool
root@debian-10-desktop-vm:/# zpool status
no pools available
root@debian-10-desktop-vm:/#

As you can see above, I moved up the directory tree two levels to the / directory and then exported the pool. Running the zpool status command afterwards shows that no pools are available, because the pool has been exported. Exporting does not copy the data anywhere: it flushes outstanding writes, marks the pool as exported, and detaches it from the host, leaving the data on the drives themselves. Those drives can then be moved to another machine, where zpool import will find them (by default it scans /dev; the -d switch points it at another device directory, such as /dev/disk/by-id). In this example, I am going to turn right around and import the zpool back on the same machine, using the zpool import command like this:
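root@debian-10-desktop-vm:/# zpool import zfspool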

The zpool zfspool has been returned, and running zpool status on the pool shows it online and healthy, just as before the export.

Setting up Quotas & Reservations in OpenZFS - Part 5