26 January 2017

File system tests

Since there is interest in file system tests, I put the script I used for the ZFS/Ext4/XFS tests on a web server. If you test file systems for Grid storage purposes, feel free to give it a try.
It can of course also be used by anyone else, but note that this test does not do any random reads/writes.

In general, it would be good to run this test (or any other test) under 3 different scenarios when using RAID systems (for ZFS, a sketch of how to produce scenarios 2 and 3 follows this list):

  1. normal working RAID system
  2. degraded RAID system
  3. rebuilding RAID system
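
For ZFS, scenarios 2 and 3 can be produced by hand. A minimal sketch, assuming a pool named tank with a member disk sdb (both placeholders):

zpool offline tank sdb   # scenario 2: run the tests against the degraded pool
zpool online tank sdb    # scenario 3: the disk resilvers; run the tests during the rebuild
zpool status tank        # shows the resilver progress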


The script needs 2 areas:

  1. one where you have files that are read during the read tests, and
  2. one where you write files to. This write area should have no compression enabled since the writes come from /dev/zero (see the sketch after this list).
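
On ZFS, such an uncompressed write area can be a dedicated dataset. A minimal sketch, again assuming a pool named tank:

zfs create -o compression=off tank/writetest
zfs get compression tank/writetest   # verify that compression is really off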

By default it does reads over all specified files, writes to large files, and writes to small files.
For the reads, it first does a sequential read of all files, and in a second pass it reads the same set of files in parallel.
For the writes, it first does sequential writes, and in a second pass it writes to different files in parallel. This is the same for the writing of large files and of small files.
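
As an illustration of a parallel read pass (this is not the script itself), a minimal bash sketch that reads every file from a list concurrently:

while read -r f; do
    dd if="$f" of=/dev/null bs=1M &   # one background reader per file
done < filelist.txt
wait                                  # block until all parallel reads finish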

After each read/write pass there is a cache flush, and each write issues a file sync after the file is written, to make sure that the measured time really includes writing the file to disk.
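
The script's exact commands may differ; a minimal sketch of one synced write followed by a cache flush, assuming a placeholder write area /writearea:

dd if=/dev/zero of=/writearea/testfile bs=1M count=10240 conv=fsync   # write 10 GB and fsync before dd exits
sync
echo 3 > /proc/sys/vm/drop_caches   # drop the page cache between passes (needs root)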


The script needs 3 parameters:

  1. location of a text file that contains the file names, including absolute paths, of all files that you want to include in the read tests
  2. a name used as a description for your test; it can be used to distinguish between different tests (e.g. ZFS-raidz2-12disks or ZFS-raidz3-12disks)
  3. absolute path to an area where the write test can write its files to; this area should have no compression enabled

The parameters inside the script, like the number of parallel reads/writes and the file sizes, can easily be configured. By default, about 5 TB of space is needed for the write tests.
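
A hypothetical invocation might look like this (fstest.sh and the paths are placeholders, not the script's real names):

find /tank/data -type f > /tmp/filelist.txt   # build the list of files for the read tests
./fstest.sh /tmp/filelist.txt ZFS-raidz2-12disks /tank/writetest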

The script itself can be downloaded here.

ZFS auto mount for CentOS 7

When testing ZFS installs on servers running CentOS 7.3, it can happen that ZFS is not available after a restart. After some testing, this seems to be related to systemd and probably affects other systemd-based Linux distributions too.

I used ZFS installs with different versions of ZFS on Linux. After looking into the system setup, I noticed that the ZFS systemd units are simply disabled by default. Doing the following solved the problem on the machines I tested:

systemctl enable zfs.target                 # pull in the ZFS units at boot
systemctl start zfs.target
systemctl enable zfs-import-cache.service   # import pools from the zpool cache file
systemctl enable zfs-mount.service          # mount the ZFS datasets
systemctl enable zfs-share.service          # share datasets that have sharenfs/sharesmb set

This solved all auto mount issues for me on the CentOS systems.
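
After a reboot, it is easy to check that the pools were imported and the datasets mounted:

zpool status   # all pools should show up and be ONLINE
zfs mount      # with no arguments, lists the currently mounted datasets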

Note: At least when using the latest version 0.6.5.8, one can also use the following command, as explained on the ZFS on Linux web page:

systemctl preset zfs-import-cache zfs-import-scan zfs-mount zfs-share zfs-zed zfs.target


Everyone who is upgrading to the latest version should also have a look at the ZFS on Linux web page, since the repository address has changed. While it should have been updated automatically, if you haven't run any updates for some months, the system can't pick up the new repository automatically.
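
To check whether a machine already pulls the zfs packages from the new repository, one can look at where the installed package came from; a small sketch:

yum clean all                        # discard cached metadata from the old repository
yum info zfs | grep -i "from repo"   # shows the repository the installed zfs package came from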