In the previous test, 10GB files were used since that is on the order of the typical file size seen on GridPP storage behind DPM. However, since the machine used for the tests has 24GB of RAM, which is larger than the file size, the files could still have been in the cache.
For the following test, the same machine was used as in the above-mentioned post, but with some changes to the configuration:
- Both raid controllers, a PERC H700 and a PERC H800, were reset before any tests were run.
- The element size defined in the controllers was 8KB for the previous test; now 64KB is used.
- The test file size was increased to 30GB to be larger than the total RAM in the machine.
- All write and read tests were repeated 10 times to see how large the variation in the measured rates is.
- On the H700, 11x2TB disks are used as raid6/raidz2 + 1 hotspare, mounted under /tank-2TB.
- On the H800, 16x8TB disks are used as raid6/raidz2 + 1 hotspare, mounted under /tank-8TB (a sketch of how such a raidz2 pool can be created follows this list).
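For the ZFS side, a raidz2 pool with a hot spare can be created roughly as follows; this is only a sketch, and the device names and disk count are placeholders, not the actual layout on this machine:

# illustrative device names: 10 disks in a raidz2 vdev plus one hot spare
zpool create tank-2TB raidz2 sdb sdc sdd sde sdf sdg sdh sdi sdj sdk spare sdl
# by default a pool named tank-2TB is mounted under /tank-2TB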
The controller cache was again set to "write through" instead of the default "write back".
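For reference, on LSI-based controllers such as these PERC cards the cache policy can typically be changed with the MegaCli utility; the exact invocation below is an assumption and was not part of the original setup notes:

# assumed MegaCli command: set all logical drives on all adapters to write through (WT)
MegaCli64 -LDSetProp WT -LAll -aAll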
All writes and reads were performed 10 times, to 10 different files, to reduce the possibility that anything was left over in memory or in the controller cache. The results were then averaged per read/write operation and per controller. "dd" was used to generate and read the files with commands like:
time (dd if=/dev/zero of=/tank-2TB/test30G-$i bs=1M count=30720 && sync)
time (dd if=/tank-2TB/test30G-$i of=/dev/null bs=1M && sync)
The averaged results are given as the "time" value because the time reported includes the "sync" operation and therefore makes sure everything has been written to disk, while "dd" only reports on its own process, which does not mean the data is physically on disk already; it can still be cached in memory. The minimum and maximum values are also reported to give an idea of the range over the 10 trials.
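A minimal sketch of how the 10 runs could be scripted is shown below; the log file names are only examples:

# write pass: 10 different 30GB files, the timing includes the final sync
for i in $(seq 1 10); do
  { time (dd if=/dev/zero of=/tank-2TB/test30G-$i bs=1M count=30720 && sync) ; } 2>> dd-write.log
done
# read pass: done after all writes, so 300GB have passed through the 24GB of RAM
for i in $(seq 1 10); do
  { time (dd if=/tank-2TB/test30G-$i of=/dev/null bs=1M && sync) ; } 2>> dd-read.log
done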
H700
ZFS write: 56s (549MB/s) (min: 52s, max: 59s)
Hardware raid write: 305s (101MB/s) (min: 265s, max: 342s)
ZFS read: 74s (415MB/s) (min: 56s, max: 83s)
Hardware raid read: 156s (197MB/s) (min: 147s, max: 159s)
H800
ZFS write: 28s (1097MB/s) (min: 28s, max: 30s)
Hardware raid write: 147s (209MB/s) (min: 125s, max: 154s)
ZFS read: 30s (1024MB/s) (min: 30s, max: 34s)
Hardware raid read: 29s (1059MB/s) (min: 29s, max: 31s)
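The quoted rates simply follow from dividing the 30GB (30720MB) file size by the averaged time, e.g. for the ZFS write on the H700: 30720MB / 56s ≈ 549MB/s.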
In conclusion, the H800 performs better than the H700 in both configurations, while ZFS clearly performs better than the hardware raid configuration. Therefore, all new installations at the Edinburgh site will use ZFS for the administration of the GridPP storage space. In the next blog post, I will show how to set up the ZFS storage part for GridPP usage.