I have a 3.7GB snapraid.content file saved to a ZFS volume with a 1MB blocksize and block caching disabled (caching metadata only). I've noted that saving state to this file is fast (at drive speed), but verifying takes a very long time:
Verified /z/p04/snapraid.content in 314 seconds
The drive reads at ~200MB/s, so this should take ~20 seconds.
Watching the drive status (zpool iostat) while verifying, I can observe a steady read rate from the drive for the whole 5+ minutes. It appears that the snapraid.content files are being verified with a 64KB read block size, which forces the (uncached) zfs volume to re-read each 1MB block 16x in a row.
A similar impact might occur with snapraid.content files being verified from parity volumes formatted with largefile when operating in an uncached mode (similar 1MB block size).
I've also noted that fast SSD volumes take nearly a minute to verify with the same (1MB / uncached) conditions as above.
Recommend increasing the read request size beyond 64KB for verify operations, ideally to 1MB, as that corresponds to the commonly set block size for large parity volumes.
There are a few other read/write operations that I've noted to not 'line up' with 1MB block devices:
Loading state from snapraid.content (even from SSD this appears slower than it should be - likely 128K reads based on my observations).
When using blocksize 1024, data disk reads appear to be aligned; however, parity disks are seeing ~20% more write requests than expected, suggesting that parity file writes might not be aligned to the configured blocksize. Is there some header information in the parity files that might cause an offset here? If so, recommend padding to the next multiple of the configured blocksize to better align these writes.
Last edit: Allyn Malventano 2020-05-25