Currently, if a hard drive is read using conv=sync with a block size that does not evenly divide the drive's size, the final partial block is padded with NUL bytes, so the input and output files end up as different sizes. This makes tasks like hashing more complicated. The patch attached below attempts to fix this by utilizing the size probe functionality.
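Since dcfldd inherits this behavior from dd, the size mismatch can be reproduced with plain dd (the file names in.bin and out.bin are arbitrary stand-ins):

```shell
# Create a 3072-byte input file: not a multiple of the 2048-byte block size below.
dd if=/dev/zero of=in.bin bs=1024 count=3 2>/dev/null
# conv=sync pads the final partial block with NULs, so the output grows to 4096 bytes.
dd if=in.bin of=out.bin bs=2048 conv=sync 2>/dev/null
wc -c < in.bin   # 3072
wc -c < out.bin  # 4096
```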
For example, create a test file whose size (3072 bytes) is not a multiple of a 2048-byte block:

dcfldd if=/dev/zero of=test bs=1024 count=3
With the patch, the following pipeline produces the same digest as running md5sum directly on the test file:
dcfldd if=test bs=2048 conv=sync sizeprobe=if | md5sum
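The effect of the patch can be approximated with standard tools: truncate the conv=sync output back to the real input size before hashing. This is only a sketch of the idea (test.bin stands in for the test file above; the patch itself does the truncation inside dcfldd using the sizeprobe result):

```shell
# Stand-in for the example above: a 3072-byte file read with bs=2048 conv=sync.
dd if=/dev/zero of=test.bin bs=1024 count=3 2>/dev/null
# Without truncation, the padded stream hashes differently from the original file.
padded=$(dd if=test.bin bs=2048 conv=sync 2>/dev/null | md5sum)
# Truncating the stream back to the input's real size restores the matching hash,
# which is roughly what sizeprobe-based truncation accomplishes in the patch.
truncated=$(dd if=test.bin bs=2048 conv=sync 2>/dev/null | head -c "$(wc -c < test.bin)" | md5sum)
original=$(md5sum < test.bin)
echo "$truncated"
echo "$original"
```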
More realistically, imagine reading a drive of size 120000000000 bytes. To avoid appending NUL bytes to the end of the image, it would be necessary to read with bs=4k (or a smaller size that still divides the drive evenly). With the patch, the drive could instead be read with bs=32k or another, more efficient block size.
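The arithmetic behind that example can be checked directly in the shell (the drive size is the one quoted above; the padding figure is what conv=sync would append without the patch):

```shell
size=120000000000                  # example drive size in bytes
echo $(( size % 4096 ))            # 0: 4k blocks divide the drive evenly, no padding
echo $(( size % 32768 ))           # 12288: the last 32k read would be partial
echo $(( 32768 - size % 32768 ))   # 20480: NUL bytes conv=sync would append
```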
If more data is read from the drive than sizeprobe predicted, the extra data is not truncated.