From: Les M. <les...@gm...> - 2013-11-01 19:24:25
On Fri, Nov 1, 2013 at 1:46 PM, <bac...@ko...> wrote:
> This is probably not his *primary* issue since the pool is (only)
> ~3T. But when he started talking about file read errors, I was
> concerned that if the pool file reads were being truncated, then
> there likely would be pool duplicates, since the byte-by-byte
> comparisons would fail for a given partial-file md5sum, leading to
> extra chain creation...

The read errors were in the RStmp file, which is supposed to be the
uncompressed copy of a large compressed file so that rsync can seek
around in it looking for a match. I wonder if there could be a file
(huge database, mailbox, etc.) that compresses well enough that, even
with the safety factor of backups not starting when the filesystem is
95% full, the uncompressed copy won't fit. Or maybe a sparse dbm-type
file where the original doesn't actually allocate the space its length
would indicate.

--
Les Mikesell
les...@gm...
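P.S. For anyone following along, here is a rough Python sketch of the
pooling logic described above: hash a candidate on a partial-file
basis, then confirm byte-by-byte against each file already in that
hash's chain. This is not BackupPC's actual code (the partial_md5,
same_contents, and pool_file helpers are made-up names, and the digest
scheme is only illustrative); it just shows why a truncated comparison
read would make the match fail and append a redundant chain entry.

  import hashlib, os, shutil

  def partial_md5(path, chunk=1 << 20):
      """Illustrative partial-file digest: first 1 MiB plus the length."""
      h = hashlib.md5()
      with open(path, "rb") as f:
          h.update(f.read(chunk))
      h.update(str(os.path.getsize(path)).encode())
      return h.hexdigest()

  def same_contents(a, b):
      """Byte-by-byte comparison; a short read here is what breaks pooling."""
      with open(a, "rb") as fa, open(b, "rb") as fb:
          while True:
              ba, bb = fa.read(1 << 16), fb.read(1 << 16)
              if ba != bb:
                  return False
              if not ba:
                  return True

  def pool_file(new_file, pool_dir):
      """Add new_file to the pool, extending the chain only on a real mismatch."""
      digest = partial_md5(new_file)
      i = 0
      while True:
          candidate = os.path.join(pool_dir, f"{digest}_{i}")
          if not os.path.exists(candidate):
              # new chain entry (copied here for simplicity; the real pool
              # links files rather than copying them)
              shutil.copy2(new_file, candidate)
              return candidate
          if same_contents(new_file, candidate):
              return candidate        # reuse existing pool file
          i += 1                      # collision, or a false mismatch from
                                      # a truncated read: a duplicate chain
                                      # entry gets created on the next pass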