From: dan <dan...@gm...> - 2009-09-02 21:03:29
On Tue, Sep 1, 2009 at 12:05 AM, Les Mikesell <les...@gm...> wrote:
> Jim Leonard wrote:
>> Les Mikesell wrote:
>>> With backuppc the issue is not so much fragmentation within a file as
>>> the distance between the directory entry, the inode, and the file
>>> content. When creating a new file, filesystems generally attempt to
>>> allocate these close to each other, but when you link an existing file
>>> into a new directory, that obviously can't be done, so you end up with
>>> a lot of long seeks when you try to traverse directories picking up
>>> the inode info.
>>
>> For some filesystem implementations, this is true. For others, it is
>> not, due to judicious use of caching, preloading, and lookahead.
>
> Why would any filesystem 'judiciously' cache things for unlikely use
> patterns?

Specifically, to serialize reads and writes. ZFS does this heavily. Simply
put: why make 10 trips when you can wait 1 second and make 1 trip?

I think that backuppc will naturally 'fragment' files, but not blocks. It
is true that when writing a backup, all those files are likely to be
contiguous. But as that backup expires while the hardlinks remain, the
files will naturally not stay contiguous. The only solution is to read and
rewrite the files to the end of the disk, and then read and write them
back to the beginning. Again, this is file fragmentation, not block
fragmentation.
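The reason linking into a new directory "obviously can't be done" close to
the data is that a hardlink reuses the existing inode, so the file content
stays wherever it was first allocated. A minimal Python sketch (file and
directory names are invented for illustration) shows the shared inode:

```python
import os
import tempfile

def link_stats(src: str, dst: str):
    """Hardlink src to dst, then report whether the two directory
    entries share an inode and what the resulting link count is."""
    os.link(src, dst)
    same_inode = os.stat(src).st_ino == os.stat(dst).st_ino
    return same_inode, os.stat(dst).st_nlink

# Illustrative layout (paths made up): a pool file linked into a second
# location, the way backuppc shares unchanged files between backups.
root = tempfile.mkdtemp()
pool_file = os.path.join(root, "pool_file")
with open(pool_file, "w") as f:
    f.write("backed-up contents")

same, nlinks = link_stats(pool_file, os.path.join(root, "linked_file"))
print(same, nlinks)  # the new entry shares the inode; link count is now 2
```

The new directory entry is just another name pointing at the old inode, so
traversing the backup tree means seeking back to wherever each pooled
file's inode and data already live.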