On 10/25/07, Dave Kleikamp <shaggy@linux.vnet.ibm.com> wrote:
On Thu, 2007-10-25 at 11:18 +0200, Simon Lundell wrote:
> On 10/23/07, Steve Costaras <stevecs@chaven.com> wrote:
>         Yes, I use the filefrag tool from Theodore Ts'o (ext2/3), which
>         works on pretty much any filesystem under Linux.   That defrag
>         tool you mentioned would work as well (it's just copying
>         files), though I don't like that it doesn't check file
>         integrity.
>         Here's a fast one I threw together ages ago which works to
>         some extent (no pun intended. ;)  )   I never got it to take a
>         command-line argument as to which directory / mount point to
>         start on (it just runs from the current directory on down).
>         But that's easy to change (must have been interrupted).
>         Anyway do with it what you will.  :)
> What is the best way to write a file with respect to
> fragmentation? I guess it's best that the filesystem knows the
> final size in advance, so that it can allocate the file in as few
> extents as possible.
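One way to tell the filesystem the final size in advance is to preallocate the file before writing. A minimal sketch in Python (the function name is my own; this assumes Linux, and whether the reservation actually lands in one contiguous extent depends on the filesystem's allocator):

```python
import os

def write_with_preallocation(path, data):
    """Hypothetical sketch: reserve the final size up front so the
    allocator can try to satisfy the whole file in one contiguous
    extent, then write the data."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        # posix_fallocate() communicates the final size to the
        # filesystem before any data is written; support and layout
        # behavior vary by filesystem.
        os.posix_fallocate(fd, 0, len(data))
        os.write(fd, data)
    finally:
        os.close(fd)

write_with_preallocation("/tmp/prealloc-demo.bin", b"\0" * 65536)
print(os.path.getsize("/tmp/prealloc-demo.bin"))
```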

When doing a large write, jfs SHOULD at the very least allocate a
contiguous extent large enough for the data being written.  Currently it
does not; it allocates one page at a time, so on a fragmented file
system the file can end up quite fragmented.  I have plans to improve
this, but I haven't gotten to it yet.

I've been experimenting with a general defragmenter like the script
posted in this thread.  The basic algorithm is to rewrite the file and
hope that the copy is less fragmented than the original.  What is the
best way of copying/rewriting a file with regard to fragmentation?
Will cp do the trick, or should one use something else?
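The rewrite-and-verify idea can be sketched roughly as below (Python here for illustration; the function names are my own, and the new copy is only less fragmented if the allocator happens to find it a better layout):

```python
import hashlib
import os
import shutil

def checksum(path):
    """SHA-256 of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def rewrite_file(path):
    """Sketch of one defragment-by-copy pass: write a fresh sequential
    copy, verify it matches the original byte for byte, then atomically
    replace the original."""
    tmp = path + ".defrag-tmp"
    before = checksum(path)
    shutil.copyfile(path, tmp)   # sequential copy, much like cp
    if checksum(tmp) != before:
        os.unlink(tmp)
        raise IOError("copy does not match original; leaving file untouched")
    os.replace(tmp, path)        # atomic rename on the same filesystem

# Example:
with open("/tmp/defrag-demo.txt", "wb") as f:
    f.write(b"hello world\n" * 1000)
rewrite_file("/tmp/defrag-demo.txt")
```

A real pass would also want to preserve ownership, permissions, and timestamps (e.g. with shutil.copystat) and compare extent counts before and after, say with filefrag, to decide whether the rewrite was worth keeping.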