Re: [Jfs-discussion] Fragmentation and poor write speeds.
From: Jason F. <jas...@gm...> - 2007-01-31 20:31:34
$ dd if=/dev/zero of=test bs=1024k count=1000
1048576000 bytes (1.0 GB) copied, 168.072 s, 6.2 MB/s

I've tested all variations between 1024 and 1024k with 1-4 GB of data.
It doesn't really matter; it's like hitting a wall.

You may be right on the 4095 blocksize. I checked my history and it was
bs=10k, count=1 that resulted in 2-3 extents being used every 10th
write. But that still means that 10% of the time, this jfs couldn't
find three contiguous blocks on a drive with 800 GB+ available.

The main use was for MythTV, but I did back up a few other systems onto
it (without tarring first) at one point. If I created 300,000 smaller
files and let Myth fill the drive a few times over, I suppose I could
see how this may snowball.

I do agree that a defrag tool designed for jfs would probably solve
this. Either way, your work on jfs is appreciated, and it served me
well for some time. I'll be keeping an eye on the project in
anticipation of a proper defrag tool.

Thanks again,

Jason Fisher

On 1/31/07, Dave Kleikamp <sh...@li...> wrote:
> On Wed, 2007-01-31 at 12:28 -0500, Jason Fisher wrote:
> > It's hard to say. Basically, I manually recopied files with over
> > 10,000 extents until I got one with 100-200 extents, and then
> > deleted the originals. Rinse, repeat over most of the drive. Files
> > with tens of thousands of extents that I didn't really need, I
> > simply deleted. Oddly, I had a few directories scattered about with
> > good write speeds, up to a certain point (i.e., 4-10 GB), but I
> > could never get the root of the filesystem above 6 MB/sec or so.
> >
> > After getting the drive down to 40% usage, write speed was still
> > just as slow, and this is where my problem with jfs lies. That 60%
> > free space should allow for contiguous regions large enough that a
> > small (1-4 MB) file should take very few extents in any case; i.e.,
> > jfs should be smart enough to place a file where it fits well.
>
> How are you writing the files? A large write size should help. When
> writing a file a bit at a time, jfs doesn't know how large the file
> is going to grow, so it's only going to find space to satisfy the
> size of the write.
>
> > In my experience on this particular filesystem, using dd to write a
> > single 4095-byte file would take *three* extents every 10th write.
>
> That can't be right. An extent can't be smaller than 4096 bytes.
>
> > This is on a filesystem with 4096-byte blocks (correct me if I'm
> > wrong), mind you. Something very drastic would surely have had to
> > occur to get jfs to respond this way.
> >
> > A new development is that it looks like this situation wasn't
> > limited to my 1.7 TB partition. It's also affecting my /usr and
> > /home partitions. This means that I could probably come up with a
> > partition image (dd) that is of reasonable size to compress and
> > transfer (~1-2 GB) if someone would like me to save a copy for
> > inspection.
>
> I'm convinced that a defrag tool would be a big help. Unfortunately,
> I don't have the spare time to work on it now.
>
> > The jfsCommit process seems to be doing the blocking.
> >
> > Jason
>
> --
> David Kleikamp
> IBM Linux Technology Center
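
A note on reproducing the extent counts discussed above: filefrag (from
e2fsprogs) reports the number of extents in a file through the generic
block-map (FIBMAP) ioctl, so it works on jfs as well as ext2/3. A rough
sketch for finding the worst offenders, assuming a MythTV-style layout
(the path is only an example):

    # print each recording's extent count; worst offenders sort last
    $ filefrag /var/video/*.nuv | sort -t: -k2 -n | tail

Recopying a file and deleting the original, as described above, is
essentially a manual defragmentation pass; running filefrag on the copy
shows whether the allocator actually found more contiguous space.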
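
Dave's point about write size can also be demonstrated directly. A
minimal sketch (file names and sizes are arbitrary): write the same
4 MB of data once as a single large write and once as many small
writes, then compare extent counts:

    # one write() call of 4 MB: the allocator sees the whole request
    $ dd if=/dev/zero of=big-write bs=4M count=1
    # 1024 write() calls of 4 KB each: space is allocated bit by bit
    $ dd if=/dev/zero of=small-writes bs=4k count=1024
    $ filefrag big-write small-writes

On a badly fragmented filesystem the second file would be expected to
pick up noticeably more extents, since each small write is satisfied
independently without knowledge of the file's final size.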
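
As for saving a partition image for inspection, one rough sketch is to
pipe dd straight through a compressor so that zero-heavy and repetitive
blocks shrink well (the device name below is a placeholder):

    $ dd if=/dev/sdb5 bs=1M | gzip -c > jfs-partition.img.gz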