From: Christian Kujau <lists@ne...> - 2009-12-24 10:31:26
I've had the chance to use a testsystem here and couldn't resist running a
few benchmark programs on them: bonnie++, tiobench, dbench and a few
generic ones (cp/rm/tar/etc...) on ext{234}, btrfs, jfs, ufs, xfs, zfs.
All with standard mkfs/mount options and +noatime for all of them.
Here are the results, no graphs - sorry:
http://nerdbynature.de/benchmarks/v40z/2009-12-22/
Reiserfs is locking up during dbench, so I removed it from the
config, here are some earlier results:
http://nerdbynature.de/benchmarks/v40z/2009-12-21/bonnie.html
Bonnie++ couldn't complete on nilfs2, only the generic tests
and tiobench were run. As nilfs2, ufs, zfs aren't supporting xattr, dbench
could not be run on these filesystems.
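For reference, the per-filesystem procedure was roughly along these lines
(a simplified sketch only - device, mount point and benchmark flags are
placeholders, not the actual fs-bench.sh script):

    # sketch: /dev/sdb1 and /mnt/test are made-up placeholders
    for fs in ext2 ext3 ext4 btrfs jfs xfs; do
        mkfs -t "$fs" /dev/sdb1                # standard mkfs options
        mount -o noatime /dev/sdb1 /mnt/test   # +noatime for all of them
        bonnie++ -d /mnt/test -u nobody        # then tiobench, dbench and
                                               # the generic cp/tar/rm tests
        umount /mnt/test
    done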
Short summary, AFAICT:
- btrfs, ext4 are the overall winners
- xfs too, but creating/deleting many files was *very* slow
- if you need only fast but no cool features or journaling, ext2
is still a good choice :)
Thanks,
Christian.
--
BOFH excuse #84:
Someone is standing on the ethernet cable, causing a kink in the cable
From: <pg_jf2@jf...> - 2009-12-24 13:06:31
> I've had the chance to use a testsystem here and couldn't
> resist
Unfortunately there seems to be an overproduction of rather
meaningless file system "benchmarks"...
> running a few benchmark programs on them: bonnie++, tiobench,
> dbench and a few generic ones (cp/rm/tar/etc...) on ext{234},
> btrfs, jfs, ufs, xfs, zfs. All with standard mkfs/mount options
> and +noatime for all of them.
> Here are the results, no graphs - sorry: [ ... ]
After having a glance, I suspect that your tests could be
enormously improved, and doing so would reduce the pointlessness of
the results.
A couple of hints:
* In the "generic" test the 'tar' test bandwidth is exactly the
same ("276.68 MB/s") for nearly all filesystems.
* There are read transfer rates higher than the one reported by
'hdparm' which is "66.23 MB/sec" (comically enough *all* the
read transfer rates your "benchmarks" report are higher).
BTW the use of Bonnie++ is also usually a symptom of a poor
understanding of file system benchmarking.
On the plus side, test setup context is provided in the "env"
directory, which is rare enough to be commendable.
> Short summary, AFAICT:
> - btrfs, ext4 are the overall winners
> - xfs to, but creating/deleting many files was *very* slow
Maybe, and these conclusions are sort of plausible (though I prefer
JFS and XFS, for different reasons); however they are not supported
by your results, which seem to me to lack much meaning: what is
being measured is far from clear, and in particular it does not
seem to be file system performance, or at any rate an aspect of
filesystem performance that might relate to common usage.
I think that it is rather better to run a few simple operations
(like the "generic" test) properly (unlike the "generic" test), to
give a feel for how well the basic operations of the file system
design are implemented.
Profiling a file system performance with a meaningful full scale
benchmark is a rather difficult task requiring great intellectual
fortitude and lots of time.
> - if you need only fast but no cool features or
> journaling, ext2 is still a good choice :)
That is however a generally valid conclusion, but with a very,
very important qualification: for freshly loaded filesystems.
Also with several other important qualifications, but "freshly
loaded" is a pet peeve of mine :-).
From: Christian Kujau <lists@ne...> - 2009-12-24 20:43:37
On Thu, 24 Dec 2009 at 13:05, Peter Grandi wrote:
> Unfortunately there seems to be an overproduction of rather
> meaningless file system "benchmarks"...
That's why I wrote "benchmark programs" and not "benchmarks": I'm very
well aware that opinions vary a lot about how useful (synthetic) benchmarks
are, but whenever I'm looking for _equal_ comparisons (that's what my
tests are about) I only find stale data.
> * In the "generic" test the 'tar' test bandwidth is exactly the
> same ("276.68 MB/s") for nearly all filesystems.
True, because I'm tarring up ~2.7GB of content while the box is equipped
with 8GB of RAM. So it *should* be the same for all filesystems, as Linux
could easily hold all this in its caches. Still, jfs and zfs manage to be
slower than the rest.
> * There are read transfer rates higher than the one reported by
> 'hdparm' which is "66.23 MB/sec" (comically enough *all* the
> read transfer rates your "benchmarks" report are higher).
The bonnie++ read results report nearly 66 MB/s; the "generic" tests
are running with the help of the filesystem caches (see above), so it's no
wonder that they're higher than a normal read from disk - as the tests are
allowed to utilize those caches as well.
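A quick way to see that effect (hypothetical device and file, not part of
the test script):

    hdparm -t /dev/sda                        # raw device read rate
    time cat /mnt/test/bigfile > /dev/null    # cold read: roughly disk speed
    time cat /mnt/test/bigfile > /dev/null    # repeat: served from the page
                                              # cache, far above hdparm's number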
> BTW the use of Bonnie++ is also usually a symptom of a poor
> understanding of file system benchmarking.
I didn't write this application, yet I find it useful for the sake of
comparison: do "something" to the filesystem, and then the same for all
the other filesystems as well. Maybe I should add such a disclaimer[0] to
the results page as well :-)
> I think that it is rather better to run a few simple operations
> (like the "generic" test) properly (unlike the "generic" test),
If you're suggesting running the generic tests with data big enough to NOT
hit the filesystem caches anymore - that's exactly what I did not want to
do. I think we're all lucky that filesystem caches exist, otherwise things
would be a lot slower.
> That is however a generally valid conclusion, but with a very,
> very important qualification: for freshly loaded filesystems.
I hope that's clear to everybody as soon as they hear the word
"benchmark" - yes, I always mkfs before running the next test.
But again: this is done for every filesystem, and this is not a
filesystem-aging test.
I've yet to find good benchmark programs (or results) that take filesystem
(us)age into account. Maybe I should try this myself, though the usage
patterns would be far from "real world usage" and limited to simple but
numerous cp/rm operations on the filesystem. I'm happy to take any
pointers on how to implement this :-)
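Something as simple as the following loop might already serve as a crude
aging pass (hypothetical sketch, paths made up - not something these tests
currently include):

    # create and delete staggered copies to fragment free space,
    # then re-run the benchmarks on the aged filesystem
    for i in $(seq 1 50); do
        cp -a /usr/src/linux "/mnt/test/copy.$i"
        [ "$i" -gt 2 ] && rm -rf "/mnt/test/copy.$((i - 2))"
    done
    sync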
Christian.
[0] http://www.sabi.co.uk/Notes/linuxFS.html#fsRefsBench
--
BOFH excuse #379:
We've picked COBOL as the language of choice.
From: Christian Kujau <lists@ne...> - 2009-12-27 21:55:42
On Sun, 27 Dec 2009 at 14:50, jim owens wrote:
> And I don't even care about comparing 2 filesystems, I only care about
> timing 2 versions of code in the single filesystem I am working on,
> and forgetting about hardware cache effects has screwed me there.

Not me, I'm comparing filesystems - and when the HBA or whatever plays
tricks and "sync" doesn't flush all the data, it'll do so for every tested
filesystem. Of course, filesystems could handle "sync" differently, and
they probably do, hence the different times they take to complete. That's
what my tests are about: timing comparison (does that still fall under the
"benchmark" category?), not functional comparison. That's left as a task
for the reader of these results: "hm, filesystem xy is so much faster when
doing foo, why is that? And am I willing to sacrifice e.g. proper syncs to
gain more speed?"

> So unless you are sure you have no hardware cache effects...
> "the comparison still stands" is *false*.

Again, I don't argue with "hardware caches will have effects", but that's
not the point of these tests. Of course hardware is different, but
filesystems are too and I'm testing filesystems (on the same hardware).

Christian.
--
BOFH excuse #278:
The Dilithium Crystals need to be rotated.
From: Christian Kujau <lists@ne...> - 2009-12-25 01:53:12
On Thu, 24 Dec 2009 at 16:27, tytso@... wrote:
> If you don't do a "sync" after the tar, then in most cases you will be
> measuring the memory bandwidth, because data won't have been written

Well, I do "sync" after each operation, so the data should be on disk, but
that doesn't mean it'll clear the filesystem buffers - but this doesn't
happen that often in the real world either. Also, all filesystems were
tested equally (I hope), yet some filesystems perform better than others -
even if all the content copied/tar'ed/removed would perfectly well fit
into the machine's RAM.

> Another good example of well done file system benchmarks can be found
> at http://btrfs.boxacle.net

Thanks, I'll have a look at it and perhaps even integrate it in the
wrapper script.

> benchmarks for a living.  Note that JFS and XFS come off much better
> on a number of the tests

Indeed, I was surprised to see JFS perform that well, and XFS of course is
one of the best too - I just wanted to point out that both of them are
strangely slow at times (removing or creating many files) - not what I
expected.

> --- and that there is a *large* amount of variation when you look at
> different simulated workloads and with a varying number of threads
> writing to the file system at the same time.

True, the TODO list in the script ("different benchmark options") is in
there for a reason :-)

Christian.
--
BOFH excuse #291:
Due to the CDA, we no longer have a root account.
From: lakshmi pathi <lakshmipathi.g@gm...> - 2009-12-25 13:19:33
I'm a file system testing newbie and I have a question/doubt - please let
me know if I'm wrong. Do you think a tool which uses the output of the
"hdparm" command to get the hard drive's maximum performance and compares
it against a specific file system (say, for example, "ext4 provides xx
throughput against max. device throughput yy") would be more meaningful?
Would using hdparm (or other device-throughput tools) for benchmarking be
useful?

Thanks.
--
Cheers,
Lakshmipathi.G
http://www.giis.co.in
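A back-of-the-envelope version of that idea might look like this
(hypothetical numbers and device, just to illustrate the ratio):

    dev_mb=66      # raw device read rate, e.g. from "hdparm -t /dev/sda"
    fs_mb=52       # measured sequential read rate on the filesystem
    awk -v d="$dev_mb" -v f="$fs_mb" \
        'BEGIN { printf "filesystem reaches %.0f%% of device throughput\n", 100 * f / d }'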
From: Christian Kujau <lists@ne...> - 2009-12-25 18:51:30
On Fri, 25 Dec 2009 at 08:22, Larry McVoy wrote:
> Dudes, sync() doesn't flush the fs cache, you have to unmount for that.

Thanks Larry, that was exactly my point[0] too; I should add that to the
results page to avoid further confusion or wrong assumptions:

> Well, I do "sync" after each operation, so the data should be on
> disk, but that doesn't mean it'll clear the filesystem buffers
> - but this doesn't happen that often in the real world either.

I realize however that on the same results page the bonnie++ tests were
run with a filesize *specifically* set to not utilize the filesystem
buffers any more but to measure *disk* performance, while my "generic"
tests do something else - and thus cannot be compared to the bonnie++ or
hdparm results.

> No idea if that is still supported, but sync() is a joke for benchmarking.

I was using "sync" to make sure that the data "should" be on the disks
now; I did not want to flush the filesystem buffers during the "generic"
tests.

Thanks,
Christian.

[0] http://www.spinics.net/lists/linux-ext4/msg16878.html
--
BOFH excuse #210:
We didn't pay the Internet bill and it's been cut off.
From: Christian Kujau <lists@ne...> - 2009-12-25 18:57:01
On Fri, 25 Dec 2009 at 11:33, tytso@... wrote:
> caches, though; if you are going to measure reads as well as writes,
> then you'll probably want to do something like "echo 3 >
> /proc/sys/vm/drop_caches".

Thanks for the hint, I could find sys/vm/drop-caches documented in
Documentation/ but it's good to know there's a way to flush all these
caches via this knob. Maybe I should add this to those "generic" tests to
be more comparable to the other benchmarks.

Christian.
--
BOFH excuse #210:
We didn't pay the Internet bill and it's been cut off.
From: Christian Kujau <lists@ne...> - 2009-12-25 19:33:40
On Fri, 25 Dec 2009 at 10:56, Christian Kujau wrote:
> Thanks for the hint, I could find sys/vm/drop-caches documented in
------------------------------^
not, was what I meant to say - but it's all there, as "drop_caches" in
Documentation/sysctl/vm.txt

Christian.

> Documentation/ but it's good to know there's a way to flush all these
> caches via this knob. Maybe I should add this to those "generic" tests to
> be more comparable to the other benchmarks.
--
BOFH excuse #129:
The ring needs another token
From: Christian Kujau <lists@ne...> - 2009-12-26 19:07:18
On 26.12.09 08:00, jim owens wrote:
>> I was using "sync" to make sure that the data "should" be on the disks
>
> Good, but not good enough for many tests... info sync
[...]
> On Linux, sync is only guaranteed to schedule the dirty blocks for
> writing; it can actually take a short time before all the blocks are
> finally written.

Noted, many times already. That's why I wrote "should be" - but in this
special scenario (filesystem speed tests) I don't care for file integrity:
if I pull the plug after "sync" and some data didn't make it to the disks,
I'll only look if the testscript got all the timestamps and move on to the
next test. I'm not testing for "filesystem integrity after someone pulls
the plug" here. And remember, I'm doing "sync" for all the filesystems
tested, so the comparison still stands.

Christian.
From: jim owens <jowens@hp...> - 2009-12-27 19:50:30
Christian Kujau wrote:
> On 26.12.09 08:00, jim owens wrote:
>>> I was using "sync" to make sure that the data "should" be on the disks
>>
>> Good, but not good enough for many tests... info sync
> [...]
>> On Linux, sync is only guaranteed to schedule the dirty blocks for
>> writing; it can actually take a short time before all the blocks are
>> finally written.

OK, that was wrong per Ted's explanation:

> But for quite some time, under Linux the sync(2) system call will wait
> for the blocks to be flushed out to HBA, although we currently don't
> wait for the blocks to have been committed to the platters (at least
> not for all file systems).

But Christian Kujau wrote:
> Noted, many times already. That's why I wrote "should be" - but in this
> special scenario (filesystem speed tests) I don't care for file integrity:
> if I pull the plug after "sync" and some data didn't make it to the disks,
> I'll only look if the testscript got all the timestamps and move on to the
> next test. I'm not testing for "filesystem integrity after someone pulls
> the plug" here. And remember, I'm doing "sync" for all the filesystems
> tested, so the comparison still stands.

You did not understand my point. It was not about data integrity, it was
about test timing validity. And even with sync(2) behaving as Ted
describes, *timing* may still tell you the wrong thing or not tell you
something important.

I have a battery-backed HBA cache. Writes are HBA cached. Timing only
shows "to HBA memory". So 1000 pages (4MB total) that are at 1000 places
on the disk will time (almost) the same completion as 1000 pages that are
in 200 extents of 50 pages each. Writing to disk the time difference
between these would be an obvious slap upside the head.

Hardware caches can trick you into thinking a filesystem performs much
better than it really does for some operations. Or trick you about
relative performance between 2 filesystems.

And I don't even care about comparing 2 filesystems, I only care about
timing 2 versions of code in the single filesystem I am working on, and
forgetting about hardware cache effects has screwed me there.

So unless you are sure you have no hardware cache effects...
"the comparison still stands" is *false*.

jim
From: Ryusuke Konishi <konishi.ryusuke@la...> - 2009-12-24 12:58:09
Hi,
On Thu, 24 Dec 2009 02:31:10 -0800 (PST), Christian Kujau wrote:
> I've had the chance to use a testsystem here and couldn't resist running a
> few benchmark programs on them: bonnie++, tiobench, dbench and a few
> generic ones (cp/rm/tar/etc...) on ext{234}, btrfs, jfs, ufs, xfs, zfs.
>
> All with standard mkfs/mount options and +noatime for all of them.
>
> Here are the results, no graphs - sorry:
> http://nerdbynature.de/benchmarks/v40z/2009-12-22/
>
> Reiserfs is locking up during dbench, so I removed it from the
> config, here are some earlier results:
>
> http://nerdbynature.de/benchmarks/v40z/2009-12-21/bonnie.html
>
> Bonnie++ couldn't complete on nilfs2, only the generic tests
> and tiobench were run.
I looked at the log but couldn't identify the error.
Was it a disk-full error?
Thanks,
Ryusuke Konishi
> As nilfs2, ufs, zfs aren't supporting xattr, dbench
> could not be run on these filesystems.
>
> Short summary, AFAICT:
> - btrfs, ext4 are the overall winners
> - xfs too, but creating/deleting many files was *very* slow
> - if you need only fast but no cool features or journaling, ext2
> is still a good choice :)
>
> Thanks,
> Christian.
> --
> BOFH excuse #84:
>
> Someone is standing on the ethernet cable, causing a kink in the cable
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nilfs" in
> the body of a message to majordomo@...
> More majordomo info at http://vger.kernel.org/majordomo-info.html
From: Teran McKinney <sega01@gm...> - 2009-12-24 12:59:43
Which I/O scheduler are you using? Pretty sure that ReiserFS is a
little less deadlock-prone with CFQ or another scheduler than with
deadline, but deadline usually gives the best results for me
(especially for JFS).
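(For reference, checking and switching the scheduler is a one-liner per
disk - a sketch with a hypothetical sdb:)

    cat /sys/block/sdb/queue/scheduler             # current one is shown in brackets
    echo deadline > /sys/block/sdb/queue/scheduler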
Thanks,
Teran
On Thu, Dec 24, 2009 at 10:31, Christian Kujau <lists@...> wrote:
> I've had the chance to use a testsystem here and couldn't resist running a
> few benchmark programs on them: bonnie++, tiobench, dbench and a few
> generic ones (cp/rm/tar/etc...) on ext{234}, btrfs, jfs, ufs, xfs, zfs.
>
> All with standard mkfs/mount options and +noatime for all of them.
>
> Here are the results, no graphs - sorry:
> http://nerdbynature.de/benchmarks/v40z/2009-12-22/
>
> Reiserfs is locking up during dbench, so I removed it from the
> config, here are some earlier results:
>
> http://nerdbynature.de/benchmarks/v40z/2009-12-21/bonnie.html
>
> Bonnie++ couldn't complete on nilfs2, only the generic tests
> and tiobench were run. As nilfs2, ufs, zfs aren't supporting xattr, dbench
> could not be run on these filesystems.
>
> Short summary, AFAICT:
> - btrfs, ext4 are the overall winners
> - xfs too, but creating/deleting many files was *very* slow
> - if you need only fast but no cool features or journaling, ext2
> is still a good choice :)
>
> Thanks,
> Christian.
> --
> BOFH excuse #84:
>
> Someone is standing on the ethernet cable, causing a kink in the cable
> --
> To unsubscribe from this list: send the line "unsubscribe reiserfs-devel" in
> the body of a message to majordomo@...
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
From: <tytso@mi...> - 2009-12-24 22:04:23
On Thu, Dec 24, 2009 at 01:05:39PM +0000, Peter Grandi wrote:
> > I've had the chance to use a testsystem here and couldn't
> > resist
>
> Unfortunately there seems to be an overproduction of rather
> meaningless file system "benchmarks"...
One of the problems is that very few people are interested in writing
or maintaining file system benchmarks, except for file system
developers --- but many of them are more interested in developing (and
unfortunately, in some cases, promoting) their file systems than they
are in doing a good job maintaining a good set of benchmarks. Sad but
true...
> * In the "generic" test the 'tar' test bandwidth is exactly the
> same ("276.68 MB/s") for nearly all filesystems.
>
> * There are read transfer rates higher than the one reported by
> 'hdparm' which is "66.23 MB/sec" (comically enough *all* the
> read transfer rates your "benchmarks" report are higher).
If you don't do a "sync" after the tar, then in most cases you will be
measuring the memory bandwidth, because data won't have been written
to disk.  Worse yet, it tends to skew the results of what happens
afterwards (*especially* if you aren't running the steps of the
benchmark in a script).
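I.e. something along these lines (a sketch - the paths are made up):

    # time the write *and* the flush to disk, from a script
    /bin/time /bin/sh -c "tar -C /mnt/test -xf /tmp/src.tar && sync"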
> BTW the use of Bonnie++ is also usually a symptom of a poor
> understanding of file system benchmarking.
Dbench is also a really nasty benchmark. If it's tuned correctly, you
are measuring memory bandwidth and the hard drive light will never go
on. :-) The main reason why it was interesting was that it and tbench
were used to model a really bad industry benchmark, netbench, which at
one point a number of years ago I/T managers used to decide which CIFS
server they would buy[1].  So it was useful for Samba developers who were
trying to do competitive benchmarks, but it's not a very accurate
benchmark for measuring real-life file system workloads.
[1] http://samba.org/ftp/tridge/dbench/README
> On the plus side, test setup context is provided in the "env"
> directory, which is rare enough to be commendable.
Absolutely. :-)
Another good example of well done file system benchmarks can be found
at http://btrfs.boxacle.net; it's done by someone who does performance
benchmarks for a living. Note that JFS and XFS come off much better
on a number of the tests --- and that there is a *large* amount
of variation when you look at different simulated workloads and with a
varying number of threads writing to the file system at the same time.
Regards,
- Ted
From: <tytso@mi...> - 2009-12-27 22:33:26
On Sun, Dec 27, 2009 at 01:55:26PM -0800, Christian Kujau wrote:
> On Sun, 27 Dec 2009 at 14:50, jim owens wrote:
> > And I don't even care about comparing 2 filesystems, I only care about
> > timing 2 versions of code in the single filesystem I am working on,
> > and forgetting about hardware cache effects has screwed me there.
>
> Not me, I'm comparing filesystems - and when the HBA or whatever plays
> tricks and "sync" doesn't flush all the data, it'll do so for every tested
> filesystem. Of course, filesystems could handle "sync" differently, and
> they probably do, hence the different times they take to complete. That's
> what my tests are about: timing comparison (does that still fall under the
> "benchmark" category?), not functional comparison. That's left as a task
> for the reader of these results: "hm, filesystem xy is so much faster when
> doing foo, why is that? And am I willing to sacrifice e.g. proper syncs to
> gain more speed?"

Yes, but given many of the file systems have almost *exactly* the same
bandwidth measurement for the "cp" test, and said bandwidth measurement
is 5 times the disk bandwidth as measured by hdparm, it makes me suspect
that you are doing this:

    /bin/time /bin/cp -r /source/tree /filesystem-under-test
    sync
    /bin/time /bin/rm -rf /filesystem-under-test/tree
    sync

etc. It is *a* measurement, but the question is whether it's a useful
comparison. Consider two different file systems: one which does a very
good job making sure that file writes are done contiguously on disk,
minimizing seek overhead --- and another which is really crappy at disk
allocation and writes the files to random locations all over the disk.
If you are only measuring the "cp", then the fact that filesystem 'A' has
a very good layout and is able to write things to disk very efficiently,
and filesystem 'B' has files written in a really horrible way, won't be
measured by your test. This is especially true if, for example, you have
8GB of memory and you are copying 4GB worth of data. You might notice it
if you include the "sync" in the timing, i.e.:

    /bin/time /bin/sh -c "/bin/cp -r /source/tree /filesystem-under-test; /bin/sync"

> Again, I don't argue with "hardware caches will have effects", but that's
> not the point of these tests. Of course hardware is different, but
> filesystems are too and I'm testing filesystems (on the same hardware).

The question is whether your tests are doing the best job of measuring
how good the filesystem really is. If your workload is one where you will
only be copying file sets much smaller than your memory, and you don't
care about when the data actually hits the disk, only when "/bin/cp"
returns, then sure, do whatever you want. But if you want the tests to
have meaning if, for example, you have 2GB of memory and you are copying
8GB of data, or if later on you will be continuously streaming data to
the disk, and sooner or later the need to write data to the disk will
start slowing down your real-life workload, then not including the time
to do the sync in the time to copy your file set may cause you to assume
that filesystems 'A' and 'B' are identical in performance, and then your
filesystem comparison will end up misleading you.

The bottom line is that it's very hard to do good comparisons that are
useful in the general case.

Best regards,

						- Ted
From: Christian Kujau <lists@ne...> - 2009-12-28 01:24:21
On Sun, 27 Dec 2009 at 17:33, tytso@... wrote:
> Yes, but given many of the file systems have almost *exactly* the same

"Almost" indeed - but curiously enough some filesystems are *not* the
same, although they should be. Again: we have 8GB RAM, I'm copying ~3GB of
data, so why _are_ there differences? (Answer: because filesystems are
different). That's the only point of this test. Also note the
disclaimer[0] I added to the results page a few days ago.

> measurement is 5 times the disk bandwidth as measured by hdparm, it
> makes me suspect that you are doing this:
>
>     /bin/time /bin/cp -r /source/tree /filesystem-under-test
>     sync

No, I'm not - see the test script[1] - I'm taking the time for cp/rm/tar
*and* sync. But even if I only took the time for, say, "cp" and not the
sync part, it would still be a valid comparison across filesystems (the
same operation for every filesystem), albeit not a very realistic one -
because in the real world I *want* to make sure my data is on the disk.
But that's as far as I go in these tests; I'm not even messing around with
disk caches or HBA caches - that's not the scope of these tests.

> You might notice it if you include the "sync" in the timing, i.e.:
>     /bin/time /bin/sh -c "/bin/cp -r /source/tree /filesystem-under-test; /bin/sync"

Yes, that's exactly what the tests do.

> "/bin/cp" returns, then sure, do whatever you want.  But if you want
> the tests to have meaning if, for example, you have 2GB of memory and
> you are copying 8GB of data,

For the bonnie++ tests I chose a filesize (16GB) so that disk performance
will matter here. As the generic tests shuffle around much smaller data,
not disk performance but filesystem performance is measured (and compared
to other filesystems) - well aware of the fact that caches *are* being
used. Why would I want to discard caches? My daily usage patterns (opening
web browsers, terminal windows, spreadsheets) deal with much smaller
datasets, and I'm happy that Linux is so hungry for cache - yet some
filesystems do not seem to utilize this opportunity as well as others do.
That's the whole point of this particular test.

But by constantly explaining my point over and over again I see what I
have to do: I shall run the generic tests again with much bigger datasets,
so that disk performance is also reflected, as people do seem to care
about this (I don't - I can switch filesystems more easily than disks).

> The bottom line is that it's very hard to do good comparisons that are
> useful in the general case.

And it's difficult to find out what's a "useful comparison" for the
general public :-)

Christian.

[0] http://nerdbynature.de/benchmarks/v40z/2009-12-22/
[1] http://nerdbynature.de/benchmarks/v40z/2009-12-22/env/fs-bench.sh.txt
--
BOFH excuse #292:
We ran out of dial tone and we're waiting for the phone company to deliver
another bottle.
From: Larry McVoy <lm@bi...> - 2009-12-28 14:09:08
> The bottom line is that it's very hard to do good comparisons that are
> useful in the general case.
It has always amazed me watching people go about benchmarking. I should
have a blog called "you're doing it wrong" or something.
Personally, I use benchmarks to validate what I already believe to be true.
So before I start I have a prediction as to what the answer should be,
based on my understanding of the system being measured. Back when I
was doing this a lot, I was always within a factor of 10 (not a big
deal) and usually within a factor of 2 (quite a bit bigger deal).
When things didn't match up that was a clue that either
- the benchmark was broken
- the code was broken
- the hardware was broken
- my understanding was broken
If you start a benchmark and you don't know what the answer should be,
at the very least within a factor of 10 and ideally within a factor of 2,
you shouldn't be running the benchmark. Well, maybe you should, they
are fun. But you sure as heck shouldn't be publishing results unless
you know they are correct.
This is why lmbench, to toot my own horn, measures what it does.  If you go
run that and memorize the results, you can tell yourself "well, this machine
has sustained memory copy bandwidth of 3.2GB/sec, the disk I'm using
can read at 60MB/sec and write at 52MB/sec (on the outer zone where I'm
going to run my tests), it does small seeks in about 6 milliseconds,
I'm doing sequential I/O, the bcopy is in the noise, the blocks are big
enough that the seeks are hidden, so I'd like to see a steady 50MB/sec
or so on a sustained copy test".
If you have a mental model for how the bits of the system works you
can decompose the benchmark into the parts, predict the result, run
it, and compare. It'll match or Lucy, you have some 'splainin to do.
--
---
Larry McVoy lm at bitmover.com http://www.bitkeeper.com
From: Edward Shishkin <edward.shishkin@gm...> - 2010-01-15 21:58:02
> When things didn't match up that was a clue that either
>
> - the benchmark was broken
> - the code was broken
> [...]

I would carry out an object-oriented dualism here.

[1] methods (kernel module) ------ [2] objects (formatted partition)
          |                                     |
          |                                     |
[3] benchmarks ------------------- [4] user-space utilities (fsck)

User-space utilities investigate "object corruptions", whereas benchmarks
investigate "software corruptions" (including bugs in source code, broken
design, etc, etc..). It is clear that "software" can be "corrupted" in a
larger number of ways than "objects". Indeed, it is known that the dual
space V* (of all linear functions over V) is a much more complex object
than V.

So a benchmark is a process which takes a set of methods (we consider only
"software" benchmarks) and produces numerical values, populated with a
special (the worst) value, CRASH.

Three main categories of benchmark usage:

1) Internal testing
An engineer makes optimizations in a file system (e.g. for a customer) by
choosing functions or plugins as winners in a set of internal (local)
"nominations".

2) Business plans
A system administrator chooses a "winner" in some (global) "nomination" of
file systems in accordance with internal business plans.

3) Flame and politics
Someone presents a "nomination" (usually with the "winner" among a
restricted number of nominated members) to the public while nobody asked
him to do it.
From: Evgeniy Polyakov <zbr@io...> - 2009-12-25 03:07:06
Hi Ted.
On Thu, Dec 24, 2009 at 04:27:56PM -0500, tytso@... (tytso@...) wrote:
> > Unfortunately there seems to be an overproduction of rather
> > meaningless file system "benchmarks"...
>
> One of the problems is that very few people are interested in writing
> or maintaining file system benchmarks, except for file system
> developers --- but many of them are more interested in developing (and
> unfortunately, in some cases, promoting) their file systems than they
> are in doing a good job maintaining a good set of benchmarks. Sad but
> true...
Hmmmm.... I suppose there should be a link to such a set here? :)
No link? Then I suppose the benchmark results are pretty much in sync with
what they are supposed to show.
> > * In the "generic" test the 'tar' test bandwidth is exactly the
> > same ("276.68 MB/s") for nearly all filesystems.
> >
> > * There are read transfer rates higher than the one reported by
> > 'hdparm' which is "66.23 MB/sec" (comically enough *all* the
> > read transfer rates your "benchmarks" report are higher).
>
> If you don't do a "sync" after the tar, then in most cases you will be
> measuring the memory bandwidth, because data won't have been written
> to disk. Worse yet, it tends to skew the results of the what happens
> afterwards (*especially* if you aren't running the steps of the
> benchmark in a script).
It depends on the size of the untarred object; for a Linux kernel tarball
and the common several gigs of RAM it is perfectly valid not to run a sync
after the tar, since writeback will take care of it.
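(One can watch that deferred writeback happening after the untar, e.g. -
the tarball path and mount point here are placeholders:)

    tar -C /mnt/test -xf linux-2.6.32.tar
    grep -E 'Dirty|Writeback' /proc/meminfo   # dirty pages drain over time
    vmstat 1                                  # the "bo" column shows the
                                              # delayed write-out to disk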
> > BTW the use of Bonnie++ is also usually a symptom of a poor
> > understanding of file system benchmarking.
>
> Dbench is also a really nasty benchmark. If it's tuned correctly, you
> are measuring memory bandwidth and the hard drive light will never go
> on. :-) The main reason why it was interesting was that it and tbench
> were used to model a really bad industry benchmark, netbench, which at
> one point a number of years ago I/T managers used to decide which CIFS
> server they would buy[1]. So it was useful for Samba developers who were
> trying to do competitive benchmarks, but it's not a very accurate
> benchmark for measuring real-life file system workloads.
>
> [1] http://samba.org/ftp/tridge/dbench/README
I was not able to resist writing a small note: no matter what, whichever
benchmark is run, it _does_ show system behaviour under one condition or
another. And when the system behaves rather badly, it is quite a common
comment that the benchmark was useless. But it did show that the system
has a problem, even if a rarely triggered one :)
Not an ext4 nitpick of course.
--
Evgeniy Polyakov
From: <tytso@mi...> - 2009-12-25 16:12:01
On Fri, Dec 25, 2009 at 02:46:31AM +0300, Evgeniy Polyakov wrote:
> > [1] http://samba.org/ftp/tridge/dbench/README
>
> I was not able to resist writing a small note: no matter what, whichever
> benchmark is run, it _does_ show system behaviour under one condition or
> another. And when the system behaves rather badly, it is quite a common
> comment that the benchmark was useless. But it did show that the system
> has a problem, even if a rarely triggered one :)

If people are using benchmarks to improve file systems, and a benchmark
shows a problem, then trying to remedy the performance issue is a good
thing to do, of course. Sometimes, though, the case which is demonstrated
by a poor benchmark is an extremely rare corner case that doesn't
accurately reflect common real-life workloads --- and if addressing it
results in a tradeoff which degrades much more common real-life
situations, then that would be a bad thing.

In situations where benchmarks are used competitively, it's rare that it's
actually a *problem*. Instead it's much more common that a developer is
trying to prove that their file system is *better* to gullible users who
think that a single one-dimensional number is enough for them to choose
file system X over file system Y. For example, if I wanted to play that
game and tell people that ext4 is better, I might pick this graph:

http://btrfs.boxacle.net/repository/single-disk/2.6.29-rc2/2.6.29-rc2/2.6.29-rc2_Mail_server_simulation._num_threads=32.html

On the other hand, this one shows ext4 as the worst compared to all other
file systems:

http://btrfs.boxacle.net/repository/single-disk/2.6.29-rc2/2.6.29-rc2/2.6.29-rc2_Large_file_random_writes_odirect._num_threads=8.html

Benchmarking, like statistics, can be extremely deceptive, and if people
do things like carefully order a tar file so the files are optimal for a
file system, it's fair to ask whether that's a common thing for people to
be doing (either unpacking tarballs, or unpacking tarballs whose files
have been carefully ordered for a particular file system). When it's the
only number used by a file system developer when trying to convince users
they should use their file system, at least in my humble opinion it
becomes murderously dishonest.

						- Ted
From: <tytso@mi...> - 2009-12-25 16:15:05
On Thu, Dec 24, 2009 at 05:52:34PM -0800, Christian Kujau wrote:
>
> Well, I do "sync" after each operation, so the data should be on disk, but
> that doesn't mean it'll clear the filesystem buffers - but this doesn't
> happen that often in the real world either. Also, all filesystems were
> tested equally (I hope), yet some filesystems perform better than others -
> even if all the content copied/tar'ed/removed would perfectly well fit
> into the machine's RAM.
Did you include the "sync" in part of what you timed? Peter was quite
right --- the fact that the measured bandwidth in your "cp" test is
five times faster than the disk bandwidth as measured by hdparm, and
many file systems had exactly the same bandwidth, makes me very
suspicious that what was being measured was primarily memory bandwidth
--- and not very useful when trying to measure file system
performance.
- Ted
From: Christian Kujau <lists@ne...> - 2009-12-25 18:43:01
On Fri, 25 Dec 2009 at 11:14, tytso@... wrote:
> Did you include the "sync" in part of what you timed?
In my "generic" tests[0] I do "sync" after each of the cp/tar/rm
operations.
> Peter was quite
> right --- the fact that the measured bandwidth in your "cp" test is
> five times faster than the disk bandwidth as measured by hdparm, and
> many file systems had exactly the same bandwidth, makes me very
> suspicious that what was being measured was primarily memory bandwidth
That's right, and that's what I replied to Peter on jfs-discussion[1]:
>> * In the "generic" test the 'tar' test bandwidth is exactly the
>> same ("276.68 MB/s") for nearly all filesystems.
True, because I'm tarring up ~2.7GB of content while the box is equipped
with 8GB of RAM. So it *should* be the same for all filesystems, as
Linux could easily hold all this in its caches. Still, jfs and zfs
manage to be slower than the rest.
> --- and not very useful when trying to measure file system
> performance.
For the bonnie++ tests I chose an explicit filesize of 16GB, two times the
size of the machine's RAM, to make sure it will test the *disk's*
performance. And to be consistent across one benchmark run, I should have
copied/tarred/removed 16GB as well. However, I decided not to do that -
but to *use* the filesystem buffers instead of ignoring them. After all,
it's not about disk performance (that's what hdparm could be for) but
filesystem performance (or comparison, more exactly) - and I'm not excited
about the fact that almost all filesystems are copying with ~276MB/s, but
I'm wondering why zfs is 13 times slower when copying data, or why xfs
takes 200 seconds longer than other filesystems while it's handling the
same size as all the others. So no, please don't compare the bonnie++
results against my "generic" results within the same results page - as
they're (obviously, I thought) taken with different parameters/content
sizes.
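(In bonnie++ terms that sizing boils down to something like the following
sketch; the actual invocation is whatever the test script[0] uses, the
flags here just show the idea:)

    # data set (16 GB) is twice the 8 GB of RAM, so reads and writes
    # cannot be served from the page cache alone
    bonnie++ -d /mnt/test -s 16384 -u nobody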
Christian.
[0] http://nerdbynature.de/benchmarks/v40z/2009-12-22/env/fs-bench.sh.txt
[1] http://tinyurl.com/yz6x2sj
--
BOFH excuse #85:
Windows 95 undocumented "feature"
From: Larry McVoy <lm@bi...> - 2009-12-25 16:22:51
On Fri, Dec 25, 2009 at 11:14:53AM -0500, tytso@... wrote:
> On Thu, Dec 24, 2009 at 05:52:34PM -0800, Christian Kujau wrote:
> >
> > Well, I do "sync" after each operation, so the data should be on disk, but
> > that doesn't mean it'll clear the filesystem buffers - but this doesn't
> > happen that often in the real world either. Also, all filesystems were
> > tested equally (I hope), yet some filesystems perform better than others -
> > even if all the content copied/tar'ed/removed would perfectly well fit
> > into the machine's RAM.
>
> Did you include the "sync" in part of what you timed?  Peter was quite
> right --- the fact that the measured bandwidth in your "cp" test is
> five times faster than the disk bandwidth as measured by hdparm, and
> many file systems had exactly the same bandwidth, makes me very
> suspicious that what was being measured was primarily memory bandwidth
> --- and not very useful when trying to measure file system
> performance.

Dudes, sync() doesn't flush the fs cache, you have to unmount for that.
Once upon a time Linux had an ioctl() to flush the fs buffers, I used
it in lmbench.

	ioctl(fd, BLKFLSBUF, 0);

No idea if that is still supported, but sync() is a joke for benchmarking.
--
---
Larry McVoy                lm at bitmover.com           http://www.bitkeeper.com
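(These days the same BLKFLSBUF ioctl is reachable from the shell via
util-linux, e.g. for a hypothetical sdb:)

    blockdev --flushbufs /dev/sdb    # issues BLKFLSBUF on the block device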
From: <tytso@mi...> - 2009-12-25 16:33:58
On Fri, Dec 25, 2009 at 08:22:38AM -0800, Larry McVoy wrote:
>
> Dudes, sync() doesn't flush the fs cache, you have to unmount for that.
> Once upon a time Linux had an ioctl() to flush the fs buffers, I used
> it in lmbench.
>
> 	ioctl(fd, BLKFLSBUF, 0);
>
> No idea if that is still supported, but sync() is a joke for benchmarking.

Depends on what you are trying to do ("flush" has multiple meanings, so
using it can be ambiguous). BLKFLSBUF will write out any dirty buffers,
*and* empty the buffer cache. I use it when benchmarking e2fsck
optimization. It doesn't do anything for the page cache.

If you are measuring the time to write a file, using fsync() or sync()
will include the time to actually write the data to disk. It won't empty
caches, though; if you are going to measure reads as well as writes, then
you'll probably want to do something like
"echo 3 > /proc/sys/vm/drop_caches".

						- Ted
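In script form that read-benchmark preamble would look something like this
(a sketch; the file path is made up):

    sync                                          # write back dirty data first
    echo 3 > /proc/sys/vm/drop_caches             # drop page cache, dentries and inodes
    /bin/time cat /mnt/test/bigfile > /dev/null   # now a genuinely cold read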
From: Casey Allen Shobe <casey@sh...> - 2010-01-11 01:57:58
On Dec 25, 2009, at 11:22 AM, Larry McVoy wrote:
> Dudes, sync() doesn't flush the fs cache, you have to unmount for
> that.
> Once upon a time Linux had an ioctl() to flush the fs buffers, I used
> it in lmbench.

You do not need to unmount - 2.6.16+ have a mechanism in /proc to flush
caches. See http://linux-mm.org/Drop_Caches

Cheers,
--
Casey Allen Shobe
casey@...