From: Adam G. <mai...@we...> - 2008-02-28 22:52:20
dan wrote:
> can anyone here help me through this? I would like to write a guide when
> I'm done but I have to be able to get the system up first! :)

One of the general complaints that seems to apply to most 'alternative' OSes and applications.... However, one piece of advice: if you can't install it, how will you diagnose/fix it at 3am when it all goes pear-shaped and you really need to restore that important file? Is the perceived speed advantage that important?

PS: please, I'm not suggesting this is specifically a problem with FreeBSD, or any other OS, nor am I questioning your capability/etc.; it is good that you are stretching yourself and learning something new. This is more of a discussion question in general.

> on a side note, i did some rudimentary benchmarks on an ubuntu 7.10
> server install and a freebsd7 install in vmware server. UFS was about
> 10% slower than ext3 in creating 10,000 directories and about 20% slower
> at creating 10,000 hard links to 1 file (same virtual hardware, though
> virtual hardware leaves some margin for error). I watched top and iostat
> on both systems and the disk was definitely the only resource being
> consumed that would affect performance. I did the same test on ZFS and
> it was 20x faster than UFS at creating hardlinks and 50x faster at
> creating directories BUT consumed ~200MB of RAM doing it. in fact, i
> had assigned 512MB of RAM to the freebsd7 install and, with nothing
> running, no X, no apache, nothing but the basics, an rsync of the ports
> directory (UFS) onto another UFS directory took about 120MB, while that
> same rsync from UFS to ZFS (compress=gzip) took twice the RAM. the ports
> directory is about 220,000 files according to `find /usr/ports | wc -l`

If I found an idle system on which I could create various filesystems, with various parameters, what series of tests should I run to compare 'real world' BackupPC performance?
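As a rough sketch, the quoted 10,000-directory / 10,000-hardlink micro-benchmark could be reproduced with something like the script below. The counts match the post; the `mktemp -d` location is an assumption, and in a real comparison you would point it at the filesystem under test.

```shell
#!/bin/sh
# Micro-benchmark sketch: time creating 10,000 directories, then
# 10,000 hard links to a single file, on the filesystem under test.
# BENCH location is an assumption -- replace with a path on the
# filesystem you want to measure (UFS, ext3, ZFS, ...).
BENCH=$(mktemp -d)
cd "$BENCH" || exit 1

# Phase 1: 10,000 directory creations
time sh -c 'i=0; while [ "$i" -lt 10000 ]; do mkdir "dir$i"; i=$((i+1)); done'

# Phase 2: 10,000 hard links to one file
touch target
time sh -c 'i=0; while [ "$i" -lt 10000 ]; do ln target "link$i"; i=$((i+1)); done'
```

Running the same script against mount points on each candidate filesystem, and watching `top`/`iostat` alongside it as in the quoted test, would give directly comparable numbers.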
I.e., as was noted, creating 10,000 hardlinks could be entirely cached and therefore does not represent real performance; but if we created as many hardlinks as possible for 30 minutes, and then compared the number of hardlinks created, would that be a better test? What about when the filesystem becomes more full, or after it becomes more fragmented, etc.? If we could come up with a series of commands which can be reasonably easily repeated, then it is simply a matter of finding some idle hardware to run the tests on.

Suggestions?

Regards,
Adam
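The time-bounded variant suggested above might look something like this. `DURATION` is set short here for illustration; the post proposes 30 minutes (1800 seconds), and the scratch location is again an assumption to be replaced with the filesystem under test.

```shell
#!/bin/sh
# Time-bounded hardlink benchmark sketch: create hard links flat-out
# for a fixed interval, then report how many were made.  A higher count
# on the same hardware suggests better sustained metadata performance.
DURATION=3                 # seconds; the post suggests 30 minutes (1800)
BENCH=$(mktemp -d)         # assumption: replace with the FS under test
cd "$BENCH" || exit 1
touch target

end=$(( $(date +%s) + DURATION ))
i=0
while [ "$(date +%s)" -lt "$end" ]; do
    ln target "link$i"
    i=$((i+1))
done
echo "created $i hard links in ${DURATION}s"
```

Because the run is bounded by wall-clock time rather than a fixed count, the kernel's cache cannot absorb the whole workload, which addresses the caching objection raised above.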