Just some more info: I ran the same directory-creation test on a live ZFS filesystem on a Nexenta install. I got very similar performance differences between ext3 and ZFS as I did in VMware, except that ext3 went from 45 seconds to 15 seconds, and ZFS went from about 3 seconds to what felt instant.

I also tried to make an observation here in regard to delayed writes. On ext3, when the directory script finished, the hard disk light went out immediately (as expected), but the script on the ZFS volume kept the light on for about 4 seconds after finishing before it went out. I repeated the script 3 times and got pretty much the same result each time. ZFS is definitely doing a delayed write and reporting faster performance than reality, BUT I still wall-clock it at about 6 seconds versus the 15 seconds for ext3.
I would expect that this quote from Sun's website is the explanation:
ZFS is based on a transactional object model that removes most of the traditional constraints on the order of issuing I/Os, which results in huge performance gains.
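For reference, the directory-creation test was along these lines (a sketch, since the original script wasn't posted; N and TARGET are illustrative defaults, and the trailing sync is there so the wall-clock time catches ZFS's delayed writes):

```shell
#!/bin/sh
# Sketch of the directory-creation benchmark.
# N and TARGET are illustrative defaults, not the original values.
N=${N:-10000}
TARGET=${TARGET:-./dirbench.$$}
mkdir -p "$TARGET"
start=$(date +%s)
i=0
while [ "$i" -lt "$N" ]; do
    mkdir "$TARGET/dir.$i"
    i=$((i + 1))
done
sync   # flush delayed writes so the timing reflects real I/O
end=$(date +%s)
echo "created $N directories in $((end - start)) seconds"
```

Running it once on each filesystem and comparing against the reported time shows how much of the "speed" is deferred writeback.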
dan wrote:
> that covers all but the webserver. I installed apache22 via ports but I
> don't know what I'm doing wrong, because when I install backuppc from the
> tgz, I can browse to the backuppc directory I created in apache's root
> and click on the CGI, but it just spits out text. Basically I'm just
> getting a file listing. I did try to install mod_perl via ports but don't
> know how to configure apache for it.
>
> Can anyone here help me through this? I would like to write a guide when
> I'm done, but I have to be able to get the system up first! :)

The stock setup probably already has a cgi-bin directory configured for
script execution. If you install backuppc so the BackupPC_Admin script
lands there, it should just work. Otherwise you'll need something like

ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"

in your httpd.conf pointing to the real location. Mod-perl is a little
more complicated, so I'd try running it as a CGI first.
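A fuller httpd.conf fragment for the CGI setup might look like this (hypothetical; the paths are assumptions, so adjust them to wherever the apache22 port and BackupPC_Admin actually landed on your system):

```apache
# Hypothetical fragment -- paths are assumptions, adjust to your install.
ScriptAlias /cgi-bin/ "/usr/local/www/cgi-bin/"
<Directory "/usr/local/www/cgi-bin/">
    Options +ExecCGI
    AddHandler cgi-script .cgi .pl
    # Apache 2.2 access syntax:
    Order allow,deny
    Allow from all
</Directory>
```

The "just spits out text" symptom usually means the directory is served as plain files rather than handled by mod_cgi, which is what ScriptAlias plus ExecCGI fixes.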
> On a side note, I did some rudimentary benchmarks on an ubuntu 7.10
> server install and a freebsd7 install in vmware server. UFS was about
> 10% slower than ext3 at creating 10,000 directories and about 20% slower
> at creating 10,000 hard links to one file (same virtual hardware, though
> virtual hardware leaves some margin for error). I watched top and iostat
> on both systems, and the disk was definitely the only resource being
> consumed that would affect performance.

There are options on both UFS and ext3 regarding when metadata must sync
that probably affect speed more than the differences between the
filesystems themselves.
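To illustrate the kind of knobs in question (illustrative commands only; the device names are made up, and tunefs must be run on an unmounted filesystem):

```shell
# ext3 on Linux: relax the default data=ordered journaling mode
mount -o data=writeback /dev/sda1 /mnt/test

# UFS on FreeBSD: enable soft updates for asynchronous metadata
tunefs -n enable /dev/ad0s1d
```

With settings like these in play, a raw UFS-vs-ext3 number says as much about the sync policy as about the filesystem.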
> I did the same test on ZFS, and it was 20x faster than UFS at creating
> hard links and 50x faster at creating directories, BUT it consumed
> ~200MB of RAM doing it. In fact, I had assigned 512MB of RAM to the
> freebsd7 install, and with nothing running (no X, no apache, nothing but
> the basics), an rsync of the ports directory (UFS) onto another UFS
> directory took about 120MB, while that same rsync from UFS to ZFS
> (compress=gzip) took twice the RAM. The ports directory is about 220,000
> files according to `find /usr/ports | wc -l`.
>
> My previous tests and research did say that ZFS could use 512MB to 1GB
> of RAM when using compression, so this is not so surprising, as the gzip
> compression is bound to have some overhead, but I thought everyone
> should be aware of it. You need at least 1GB of RAM to run a
> freebsd7/backuppc/zfs compressed server. The standard compression used
> half the memory and half the CPU but had a 2.1x compression ratio versus
> the 2.8x with gzip. I have 2GB RAM and an e8400 3GHz dual core for the
> test, so I will take the better compression.
>
> If you have a fast enough machine, ZFS will definitely offer a
> performance boost because of hardlink creation time, which is the slow
> point in backuppc as far as the software is concerned.
>
> Also, on real hardware I find ZFS raidz very effective, but in virtual
> machines it is all but useless, so this was done on single disks. On
> real hardware, raidz is very competitive with raid5 but doesn't have the
> raid5 write hole AND integrates the volume manager, which is awesome.

I'm not sure if virtual machine performance is going to tell you how it
will work on real hardware. You'll be sitting on top of some other host
filesystem with its own buffering and remapping.
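For completeness, the hard-link test was the same shape as the directory one (again a sketch; the original script wasn't posted, and N and TARGET are illustrative defaults):

```shell
#!/bin/sh
# Sketch of the hard-link benchmark: N links to a single file.
# N and TARGET are illustrative defaults, not the original values.
N=${N:-10000}
TARGET=${TARGET:-./linkbench.$$}
mkdir -p "$TARGET"
: > "$TARGET/seed"    # the one file all the links point at
start=$(date +%s)
i=0
while [ "$i" -lt "$N" ]; do
    ln "$TARGET/seed" "$TARGET/link.$i"
    i=$((i + 1))
done
sync
end=$(date +%s)
echo "created $N hard links in $((end - start)) seconds"
```

This is the operation BackupPC's pooling leans on, which is why the hard-link numbers matter more than the directory ones.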