From: Les M. <le...@fu...> - 2005-06-30 19:52:00
On Thu, 2005-06-30 at 12:39, John Pettitt wrote:
> However if I just mount the file system and use gtar cvf /dev/null /mnt
> I get close to 35MB/sec.

Gtar cheats when it sees the output is /dev/null (even if you say
"-f - >/dev/null") and doesn't bother reading the actual data. This is
so it can do a quick estimate with --totals.

> If I replace /dev/null with a tarfile on my backup pool drive I still
> get 25MB/sec. During backups gstat shows the pool array to be less
> than 20% loaded and doing mostly reads (as expected).
>
> Neither CPU is maxed out.
>
> Any ideas why it's getting 20% of the theoretical performance?

BackupPC extracts from the tar to individual files as they are
received. The limiting factor is probably the disk seeks needed to
make the directory and inode entries and to write the data for new
files. On Linux, some filesystems (reiserfs and xfs) are much faster
at creating new files than others; I'm not sure what options exist on
FreeBSD. Anything that indexes the directories should help -- otherwise
creating a file requires a complete scan of the directory to see
whether the name already exists, with the directory locked until the
new name is added. In production you may be doing this in parallel
with several other backup runs as well.

-- 
Les Mikesell
le...@fu...
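A quick way to see the /dev/null shortcut described above is to time
the same archive against /dev/null and against a real output file.
This is only a sketch -- the paths and file sizes are illustrative,
not taken from the original setup:

```shell
# Build a small test tree (illustrative paths):
mkdir -p /tmp/tartest
dd if=/dev/zero of=/tmp/tartest/big bs=1M count=100 2>/dev/null

# Fast: GNU tar recognizes the /dev/null target and skips reading
# file data entirely, so this mostly measures the directory walk.
time tar cf /dev/null -C /tmp tartest

# Realistic: an ordinary output file forces tar to read everything.
time tar cf /tmp/test.tar -C /tmp tartest
```

If the first command is dramatically faster than the second, the
35MB/sec figure was measuring the tree walk, not real read throughput.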