Long-time user of dump on many different OSes, starting with SunOS 4. I would like to continue to use dump.
I did some recent testing on Solaris and Linux of various flavors. The idea was that dumping to /dev/null gives the best-case speed. dump -0af /dev/null filesys seems to max out at ~15000 kB/s on multiple platforms under multiple OSes.
Trouble is, I can dd to an LTO drive orders of magnitude faster. I am moving up to LTO3 and plan to use LVM snapshots to dump. It just seems that there is some limitation in dump that keeps it from streaming data fast enough to stop the drive from shoeshining.
Any way to ask dump to use bigger buffers?
Am I missing some tweak that must be done?
Does the -b parameter affect the dump speed?
Does changing the SLAVES define from 3 to something bigger or smaller change anything?
Depending on the OS, slightly. I have messed around with -b on all of them. Many of the implementations max out at -b 64. I have a few RH7.3 systems at -b 64. The systems I plan on using the LTO on are FC5. FC5 allows -b up to 1024, but -b 64 or -b 128 seem to do best when writing to /dev/null, spiking up to 32 MB/s but averaging 15 MB/s. -b 1024 is actually slower.
I have only used the binary dump included with each OS. On FC5 it is currently 0.4b41-1.2.
I am not familiar with the SLAVES define or what it means.
The '#define SLAVES 3' is in dump/tape.c ...
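For anyone else trying this, here is a sketch of the edit itself, demonstrated on a scratch copy so nothing real is touched (the dump/tape.c path reflects the 0.4b41 source layout mentioned above; adjust to wherever your tree is unpacked):

```shell
# Demo on a scratch copy of the file; in the real tree you would run
# the sed line against dump/tape.c in the unpacked source directory.
mkdir -p /tmp/dumpsrc/dump
printf '#define SLAVES 3\n' > /tmp/dumpsrc/dump/tape.c

# Bump the slave-process count from 3 to 6.
sed -i 's/#define SLAVES 3/#define SLAVES 6/' /tmp/dumpsrc/dump/tape.c

# Verify the change took.
grep 'define SLAVES' /tmp/dumpsrc/dump/tape.c
```

After that it is the usual ./configure && make in the source tree to get a binary with the new value.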
I will download the current source and try my hand at a compile. Worth a try.
Further notes. Mixed up the content of the filesystem a little more and boosted the size. Backing up a 160 GB filesystem with -b 1024 resulted in a best average of 27 MB/s. Throughput seems very dependent on the content of the filesystem and seems to creep up toward terminal speed, increasing as it goes. Also noted that with this larger filesystem, dump reports '100.00% ... finished in 0:00' repeatedly (6 times in the last pass).
I have now compiled with SLAVES set to 6 and am running another test now. Will report back with results.
Bounced around with a couple of options. I guess that SLAVES is the thread count. In ps -ef | grep dump I get 5 processes with SLAVES at 3, 8 with 6, 7 with 5, 4 with 2, and 3 with 1. Couldn't go lower than 1 because of a divide-by-zero.
Best speed with SLAVES set to 1: 35 MB/s, and a time of 1:20, down from 1:41 at 3 SLAVES.
Just in case more info is desired...
Xeon 2.8 GHz CPU, 1 GB RAM
Fedora Core 5 (2.6.16-1.2080_FC5smp)
35 MB/s is not that bad... I'm not sure there are further optimisations to make.
Dump does raw reads on the disk, using the ext2 libraries (in userspace) to interpret the filesystem structures. These libraries are less optimized than their kernel counterparts...
What speeds can you achieve with tar? (But do not tar directly to /dev/null, because tar detects it and optimizes the work away; tar to a pipe redirected to /dev/null instead.)
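A quick way to run that comparison; the /tmp/tartest path here is just stand-in sample data, so point tar at the real mount point to get meaningful numbers:

```shell
# Make some sample data to read (substitute your real filesystem).
mkdir -p /tmp/tartest
dd if=/dev/zero of=/tmp/tartest/sample bs=1M count=16 2>/dev/null

# Pipe tar into dd so GNU tar's /dev/null shortcut doesn't kick in;
# dd's final stderr line reports bytes copied and throughput.
tar cf - /tmp/tartest 2>/dev/null | dd of=/dev/null bs=1024k
```

Comparing that rate against dump's on the same filesystem shows how much of the ceiling is dump's userspace ext2 reading versus the disk itself.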
Also try a raw dd of the filesystem device (dd if=/dev/sda1 of=/dev/null bs=1024k).