From: Helmut J. <jar...@ig...> - 2005-04-08 08:18:50
On 7 Apr, Kenneth Porter wrote:

> I'm trying to dump a large (128 GB) partition to a Samba-mounted share on a
> Win2k box over gigabit Ethernet. This is reasonably fast provided that I
> mount the share with large buffers:
>
> FSOPTIONS="sockopt=SO_RCVBUF=65536,sockopt=SO_SNDBUF=65536"
> mount //${SERVER}/BigBackup /mnt/Backup -t ${FSTYPE} -o ${FSOPTIONS}
>
> (Username and password are passed in environment variables here, but could
> also be done by credential file, to hide from ps.)
>
> The dump command line:
>
> BLOCKSIZE=1024
> OUTDIR=/mnt/Backup/Newred
> FILESIZE=1000000
> ERRORLIMIT=500
> COMPRESS=
> dump 0u -b ${BLOCKSIZE} ${COMPRESS} -Mf ${OUTDIR}/sda1/dump -B ${FILESIZE} \
>     /mnt/sda1 -Q ${OUTDIR}/sda1/qfa
>
> Performance stats reported:
>
> DUMP: 127629312 blocks (124638.00MB) on 128 volume(s)
> DUMP: finished in 19082 seconds, throughput 6688 kBytes/sec
>
> That's about 5.25 hours.
>
> I tried to add -j2 (compression) to the dump command line but this kicks
> the expected completion time to over 48 hours, which suggests that the
> network buffers are starving. Is there some way to avoid this? I thought
> the megabyte block size would be sufficient but it's not working when
> compression is enabled.

There are 3 compression methods. You used

    -j ==> bzip2 : best compression but extremely slow

There is also

    -z ==> gzip  : good compression and quite a bit faster
    -y ==> lzo   : moderate compression (still a factor of 2 in most cases)
                   but extremely fast

Depending on your hardware you can expect a lot more than 10 MB/sec.
If you have lzop on your machine, you can test the speed (a rough check is
sketched at the end of this message). Otherwise the sources are free.

--
Helmut Jarausch
Lehrstuhl fuer Numerische Mathematik
RWTH - Aachen University
D 52056 Aachen, Germany
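
A rough way to compare compressor speed along these lines, assuming lzop and
gzip are installed (the file path is only a placeholder, use any large file
on the box):

    # compare raw lzo vs gzip throughput on a local file
    time lzop -c /path/to/largefile > /dev/null
    time gzip -c /path/to/largefile > /dev/null

With dump itself the change is just the flag; taking the command line quoted
above and swapping -j2 for -y would give, for example:

    dump 0u -b ${BLOCKSIZE} -y -Mf ${OUTDIR}/sda1/dump -B ${FILESIZE} \
        /mnt/sda1 -Q ${OUTDIR}/sda1/qfa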