|
From: Kenneth P. <sh...@se...> - 2005-04-08 02:44:52
|
I'm trying to dump a large (128 GB) partition to a Samba-mounted share on a
Win2k box over gigabit Ethernet. This is reasonably fast provided that I
mount the share with large buffers:
FSOPTIONS="sockopt=SO_RCVBUF=65536,sockopt=SO_SNDBUF=65536"
mount //${SERVER}/BigBackup /mnt/Backup -t ${FSTYPE} -o ${FSOPTIONS}
(Username and password are passed in environment variables here, but could
also be supplied via a credentials file to hide them from ps.)
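As a sketch of the credentials-file variant (the path and account name below are hypothetical, and the file must be created with owner-only permissions for this to actually hide anything):

```shell
# Hypothetical credentials file; adjust path and account to taste.
CREDFILE=/tmp/.smbcred
umask 077                          # file comes out mode 600, owner-only
printf 'username=backupuser\npassword=secret\n' > "$CREDFILE"

# The mount then references the file instead of the environment:
# mount //${SERVER}/BigBackup /mnt/Backup -t ${FSTYPE} \
#   -o credentials=${CREDFILE},sockopt=SO_RCVBUF=65536,sockopt=SO_SNDBUF=65536
```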
The dump command line:
BLOCKSIZE=1024
OUTDIR=/mnt/Backup/Newred
FILESIZE=1000000
ERRORLIMIT=500
COMPRESS=
dump -0u -b ${BLOCKSIZE} ${COMPRESS} -M -f ${OUTDIR}/sda1/dump \
    -B ${FILESIZE} -Q ${OUTDIR}/sda1/qfa /mnt/sda1
Performance stats reported:
DUMP: 127629312 blocks (124638.00MB) on 128 volume(s)
DUMP: finished in 19082 seconds, throughput 6688 kBytes/sec
That's about 5.25 hours.
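The reported figures are internally consistent (dump counts in 1 kB blocks, so the arithmetic can be checked directly from the log above):

```python
# Sanity-check dump's reported statistics; all inputs are from the log above.
blocks = 127_629_312      # 1 kB blocks written
seconds = 19_082          # elapsed time

mb = blocks / 1024        # total size in MB; matches "124638.00MB"
kbps = blocks / seconds   # throughput in kB/s; matches "6688 kBytes/sec"
hours = seconds / 3600    # elapsed hours; matches "about 5.25 hours"

print(mb, int(kbps), round(hours, 2))
```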
I tried adding -j2 (bzip2 compression) to the dump command line, but that
pushes the estimated completion time past 48 hours, which suggests the
network buffers are sitting idle while the compressor runs. Is there some
way to avoid this? I thought the 1 MB record size would be enough to keep
the pipe full, but it isn't once compression is enabled.
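One possible workaround, sketched below and untested here, is to move compression out of dump into a separate process, so dump can keep filling the pipe while gzip burns CPU on another core. Note this assumes a single output file; it cannot be combined with -M multi-volume output, and the output filename is my invention:

```shell
# Sketch only: dump writes to stdout, gzip compresses in its own process,
# so dumping and compressing overlap instead of serializing.
# Restore would need to read back through zcat/gunzip.
dump -0u -b 1024 -f - /mnt/sda1 | gzip -1 > ${OUTDIR}/sda1/dump.gz
```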
|