From: Jean-Yves B. <jy...@er...> - 2007-02-19 09:52:58
Hello,
I use dump to backup a large file system to a LTO2 tape changer.
The system is a Debian Sarge running a 2.6.8-3-686-smp kernel.
The hardware is an HP LTO-2 tape drive with an Overland LoaderXpress tape
changer. The system has 2 GB of RAM and 2 GB of swap.
The version of dump/restore is the latest one (dump 0.4b41, using
libext2fs 1.37 of 21-Mar-2005), compiled by myself.
The file system is ext3; it is 1.3 TB in size, with 300 GB to back up.
The data to back up comes from BackupPC, a simple but efficient
disk-based backup system.
Dump works perfectly. Here is the command I run:
/usr/local/sbin/dump -0 -b 64 -f /dev/st0 -M -F \
/usr/local/sbin/save_tape_change.sh -I 0 /dev/lvm0/backuppc
The script save_tape_change.sh drives the tape changer.
The data is dumped to the tapes in 4h49m.
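For reference, a tape-changer script for the -F option looks roughly like
the sketch below (this is a simplified example, not the exact
save_tape_change.sh; the changer device /dev/sg1 and the sequential slot
numbering are assumptions). dump and restore run the script at each end of
tape, passing the tape device and the current volume number; an exit status
of 0 means "continue without asking the operator":

#!/bin/sh
# Arguments passed by dump/restore at each end of tape:
#   $1 = tape device (e.g. /dev/st0)
#   $2 = current volume number
TAPE=$1
VOLUME=$2
CHANGER=/dev/sg1                    # SCSI changer device (assumption)
NEXT_SLOT=$((VOLUME + 1))           # assume tapes sit in sequential slots

mt -f "$TAPE" offline               # rewind and eject the current tape
mtx -f "$CHANGER" unload            # move it back to its storage slot
mtx -f "$CHANGER" load $NEXT_SLOT   # load the next tape into the drive
exit 0                              # 0 = continue without prompting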
The problem comes when I run restore to check that the tapes are OK.
Here is the restore command I run:
/usr/local/sbin/restore -r -y -b 64 -f /dev/st0 -M -F \
/usr/local/sbin/save_tape_change.sh -N
And restore takes hours, filling up memory and swap.
My question is: how could I run restore in roughly the same time dump takes?
How much memory do I need (or how can I compute the memory needed)?
How could I run restore using less memory?
Thanks for your answers.