On Thursday, 08 May 2008, at 11:37 +0200, Tino Schwarze wrote:
> On Wed, May 07, 2008 at 11:11:12PM +0200, Sam Przyswa wrote:
> > > > We have to move our BackupPC server to a new machine. How can we copy the
> > > > entire BackupPC directory (120 GB) to another machine?
> > > >
> > > > I tried rsync; it crashed after a long, long time. I tried scp, but it
> > > > doesn't preserve the hard links, and the destination directory grew far
> > > > too large after transferring about 50% of the files...
> > > >
> > > > What is the right way to transfer a BackupPC pool with 120 GB of files?
> > >
> > > An often-suggested method is to use dd and grow the filesystem
> > > afterwards. This will probably be a lot faster, as dd doesn't need to
> > > know anything about the file structure or things like hard links.
> > Yes, but the machines are on the Internet, not in our office, so we have
> > to do this over the net. I have never done a dd over ssh!?
> It would work like this:
> oldserver# dd if=/dev/backuppc-filesystem bs=1M | ssh -c blowfish -C -o CompressionLevel=9 newserver "dd of=/dev/newfilesystem bs=1M"
> (Of course, the filesystem has to be unmounted.) Later, you resize the
> filesystem to its desired size, e.g. resize2fs /dev/newfilesystem
> Depending on the network connection between your servers, you might want
> to skip the compression (-C and -o).
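For reference, the suggested dd-over-ssh transfer can be sketched like this. The sketch below stands in ordinary files for the block devices (the real device names, `/dev/backuppc-filesystem` and `/dev/newfilesystem`, are taken from the message above) so it can be tried safely without root or a second host; the real commands are shown in the comments. The checksum step at the end is not in the original advice, but it is a cheap way to confirm the copy arrived intact before mounting it.

```shell
set -e

# Fake "source filesystem": 1 MB of random data standing in for the
# unmounted LVM volume /dev/backuppc-filesystem.
dd if=/dev/urandom of=src.img bs=1M count=1 2>/dev/null

# The actual transfer from the thread would be:
#   dd if=/dev/backuppc-filesystem bs=1M \
#     | ssh -c blowfish -C -o CompressionLevel=9 newserver \
#         "dd of=/dev/newfilesystem bs=1M"
# Here the remote end is simulated with a local pipe:
dd if=src.img bs=1M 2>/dev/null | dd of=dst.img bs=1M 2>/dev/null

# Verify both ends match before trusting the copy -- a mismatch here
# would show up later as fsck errors on the destination.
md5sum src.img dst.img
```

On the real machines you would run the same checksum on both block devices (source and destination) and compare the results before mounting or resizing anything.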
I did that: the filesystem seems to have been copied from one LVM volume to
another (remote) one. I can mount it, but only read-only, and fsck finds a
lot of errors; in repair mode it can't write to the filesystem!?
Do you have any ideas?