From: Stelian P. <st...@po...> - 2005-06-09 08:45:40
On Wednesday 08 June 2005 at 13:26 -0400, Kolev, Nik wrote:
> Hi,
>
> I am a new dump user with some questions.
>
> First some background. We are adding support that would enable our app
> to run on Linux, currently it runs on Solaris. As part of the app's
> operation we have a script that nightly does a full file system dump
> of two directories (which reside on different partitions) onto tape.
> The tape drive we are using is "Quantum ValueLoader SDLT 320", and we
> use it in stacker mode; thus after each backup we "mt -f $TAPE
> offline" the currently loaded tape and the tape drive loads the one in
> the next slot. On Solaris we are using ufsdump/ufsrestore, and I am
> investigating whether we can use dump/restore on Linux (and hoping
> that we can).
You should be able to.
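For what it's worth, the option letters you are using exist in the Linux
dump as well. Below is a rough, untested sketch of a nightly loop along
the lines of yours, written with Linux-style separate options; the device
names, label, directories and log file are only assumptions, so adjust
them to your setup:

  #!/bin/sh
  # Assumed example values -- substitute your real devices, label and log.
  TAPE=/dev/st0              # rewinding device, used only for the final offline
  TAPE_NOREWIND=/dev/nst0    # non-rewinding device, so the second dump is appended
  TAPE_LABEL=SCOUT_BACKUP
  LOGFILE=/var/log/backup.log
  DIRS="/app/scout /app/solid"

  for d in $DIRS; do
      # -0 full dump, -a ignore tape length estimate, -q abort instead of
      # prompting the operator, -f output device, -L volume label
      dump -0 -a -q -f "$TAPE_NOREWIND" -L "$TAPE_LABEL" "$d" >> "$LOGFILE" 2>&1 || exit 1
  done

  # eject the tape so the autoloader moves to the next slot
  mt -f "$TAPE" offline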
> Here's an example of how things are scripted:
>
> for d in $DIRS; do
>     dump 0aqfL ${TAPE_NOREWIND} ${TAPE_LABEL} ${d} >> ${LOGFILE} 2>&1
>     if [ $? != "0" ]; then
>         exit 1
>     fi
> done
> mt -f ${TAPE} offline
>
> I am using the -a argument because I know that the tape is big enough
> to hold the contents (recursively, of course) of both directories, so
> an end of media should never be reached. We want to dump the first
> directory and then append the dump of the second one. When I run this,
> the dump of the first directory succeeds but the dump of the second
> one does not. The "mt -f ${TAPE} offline" also fails, I think because
> the device was still busy (I've seen this given as a reason, but am not
> sure that it is the reason in this case). Any suggestions, comments, or
> recommendations on what I am doing wrong and what I should be doing
> instead?
>
> Backing up /app/scout to tape.
> DUMP: Date of this level 0 dump: Wed Jun 8 12:31:30 2005
> DUMP: Dumping /dev/sda3 (/app (dir /scout)) to /dev/nst0
> DUMP: Label: SCOUT_BACKUP
> DUMP: Writing 10 Kilobyte records
> DUMP: mapping (Pass I) [regular files]
> DUMP: mapping (Pass II) [directories]
> DUMP: estimated 42518 blocks.
> DUMP: Volume 1 started with block 1 at: Wed Jun 8 12:31:30 2005
> DUMP: dumping (Pass III) [directories]
> DUMP: dumping (Pass IV) [regular files]
> DUMP: Closing /dev/nst0
> DUMP: Volume 1 completed at: Wed Jun 8 12:31:45 2005
> DUMP: Volume 1 42510 blocks (41.51MB)
> DUMP: Volume 1 took 0:00:15
> DUMP: Volume 1 transfer rate: 2834 kB/s
> DUMP: 42510 blocks (41.51MB) on 1 volume(s)
> DUMP: finished in 1 seconds, throughput 42510 kBytes/sec
> DUMP: Date of this level 0 dump: Wed Jun 8 12:31:30 2005
> DUMP: Date this dump completed: Wed Jun 8 12:31:45 2005
> DUMP: Average transfer rate: 2834 kB/s
> DUMP: DUMP IS DONE
> Backing up /app/solid to tape.
> DUMP: Date of this level 0 dump: Wed Jun 8 12:31:45 2005
> DUMP: Dumping /dev/sda2 (/app/solid) to /dev/nst0
> DUMP: Label: SCOUT_BACKUP
> DUMP: Writing 10 Kilobyte records
> DUMP: mapping (Pass I) [regular files]
> DUMP: mapping (Pass II) [directories]
> DUMP: estimated 80568 blocks.
> DUMP: Volume 1 started with block 1 at: Wed Jun 8 12:32:00 2005
> DUMP: dumping (Pass III) [directories]
> DUMP: dumping (Pass IV) [regular files]
> DUMP: write error 68700 blocks into volume 1: Input/output error
So the second dump fails, not immediately but only after having
written 68700 blocks (about 68 MB). I'd say that you either have a bad
tape or some SCSI problem.
Did you try with another tape?
> DUMP: Do you want to rewrite this volume? - forced abort
> DUMP: The ENTIRE dump is aborted.
> Backup to tape failed.
> Switching tapes.
> /dev/st0: Device or resource busy
I'm not sure about this one. Since the tape driver failed before, it
could be anything.
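If you want to see what is still holding the device when that message
shows up, something like this should tell you (assuming fuser and/or
lsof are installed on the box):

  # list any process that still has the tape device open
  fuser -v /dev/st0 /dev/nst0
  # or
  lsof /dev/st0 /dev/nst0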
I suggest you do some more tape tests, like trying a
dd if=/dev/zero of=/dev/nst0 bs=10k
and see how much data you are able to write to the tape. You could
also look at the kernel logs and search for messages from the SCSI
tape driver.
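Something along these lines, just as a sketch (this assumes /dev/nst0 is
the right non-rewinding device and that the loaded tape may be
overwritten):

  # rewind, then write zeroes until the drive reports an error;
  # dd prints how many blocks it managed to write
  mt -f /dev/nst0 rewind
  dd if=/dev/zero of=/dev/nst0 bs=10k

  # then look for messages from the st (SCSI tape) driver
  dmesg | grep -i 'st0'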
Stelian.
--
Stelian Pop <st...@po...>