From: Arnaud A. <arn...@um...> - 2000-07-18 12:00:33
Hello, thanks for your last messages. I've installed OpenSSH in order to
secure my connection. Now I have a new problem: the dump starts, but then I
get the messages below. If anybody has an idea, I would be happy to try it!

  # dump -0unf - /dev/sda7 | ssh barbeyrac dd of=/dev/rmt/0
    DUMP: Date of this level 0 dump: Tue Jul 18 13:45:06 2000
    DUMP: Date of last level 0 dump: the epoch
    DUMP: Dumping /dev/sda7 (/usr) to standard output
    DUMP: Label: none
    DUMP: mapping (Pass I) [regular files]
  ro...@ba...'s password:
    DUMP: mapping (Pass II) [directories]
    DUMP: estimated 1402620 tape blocks.
    DUMP: Volume 1 started at: Tue Jul 18 13:45:16 2000
    DUMP: dumping (Pass III) [directories]
    DUMP: SIGSEGV: ABORTING!
    DUMP: SIGSEGV: ABORTING!
    DUMP: SIGSEGV: ABORTING!
    DUMP: SIGSEGV: ABORTING!
    DUMP: SIGSEGV: ABORTING!
  Erreur de segmentation  [segmentation fault]

--
Arnaud Antonelli, Ingenieur d'Etudes / Administrateur Reseaux
Centre de Calcul Applique aux Sciences Humaines (CCASH), Strasbourg
Email: Arn...@um...

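A related approach for anyone hitting trouble with the dump-piped-into-ssh
form above: the dump releases discussed on this list document an RSH
environment variable, so dump can drive the remote tape itself with ssh
underneath its usual host:device syntax. A minimal sketch, reusing the host
and devices from this thread and assuming passwordless ssh plus an rmt
binary on the remote side:

    # Hypothetical alternative to the pipe above: let dump talk to the
    # remote tape via ssh. Host and device names are taken from the thread.
    RSH=/usr/bin/ssh
    export RSH
    dump -0u -f barbeyrac:/dev/rmt/0 /dev/sda7
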
From: David G. <dj...@dr...> - 2000-07-12 11:50:35
I am running Red Hat 6.1 with dump 0.4b18-1. On some disks dump gives much
poorer performance than tar. I can improve it somewhat by using hdparm to
change the number of blocks read ahead. Is there any better way to improve
the performance?

The example here is an old machine with a slow disk. The newer machine has
the same performance issue on the 1.6G disk that came with it; the 20G disk
I added later doesn't show this problem.

Commands used for testing:

  time tar --atime-preserve -c -l -b 64 -f - -C /mnt/pdp . | cat > /dev/null
  time dump 0bBfu 32 20971520 /dev/null /mnt/pdp

For read-ahead 8 (hdparm raw disk speed 0.91 MB/sec):

  tar:  3.09user 172.07system 5:41.55elapsed 51%CPU (0avgtext+0avgdata 0maxresident)k
        0inputs+0outputs (76506major+98minor)pagefaults 0swaps
  dump (50% the speed of tar):
        DUMP: finished in 661 seconds, throughput 448 KBytes/sec
        16.53user 194.68system 11:22.68elapsed 30%CPU (0avgtext+0avgdata 0maxresident)k
        0inputs+0outputs (193major+200minor)pagefaults 0swaps

For read-ahead 64 (hdparm raw disk speed 1.42 MB/sec):

  tar:  3.54user 165.21system 5:35.14elapsed 50%CPU (0avgtext+0avgdata 0maxresident)k
        0inputs+0outputs (76540major+98minor)pagefaults 0swaps
  dump (85% the speed of tar):
        DUMP: finished in 378 seconds, throughput 784 KBytes/sec
        17.29user 207.91system 6:32.97elapsed 57%CPU (0avgtext+0avgdata 0maxresident)k
        0inputs+0outputs (185major+201minor)pagefaults 0swaps

Thanks,
David Gesswein
http://www.pdp8.net/ -- Old computers with blinkenlights

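If you want to repeat this kind of read-ahead comparison on your own disks,
a rough benchmarking sketch follows. The device and mount point are
placeholders, and dumping to /dev/null (as in the message above) keeps the
tape out of the measurement:

    #!/bin/sh
    # Compare dump throughput at several read-ahead settings.
    # Run as root on an otherwise idle machine; /dev/hda and /mnt/pdp
    # are placeholders for your disk and filesystem.
    for ra in 8 32 64 128; do
        hdparm -a "$ra" /dev/hda        # set filesystem read-ahead
        echo "=== read-ahead $ra ==="
        time dump -0 -b 32 -f /dev/null /mnt/pdp
    done
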
From: Bernhard E. <ber...@gm...> - 2000-07-10 07:29:32
Arnaud Antonelli wrote:
> > > I think I've forgotten something, but what?
> > > Could you help me?
> >
> > Can you do rsh itself? I.e. "rsh sun bash -i"?
>
> Exactly: when I run "rsh sun bash -i" I get a permission denied.

So, fix rsh first and don't blame dump for it. Create a ~root/.rhosts file
on sun, put "<client-machine> root" in it, and chmod 600 it.

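Spelled out as commands, that .rhosts setup looks roughly like this (run on
the Solaris host "sun"; "linuxclient" is a placeholder for the machine
running dump, and remember that rhosts authentication sends everything in
clear text, as noted elsewhere in this thread):

    # On sun, as root: allow root@linuxclient to rsh in without a password.
    echo 'linuxclient root' >> ~root/.rhosts
    chmod 600 ~root/.rhosts

    # Back on the Linux client, this should now give a shell with no prompt:
    rsh sun bash -i
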
From: Bernhard E. <ber...@gm...> - 2000-07-06 16:12:36
Arnaud Antonelli wrote:
> I want to backup from a Linux machine to a Sun Solaris machine.
> My RSH env. variable is set to rexec.

AFAIR you don't need to set any environment if you just want to use rsh.

> I use this command:
>
>   dump -0 -u -a -f sun:/dev/rmt/0 /usr
>
> I obtain these messages:
>   DUMP: Connection to sun established.
>   DUMP: Date of this level 0 dump: Thu Jul 6 14:37:24 2000
>   DUMP: Date of last level 0 dump: the epoch
>   DUMP: Dumping /dev/sda7 (/usr) to /dev/rmt/0 on host barbeyrac
>   DUMP: Label: none
>   DUMP: mapping (Pass I) [regular files]
>   DUMP: mapping (Pass II) [directories]
>   DUMP: estimated 1402610 tape blocks.
>   DUMP: Lost connection to remote host.
>
> I think I've forgotten something, but what?
> Could you help me?

Can you do rsh itself? I.e. "rsh sun bash -i"?

From: Arnaud A. <arn...@um...> - 2000-07-06 13:54:08
I want to back up from a Linux machine to a Sun Solaris machine. My RSH
env. variable is set to rexec. I use this command:

  dump -0 -u -a -f sun:/dev/rmt/0 /usr

I obtain these messages:

  DUMP: Connection to sun established.
  DUMP: Date of this level 0 dump: Thu Jul 6 14:37:24 2000
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/sda7 (/usr) to /dev/rmt/0 on host barbeyrac
  DUMP: Label: none
  DUMP: mapping (Pass I) [regular files]
  DUMP: mapping (Pass II) [directories]
  DUMP: estimated 1402610 tape blocks.
  DUMP: Lost connection to remote host.

I think I've forgotten something, but what? Could you help me? Thanks!

--
Arnaud Antonelli, Ingenieur d'Etudes / Administrateur Reseaux
Centre de Calcul Applique aux Sciences Humaines (CCASH), Strasbourg
Email: Arn...@um...

From: Stelian P. <st...@ca...> - 2000-07-06 07:49:27
On Wed, 5 Jul 2000, Michael Heldebrant wrote:
> I'm looking for a good howto on how to get both semi secure (suid dump and
> rcmd?)

This is not secure at all (it enables you to do network backups, but all
data/passwords are transmitted in clear text).

> and really secure (ssh) remote dumps working.

If you want security (and accept lower speeds) this is the way to do it...

> I've looked around in the man pages and the linux-howto archives but
> haven't found a good set of steps to follow.

Did you also look at the dump-users archives? :)

> If anyone has some tips or knows where this document exists please email
> me back directly as I am not subscribed to the mailing list as of yet.

No comment :(

> I've tried with ssh but I'm not sure how to get RSA passphraseless
> authentication working since I'm relatively new to ssh and its many
> options.

man ssh-agent
man sshd, and search for the RhostsRSAAuthentication option.

--
Stelian Pop <sp...@ca...>, Captimark, Paris, France

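For the passphraseless-RSA part, the usual recipe looks roughly like the
sketch below. The exact ssh-keygen flags shown are the later OpenSSH form
rather than the SSH-1 commands current when this was written, and
"tapehost" is a placeholder:

    # Option 1: a default RSA key with no passphrase (anyone who copies
    # ~/.ssh/id_rsa can then log in as you).
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    ssh root@tapehost 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys \
        && chmod 600 ~/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub

    # Option 2: keep the passphrase and let ssh-agent cache it per session.
    eval "$(ssh-agent -s)"
    ssh-add
    RSH=/usr/bin/ssh dump -0u -f tapehost:/dev/nst0 /usr
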
From: Michael H. <hm...@at...> - 2000-07-06 03:50:41
I'm looking for a good howto on how to get both semi secure (suid dump and
rcmd?) and really secure (ssh) remote dumps working. I've looked around in
the man pages and the linux-howto archives but haven't found a good set of
steps to follow. If anyone has some tips or knows where this document
exists, please email me back directly, as I am not subscribed to the
mailing list as of yet.

I've tried with ssh but I'm not sure how to get RSA passphraseless
authentication working, since I'm relatively new to ssh and its many
options.

--mike

From: Bernhard E. <be...@be...> - 2000-07-01 14:05:02
Kenneth Porter wrote:
> On Fri, 30 Jun 2000 20:35:17 +0100 (IST), Chris Bradshaw wrote:
>
> > When I run the dump command, it seems to wildly miscalculate how many
> > tapes it will need to fit my dumps. For example, it tells me it needs
> > 34.57 tapes to dump ~800Mb onto a DDS SCSI tape drive.
>
> Dump assumes a fairly ancient tape technology. You need to override its
> assumptions with the b and B options. "man dump".

Yeah, read about dump in "Unix System Administration" (Æleen Frisch) from
O'Reilly: the values dump is based on are the characteristics of 9-track
tapes. For 4 mm tapes (DDS-1, 2 GB) they publish "-s 5000" (brit. feet) and
"-d 42500" (bytes per inch). These aren't the actual values; they are
chosen for compatibility with older versions of dump. As Stelian told you,
you can use "-a" instead.

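In command form, the two fixes mentioned in this thread look like this (the
filesystem and tape device are placeholders; the -s/-d values are the
compatibility numbers quoted from the book above, not physical tape
properties):

    # Pretend geometry so the estimate fits a 2 GB DDS-1 cartridge:
    dump -0u -s 5000 -d 42500 -f /dev/nst0 /home

    # Simpler: write until the drive itself reports end-of-media:
    dump -0u -a -f /dev/nst0 /home
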
From: Stelian P. <po...@cy...> - 2000-07-01 12:34:42
On Fri, 30 Jun 2000, Kenneth Porter wrote:
> > When I run the dump command, it seems to wildly miscalculate how many
> > tapes it will need to fit my dumps. For example, it tells me it needs
> > 34.57 tapes to dump ~800Mb onto a DDS SCSI tape drive.
>
> Dump assumes a fairly ancient tape technology. You need to override its
> assumptions with the b and B options. "man dump".

Or even better, use the 'a'utomatic end-of-tape detection option of dump.

Stelian.

--
Stelian Pop <po...@cy...>

From: Kenneth P. <sh...@we...> - 2000-06-30 19:54:25
On Fri, 30 Jun 2000 20:35:17 +0100 (IST), Chris Bradshaw wrote:
> When I run the dump command, it seems to wildly miscalculate how many
> tapes it will need to fit my dumps. For example, it tells me it needs
> 34.57 tapes to dump ~800Mb onto a DDS SCSI tape drive.

Dump assumes a fairly ancient tape technology. You need to override its
assumptions with the b and B options. "man dump".

Ken
mailto:sh...@we...
http://www.sewingwitch.com/ken/
http://www.harrybrowne2000.org/

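If you prefer to state the capacity explicitly rather than rely on -a, the
b and B overrides look roughly like this; the 32 KB record size and the
~2 GB figure are placeholders for a DDS drive, and B is counted in 1 KB
blocks per volume:

    # 32 KB records, volume size declared as roughly 2 GB (in 1 KB blocks):
    dump -0u -b 32 -B 2000000 -f /dev/nst0 /home
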
From: <ch...@be...> - 2000-06-30 19:39:54
Hi... I am (trying) to use dump 0.4b16 on Red Hat 6.1. I installed the
software via the RPM which I downloaded from SourceForge. When I run the
dump command, it seems to wildly miscalculate how many tapes it will need
to fit my dumps. For example, it tells me it needs 34.57 tapes to dump
~800Mb onto a DDS SCSI tape drive.

I was just wondering if anyone could tell me if there is something I am
doing wrong, or if there are any options I could set? Or should I have
compiled the dump/restore programs from source?

Any help much appreciated. Thanks in advance.

Chris.

From: Bernhard E. <be...@be...> - 2000-06-27 10:32:44
Stelian Pop wrote:
[...]
> if you set the blocksize with 'mt setblk 32768', are you able to do a
> 'dd obs=32k' onto the tape ?
>
> I'm really not sure it's dump's fault or the driver/drive fault...

It works well using dump with 32KB blocks:

  # mt setblk 32768
  # /sbin/dump 0ab 32 /tmp
    DUMP: Date of this level 0 dump: Tue Jun 27 12:16:32 2000
    DUMP: Date of last level 0 dump: the epoch
    DUMP: Dumping /dev/hdb11 (/tmp) to /dev/nst0
    DUMP: Label: none
    DUMP: mapping (Pass I) [regular files]
    DUMP: mapping (Pass II) [directories]
    DUMP: estimated 57048 tape blocks.
    DUMP: Volume 1 started at: Tue Jun 27 12:16:32 2000
    DUMP: dumping (Pass III) [directories]
    DUMP: dumping (Pass IV) [regular files]
    DUMP: Closing /dev/nst0
    DUMP: Volume 1 completed at: Tue Jun 27 12:18:30 2000
    DUMP: Volume 1 took 0:01:58
    DUMP: Volume 1 transfer rate: 483 KB/s
    DUMP: 57058 tape blocks (55.72MB) on 1 volume(s)
    DUMP: finished in 116 seconds, throughput 491 KBytes/sec
    DUMP: Date of this level 0 dump: Tue Jun 27 12:16:32 2000
    DUMP: Date this dump completed: Tue Jun 27 12:18:30 2000
    DUMP: Average transfer rate: 483 KB/s
    DUMP: DUMP IS DONE

Now try dd:

  # mt setblk 32768
  # /sbin/dump 0af - /tmp | dd obs=32k of=$TAPE
    DUMP: Date of this level 0 dump: Tue Jun 27 12:22:01 2000
    DUMP: Date of last level 0 dump: the epoch
    DUMP: Dumping /dev/hdb11 (/tmp) to standard output
    DUMP: Label: none
    DUMP: mapping (Pass I) [regular files]
    DUMP: mapping (Pass II) [directories]
    DUMP: estimated 57048 tape blocks.
    DUMP: Volume 1 started at: Tue Jun 27 12:22:02 2000
    DUMP: dumping (Pass III) [directories]
    DUMP: dumping (Pass IV) [regular files]
    DUMP: Volume 1 completed at: Tue Jun 27 12:23:53 2000
    DUMP: Volume 1 took 0:01:51
    DUMP: Volume 1 transfer rate: 513 KB/s
    DUMP: 57036 tape blocks (55.70MB)
    DUMP: finished in 111 seconds, throughput 513 KBytes/sec
    DUMP: Date of this level 0 dump: Tue Jun 27 12:22:01 2000
    DUMP: Date this dump completed: Tue Jun 27 12:23:53 2000
    DUMP: Average transfer rate: 513 KB/s
    DUMP: DUMP IS DONE
  dd: /dev/nst0: Input/output error
  114060+0 records in
  1782+0 records out

Using 10KB blocks with the tape drive set to 32KB:

  # /sbin/dump 0af - /tmp | dd obs=10k of=$TAPE
    DUMP: Date of this level 0 dump: Tue Jun 27 12:24:45 2000
    DUMP: Date of last level 0 dump: the epoch
    DUMP: Dumping /dev/hdb11 (/tmp) to standard output
    DUMP: Label: none
    DUMP: mapping (Pass I) [regular files]
    DUMP: mapping (Pass II) [directories]
    DUMP: estimated 57048 tape blocks.
    DUMP: Volume 1 started at: Tue Jun 27 12:24:45 2000
    DUMP: dumping (Pass III) [directories]
  dd: /dev/nst0: Input/output error
  20+0 records in
  0+0 records out
    DUMP: Broken pipe
    DUMP: The ENTIRE dump is aborted.

  # /sbin/dump 0a /tmp
    DUMP: Date of this level 0 dump: Tue Jun 27 12:25:35 2000
    DUMP: Date of last level 0 dump: the epoch
    DUMP: Dumping /dev/hdb11 (/tmp) to /dev/nst0
    DUMP: Label: none
    DUMP: mapping (Pass I) [regular files]
    DUMP: mapping (Pass II) [directories]
    DUMP: estimated 57048 tape blocks.
    DUMP: Volume 1 started at: Tue Jun 27 12:25:35 2000
    DUMP: dumping (Pass III) [directories]

  [waiting... nothing happens...]

From: Stelian P. <st...@ca...> - 2000-06-27 10:12:06
On Tue, 27 Jun 2000, Bernhard Erdmann wrote:
> Yes, I stopped it 9 minutes later.
>
> > Maybe you should enable debug in dump (./configure --enable-debug)
> > in order to have some traces...
>
> Well, I can compile dump with this switch, but then? How to use the
> traces?

Well, the traces should tell you exactly how many bytes dump tries to
write to the device... maybe you'll see something wrong.

Just a thought: if you set the blocksize with 'mt setblk 32768', are you
able to do a 'dd obs=32k' onto the tape?

I'm really not sure whether it's dump's fault or the driver/drive's fault...

--
Stelian Pop <sp...@ca...>, Captimark, Paris, France

From: Bernhard E. <be...@be...> - 2000-06-27 10:09:45
Stelian Pop wrote:
[...]
> > > Try to run a /sbin/dump 0ab 32 /tmp/. Maybe it helps...
> >
> > I already tried it, didn't help then, if I remember correctly. Now it
> > works. ??
>
> Well, I don't remember having changed something about that for a long
> time... Maybe you tried it with a very old version of dump/restore,
> and haven't tried since then again ?

No, some weeks ago I used 0.4b16.

> > Let's have a try:
> > # /sbin/restore rb 10
> >
> > With "-b 10", restore works well reading a tape written by "dump 0a
> > /tmp". Maybe a bug in restore in figuring out the tape's blocksize?
>
> Sure. I'll take a look and will contact you later if I cannot reproduce
> this here...

It happened to me using an old Seagate DDS-2 and an HP DDS-2 drive (HP
C1533A) on Red Hat 6.1 boxes using kernels 2.2.14 to 2.2.16. The Seagate
was connected to an Adaptec card using the aic7xxx driver; the HP is
connected to a Dawicontrol DC2974 (AMD chip) using the tmscsim driver.

A DDS-3 drive (HP C1539A or C1557A) on another RHL 6.1 box works well
without telling restore a blocksize. (Adaptec on-board, using aic7xxx.)

From: Bernhard E. <be...@be...> - 2000-06-27 09:59:28
> > # mt setblk 32768
> > # /sbin/dump 0a /tmp
> >   DUMP: Date of this level 0 dump: Tue Jun 27 11:37:09 2000
> >   DUMP: Date of last level 0 dump: the epoch
> >   DUMP: Dumping /dev/hdb11 (/tmp) to /dev/nst0
> >   DUMP: Label: none
> >   DUMP: mapping (Pass I) [regular files]
> >   DUMP: mapping (Pass II) [directories]
> >   DUMP: estimated 114085 tape blocks.
> >   DUMP: Volume 1 started at: Tue Jun 27 11:37:10 2000
> >   DUMP: dumping (Pass III) [directories]
> >
> > [waiting, waiting, tape drive's LED does not blink, nothing seems to
> > happen]
>
> You mean it's hung ?

Yes, I stopped it 9 minutes later.

> Maybe you should enable debug in dump (./configure --enable-debug)
> in order to have some traces...

Well, I can compile dump with this switch, but then what? How do I use the
traces?

> I wonder also if there is some way to enable traces in the st driver.

I'll have a look at it.

From: Stelian P. <st...@ca...> - 2000-06-27 09:52:43
On Tue, 27 Jun 2000, Bernhard Erdmann wrote:
> Stelian Pop wrote:
> [...]
> > Wouldn't that be that, for some reason, your DDS-2 drive is configured
> > for only a fixed blocksize of 32k ?
>
> No, if I really set the drive to 32k blocks, dump slows down:
>
> # mt setblk 32768
> # /sbin/dump 0a /tmp
>   DUMP: Date of this level 0 dump: Tue Jun 27 11:37:09 2000
>   DUMP: Date of last level 0 dump: the epoch
>   DUMP: Dumping /dev/hdb11 (/tmp) to /dev/nst0
>   DUMP: Label: none
>   DUMP: mapping (Pass I) [regular files]
>   DUMP: mapping (Pass II) [directories]
>   DUMP: estimated 114085 tape blocks.
>   DUMP: Volume 1 started at: Tue Jun 27 11:37:10 2000
>   DUMP: dumping (Pass III) [directories]
>
> [waiting, waiting, tape drive's LED does not blink, nothing seems to
> happen]

You mean it's hung?

Maybe you should enable debug in dump (./configure --enable-debug) in
order to have some traces... I wonder also if there is some way to enable
traces in the st driver.

--
Stelian Pop <sp...@ca...>, Captimark, Paris, France

From: Stelian P. <st...@ca...> - 2000-06-27 09:50:24
On Tue, 27 Jun 2000, Bernhard Erdmann wrote:
> Stelian Pop wrote:
> > Dump, by default, uses a 10k blocksize.
>
> The default blocksize of dump isn't mentioned in the man page.

You're right. I'll correct that.

> > Try to run a /sbin/dump 0ab 32 /tmp/. Maybe it helps...
>
> I already tried it, didn't help then, if I remember correctly. Now it
> works. ??

Well, I don't remember having changed anything about that for a long
time... Maybe you tried it with a very old version of dump/restore, and
haven't tried again since then?

> Dump with "-b 32" works well, thanks for the hint. Seems to eliminate
> the need for "| dd obs=32k of=$TAPE".

Good.

> I wonder why restore says "Tape block size is 32". You told me dump
> used 10k blocks and the tape drive is set to variable blocksize. So why
> 32KB blocks?

Well, I'll take a look at the code. But restore tries to figure out the
right blocksize by trying several ones. For some reason, it believes that
32k is your blocksize, which is obviously wrong...

> Let's have a try:
> # /sbin/restore rb 10
>
> With "-b 10", restore works well reading a tape written by "dump 0a
> /tmp". Maybe a bug in restore in figuring out the tape's blocksize?

Sure. I'll take a look and will contact you later if I cannot reproduce
this here...

Stelian.

--
Stelian Pop <sp...@ca...>, Captimark, Paris, France

From: Bernhard E. <be...@be...> - 2000-06-27 09:47:27
Stelian Pop wrote:
[...]
> Wouldn't that be that, for some reason, your DDS-2 drive is configured
> for only a fixed blocksize of 32k ?

No, if I really set the drive to 32k blocks, dump slows down:

  # mt setblk 32768
  # /sbin/dump 0a /tmp
    DUMP: Date of this level 0 dump: Tue Jun 27 11:37:09 2000
    DUMP: Date of last level 0 dump: the epoch
    DUMP: Dumping /dev/hdb11 (/tmp) to /dev/nst0
    DUMP: Label: none
    DUMP: mapping (Pass I) [regular files]
    DUMP: mapping (Pass II) [directories]
    DUMP: estimated 114085 tape blocks.
    DUMP: Volume 1 started at: Tue Jun 27 11:37:10 2000
    DUMP: dumping (Pass III) [directories]

  [waiting, waiting, tape drive's LED does not blink, nothing seems to
  happen]

From: Bernhard E. <be...@be...> - 2000-06-27 09:42:49
Stelian Pop wrote:
[...]
> Wouldn't that be that, for some reason, your DDS-2 drive is configured
> for only a fixed blocksize of 32k ?

No, its default is variable blocksize:

  # mt status
  SCSI 2 tape drive:
  File number=0, block number=0, partition=0.
  Tape block size 0 bytes. Density code 0x13 (DDS (61000 bpi)).
  Soft error count since last status=0
  General status bits on (41010000):
   BOT ONLINE IM_REP_EN

> Dump, by default, uses a 10k blocksize.

The default blocksize of dump isn't mentioned in the man page.

> Try to run a /sbin/dump 0ab 32 /tmp/. Maybe it helps...

I already tried it, didn't help then, if I remember correctly. Now it
works. ??

Dump with "-b 32" works well, thanks for the hint. It seems to eliminate
the need for "| dd obs=32k of=$TAPE".

> Or try a /sbin/restore rdv (debug and verbose enabled).

Tape written with "dump 0a /tmp":

  # /sbin/restore rvd
  Verify tape and initialize maps
  Tape block size is 32
  Volume header (old inode format)
  Dump date: Tue Jun 27 10:24:36 2000
  Dumped from: the epoch
  Level 0 dump of /tmp on ente:/dev/hdb11
  Label: none
  maxino = 57345
  Used inodes map header
  Dumped inodes map header
  Begin level 0 restore
  Initialize symbol table.
  Extract directories from tape
  File header, ino 46305
  File header, ino 46306
  File header, ino 46307
  File header, ino 46315
  File header, ino 46317
  . is not on the tape
  Root directory is not on tape
  abort? [yn] y
  dump core? [yn] n

I wonder why restore says "Tape block size is 32". You told me dump used
10k blocks and the tape drive is set to variable blocksize. So why 32KB
blocks?

Let's have a try:

  # /sbin/restore rb 10

With "-b 10", restore works well reading a tape written by "dump 0a /tmp".
Maybe a bug in restore in figuring out the tape's blocksize?

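When restore guesses the block size wrong, one workaround is to measure the
record size actually written on the tape and pass it explicitly. A rough
sketch, assuming /dev/nst0 in variable-block mode (where, with the st
driver of this era, one read returns one tape record):

    # How big is the first record on the tape?
    mt -f /dev/nst0 rewind
    dd if=/dev/nst0 bs=256k count=1 2>/dev/null | wc -c   # e.g. prints 10240
    mt -f /dev/nst0 rewind

    # Hand that size (in kilobytes) to restore instead of letting it guess:
    restore -r -v -b 10 -f /dev/nst0
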
From: Stelian P. <st...@ca...> - 2000-06-27 08:56:11
On Tue, 27 Jun 2000, Bernhard Erdmann wrote:
> Stelian Pop wrote:
> [...]
> > Use the no-rewinding tape device (such as /dev/nst0) and:
> >
> >   mt -f /dev/nst0 rewind
> >   dump 0f /dev/nst0 /dev/sda1
> >   dump 0f /dev/nst0 /dev/sda2
> >   dump 0f /dev/nst0 /dev/sda3
> >   mt -f /dev/nst0 rewind
>
> I've seen really ugly things dumping directly to DDS-2 drives on several
> different boxes. (DDS-3 seems to be ok.) I have to use dd to write at
> fixed block sizes:
>   dump 0af - /dev/sda1 | dd obs=32k of=$TAPE

Wouldn't it be that, for some reason, your DDS-2 drive is configured for
only a fixed blocksize of 32k? Dump, by default, uses a 10k blocksize.

> # /sbin/dump 0a /tmp/

Try to run a /sbin/dump 0ab 32 /tmp/. Maybe it helps...

> # /sbin/restore r

Or try a /sbin/restore rdv (debug and verbose enabled).

Stelian.

--
Stelian Pop <sp...@ca...>, Captimark, Paris, France

From: Bernhard E. <be...@be...> - 2000-06-27 08:35:46
Stelian Pop wrote:
[...]
> Use the no-rewinding tape device (such as /dev/nst0) and:
>
>   mt -f /dev/nst0 rewind
>   dump 0f /dev/nst0 /dev/sda1
>   dump 0f /dev/nst0 /dev/sda2
>   dump 0f /dev/nst0 /dev/sda3
>   mt -f /dev/nst0 rewind

I've seen really ugly things dumping directly to DDS-2 drives on several
different boxes. (DDS-3 seems to be ok.) I have to use dd to write at
fixed block sizes:

  dump 0af - /dev/sda1 | dd obs=32k of=$TAPE

  # /sbin/dump 0a /tmp/
    DUMP: Date of this level 0 dump: Tue Jun 27 10:24:36 2000
    DUMP: Date of last level 0 dump: the epoch
    DUMP: Dumping /dev/hdb11 (/tmp) to /dev/nst0
    DUMP: Label: none
    DUMP: mapping (Pass I) [regular files]
    DUMP: mapping (Pass II) [directories]
    DUMP: estimated 57046 tape blocks.
    DUMP: Volume 1 started at: Tue Jun 27 10:24:37 2000
    DUMP: dumping (Pass III) [directories]
    DUMP: dumping (Pass IV) [regular files]
    DUMP: Closing /dev/nst0
    DUMP: Volume 1 completed at: Tue Jun 27 10:26:31 2000
    DUMP: Volume 1 took 0:01:54
    DUMP: Volume 1 transfer rate: 500 KB/s
    DUMP: 57034 tape blocks (55.70MB) on 1 volume(s)
    DUMP: finished in 112 seconds, throughput 509 KBytes/sec
    DUMP: Date of this level 0 dump: Tue Jun 27 10:24:36 2000
    DUMP: Date this dump completed: Tue Jun 27 10:26:31 2000
    DUMP: Average transfer rate: 500 KB/s
    DUMP: DUMP IS DONE
  # mt rewind
  # /sbin/restore r
  . is not on the tape
  Root directory is not on tape
  abort? [yn] y
  dump core? [yn] n

"dd bs=32k..." (ibs=obs) didn't work for me, I have to use "obs=32k".

From: Bernhard E. <be...@be...> - 2000-06-27 08:22:49
li...@li... wrote:
[...]
> One important remark, though: I run dump remotely (using ssh)
> and experienced several inconveniences/problems:
>
> -- If I don't set up a .rhosts (so that ssh prompts for a passwd)
> then somehow it seems that dump has grabbed stdin, and won't pass
> on the password to ssh. So the only way to dump remotely is to
> set up a .rhosts file (yuck).

I can't verify that (there's no ~root/.rhosts on apollo):

  # ssh apollo "/sbin/dump 0af - /" | dd obs=32k of=$TAPE
  root@apollo's password:
    DUMP: Date of this level 0 dump: Tue Jun 27 09:59:30 2000
    DUMP: Date of last level 0 dump: the epoch
    DUMP: Dumping /dev/hda6 (/) to standard output
    DUMP: Label: none
    DUMP: mapping (Pass I) [regular files]
    DUMP: mapping (Pass II) [directories]
    DUMP: estimated 59977 tape blocks.
    DUMP: Volume 1 started at: Tue Jun 27 09:59:31 2000
    DUMP: dumping (Pass III) [directories]
    DUMP: dumping (Pass IV) [regular files]
    DUMP: Volume 1 completed at: Tue Jun 27 10:01:33 2000
    DUMP: Volume 1 took 0:02:02
    DUMP: Volume 1 transfer rate: 495 KB/s
    DUMP: 60420 tape blocks (59.00MB)
    DUMP: finished in 122 seconds, throughput 495 KBytes/sec
    DUMP: Date of this level 0 dump: Tue Jun 27 09:59:30 2000
    DUMP: Date this dump completed: Tue Jun 27 10:01:33 2000
    DUMP: Average transfer rate: 495 KB/s
    DUMP: DUMP IS DONE
  120840+0 records in
  1888+1 records out

> -- suppose that above problem was solved. There is then the
> inconvenience that said passwd would have to be entered for
> each invocation of dump (as well as for the mt), which makes
> running from a shell script hard/impossible. Ideally, I'd be
> able to just log in one session, and run the shell script in that
> and have it all exit when done.

Read some man pages about the ssh suite. Generate an RSA key without a
passphrase, put it into ~/.ssh/authorized_keys (on the remote side) and
you won't have to enter a password. Check /var/log/messages.

Don't use the root account on the remote side for scripting. Dump just
needs read access to the devices, so choose a user with group read
permission on /dev/sda5, for example.

> Other than that, it seems to work fine (although ssh on a 486 on
> an old ne2000 ethernet is slowwww at 70kb a sec).

Did you switch off compression in ssh? Maybe your 486 is too slow to
compress on the fly what dump grabs off the disks.
(/etc/ssh/ssh_config, /etc/ssh/sshd_config)

> Which brings me to another question:
> How much buffering, if any, does dump do, and is this adjustable?
> At 70kb/sec, the tape drive seems to be doing a lot of
> rewind/preroll/record type back-n-forth tape movement.
> Of course, tape drives are usually happier when you can stream to them
> at their natural streaming speed.

Do some buffering on your hard disk:

  # ssh apollo "/sbin/dump 0af - /" | dd of=/tmp/apollo.dmp
  root@apollo's password:
    DUMP: Date of this level 0 dump: Tue Jun 27 10:05:39 2000
    DUMP: Date of last level 0 dump: the epoch
    DUMP: Dumping /dev/hda6 (/) to standard output
    DUMP: Label: none
    DUMP: mapping (Pass I) [regular files]
    DUMP: mapping (Pass II) [directories]
    DUMP: estimated 59977 tape blocks.
    DUMP: Volume 1 started at: Tue Jun 27 10:05:40 2000
    DUMP: dumping (Pass III) [directories]
    DUMP: dumping (Pass IV) [regular files]
    DUMP: Volume 1 completed at: Tue Jun 27 10:07:00 2000
    DUMP: Volume 1 took 0:01:20
    DUMP: Volume 1 transfer rate: 755 KB/s
    DUMP: 60420 tape blocks (59.00MB)
    DUMP: finished in 80 seconds, throughput 755 KBytes/sec
    DUMP: Date of this level 0 dump: Tue Jun 27 10:05:39 2000
    DUMP: Date this dump completed: Tue Jun 27 10:07:00 2000
    DUMP: Average transfer rate: 755 KB/s
    DUMP: DUMP IS DONE
  120840+0 records in
  120840+0 records out

  # dd if=/tmp/apollo.dmp obs=32k of=$TAPE
  120840+0 records in
  1888+1 records out

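The spool-to-disk trick above is easy to wrap in a script; a rough sketch,
assuming passwordless ssh is already set up, enough free space in /var/tmp
for the dump image, and the usual non-rewinding tape device (all names are
placeholders):

    #!/bin/sh
    # Pull a level-0 dump from a slow remote host to local disk first, then
    # stream it to tape in fixed 32 KB records so the drive keeps streaming.
    set -e
    HOST=apollo                 # remote machine to back up
    FS=/                        # filesystem to dump on that machine
    SPOOL=/var/tmp/$HOST.dump   # local staging file
    TAPE=/dev/nst0              # non-rewinding tape device

    ssh "$HOST" "/sbin/dump -0 -a -f - $FS" > "$SPOOL"   # network stage
    mt -f "$TAPE" rewind
    dd if="$SPOOL" obs=32k of="$TAPE"                    # tape stage
    rm -f "$SPOOL"
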
From: Stelian P. <st...@ca...> - 2000-06-27 08:13:53
On Tue, 27 Jun 2000 li...@li... wrote:
> and experienced several inconveniences/problems:
>
> -- If I don't set up a .rhosts (so that ssh prompts for a passwd)
> then somehow it seems that dump has grabbed stdin, and won't pass
> on the password to ssh. So the only way to dump remotely is to
> set up a .rhosts file (yuck).

Well, this was fixed some time ago if I recall correctly... What version
of dump are you using?

> -- suppose that above problem was solved. There is then the
> inconvenience that said passwd would have to be entered for
> each invocation of dump (as well as for the mt), which makes
> running from a shell script hard/impossible. Ideally, I'd be
> able to just log in one session, and run the shell script in that
> and have it all exit when done.

Take a look at ssh-agent. I think that's what you want...

> Other than that, it seems to work fine (although ssh on a 486 on
> an old ne2000 ethernet is slowwww at 70kb a sec).

rsh is much faster, but not secure... You have to choose between security
and performance...

> Which brings me to another question:
> How much buffering, if any, does dump do, and is this adjustable?

Dump does no buffering (the buffering is done by the pipes used to invoke
rsh/ssh etc.). But dump writes the data in chunks, and by making the
chunks bigger you may get better performance (see the -b option of dump).

Stelian.

--
Stelian Pop <sp...@ca...>, Captimark, Paris, France

From: Stelian P. <st...@ca...> - 2000-06-27 08:08:05
On Tue, 27 Jun 2000, Bernhard Erdmann wrote:
> Chuck Pierce wrote:
> >
> > is there a way to compress a partition before you put it on to tape?
> > I'd like to keep all of my data on one tape, and I don't really care
> > how long it takes.. thanks - Chuck
>
> /sbin/dump 0af - /partition | gzip -c | dd obs=32k of=$TAPE

But in this case, if your tape is damaged, you lose dump's ability to
recover at least some of the data...

--
Stelian Pop <sp...@ca...>, Captimark, Paris, France

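For completeness, reading back a dump written through gzip means reversing
the same pipeline, roughly as below (device and block size must match what
was written, and as noted above a bad spot on the tape can make the rest of
the compressed stream unreadable):

    # Counterpart of: dump 0af - /partition | gzip -c | dd obs=32k of=$TAPE
    cd /where/to/restore
    mt -f /dev/nst0 rewind
    dd if=/dev/nst0 ibs=32k | gunzip -c | restore -rf -
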
From: <li...@li...> - 2000-06-27 06:12:26
Hi,

It's been rumoured that Stelian Pop said:
>
> Use the no-rewinding tape device (such as /dev/nst0) and:
>
>   mt -f /dev/nst0 rewind
>   dump 0f /dev/nst0 /dev/sda1
>   dump 0f /dev/nst0 /dev/sda2

Dohh, thanks, I figured this out shortly after sending the email.

One important remark, though: I run dump remotely (using ssh) and
experienced several inconveniences/problems:

-- If I don't set up a .rhosts (so that ssh prompts for a passwd), then
somehow it seems that dump has grabbed stdin and won't pass the password
on to ssh. So the only way to dump remotely is to set up a .rhosts file
(yuck).

-- Suppose that the above problem was solved. There is then the
inconvenience that said passwd would have to be entered for each
invocation of dump (as well as for the mt), which makes running from a
shell script hard/impossible. Ideally, I'd be able to log in one session,
run the shell script in that, and have it all exit when done.

Other than that, it seems to work fine (although ssh on a 486 on an old
ne2000 ethernet is slowwww, at 70kb a sec).

Which brings me to another question: how much buffering, if any, does dump
do, and is this adjustable? At 70kb/sec, the tape drive seems to be doing
a lot of rewind/preroll/record type back-and-forth tape movement. Of
course, tape drives are usually happier when you can stream to them at
their natural streaming speed.

--linas