From: Stelian P. <st...@po...> - 2004-02-18 16:27:16
On Wed, Feb 18, 2004 at 04:54:29PM +0100, Peter Münster wrote:
> Hello,
>
> I have one file-system "/" and some directories /d1 /d2 /d3 ...
> For some reason I want small backups for each directory instead of one big
> backup of the whole. At the same time, I need incremental backups.
> How could I do this?

You can't do this directly. I disabled it some time ago because there were
incompatibilities between incremental dumps and the directory mode, which
could lead to inconsistencies.

You can dump the whole filesystem and exclude the directories you don't want
to save using the exclusion (-e) command line parameter. However, don't
expect this to behave correctly in corner cases (for example, when you move
a file from /d1 to /d2...).

Stelian.
--
Stelian Pop <st...@po...>

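A sketch of the exclusion approach described above (the inode numbers and
the dump file path are placeholders; dump's -e option takes a
comma-separated list of inode numbers, which stat can print):

	# look up the inode numbers of the directories to skip
	stat -c %i /d1 /d2
	# exclude them from a full dump of the filesystem
	dump -0u -e 123456,123789 -f /backup/root.dump /
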
From: <pm...@fr...> - 2004-02-18 16:00:09
Hello,

I have one file-system "/" and some directories /d1 /d2 /d3 ...
For some reason I want small backups for each directory instead of one big
backup of the whole. At the same time, I need incremental backups.
How could I do this?

Thanks in advance for any help!

Cheers, Peter
--
http://pmrb.free.fr/contact/
----------------------------------------------------------------
FilmSearch engine with a lot of new features: http://f-s.sf.net/

From: Stelian P. <st...@po...> - 2004-02-13 09:48:24
On Fri, Feb 13, 2004 at 10:21:23AM +0200, Antonios Christofides wrote:
> I wrote:
> > I don't think "echo" should work. You should try dd for a start. Pick up
> > a large file and write it to tape:
>
> Just to avoid confusion, I wanted to point out that I sent the reply
> above before Stelian sent his; but some mail server delayed it for more
> than 24 hours. I thought echo wouldn't work because it has no knowledge
> of blocking (I've never fully understood blocking in tapes :-).

:)

In fact I believe this is dependent on the tape driver: when using a fixed
block size and trying to write less than the blocksize, some tape drivers
will auto-pad the data up to the blocksize, others will report an error.
Same thing when writing more than the blocksize: some tape drivers will
silently truncate, others will report an error...

Stelian.
--
Stelian Pop <st...@po...>

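Whether a short or oversized write pads, truncates or fails thus depends on
the driver's block-size mode. A quick way to experiment with it, using the
mt-st mt(1) (the device name is an example):

	mt -f /dev/nst0 status          # shows the current block size
	mt -f /dev/nst0 setblk 0        # variable-block mode: writes go out as-is
	mt -f /dev/nst0 setblk 32768    # fixed 32 kB blocks: short writes pad or fail
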
From: Stelian P. <st...@po...> - 2004-02-13 09:39:25
[Re-added dump-users in CC:]

On Thu, Feb 12, 2004 at 12:37:54PM -0800, Page, Jon T wrote:
> > Before trying to make dump/restore work, let's make sure your drive is
> > functioning correctly.
> >
> > First, like you started doing, try writing a simple label to the tape:
> >	mt -f /dev/nst0 rewind
> >	echo foo > /dev/nst0
> >	mt -f /dev/nst0 rewind
> >	cat /dev/nst0
>
> This fails. The tape is moving and the in-use light blinks but the cat
> /dev/nst0 produces garbage output.
>
> I know the tape is working because I can tar files to it and recover the
> files.

Could you please post the exact tar options you use? What about the dd test?

Stelian.
--
Stelian Pop <st...@po...>

From: Antonios C. <an...@it...> - 2004-02-13 08:23:11
I wrote:
> I don't think "echo" should work. You should try dd for a start. Pick up
> a large file and write it to tape:

Just to avoid confusion, I wanted to point out that I sent the reply
above before Stelian sent his; but some mail server delayed it for more
than 24 hours. I thought echo wouldn't work because it has no knowledge
of blocking (I've never fully understood blocking in tapes :-).

But yes, it does work, I tried it. I even used "cat" to write a 33 KB
file, and it wrote it and read it OK. I'm going to re-read (for the 3rd
or 4th time) about tape blocking :-)

From: Antonios C. <an...@it...> - 2004-02-13 02:01:27
I don't think "echo" should work. You should try dd for a start. Pick up
a large file and write it to tape:

	dd if=original_file of=/dev/nst0 bs=32k

Then try to read it back from the tape:

	mt -f /dev/nst0 rewind
	dd if=/dev/nst0 of=/tmp/copied_file bs=32k
	diff original_file /tmp/copied_file

If this works, your system appears to be OK. If you still can't get dump
and restore to work, tell us exactly the commands you are giving.

From: Stelian P. <st...@po...> - 2004-02-12 09:43:48
On Wed, Feb 11, 2004 at 02:26:15PM -0800, Page, Jon T wrote:
> I am trying to get dump/restore working and am having limited success.
>
> I have a Dell PE 2650 configured with a PERC 3/QC SCSI controller. The
> system is running RedHat Advanced Server 2.1 (kernel 2.4.9-e35). The
> system has a DLT7000 tape drive connected and has dump-0.4b25-1.72.0
> installed.
>
> The system sees the tape drive and it seems to respond to mt commands
> such as rewind and fsf. If I echo a label out to the tape (echo test >
> /dev/nst0) the in-use light blinks, like it is doing something, but if I
> cat out the tape label I get garbage. I attempted a backup which seemed
> to work, at least no errors. But a subsequent restore generates the
> following errors:
>
>	resync restore, skipped 15620 blocks
>	resync restore, skipped 15598 blocks
>	resync restore, skipped 22 blocks
>	Missing blocks at the end of <directory file - name unknown>, assuming hole
>	resync restore, skipped 374 blocks
>	. is not on the tape
>	Root directory is not on tape
>	abort? [yn]
>
> Any help would be appreciated.

Before trying to make dump/restore work, let's make sure your drive is
functioning correctly.

First, like you started doing, try writing a simple label to the tape:

	mt -f /dev/nst0 rewind
	echo foo > /dev/nst0
	mt -f /dev/nst0 rewind
	cat /dev/nst0

If it works, it is time to try writing more data:

	mt -f /dev/nst0 rewind
	dd if=/dev/zero of=/dev/nst0 bs=10k count=1000
	mt -f /dev/nst0 rewind
	dd if=/dev/nst0 of=/tmp/test bs=10k count=1000
	cmp /tmp/test /dev/zero

If you get:

	cmp: EOF on /tmp/test

then it's OK.

Stelian.
--
Stelian Pop <st...@po...>

From: Page, J. T <Jon...@pn...> - 2004-02-11 22:27:09
I am trying to get dump/restore working and am having limited success.

I have a Dell PE 2650 configured with a PERC 3/QC SCSI controller. The
system is running RedHat Advanced Server 2.1 (kernel 2.4.9-e35). The
system has a DLT7000 tape drive connected and has dump-0.4b25-1.72.0
installed.

The system sees the tape drive and it seems to respond to mt commands
such as rewind and fsf. If I echo a label out to the tape (echo test >
/dev/nst0) the in-use light blinks, like it is doing something, but if I
cat out the tape label I get garbage. I attempted a backup which seemed
to work, at least no errors. But a subsequent restore generates the
following errors:

	resync restore, skipped 15620 blocks
	resync restore, skipped 15598 blocks
	resync restore, skipped 22 blocks
	Missing blocks at the end of <directory file - name unknown>, assuming hole
	resync restore, skipped 374 blocks
	. is not on the tape
	Root directory is not on tape
	abort? [yn]

Any help would be appreciated.

From: Jon N. B. <ma...@nv...> - 2004-02-10 13:37:50
Actually, it turned out to be my dump flags. I changed a -B for a -a and
suddenly everything worked. (I think the other failed SCSI device was
because I was digging around inside the box.)

Anyway, does anyone know why dumping with a -B flag would work for as
long as it did, and then quit?

Thanks to all for your help.

~Jon.

On Tue, 10 Feb 2004, Antonios Christofides wrote:
> Jon N. Brelie wrote:
> > I have tried brand new media, and am on my third tape drive.
>
> Given the fact that, as you said, another SCSI device of yours also
> showed a problem one day, I'd try unplugging and plugging all relevant
> interfaces, i.e. the SCSI card, if you have one. Better unplug and plug
> all cards. If your SCSI interface is on board, then try another
> motherboard :-)
>
> In any case, this does not appear to be a dump problem (did you try dd?)

--
*******************************
Jon N. Brelie
Information Systems Manager
NVE Corp.
*******************************

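For context: -B gives dump a fixed volume size in 1 kB blocks, while -a
tells it to keep writing until the drive itself reports end-of-media, so a
-B figure that the growing backup has outrun would presumably make dump stop
and ask for the next volume, which looks like a sudden failure in an
unattended script. A sketch of the two invocations (device, size and
filesystem are examples):

	dump -0u -B 19000000 -f /dev/nst0 /home    # stop after ~19 GB per volume
	dump -0u -a -f /dev/nst0 /home             # write until end-of-media
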
From: Stelian P. <st...@po...> - 2004-02-10 09:10:54
On Mon, Feb 09, 2004 at 12:59:45PM -0600, Jon N. Brelie wrote:
> Okay. Still no luck getting the tape drive to work with RH7.3. For two
> years I have had no trouble with the backup scripts, and now I do. When
> I run the script from cron, I get this: [...]

My last mail on this matter said:

> Are you able to read the tape using a simple dd ? Like in:
>	mt rewind
>	dd if=/dev/st0 of=/tmp/test bs=10k
>
> I suspect this will behave exactly the same way as dump showing that
> the problem is at the kernel (or hardware) level. Contacting the
> tape driver maintainer (Kai Makisara) directly is the best way to
> proceed in this case...

Did you try that?

Stelian.
--
Stelian Pop <st...@po...>

From: Antonios C. <an...@it...> - 2004-02-10 08:45:30
Jon N. Brelie wrote:
> I have tried brand new media, and am on my third tape drive.

Given the fact that, as you said, another SCSI device of yours also
showed a problem one day, I'd try unplugging and plugging all relevant
interfaces, i.e. the SCSI card, if you have one. Better unplug and plug
all cards. If your SCSI interface is on board, then try another
motherboard :-)

In any case, this does not appear to be a dump problem (did you try dd?)

From: Jon N. B. <ma...@nv...> - 2004-02-09 19:01:19
Okay. Still no luck getting the tape drive to work with RH7.3. For two
years I have had no trouble with the backup scripts, and now I do.

When I run the script from cron, I get this:

	st0: Error with sense data: [valid=0] Info fld=0x0, Deferred
	st09:00: sense key Medium Error

If I run dump from the command line, I get this error:

	DUMP: write error 20 blocks into volume 1: Input/output error

I then cannot even issue mt commands to rewind and eject the tape (though
they work before running dump). I have tried brand new media, and am on
my third tape drive. All give the same errors.

I am working with a Dell branded Seagate 20/40gb dds4 Dat, model number
STD2401LW. Like I said, the original drive worked awesome for two straight
years. Now both of the subsequent replacements are failing in the same way.
None of my partitions are even 40% full. I am using dump version 0.4b25.

Anyone have any ideas? Thanks.

--
*******************************
Jon N. Brelie
Information Systems Manager
NVE Corp.
*******************************

From: Stelian P. <st...@po...> - 2004-02-07 09:02:23
On Fri, Feb 06, 2004 at 08:02:59PM +0200, Delian Krustev wrote:
> On Friday 06 February 2004 16:39, Stelian Pop wrote:
> > Are you perfectly sure you updated dump ? You use
> > /usr/sbin/dump below, but my RPMS try to install
> > /sbin/dump... Make sure you don't have several versions of dump
> > on your disk.
>
> [root@smtp0 tmp]# which dump
> /usr/sbin/dump
> [root@smtp0 tmp]# whereis dump
> dump: /usr/sbin/dump /usr/share/man/man8/dump.8.gz
> [root@smtp0 tmp]# rpm -q dump
> dump-0.4b35-1
>
> I've used rpm -Uvh, the old version is removed ...

Yup, you're correct, I was looking at the dump.static package, sorry.

> > Hmm, really strange, the direct pipe should be the same as when
> > going through cat/gzip. It also seems to crash after the full dump
> > and restore has finished (
> >
> > I cannot see what is causing this. Try running the same command
> > but with dump under strace -ff (strace -ff /usr/sbin/dump -1u...).
> > Does this show something about the crash ?
>
> You probably mean -fF here. I suspect the logs will be terribly
> big if I'm tracing all the syscalls (maybe I'll shorten the output with
> e.g. -e trace=..), but maybe I'll try that also.

strace -f is sufficient indeed.

> > > One more question. What are these times that dump reports ?
> >
> > 54 minutes of real execution (clock time), 25 seconds of effective
> > execution in algorithms, 36 seconds doing syscalls. Most of the time
> > (53 minutes) was spent waiting for the disk to be ready for read/write.
> >
> > See man 1 time and man 2 times for details.
>
> I was not asking about the times "time" reports. I've asked about:
>	DUMP: 32.57% done at 812 kB/s, finished in 0:20
>	DUMP: 39.55% done at 657 kB/s, finished in 0:22
>	DUMP: 52.02% done at 648 kB/s, finished in 0:18
>
> Aren't these seconds, seconds of what ?

Ah, I didn't understand you were referring to those times. Those are
dump's estimates of the remaining time, and the units are hours:minutes.

Stelian.
--
Stelian Pop <st...@po...>

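If a full strace log is too much, the trace can be sent to files and
narrowed to a few syscalls (the output path is an example; see strace(1)):

	strace -f -o /tmp/dump.trace -e trace=read,write \
		/usr/sbin/dump -1uf - /dev/vg1/snaphome > /dev/null
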
From: Delian K. <kr...@kr...> - 2004-02-06 18:03:11
On Friday 06 February 2004 16:39, Stelian Pop wrote:
> Are you perfectly sure you updated dump ? You use
> /usr/sbin/dump below, but my RPMS try to install
> /sbin/dump... Make sure you don't have several versions of dump
> on your disk.

[root@smtp0 tmp]# which dump
/usr/sbin/dump
[root@smtp0 tmp]# whereis dump
dump: /usr/sbin/dump /usr/share/man/man8/dump.8.gz
[root@smtp0 tmp]# rpm -q dump
dump-0.4b35-1

I've used rpm -Uvh, the old version is removed ...

> Hmm, really strange, the direct pipe should be the same as when
> going through cat/gzip. It also seems to crash after the full dump
> and restore has finished (
>
> I cannot see what is causing this. Try running the same command
> but with dump under strace -ff (strace -ff /usr/sbin/dump -1u...).
> Does this show something about the crash ?

You probably mean -fF here. I suspect the logs will be terribly
big if I'm tracing all the syscalls (maybe I'll shorten the output with
e.g. -e trace=..), but maybe I'll try that also.

> Second thought: compile dump with debug enabled and run it through
> gdb, then when it crashes use commands like "bt" to see where
> it happened.

Ok, I'll try that also.

> > One more question. What are these times that dump reports ?
>
> 54 minutes of real execution (clock time), 25 seconds of effective
> execution in algorithms, 36 seconds doing syscalls. Most of the time
> (53 minutes) was spent waiting for the disk to be ready for read/write.
>
> See man 1 time and man 2 times for details.

I was not asking about the times "time" reports. I've asked about:

	DUMP: 32.57% done at 812 kB/s, finished in 0:20
	DUMP: 39.55% done at 657 kB/s, finished in 0:22
	DUMP: 52.02% done at 648 kB/s, finished in 0:18

Aren't these seconds, seconds of what ?

Regards,
Delian

From: Stelian P. <st...@po...> - 2004-02-06 14:40:11
On Fri, Feb 06, 2004 at 03:54:51PM +0200, Delian Krustev wrote:
> I've updated dump/e2fsprogs to the latest versions and performed the tests.

Are you perfectly sure you updated dump ? You use /usr/sbin/dump below,
but my RPMS try to install /sbin/dump... Make sure you don't have several
versions of dump on your disk.

> This went fine:
> time /usr/sbin/dump -0uf - /dev/vg1/snaphome |gzip |split -b 1024m
>
> This also:
> time cat xaa xab |gunzip|restore -rdvuf - >/home/out.log 2>/home/err.log
>
> However this has failed:
> time /usr/sbin/dump -1uf - /dev/vg1/snaphome | restore -rdvuf - >/home/out1.log 2>/home/err1.log

Hmm, really strange, the direct pipe should be the same as when going
through cat/gzip. It also seems to crash after the full dump and restore
has finished (

I cannot see what is causing this. Try running the same command
but with dump under strace -ff (strace -ff /usr/sbin/dump -1u...).
Does this show something about the crash ?

Second thought: compile dump with debug enabled and run it through
gdb, then when it crashes use commands like "bt" to see where
it happened.

> real 54m42.457s
> user 0m25.730s
> sys 0m36.370s
[...]
> One more question. What are these times that dump reports ?

54 minutes of real execution (clock time), 25 seconds of effective
execution in algorithms, 36 seconds doing syscalls. Most of the time
(53 minutes) was spent waiting for the disk to be ready for read/write.

See man 1 time and man 2 times for details.

Stelian.
--
Stelian Pop <st...@po...>

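A minimal gdb session for the "second thought" above might look like this
(assuming dump was rebuilt with -g, and dumping to a file so the pipe is
out of the picture):

	gdb /usr/sbin/dump
	(gdb) run -0uf /tmp/test.dump /dev/vg1/snaphome
	...wait for the crash...
	(gdb) bt            # backtrace at the point of the crash
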
From: Delian K. <kr...@kr...> - 2004-02-06 13:55:02
I've updated dump/e2fsprogs to the latest versions and performed the tests.

This went fine:

	time /usr/sbin/dump -0uf - /dev/vg1/snaphome |gzip |split -b 1024m

This also:

	time cat xaa xab |gunzip|restore -rdvuf - >/home/out.log 2>/home/err.log

However this has failed:

	time /usr/sbin/dump -1uf - /dev/vg1/snaphome | restore -rdvuf - >/home/out1.log 2>/home/err1.log

	DUMP: Date of this level 1 dump: Fri Feb  6 12:15:22 2004
	DUMP: Date of last level 0 dump: Thu Feb  5 21:27:28 2004
	DUMP: Dumping /dev/vg1/snaphome (an unlisted file system) to standard output
	DUMP: Label: none
	DUMP: Writing 10 Kilobyte records
	DUMP: mapping (Pass I) [regular files]
	DUMP: mapping (Pass II) [directories]
	DUMP: estimated 1496288 blocks.
	DUMP: Volume 1 started with block 1 at: Fri Feb  6 12:20:27 2004
	DUMP: dumping (Pass III) [directories]
	DUMP: dumping (Pass IV) [regular files]
	DUMP: 26.14% done at 1304 kB/s, finished in 0:14
	DUMP: 32.57% done at 812 kB/s, finished in 0:20
	DUMP: 39.55% done at 657 kB/s, finished in 0:22
	DUMP: 52.02% done at 648 kB/s, finished in 0:18
	DUMP: 63.38% done at 631 kB/s, finished in 0:14
	DUMP: 71.66% done at 595 kB/s, finished in 0:11
	DUMP: 79.98% done at 569 kB/s, finished in 0:08
	DUMP: 89.24% done at 555 kB/s, finished in 0:04
	DUMP: 100.00% done at 555 kB/s, finished in 0:00
	DUMP: Broken pipe
	DUMP: The ENTIRE dump is aborted.

	real 54m42.457s
	user 0m25.730s
	sys 0m36.370s

[root@smtp0 tmp]# tail /home/out1.log
extract file ./ttt/domains/ttt.biz/t/index.html
extract file ./ttt/domains/ttt.biz/webcam-privees/webcam-privees.html
extract file ./ttt/domains/ttt.biz/webcam-privees/index.html
extract file ./ttt/domains/ttt.biz/x-video/x-video.html
extract file ./ttt/domains/ttt.biz/x-video/index.html
Add links
Set directory mode, owner, and times.
Check the symbol table.
Verify the directory structure
Check pointing the restore

[root@smtp0 tmp]# tail /home/err1.log
File header, ino 3918076
File header, ino 3918077
File header, ino 3918078
File header, ino 3918079
File header, ino 3918080
File header, ino 3918081
File header, ino 3918082
File header, ino 3918083
File header, ino 3918084
Warning: missing name ./vpopmail/domains/1/ttt.com/ttt/Maildir/new/1076014788.27968.smtp0.ttt.com,S=32899

One more question. What are these times that dump reports ?

Regards,
Delian Krustev

From: Stelian P. <st...@po...> - 2004-02-05 15:58:39
On Thu, Feb 05, 2004 at 05:43:19PM +0200, Delian Krustev wrote:
> On Thursday 05 February 2004 12:48, Stelian Pop wrote:
> > You have to restore the level 0 dump every time too. Remember, level 1
> > dumps contain the differences between the 0-level dump and the 1-level
> > dump, so you will have to do something like:
> >	dump 0uf /tmp/0-level.dump /fs
> >	dump 1uf /tmp/1-level.dump /fs
> >	...
> >	cd /mynewfs
> >	restore rf /tmp/0-level.dump
> >	restore rf /tmp/1-level.dump
> >	...
> >	dump 1uf /tmp/1-level.dump /fs
> >	...
> >	cd /mynewfs
> >	rm -rf *
> >	restore rf /tmp/0-level.dump
> >	restore rf /tmp/1-level.dump
> >	...
> >	etc...
>
> I don't want to restore the level 0 dump each time. It will be terribly
> slow. I prefer dumping with levels 0-9 in this case. This way I'll
> dump/restore level 0 only once in every 10 backups.

Indeed, it is better to use levels 0-9 in this case.

> What does the restoresymtable contain? Isn't it the info for inodes
> which have already been restored? I've tried restoring only level 1
> dumps, one after another, and it seemed to work just fine.

It will be used in order to detect renames/deletions etc.

> > dump version ? kernel version ?
>
> dump-0.4b27-3 - from rh7.3

You need to update it, as you tried below.

> kernel 2.4.24 with patches from sistina, needed for snapshotting ext3
>
> > Try separating the two steps in order to see who exactly is failing
> > (dump or restore):
> >	dump -0uf - /dev/vg1/snaphome > /dev/null
> > first, then:
> >	dump -0uf - /dev/vg1/snaphome |restore -rufdv -
> > (note the addition of the (d)ebug and (v)erbose flags).
>
> I'll do that. First I would like to upgrade dump/e2fsprogs to the latest
> versions. I've succeeded with e2fsprogs{,-devel} 1.34. However dump-0.4b35
> gives me a compilation error:
>
>	make[1]: Entering directory `/usr/src/redhat/BUILD/dump-0.4b35/dump'
>	i386-redhat-linux-gcc -c -D_BSD_SOURCE -D_USE_BSD_SIGNAL -O2 -march=i386 -mcpu=i686 -pipe -I.. -I../compat/include -I../dump -DRDUMP -DRRESTORE -DLINUX_FORK_BUG -DHAVE_LZO -D_PATH_DUMPDATES=\"/etc/dumpdates\" -D_DUMP_VERSION=\"0.4b35\" main.c -o main.o
>	In file included from /usr/include/linux/types.h:5,
>	                 from /usr/include/linux/fs.h:13,
>	                 from main.c:69:
>	/usr/include/asm/types.h:11: warning: redefinition of `__s8'
>	/usr/include/ext2fs/ext2_types.h:11: warning: `__s8' previously declared here
>	/usr/include/asm/types.h:12: warning: redefinition of `__u8'
>	/usr/include/ext2fs/ext2_types.h:10: warning: `__u8' previously declared here
>	/usr/include/asm/types.h:14: warning: redefinition of `__s16'
>	/usr/include/ext2fs/ext2_types.h:37: warning: `__s16' previously declared here
>	/usr/include/asm/types.h:15: warning: redefinition of `__u16'
>	/usr/include/ext2fs/ext2_types.h:38: warning: `__u16' previously declared here
>	/usr/include/asm/types.h:17: warning: redefinition of `__s32'
>	/usr/include/ext2fs/ext2_types.h:45: warning: `__s32' previously declared here
>	/usr/include/asm/types.h:18: warning: redefinition of `__u32'
>	/usr/include/ext2fs/ext2_types.h:46: warning: `__u32' previously declared here
>	/usr/include/asm/types.h:21: warning: redefinition of `__s64'
>	/usr/include/ext2fs/ext2_types.h:23: warning: `__s64' previously declared here
>	/usr/include/asm/types.h:22: warning: redefinition of `__u64'
>	/usr/include/ext2fs/ext2_types.h:27: warning: `__u64' previously declared here
>	main.c: In function `do_exclude_ino':
>	main.c:1285: parse error before `unsigned'
>	main.c:1286: `j' undeclared (first use in this function)
>	main.c:1286: (Each undeclared identifier is reported only once
>	main.c:1286: for each function it appears in.)
>	make[1]: *** [main.o] Error 1
>	make[1]: Leaving directory `/usr/src/redhat/BUILD/dump-0.4b35/dump'
>	make: *** [all] Error 1
>
> I've looked at the source of main.c and I'm seeing variable definitions
> in the middle of a function, which I think is not ANSI C (main.c not
> main.cpp) ..

Yup, known problem:
http://cvs.sourceforge.net/viewcvs.py/dump/dump/dump/main.c?r1=1.88&r2=1.89

Stelian.
--
Stelian Pop <st...@po...>

From: Delian K. <kr...@kr...> - 2004-02-05 15:43:31
On Thursday 05 February 2004 12:48, Stelian Pop wrote:
> You have to restore the level 0 dump every time too. Remember, level 1
> dumps contain the differences between the 0-level dump and the 1-level
> dump, so you will have to do something like:
>	dump 0uf /tmp/0-level.dump /fs
>	dump 1uf /tmp/1-level.dump /fs
>	...
>	cd /mynewfs
>	restore rf /tmp/0-level.dump
>	restore rf /tmp/1-level.dump
>	...
>	dump 1uf /tmp/1-level.dump /fs
>	...
>	cd /mynewfs
>	rm -rf *
>	restore rf /tmp/0-level.dump
>	restore rf /tmp/1-level.dump
>	...
>	etc...

I don't want to restore the level 0 dump each time. It will be terribly
slow. I prefer dumping with levels 0-9 in this case. This way I'll
dump/restore level 0 only once in every 10 backups.

What does the restoresymtable contain? Isn't it the info for inodes
which have already been restored? I've tried restoring only level 1
dumps, one after another, and it seemed to work just fine.

> Hmmm, strange.
>
> dump version ? kernel version ?

dump-0.4b27-3 - from rh7.3
kernel 2.4.24 with patches from sistina, needed for snapshotting ext3

> Try separating the two steps in order to see who exactly is failing
> (dump or restore):
>	dump -0uf - /dev/vg1/snaphome > /dev/null
> first, then:
>	dump -0uf - /dev/vg1/snaphome |restore -rufdv -
> (note the addition of the (d)ebug and (v)erbose flags).

I'll do that. First I would like to upgrade dump/e2fsprogs to the latest
versions. I've succeeded with e2fsprogs{,-devel} 1.34. However dump-0.4b35
gives me a compilation error:

	make[1]: Entering directory `/usr/src/redhat/BUILD/dump-0.4b35/dump'
	i386-redhat-linux-gcc -c -D_BSD_SOURCE -D_USE_BSD_SIGNAL -O2 -march=i386 -mcpu=i686 -pipe -I.. -I../compat/include -I../dump -DRDUMP -DRRESTORE -DLINUX_FORK_BUG -DHAVE_LZO -D_PATH_DUMPDATES=\"/etc/dumpdates\" -D_DUMP_VERSION=\"0.4b35\" main.c -o main.o
	In file included from /usr/include/linux/types.h:5,
	                 from /usr/include/linux/fs.h:13,
	                 from main.c:69:
	/usr/include/asm/types.h:11: warning: redefinition of `__s8'
	/usr/include/ext2fs/ext2_types.h:11: warning: `__s8' previously declared here
	/usr/include/asm/types.h:12: warning: redefinition of `__u8'
	/usr/include/ext2fs/ext2_types.h:10: warning: `__u8' previously declared here
	/usr/include/asm/types.h:14: warning: redefinition of `__s16'
	/usr/include/ext2fs/ext2_types.h:37: warning: `__s16' previously declared here
	/usr/include/asm/types.h:15: warning: redefinition of `__u16'
	/usr/include/ext2fs/ext2_types.h:38: warning: `__u16' previously declared here
	/usr/include/asm/types.h:17: warning: redefinition of `__s32'
	/usr/include/ext2fs/ext2_types.h:45: warning: `__s32' previously declared here
	/usr/include/asm/types.h:18: warning: redefinition of `__u32'
	/usr/include/ext2fs/ext2_types.h:46: warning: `__u32' previously declared here
	/usr/include/asm/types.h:21: warning: redefinition of `__s64'
	/usr/include/ext2fs/ext2_types.h:23: warning: `__s64' previously declared here
	/usr/include/asm/types.h:22: warning: redefinition of `__u64'
	/usr/include/ext2fs/ext2_types.h:27: warning: `__u64' previously declared here
	main.c: In function `do_exclude_ino':
	main.c:1285: parse error before `unsigned'
	main.c:1286: `j' undeclared (first use in this function)
	main.c:1286: (Each undeclared identifier is reported only once
	main.c:1286: for each function it appears in.)
	make[1]: *** [main.o] Error 1
	make[1]: Leaving directory `/usr/src/redhat/BUILD/dump-0.4b35/dump'
	make: *** [all] Error 1

I've looked at the source of main.c and I'm seeing variable definitions
in the middle of a function, which I think is not ANSI C (main.c not
main.cpp)...

The build system is rh7.3. I'm using the dump.spec from the src.rpm from
http://dump.sourceforge.net/

Thanks,
Delian Krustev

From: Stelian P. <st...@po...> - 2004-02-05 10:48:43
On Thu, Feb 05, 2004 at 12:21:05AM +0200, Delian Krustev wrote:
> On Wednesday 04 February 2004 03:40, Florian Zumbiehl wrote:
> > Dunno, UTSL - or maybe some developer will be able to help you out with
> > that one?!

You can use -r over an already used filesystem, but pay attention to the
fact that if a file/dir has the same name as the one you try to restore,
then it will be overwritten.

> > But can't you use some separate filesystem for that mirror?
>
> It is a separate filesystem. But I'm going to restore over it every time,
> since I'm going to use only level 1 dump|restore.

You have to restore the level 0 dump every time too. Remember, level 1
dumps contain the differences between the 0-level dump and the 1-level
dump, so you will have to do something like:

	dump 0uf /tmp/0-level.dump /fs
	dump 1uf /tmp/1-level.dump /fs
	...
	cd /mynewfs
	restore rf /tmp/0-level.dump
	restore rf /tmp/1-level.dump
	...
	dump 1uf /tmp/1-level.dump /fs
	...
	cd /mynewfs
	rm -rf *
	restore rf /tmp/0-level.dump
	restore rf /tmp/1-level.dump
	...
	etc...

> > BTW, how do you think should dump read blocks from the filesystem
> > without context switches? The only thing dump does not use is the kernel's
> > filesystem code. The block device drivers and the block buffering
> > mechanisms are just the same as with the kernel's filesystem driver.

This is correct.

> It is quite simple. I've got an ext3 over lvm. The filesystem is 30GB,
> 13GB full. There are about 1 million inodes used on that filesystem.
> The hard drive is ide. I've decided to perform the test before
> writing back to you. Unfortunately, the test failed.
[...]
> lvcreate -s -L1G -n snaphome /dev/vg1/home
[...]
> dump -0uf - /dev/vg1/snaphome |restore -ruf -
[...]
> DUMP: 100.00% done at 744 kB/s, finished in 0:00
> DUMP: Broken pipe
> DUMP: The ENTIRE dump is aborted.

Hmmm, strange.

dump version ? kernel version ?

Try separating the two steps in order to see who exactly is failing
(dump or restore):

	dump -0uf - /dev/vg1/snaphome > /dev/null

first, then:

	dump -0uf - /dev/vg1/snaphome |restore -rufdv -

(note the addition of the (d)ebug and (v)erbose flags).

Stelian.
--
Stelian Pop <st...@po...>

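The levels 0-9 rotation Delian prefers follows the same pattern, with each
incremental restored on top of the previous ones (paths are examples; every
restore -r must run in the same directory so it finds the restoresymtable
left behind by the previous run):

	dump -0uf /bkp/home.0 /dev/vg1/home
	( cd /mirror && restore -rf /bkp/home.0 )   # writes ./restoresymtable
	# next cycle:
	dump -1uf /bkp/home.1 /dev/vg1/home
	( cd /mirror && restore -rf /bkp/home.1 )
	# ...levels 2 through 9 likewise, then start over with a fresh level 0
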
From: Delian K. <kr...@kr...> - 2004-02-04 22:21:16
On Wednesday 04 February 2004 03:40, Florian Zumbiehl wrote:
> Dunno, UTSL - or maybe some developer will be able to help you out with
> that one?!
>
> But can't you use some separate filesystem for that mirror?

It is a separate filesystem. But I'm going to restore over it every time,
since I'm going to use only level 1 dump|restore.

> BTW, how do you think should dump read blocks from the filesystem
> without context switches? The only thing dump does not use is the kernel's
> filesystem code. The block device drivers and the block buffering
> mechanisms are just the same as with the kernel's filesystem driver.

Read the FAT (inode tables) from the block device, parse it and see which
inodes are changed.

> However, I don't know whether dump optimizes block device access by
> sorting requests or something.

This is not needed, the inode tables' locations are well known and located
on adjacent blocks.

> Maybe you could provide some more detailed information on the problem
> you want to solve/the kind of data you have to backup?

It is quite simple. I've got an ext3 over lvm. The filesystem is 30GB,
13GB full. There are about 1 million inodes used on that filesystem.
The hard drive is ide. I've decided to perform the test before
writing back to you. Unfortunately, the test failed.

The dump processes were quite small, 1-2 MB; restore took about 92 MB.

I've created the snapshot:

	lvcreate -s -L1G -n snaphome /dev/vg1/home

I've created a fresh filesystem, mounted it, and cd'ed to its top level
directory.

[root@smtp0 bkp]# cat /root/bin/tmp/dump.sh
#!/bin/sh
dump -0uf - /dev/vg1/snaphome |restore -ruf -

[root@smtp0 bkp]# time /root/bin/tmp/dump.sh
	DUMP: Date of this level 0 dump: Wed Feb  4 14:58:19 2004
	DUMP: Dumping /dev/vg1/snaphome (an unlisted file system) to standard output
	DUMP: Added inode 8 to exclude list (journal inode)
	DUMP: Added inode 7 to exclude list (resize inode)
	DUMP: Label: none
	DUMP: mapping (Pass I) [regular files]
	DUMP: mapping (Pass II) [directories]
	DUMP: estimated 12540221 tape blocks.
	DUMP: Volume 1 started with block 1 at: Wed Feb  4 14:58:48 2004
	DUMP: dumping (Pass III) [directories]
	DUMP: dumping (Pass IV) [regular files]
	DUMP: 2.41% done at 289 kB/s, finished in 11:43
	DUMP: 4.48% done at 418 kB/s, finished in 7:57
	DUMP: 6.32% done at 480 kB/s, finished in 6:46
	DUMP: 8.07% done at 519 kB/s, finished in 6:09
	DUMP: 9.88% done at 551 kB/s, finished in 5:41
	DUMP: 11.59% done at 570 kB/s, finished in 5:23
	DUMP: 13.31% done at 586 kB/s, finished in 5:09
	DUMP: 15.05% done at 599 kB/s, finished in 4:56
	DUMP: 16.94% done at 615 kB/s, finished in 4:41
	DUMP: 18.72% done at 626 kB/s, finished in 4:31
	DUMP: 20.56% done at 636 kB/s, finished in 4:20
	DUMP: 22.40% done at 645 kB/s, finished in 4:11
	DUMP: 24.25% done at 654 kB/s, finished in 4:02
	DUMP: 26.22% done at 664 kB/s, finished in 3:52
	DUMP: 27.96% done at 668 kB/s, finished in 3:45
	DUMP: 29.85% done at 674 kB/s, finished in 3:37
	DUMP: 31.74% done at 680 kB/s, finished in 3:29
	DUMP: 33.67% done at 686 kB/s, finished in 3:21
	DUMP: 35.65% done at 693 kB/s, finished in 3:13
	DUMP: 37.67% done at 699 kB/s, finished in 3:06
	DUMP: 39.61% done at 704 kB/s, finished in 2:59
	DUMP: 41.51% done at 707 kB/s, finished in 2:52
	DUMP: 43.23% done at 708 kB/s, finished in 2:47
	DUMP: 45.72% done at 720 kB/s, finished in 2:37
	DUMP: 47.96% done at 728 kB/s, finished in 2:29
	DUMP: 49.86% done at 730 kB/s, finished in 2:23
	DUMP: 51.85% done at 734 kB/s, finished in 2:17
	DUMP: 53.74% done at 736 kB/s, finished in 2:11
	DUMP: 55.96% done at 742 kB/s, finished in 2:03
	DUMP: 57.90% done at 744 kB/s, finished in 1:58
	DUMP: 59.73% done at 745 kB/s, finished in 1:52
	DUMP: 61.65% done at 746 kB/s, finished in 1:47
	DUMP: 63.68% done at 749 kB/s, finished in 1:41
	DUMP: 65.53% done at 750 kB/s, finished in 1:36
	DUMP: 67.60% done at 753 kB/s, finished in 1:29
	DUMP: 69.53% done at 754 kB/s, finished in 1:24
	DUMP: 71.51% done at 756 kB/s, finished in 1:18
	DUMP: 73.49% done at 758 kB/s, finished in 1:13
	DUMP: 75.48% done at 759 kB/s, finished in 1:07
	DUMP: 77.40% done at 760 kB/s, finished in 1:02
	DUMP: 79.35% done at 762 kB/s, finished in 0:56
	DUMP: 81.28% done at 763 kB/s, finished in 0:51
	DUMP: 83.15% done at 763 kB/s, finished in 0:46
	DUMP: 85.14% done at 764 kB/s, finished in 0:40
	DUMP: 87.09% done at 765 kB/s, finished in 0:35
	DUMP: 89.14% done at 767 kB/s, finished in 0:29
	DUMP: 91.05% done at 768 kB/s, finished in 0:24
	DUMP: 92.96% done at 768 kB/s, finished in 0:19
	DUMP: 94.87% done at 769 kB/s, finished in 0:13
	DUMP: 96.85% done at 770 kB/s, finished in 0:08
	DUMP: 98.35% done at 767 kB/s, finished in 0:04
	DUMP: 99.82% done at 765 kB/s, finished in 0:00
	DUMP: 100.00% done at 762 kB/s, finished in 0:00
	DUMP: 100.00% done at 759 kB/s, finished in 0:00
	DUMP: 100.00% done at 756 kB/s, finished in 0:00
	DUMP: 100.00% done at 753 kB/s, finished in 0:00
	DUMP: 100.00% done at 751 kB/s, finished in 0:00
	DUMP: 100.00% done at 749 kB/s, finished in 0:00
	DUMP: 100.00% done at 746 kB/s, finished in 0:00
	DUMP: 100.00% done at 744 kB/s, finished in 0:00
	DUMP: Broken pipe
	DUMP: The ENTIRE dump is aborted.

	real 324m1.259s
	user 1m21.260s
	sys 3m44.360s

I've tried the same on a smaller partition, it worked just fine.
The snapshot was NOT exhausted and removed by the kernel; I've removed it
manually later. Btw, as you can see, the times reported by dump are not
accurate, at least if it counts real time.

> PS: Could you please limit your quoting to the parts needed?

n.p.

p.s. Any ideas on this topic, Stelian ? ;))

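The snapshot lifecycle around such a dump, spelled out (names follow the
post above; the -L size must be large enough to absorb the writes that hit
the origin volume while the dump runs):

	lvcreate -s -L1G -n snaphome /dev/vg1/home      # freeze a point-in-time view
	dump -0uf - /dev/vg1/snaphome | restore -ruf -  # dump from the snapshot
	lvremove /dev/vg1/snaphome                      # drop the snapshot afterwards
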
From: Florian Z. <fl...@gm...> - 2004-02-04 01:41:01
Hi,

[...]

> > So, after all it should work if you initially create the mirror using
> > dump -0 | restore.
>
> Probably you mean restore -r here.

Yep, something like that ...

> I've tried that. You're right. It works. However the documentation states
> that -r should be used only on a fresh filesystem. Isn't it dangerous
> if it's used on an already populated one ?

Dunno, UTSL - or maybe some developer will be able to help you out with
that one?!

But can't you use some separate filesystem for that mirror?

> > As an alternative to dump I'd recommend rsync which is made exactly for
> > this kind of task. However, take care to disable checksumming for
> > local copying - maybe rsync does that automatically, though ...
>
> I'm currently using rsync. However even
>	find . |wc -l
> takes a huge amount of time when we talk about, let's say, one million files.
> Find/rsync uses stat for each file. I'm not sure whether the kernel keeps
> a copy of the FAT (don't know how it's called for ext2/3) in memory, but
> even in that case the context switches between the kernel and userspace for
> each stat seem expensive and unnecessary, when we could read that information
> directly from the FAT. That's what dump does, right ?

The information that is needed for the decision whether to dump a
particular file or not is stored in the inode on ext2/3 and in the
directory entry/ies on FAT filesystems - the FAT itself is needed only
when actually dumping a file (and for finding directory blocks), as it
only indicates which blocks ("clusters") belong together, not any other
properties of this group of blocks, not even its exact used size or
whether it's a file or a directory, both of which are stored in the
corresponding directory entry. On ext2/3 this information is stored with
the inode (at least for smaller files), thus eliminating any need for an
additional read.

However, linux caches inodes in main memory just as any other filesystem
information, thus for caching purposes it shouldn't matter where the
needed information is stored.

BTW, how do you think dump should read blocks from the filesystem
without context switches? The only thing dump does not use is the kernel's
filesystem code. The block device drivers and the block buffering
mechanisms are just the same as with the kernel's filesystem driver.

However, I don't know whether dump optimizes block device access by
sorting requests or something.

Maybe you could provide some more detailed information on the problem
you want to solve/the kind of data you have to back up?

Cyas, Florian

PS: Could you please limit your quoting to the parts needed?

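For reference, the local mirroring Florian suggests needs no special flag
to avoid checksumming - rsync compares size and mtime unless -c is given.
A sketch (paths are examples):

	rsync -a --delete /home/ /mnt/mirror/   # mirror with deletions propagated
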
From: Delian K. <kr...@kr...> - 2004-02-03 20:05:30
On Friday 30 January 2004 17:05, Florian Zumbiehl wrote:
> Hi,
>
> > sync
> > dump -0 -f - /dev/hda1 -A archive_file |restore -uvxf -
> > date "+%a %b %e %H:%M:%S %Y" > last_dump
> >
> > Later I'm doing a level 1 dump and writing over the level 0 dump
> > files:
> > dump -1 -T "`cat last_dump`" -f - /dev/hda1 -A archive_file.new \
> >	|restore -uvxf -
>
> [files deleted on the source are not deleted on the mirror in that
> second step above]
>
> > Unfortunately the archive_file.new, created during the level 1 dump
> > contains only the added/changed files and nothing about the removed ones.
> > It will be
>
> It _does_ contain information about the removed ones - though indirectly.

Probably you mean restore -r here.

I've tried that. You're right. It works. However the documentation states
that -r should be used only on a fresh filesystem. Isn't it dangerous
if it's used on an already populated one ?

Here's what I've done:

step 1:
	dump -0uf - /dev/hda1 |restore -ruf -

step 2:
	dump -1uf - /dev/hda1 |restore -ruf -
	- edit the contents of /etc/dumpdates: delete the line for the level 0
	  dump on hda1 and make the line for level 1 the line for level 0
	- delete a file
	dump -1uf - /dev/hda1 |restore -ruf -

Step 2 could then be performed multiple times in a row.

> As an alternative to dump I'd recommend rsync which is made exactly for
> this kind of task. However, take care to disable checksumming for
> local copying - maybe rsync does that automatically, though ...

I'm currently using rsync. However even

	find . |wc -l

takes a huge amount of time when we talk about, let's say, one million
files. Find/rsync uses stat for each file. I'm not sure whether the kernel
keeps a copy of the FAT (don't know how it's called for ext2/3) in memory,
but even in that case the context switches between the kernel and userspace
for each stat seem expensive and unnecessary, when we could read that
information directly from the FAT. That's what dump does, right ?

> Cyas, Florian

Thanks for the response,
Delian Krustev

From: Brent B. <br...@ca...> - 2004-02-02 07:44:26
On Sun, Feb 01, 2004 at 12:42:59PM +0200, Antonios Christofides wrote:
> Brent Busby wrote:
> > 1) Will dump track file deletions between incremental backups?
>
> Yes.
> (And GNU tar can also do this, though I found its options much harder to
> figure out.)

Really?? That's amazing. I didn't think tar paid enough attention to
directory lists to even notice if a file went away.

> > 2) If a file is copied onto a partition from another machine, and the
> > file has a datestamp that's older than the latest dump recorded in
> > /etc/dumpdates for that volume, will the next incremental catch the new
> > file, on the premise that it didn't exist there before?
>
> It will catch it, but not on the premise that it didn't exist before; on
> the premise that its ctime is later than the dump time. [...]

That's actually quite a bit more robust than I suspected. Thank you very
much for the information....

--
+ Brent A. Busby,     "The killer application for Windows was Visual
+ CATMIND RECORDINGS   Basic. It allowed you to make your hokey, self-
+ br...@ca...           made applications that did something stupid for
+ Pekin, IL (USA)      your enterprise...."  --Linus Torvalds

From: Antonios C. <an...@it...> - 2004-02-01 21:28:53
Brent Busby wrote:
> 1) Will dump track file deletions between incremental backups?

Yes.
(And GNU tar can also do this, though I found its options much harder to
figure out.)

> 2) If a file is copied onto a partition from another machine, and the
> file has a datestamp that's older than the latest dump recorded in
> /etc/dumpdates for that volume, will the next incremental catch the new
> file, on the premise that it didn't exist there before?

It will catch it, but not on the premise that it didn't exist before; on
the premise that its ctime is later than the dump time.

A file has three times: mtime (last modification), atime (last read), and
ctime (last inode modification). If you make any change to the inode, such
as changing permissions, modifying atime or mtime (e.g. with touch), or
changing the owner, then ctime gets modified. There is no way to override
this, and there is no way to alter ctime.

So, when you copy a file (e.g. with cp) with the option to preserve its
attributes, copying goes like this:

- First cp creates the destination file and copies the file contents there.
- While cp does this, the kernel automatically sets the file's mtime to
  the current time.
- When cp finishes copying the data, it asks the kernel to set the file's
  mtime to the source file's mtime.
- When the kernel does this, it automatically sets ctime to the current time.

Because that's how the kernel works, the same will happen whichever
utility you use for copying, including restore.

Dump looks at mtime and ctime and uses the max of these two in order to
decide if the file has changed or not.

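The cp behaviour described above can be watched directly with GNU coreutils
(file names are examples; stat's %y prints mtime, %z prints ctime):

	touch -d '2004-01-01' original
	cp -p original copy                     # -p preserves mtime, not ctime
	stat -c 'mtime=%y  ctime=%z' copy       # mtime is 2004-01-01, ctime is now
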