From: Peter V. <pe...@wn...> - 2007-06-13 17:01:38
Hi,

I have many DDS4 dump tapes created on Redhat 9 (dump 0.4b36) and have no
problem restoring to the same RH9 computer, or even to a different computer
running FC2. I have just built a new computer with FC6 (dump/restore version
0.4b41) and am having limited success restoring tapes to this computer that
were created on my RH9 computer. Some tapes restore fine, but most do not and
fail with:

    "Tape read error while skipping over inode 17008 continue? [yn]"

and the following in dmesg:

    "st0: Current: sense key: Medium Error
     Additional sense: Sequential positioning error"

Or I get a "Tape read error on first record" message.

Are there any differences between the versions of dump/restore on RH9 and FC6
that could account for this?

Thanks,
Peter
From: Stelian P. <st...@po...> - 2007-06-07 19:48:52
On Thursday, June 7, 2007 at 09:24 -0700, Kenneth Porter wrote:

> Stelian, I just wanted to let you know that just because the list is quiet,

Those are indeed quiet times :)

> doesn't mean your product goes unused. It's just nice and stable. I run it
> nightly from a script and it just works. Thanks again for a solid piece of
> code. I continue to recommend it as the best system for backing up ext2/3
> filesystems.

Thanks for your continuous support!

Stelian.
--
Stelian Pop <st...@po...>
From: Kenneth P. <sh...@se...> - 2007-06-07 16:24:49
Stelian, I just wanted to let you know that just because the list is quiet,
doesn't mean your product goes unused. It's just nice and stable. I run it
nightly from a script and it just works. Thanks again for a solid piece of
code. I continue to recommend it as the best system for backing up ext2/3
filesystems.
From: Stelian P. <st...@po...> - 2007-03-19 13:19:11
On Sunday, March 18, 2007 at 12:40 +0100, Helmut Jarausch wrote:

> Hi,
>
> trying to dump a partition I get an (input) disk error on
> /dev/hdc2 ... [sector 1753267, ext2blk 0]: count=219158
>
> Can anybody tell how I can find out which file on /dev/hdc2
> is involved?

debugfs's icheck command might be helpful.

--
Stelian Pop <st...@po...>
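For reference, a sketch of that debugfs workflow. The block-size assumption
and the inode number below are hypothetical placeholders rather than values
taken from Helmut's report; icheck maps a filesystem block to its owning
inode, and ncheck maps the inode back to a path name:

    BLK=$((1753267 * 512 / 4096))         # failing sector -> ext2 block, assuming a 4 KiB block size
    debugfs -R "icheck $BLK" /dev/hdc2    # reports the inode that owns that block
    INO=12345                             # hypothetical inode number printed by icheck
    debugfs -R "ncheck $INO" /dev/hdc2    # reports the path name(s) for that inode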
From: Helmut J. <jar...@ig...> - 2007-03-18 11:40:11
Hi,

trying to dump a partition I get an (input) disk error on
/dev/hdc2 ... [sector 1753267, ext2blk 0]: count=219158

Can anybody tell how I can find out which file on /dev/hdc2 is involved?

Many thanks,

Helmut Jarausch
Lehrstuhl fuer Numerische Mathematik
RWTH - Aachen University
D 52056 Aachen, Germany
From: Stelian P. <st...@po...> - 2007-03-12 11:14:15
On Friday, March 9, 2007 at 17:19 +0000, Ben Harris wrote:

> > Dump needs to have enough privileges to access the raw block device
> > directly anyway, and I would have expected BLKFLSBUF to work in these
> > conditions.
>
> It doesn't. BLKFLSBUF requires CAP_SYS_ADMIN, which isn't required just
> to open the device for reading. See linux/block/ioctl.c::blkdev_ioctl().

Right. Unfortunate but correct.

> > How can this be? The remount process explicitly flushes the data to
> > the disk, and in R/O mode no further modifications are allowed.
>
> Actually, I was mistaken -- using a r/o mount gives me a different failure
> mode. Using this test script:
[...]
> Each loop adds another unexpected file to the list, and running blockdev
> --flushbufs wipes the list out. If I actually restore the dump, the new
> files are missing from it.

You're correct; I reproduced this here using a simple - and small -
loop-mounted filesystem. I never saw this before because I always do the
dumps as root - so BLKFLSBUF works.

However, I am not sure whether this is the intended behaviour ("mounting r/o
means all further writes are disallowed, but you still need to manually flush
the buffers if you want to make sure the data reached the disk") or a genuine
bug in the kernel block layer.

Stelian.
--
Stelian Pop <st...@po...>
From: Ben H. <bj...@ca...> - 2007-03-09 17:20:15
On Fri, 9 Mar 2007, Stelian Pop wrote:

> On Thursday, March 8, 2007 at 16:31 +0000, Ben Harris wrote:
> > On Wed, 7 Feb 2007, Peter Münster wrote:
> >
> > > I think, it's expected, since you use dump on an active file-system.
> >
> > So it would appear. It looks like dump actually calls BLKFLSBUF itself,
> > so the problem only occurs when dump isn't running as root.
>
> Dump needs to have enough privileges to access the raw block device
> directly anyway, and I would have expected BLKFLSBUF to work in these
> conditions.

It doesn't. BLKFLSBUF requires CAP_SYS_ADMIN, which isn't required just to
open the device for reading. See linux/block/ioctl.c::blkdev_ioctl().

> > > How to deal with it:
> > > - you can mount the file-system read-only during the dump
> >
> > This doesn't help. Without the BLKFLSBUF, the dump is still inconsistent.
>
> How can this be? The remount process explicitly flushes the data to
> the disk, and in R/O mode no further modifications are allowed.

Actually, I was mistaken -- using a r/o mount gives me a different failure
mode. Using this test script:

    i=0
    while sleep 1; do
        echo $$.$i > testfile.$$.$i
        mount -o remount,ro /dev/stuff/test1 /mnt
        su -c "/sbin/dump -0f - . 2>/dev/null | /sbin/restore -Cf -" dump
        mount -o remount,rw /dev/stuff/test1 /mnt
        i=$(($i + 1))
    done

I get results like this:

    Dump   date: Fri Mar  9 16:31:53 2007
    Dumped from: the epoch
    Level 0 dump of /mnt on wraith:/dev/mapper/stuff-test1
    Label: none
    filesys = /mnt
    expected next file 32769, got 460
    expected next file 32769, got 461
    expected next file 32769, got 462
    Some files were modified!

Each loop adds another unexpected file to the list, and running blockdev
--flushbufs wipes the list out. If I actually restore the dump, the new files
are missing from it.

--
Ben Harris, University of Cambridge Computing Service.
Tel: (01223) 334728
From: Stelian P. <st...@po...> - 2007-03-09 16:15:18
On Thursday, March 8, 2007 at 16:31 +0000, Ben Harris wrote:

> On Wed, 7 Feb 2007, Peter Münster wrote:
>
> > I think, it's expected, since you use dump on an active file-system.
>
> So it would appear. It looks like dump actually calls BLKFLSBUF itself,
> so the problem only occurs when dump isn't running as root.

Dump needs to have enough privileges to access the raw block device directly
anyway, and I would have expected BLKFLSBUF to work in these conditions.

> > How to deal with it:
> > - you can mount the file-system read-only during the dump
>
> This doesn't help. Without the BLKFLSBUF, the dump is still inconsistent.

How can this be? The remount process explicitly flushes the data to the disk,
and in R/O mode no further modifications are allowed.

Stelian.
--
Stelian Pop <st...@po...>
From: Ben H. <bj...@ca...> - 2007-03-08 16:31:22
On Wed, 7 Feb 2007, Peter Münster wrote:

> I think, it's expected, since you use dump on an active file-system.

So it would appear. It looks like dump actually calls BLKFLSBUF itself,
so the problem only occurs when dump isn't running as root.

> How to deal with it:
> - you can mount the file-system read-only during the dump

This doesn't help. Without the BLKFLSBUF, the dump is still inconsistent.

> - you can use the snapshot feature of LVM (that's what I do, very nice)

That looks like by far the best solution. Unfortunately, not all of my users
use LVM at the moment.

Meanwhile, I think <http://dump.sourceforge.net/isdumpdeprecated.html> really
ought to be updated a little, since the following statements are now untrue:

    "you can safely use dump on ... read-only filesystems."
    "You can also safely use dump on idle filesystems if you sync before dumping"

--
Ben Harris, University of Cambridge Computing Service.
Tel: (01223) 334728
From: Stelian P. <st...@po...> - 2007-02-19 12:55:37
On Monday, February 19, 2007 at 10:52 +0100, Jean-Yves Boisiaud wrote:

> Hello,
>
> I use dump to backup a large file system to a LTO2 tape changer.
>
> The system is a debian Sarge running a 2.6.8-3-686-smp kernel.
> The hardware is a HP LTO-2 tape drive with an Overland LoaderXpress tape
> changer. The system has 2 GB of RAM and a swap of 2 GB too.
>
> Version of dump/restore is the latest one (dump 0.4b41 (using libext2fs
> 1.37 of 21-Mar-2005)), compiled by my own.
>
> The file system is ext3, its size is 1.3 TB with 300 GB to backup.
> Data to backup comes from BackupPC, a simple but efficient backup-on-disk
> system.
>
> Dump works perfectly. Here is the command I run:
>
>   /usr/local/sbin/dump -0 -b 64 -f /dev/st0 -M -F \
>       /usr/local/sbin/save_tape_change.sh -I 0 /dev/lvm0/backuppc
>
> The script save_tape_change.sh drives the tape changer.
> Data are dumped to tapes in 4h49m.
>
> The problem is when I run restore to check the tapes are OK.
> Here is the restore command I run:
>
>   /usr/local/sbin/restore -r -y -b 64 -f /dev/st0 -M -F \
>       /usr/local/sbin/save_tape_change.sh -N
>
> And restore takes hours, filling memory and swap.
>
> My question is how could I run restore in the same time dump takes?

You won't be able to run it "in the same time" for the simple reason that
while dump accesses the raw filesystem (which is quite optimal), restore does
its filesystem accesses in a purely standard way (using open/read/write
etc.). So it is normal for restore to be an order of magnitude slower than
dump.

> How much memory do I need (or how could I compute the memory needed)?

You need at least enough memory to handle the inode maps (2 binary maps, each
being max_ino bits in size). But this is a constant amount; once allocated it
shouldn't grow much during the execution.

Restore also uses plenty of space in /tmp (well, TMPDIR) to cache the whole
directory structure of the filesystem being restored. Is /tmp on your systems
backed by disk or by memory (tmpfs)?

--
Stelian Pop <st...@po...>
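A minimal sketch of checking and redirecting restore's temporary directory so
its directory cache lands on disk rather than on a memory-backed tmpfs (the
scratch path is a hypothetical example; the restore arguments are the ones
from Jean-Yves' message):

    df -hT /tmp                          # check whether /tmp is tmpfs or disk-backed
    mkdir -p /var/tmp/restore-scratch    # hypothetical disk-backed scratch area
    TMPDIR=/var/tmp/restore-scratch /usr/local/sbin/restore -r -y -b 64 -f /dev/st0 -M \
        -F /usr/local/sbin/save_tape_change.sh -N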
From: Jean-Yves B. <jy...@er...> - 2007-02-19 09:52:58
Hello,

I use dump to backup a large file system to a LTO2 tape changer.

The system is a debian Sarge running a 2.6.8-3-686-smp kernel. The hardware is
a HP LTO-2 tape drive with an Overland LoaderXpress tape changer. The system
has 2 GB of RAM and a swap of 2 GB too.

Version of dump/restore is the latest one (dump 0.4b41 (using libext2fs 1.37
of 21-Mar-2005)), compiled by my own.

The file system is ext3, its size is 1.3 TB with 300 GB to backup. Data to
backup comes from BackupPC, a simple but efficient backup-on-disk system.

Dump works perfectly. Here is the command I run:

    /usr/local/sbin/dump -0 -b 64 -f /dev/st0 -M -F \
        /usr/local/sbin/save_tape_change.sh -I 0 /dev/lvm0/backuppc

The script save_tape_change.sh drives the tape changer. Data are dumped to
tapes in 4h49m.

The problem is when I run restore to check the tapes are OK. Here is the
restore command I run:

    /usr/local/sbin/restore -r -y -b 64 -f /dev/st0 -M -F \
        /usr/local/sbin/save_tape_change.sh -N

And restore takes hours, filling memory and swap.

My question is how could I run restore in the same amount of time dump takes?
How much memory do I need (or how could I compute the memory needed)? How
could I run restore using less memory?

Thanks for your answers.
From: Stelian P. <st...@po...> - 2007-02-10 22:12:59
On Tuesday, February 6, 2007 at 22:37 -0600, Brent Busby wrote:

> About a year or so ago, I found myself fighting a problem described in
> the man page for the Linux SCSI tape driver, st(4):
[...]
> I know this is really more of a tape driver problem than a dump problem,
> but the Linux SCSI tape list is practically dead -- even spam doesn't
> live there anymore, so I thought this might be a better place to ask,
> especially since I use dump almost exclusively for backups.

Well, it doesn't look like anyone's going to answer here either :)

I'd suggest complaining about this directly to the SCSI tape maintainer
(Kai Makisara), or posting your mail directly on the linux-kernel mailing
list...

--
Stelian Pop <st...@po...>
From: <pm...@fr...> - 2007-02-07 19:17:38
On Wed, 7 Feb 2007, Ben Harris wrote:

> Is this behaviour to be expected? Does it represent a bug in either dump
> or Linux? How do other people deal with it?

Hello Ben,

I think it's expected, since you use dump on an active file-system.

How to deal with it:
- you can mount the file-system read-only during the dump
- you can use the snapshot feature of LVM (that's what I do, very nice)

Cheers, Peter
--
http://pmrb.free.fr/contact/
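For reference, a hedged sketch of the LVM snapshot approach Peter mentions.
The volume group, logical volume, snapshot size, mount point and tape device
are all illustrative placeholders rather than his actual setup: the idea is
to dump the frozen snapshot instead of the live filesystem, then drop it.

    lvcreate -L 2G -s -n homesnap /dev/vg0/home   # snapshot of the live logical volume
    mount -o ro /dev/vg0/homesnap /mnt/snap       # mount the snapshot read-only
    dump -0 -f /dev/st0 /mnt/snap                 # dump the quiescent snapshot
    umount /mnt/snap
    lvremove -f /dev/vg0/homesnap                 # release the snapshot space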
From: Ben H. <bj...@ca...> - 2007-02-07 15:03:08
I've recently found that some of the dumps produced by dump don't correctly
record certain files. In the example that brought this problem to my
attention, a file that was created six hours before dump was run was recorded
in the dump as consisting entirely of zeroes, which it didn't when viewed
through the filesystem.

I can easily reproduce the problem using the following test script:

    i=0
    while sleep 1; do
        echo $$.$i > testfile.$$.$i
        /sbin/dump -0f - . 2>/dev/null | /sbin/restore -Cf -
        i=$(($i + 1))
    done

After a few iterations, there will usually be some test files that repeatedly
appear different in tape and disk copies, and they can continue to be dumped
incorrectly for tens of minutes at least (I've not yet run the test for
longer). Running "sync" doesn't seem to help matters. Running "blockdev
--flushbufs" does help, in that files created before the flush start appearing
correctly, but it can only be run by root, which makes it a bit of a nuisance
to set up.

Is this behaviour to be expected? Does it represent a bug in either dump or
Linux? How do other people deal with it?

My tests have so far been on the following systems:

    SUSE LINUX 10.1 (X86-64)
        dump 0.4b41-14
        kernel 2.6.16.27-0.6-smp
        e2fsprogs 1.38-25.9
        glibc 2.4-31.1

    SUSE LINUX Enterprise Server 9 (i586)
        dump 0.4b35-41.1
        kernel 2.6.5-7.283-smp
        e2fsprogs 1.38-4.18
        glibc 2.3.3-98.73

    Debian testing/unstable
        dump 0.4b41-2
        kernel 2.6.15-1-amd64-k8-smp
        e2fsprogs 1.38+1.39-WIP-2005.12.31-1
        libc6 2.3.6-10

Sample dump output is:

    DUMP: Date of this level 0 dump: Wed Feb  7 14:56:10 2007
    DUMP: Dumping /dev/sda5 (/home (dir /dump/test)) to standard output
    DUMP: Label: none
    DUMP: Writing 10 Kilobyte records
    DUMP: mapping (Pass I) [regular files]
    DUMP: mapping (Pass II) [directories]
    DUMP: estimated 804 blocks.
    DUMP: Volume 1 started with block 1 at: Wed Feb  7 14:56:10 2007
    DUMP: dumping (Pass III) [directories]
    DUMP: dumping (Pass IV) [regular files]
    DUMP: Volume 1 completed at: Wed Feb  7 14:56:10 2007
    DUMP: Volume 1 800 blocks (0.78MB)
    DUMP: 800 blocks (0.78MB)
    DUMP: finished in less than a second
    DUMP: Date of this level 0 dump: Wed Feb  7 14:56:10 2007
    DUMP: Date this dump completed: Wed Feb  7 14:56:10 2007
    DUMP: Average transfer rate: 0 kB/s
    DUMP: DUMP IS DONE

--
Ben Harris, University of Cambridge Computing Service.
Tel: (01223) 334728
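One workaround implied by the report is to force the buffer-cache flush as
root just before running the unprivileged dump; a minimal sketch (device,
user and output names are hypothetical placeholders, not from Ben's setup):

    sync                                             # flush dirty pages
    blockdev --flushbufs /dev/sda5                   # needs root (CAP_SYS_ADMIN)
    su -c "/sbin/dump -0f /backup/home.dump /home" dump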
From: Brent B. <br...@ke...> - 2007-02-07 04:37:40
About a year or so ago, I found myself fighting a problem described in the man
page for the Linux SCSI tape driver, st(4):

    MT_ST_BUFFER_WRITES (Default: true)
        Buffer all write operations in fixed-block mode. If this option is
        false and the drive uses a fixed block size, then all write operations
        must be for a multiple of the block size. This option must be set
        false to write reliable multi-volume archives.

    MT_ST_ASYNC_WRITES (Default: true)
        When this option is true, write operations return immediately without
        waiting for the data to be transferred to the drive if the data fits
        into the driver's buffer. The write threshold determines how full the
        buffer must be before a new SCSI write command is issued. Any errors
        reported by the drive will be held until the next operation. This
        option must be set false to write reliable multi-volume archives.

Basically, this is saying that you have to 'mt stclearoptions async-writes
buffer-writes' before you start writing a multivolume tape set, or you're
going to corrupt files at the span points between tapes.

This had me deliriously frustrated for a while last year until I found out
about it, partially because it confounded me that two settings in the tape
driver which have the potential to make backups spanning more than one tape
impossible were set to do just that BY DEFAULT, and partially because of the
amount of research it took me to uncover the syntax to fix it with
'mt stclearoptions async-writes buffer-writes', searching for which was a
jungle safari also. (Nowhere in the man pages for st(4) or mt(1) does it
exactly come right out with that.)

But finally I discovered that, tested it, and it was reliable over a 4-tape
set. I verified the data back, and it was good. I think I was using 2.4.31
then. (Yes, I stayed with 2.4 a long time...)

I'm now using 2.6.19.2, with the same exact tape drive, and it appears that
sometime between then and now the maintainers of the tape driver 'st' have
done something to once again make spanning multiple volumes difficult or
impossible. The very same 4-volume set, read with the very same tape driver
settings, on the same tape drive, now yields familiar errors about "Missing
blocks at the end of ..., assuming hole", "resync restore, skipped 321
blocks", "expected next file 3698222, got 0", and so on, every time a new tape
is mounted during the multivolume restore. That is exactly what it used to do
for files at the span point of every tape change when you *didn't* take care
to 'mt stclearoptions async-writes buffer-writes' beforehand. Now it does that
regardless.

I know this is really more of a tape driver problem than a dump problem, but
the Linux SCSI tape list is practically dead -- even spam doesn't live there
anymore -- so I thought this might be a better place to ask, especially since
I use dump almost exclusively for backups.

--
+ Brent A. Busby         + "...the killer application for Windows was Visual
+ UNIX Systems Admin     +  Basic. It allowed you to make your hokey, self-
+ University of Chicago  +  made applications that did something stupid for
+ James Franck Institute +  your enterprise."  --Linus Torvalds
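For reference, a minimal sketch of the pre-dump tape setup Brent describes.
The device name and the dump arguments are illustrative assumptions; the
option names come straight from his message:

    # disable the driver's buffered/async writes before writing a multi-volume set
    mt -f /dev/nst0 stclearoptions async-writes buffer-writes
    mt -f /dev/nst0 rewind
    dump -0u -f /dev/nst0 /home    # example dump that may span several tapes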
From: Tony N. <ton...@ge...> - 2007-01-10 22:51:09
At 12:23 PM -0500 1/10/07, Dragan Krnic wrote:

> I am trying to salvage data from an unlucky volume.
> The dump utility starts with following messages:
>
>   DUMP: Date of this level 0 dump: Wed Jan 10 18:07:37 2007
>   DUMP: Dumping /dev/sdl (/disc) to standard output
>   DUMP: Excluding inode 7 (resize inode) from dump
>   DUMP: Label: none
>   DUMP: Writing 64 Kilobyte records
>   DUMP: mapping (Pass I) [regular files]
>
> and then after about two and a half minutes it says
>
>   /dev/sdl: Can't read next inode while scanning inode #2342912
>
> The volume was quite big, 3.5 TB, but it was only about 1.2 TB
> full. The dump utility apparently finds more than 2 million
> inodes but unfortunately it breaks with the above message
> before actually dumping any data.
>
> Is there some version of dump which would ignore such errors
> and just try to make sense out of the remaining sound data,
> even if there are corrupted chunks in between?
>
> Do you have some other suggestion, how to salvage as much
> data as possible (e2fsck has even less patience with it)?

Off the top of my head, you might start by patching dump's traverse.c
mapfiles() so that instead of exiting it tries to continue, as it does for a
EXT2_ET_BAD_BLOCK_IN_INODE_TABLE error. Dump may barf in other places as well,
and you'd need to deal with them as they arise.

--
____________________________________________________________________
TonyN.:'                      <mailto:ton...@ge...>
      '                       <http://www.georgeanelson.com/>
From: Stelian P. <st...@po...> - 2007-01-10 22:12:51
On Wednesday, January 10, 2007 at 12:23 -0500, Dragan Krnic wrote:

> I am trying to salvage data from an unlucky volume.
> The dump utility starts with following messages:
>
>   DUMP: Date of this level 0 dump: Wed Jan 10 18:07:37 2007
>   DUMP: Dumping /dev/sdl (/disc) to standard output
>   DUMP: Excluding inode 7 (resize inode) from dump
>   DUMP: Label: none
>   DUMP: Writing 64 Kilobyte records
>   DUMP: mapping (Pass I) [regular files]
>
> and then after about two and a half minutes it says
>
>   /dev/sdl: Can't read next inode while scanning inode #2342912
>
> The volume was quite big, 3.5 TB, but it was only about 1.2 TB
> full. The dump utility apparently finds more than 2 million
> inodes but unfortunately it breaks with the above message
> before actually dumping any data.
>
> Is there some version of dump which would ignore such errors
> and just try to make sense out of the remaining sound data,
> even if there are corrupted chunks in between?

The problem here is that the filesystem structure itself is corrupted.
You can modify dump to make it stop iterating the inodes just before the
corrupted one, but there is a good chance that you'll encounter even more
data corruption when trying to find the data blocks associated with the
inodes.

> Do you have some other suggestion, how to salvage as much
> data as possible (e2fsck has even less patience with it)?

e2fsck is based on the same ext2 low-level libraries as dump, but it has much
more intelligence to deal with errors. If e2fsck is not able to salvage at
least a part of the data, I don't think dump will.

--
Stelian Pop <st...@po...>
From: Dragan K. <dk...@ly...> - 2007-01-10 17:23:26
I am trying to salvage data from an unlucky volume.
The dump utility starts with following messages:

    DUMP: Date of this level 0 dump: Wed Jan 10 18:07:37 2007
    DUMP: Dumping /dev/sdl (/disc) to standard output
    DUMP: Excluding inode 7 (resize inode) from dump
    DUMP: Label: none
    DUMP: Writing 64 Kilobyte records
    DUMP: mapping (Pass I) [regular files]

and then after about two and a half minutes it says

    /dev/sdl: Can't read next inode while scanning inode #2342912

The volume was quite big, 3.5 TB, but it was only about 1.2 TB full. The dump
utility apparently finds more than 2 million inodes but unfortunately it
breaks with the above message before actually dumping any data.

Is there some version of dump which would ignore such errors and just try to
make sense out of the remaining sound data, even if there are corrupted chunks
in between?

Do you have some other suggestion how to salvage as much data as possible
(e2fsck has even less patience with it)?

Regards
Dragan
From: Stelian P. <st...@po...> - 2006-12-03 17:09:46
On Monday, November 27, 2006 at 17:14 -0600, Mark E. Walker wrote:

> Greetings,

Hi,

> I've got a problem with restoring dumps using version 0.4b40. Here's my
> scenario.
[...]
> reading QFA positions from full-root.qa
> resync restore, skipped 1 blocks
> resync restore, skipped 1 blocks
> resync restore, skipped 1 blocks
> resync restore, skipped 1 blocks
> resync restore, skipped 1 blocks
> /root/bin/restore.sh: line 3: 8254 Segmentation fault  /sbin/restore -x
>     -A $1.toc -f $1.dump -Q $1.qa $2

This seems to be definitely a bug, and a new one, not reported before.

Do you experience the same problem with the latest version of restore,
0.4b41? Do you experience the same problem if you're running restore without
the -A option? Or without the -Q option?

Stelian.
--
Stelian Pop <st...@po...>
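A sketch of that isolation test, following the option set from Mark's script
(the file names and target path are placeholders for his actual arguments;
the point is simply to drop one option at a time and see which variant still
segfaults):

    /sbin/restore -x -f full-root.dump -Q full-root.qa some/path     # without -A
    /sbin/restore -x -f full-root.dump -A full-root.toc some/path    # without -Q
    /sbin/restore -x -f full-root.dump some/path                     # neither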
From: Mark E. W. <mw...@dv...> - 2006-11-27 23:14:33
Greetings,

I've got a problem with restoring dumps using version 0.4b40. Here's my
scenario.

Running FC3:

    Linux version 2.6.9-1.681_FC3smp (bhc...@tw...) (gcc version 3.4.2
    20041017 (Red Hat 3.4.2-6.fc3)) #1 SMP Thu Nov 18 15:19:10 EST 2004

Backing up with the following script:

    *****
    PROG=/sbin/dump
    OPT="-0 -j -u"
    DUMP="$PROG $OPT"
    DATE=`/bin/date +%y%m%d`
    BACKDIR="/backup/$DATE"
    LOG="$BACKDIR/log"

    if [ ! -r $BACKDIR ] ; then
        mkdir $BACKDIR
    fi

    echo LOG: start backup on `date` >> $LOG 2>&1
    (time $DUMP -f $BACKDIR/full-root.dump \
        -Q $BACKDIR/full-root.qa \
        -A $BACKDIR/full-root.toc / ) >> $LOG 2>&1
    echo LOG:EXIT:$? >> $LOG 2>&1
    echo LOG: backup ended on `date` >> $LOG 2>&1
    *****

The backup works fine. The restore does this:

    reading QFA positions from full-root.qa
    resync restore, skipped 1 blocks
    resync restore, skipped 1 blocks
    resync restore, skipped 1 blocks
    resync restore, skipped 1 blocks
    resync restore, skipped 1 blocks
    /root/bin/restore.sh: line 3: 8254 Segmentation fault  /sbin/restore -x
        -A $1.toc -f $1.dump -Q $1.qa $2

My dump file is a little over 3.2 GB, the .toc is about 51 MB. I have tried
doing the dump to local filespace so that NFS is taken out of the picture
completely, and got the same result.

I have this same solution running on 4 or 5 other Linux servers, various
flavors from Slack to FC5, and it works fine. All of them are 0.4b39 or
earlier. I'll be the first to admit that I know nothing about diagnosing
problems like this. I can copy the 0.4b39 files from a working server and they
will backup and restore fine, albeit without the ACL attributes which I would
like to have. I also have a couple of newer CentOS builds that are having the
same problem.

I will apologize in advance if this is a really stupid question. If it is,
please point me in the proper direction and I'll be eternally grateful!

Mark
From: Stelian P. <st...@po...> - 2006-10-18 14:12:51
On Wednesday, October 18, 2006 at 15:41 +0200, Vincenzo Versi wrote:

> Hi Stelian,
> I did it! I found that ext2fs.h didn't define the struct ext2_inode that
> was defined in the header ext2_fs.h, so I included the ext2_fs.h file in
> ext2fs.h and it compiled fine. I now have dump and restore working.

Congratulations :)

> Thank you for all your suggestions.
>
> P.S. Will I have troubles in the future from this modification, I mean
> in the system?

You should be safe.

Stelian.
--
Stelian Pop <st...@po...>
From: Vincenzo V. <vin...@gm...> - 2006-10-18 13:42:13
Hi Stelian,

I did it! I found that ext2fs.h didn't define the struct ext2_inode that was
defined in the header ext2_fs.h, so I included the ext2_fs.h file in ext2fs.h
and it compiled fine. I now have dump and restore working.

Thank you for all your suggestions.

P.S. Will I have troubles in the future from this modification, I mean in the
system?
From: Stelian P. <st...@po...> - 2006-10-15 21:39:14
On Friday, October 13, 2006 at 11:07 +0200, Vincenzo Versi wrote:

> Hi Stelian, thanks for the latest advice, because I installed dump and it
> seems to work. The trouble now is with restore: I can't build the binary,
> and here is the output of make:
>
>   for i in compat/lib compat/include common dump restore rmt; do \
>       (cd $i && make all) || exit 1; \
>   done
>   make[1]: Entering directory `/home/vincenzo/dump/dump-0.4b41/compat/lib'
>   make[1]: Nothing to be done for `all'.
>   make[1]: Leaving directory `/home/vincenzo/dump/dump-0.4b41/compat/lib'
>   make[1]: Entering directory `/home/vincenzo/dump/dump-0.4b41/compat/include'
>   make[1]: Nothing to be done for `all'.
>   make[1]: Leaving directory `/home/vincenzo/dump/dump-0.4b41/compat/include'
>   make[1]: Entering directory `/home/vincenzo/dump/dump-0.4b41/common'
>   make[1]: Nothing to be done for `all'.
>   make[1]: Leaving directory `/home/vincenzo/dump/dump-0.4b41/common'
>   make[1]: Entering directory `/home/vincenzo/dump/dump-0.4b41/dump'
>   make[1]: Nothing to be done for `all'.
>   make[1]: Leaving directory `/home/vincenzo/dump/dump-0.4b41/dump'
>   make[1]: Entering directory `/home/vincenzo/dump/dump-0.4b41/restore'
>   gcc -c -D_BSD_SOURCE -D_USE_BSD_SIGNAL -g -O2 -pipe -I.. -I../compat/include
>       -I../restore -DRDUMP -DRRESTORE -DLINUX_FORK_BUG -DHAVE_LZO
>       -D_PATH_DUMPDATES=\"/usr/local/etc/dumpdates\"
>       -D_DUMP_VERSION=\" 0.4b41\" xattr.c -o xattr.o
>   In file included from ../compat/include/bsdcompat.h:15,
>                    from xattr.c:44:
>   /usr/include/ext2fs/ext2fs.h:209: warning: `struct ext2_inode'
>       declared inside parameter list
>   /usr/include/ext2fs/ext2fs.h:209: warning: its scope is only this
>       definition or declaration, which is probably not what you want.
[...]

I'm afraid you'll have to dig out a solution for this yourself. Most likely
/usr/include/ext2fs/ext2fs.h, around line 209, uses some type which is defined
in a system header file that has not been included before. Since all the other
files in dump and restore, except xattr.c, compiled fine, I suggest you look
at the #includes at the top of the different files and determine which one is
missing in xattr.c.

This is a problem with the libext2fs headers, not with dump per se...

Stelian.
--
Stelian Pop <st...@po...>
From: Vincenzo V. <vin...@gm...> - 2006-10-13 09:07:07
Hi Stelian,

Thanks for the latest advice, because I installed dump and it seems to work.
The trouble now is with restore: I can't build the binary, and here is the
output of make:

    for i in compat/lib compat/include common dump restore rmt; do \
        (cd $i && make all) || exit 1; \
    done
    make[1]: Entering directory `/home/vincenzo/dump/dump-0.4b41/compat/lib'
    make[1]: Nothing to be done for `all'.
    make[1]: Leaving directory `/home/vincenzo/dump/dump-0.4b41/compat/lib'
    make[1]: Entering directory `/home/vincenzo/dump/dump-0.4b41/compat/include'
    make[1]: Nothing to be done for `all'.
    make[1]: Leaving directory `/home/vincenzo/dump/dump-0.4b41/compat/include'
    make[1]: Entering directory `/home/vincenzo/dump/dump-0.4b41/common'
    make[1]: Nothing to be done for `all'.
    make[1]: Leaving directory `/home/vincenzo/dump/dump-0.4b41/common'
    make[1]: Entering directory `/home/vincenzo/dump/dump-0.4b41/dump'
    make[1]: Nothing to be done for `all'.
    make[1]: Leaving directory `/home/vincenzo/dump/dump-0.4b41/dump'
    make[1]: Entering directory `/home/vincenzo/dump/dump-0.4b41/restore'
    gcc -c -D_BSD_SOURCE -D_USE_BSD_SIGNAL -g -O2 -pipe -I.. -I../compat/include
        -I../restore -DRDUMP -DRRESTORE -DLINUX_FORK_BUG -DHAVE_LZO
        -D_PATH_DUMPDATES=\"/usr/local/etc/dumpdates\"
        -D_DUMP_VERSION=\" 0.4b41\" xattr.c -o xattr.o
    In file included from ../compat/include/bsdcompat.h:15,
                     from xattr.c:44:
    /usr/include/ext2fs/ext2fs.h:209: warning: `struct ext2_inode' declared inside parameter list
    /usr/include/ext2fs/ext2fs.h:209: warning: its scope is only this definition or declaration, which is probably not what you want.
    /usr/include/ext2fs/ext2fs.h:211: warning: `struct ext2_inode' declared inside parameter list
    In file included from ../compat/include/bsdcompat.h:15,
                     from xattr.c:44:
    /usr/include/ext2fs/ext2fs.h:531: warning: `struct ext2_inode' declared inside parameter list
    /usr/include/ext2fs/ext2fs.h:584: warning: `struct ext2_dir_entry' declared inside parameter list
    /usr/include/ext2fs/ext2fs.h:602: warning: `struct ext2_dir_entry' declared inside parameter list
    /usr/include/ext2fs/ext2fs.h:666: warning: `struct ext2_inode' declared inside parameter list
    /usr/include/ext2fs/ext2fs.h:679: warning: `struct ext2_inode' declared inside parameter list
    /usr/include/ext2fs/ext2fs.h:681: warning: `struct ext2_inode' declared inside parameter list
    /usr/include/ext2fs/ext2fs.h:784: warning: `struct ext2_inode' declared inside parameter list
    /usr/include/ext2fs/ext2fs.h:787: warning: `struct ext2_inode' declared inside parameter list
    /usr/include/ext2fs/ext2fs.h: In function `ext2fs_group_of_blk':
    /usr/include/ext2fs/ext2fs.h:958: dereferencing pointer to incomplete type
    /usr/include/ext2fs/ext2fs.h:959: dereferencing pointer to incomplete type
    /usr/include/ext2fs/ext2fs.h: In function `ext2fs_group_of_ino':
    /usr/include/ext2fs/ext2fs.h:967: dereferencing pointer to incomplete type
    make[1]: *** [xattr.o] Error 1
    make[1]: Leaving directory `/home/vincenzo/dump/dump-0.4b41/restore'
    make: *** [all] Error 1

Well, sorry, but I don't know. Thank you in advance.

Vincenzo
From: Stelian P. <st...@po...> - 2006-10-11 22:02:31
On Wednesday, October 11, 2006 at 16:42 +0200, Vincenzo Versi wrote:

> hi all, sorry I'm late. I tried to install 0.4b41 but I've received these
> errors in the make output,
[....]
>   gcc -c -D_BSD_SOURCE -D_USE_BSD_SIGNAL -g -O2 -pipe -I.. -I../compat/include
>       -I../dump -DRDUMP -DRRESTORE -DLINUX_FORK_BUG -DHAVE_LZO
>       -D_PATH_DUMPDATES=\"/usr/local/etc/dumpdates\"
>       -D_DUMP_VERSION=\" 0.4b41\" dumprmt.c -o dumprmt.o
>   In file included from ../compat/include/bsdcompat.h:14,
>                    from dumprmt.c:56:
>   /usr/include/ext2fs/ext2fs.h:700: parse error before `FILE'

I'm not sure what the problem is. Most probably it's a bug in this very old
version of the ext2fs libraries, where ext2fs.h doesn't include stdio.h to
get the definition of the FILE structure.

You can probably work around this by either editing
/usr/include/ext2fs/ext2fs.h directly, or by locally modifying
compat/include/bsdcompat.h to

    #include <stdio.h>

before the inclusion of ext2fs.h.

Stelian.
--
Stelian Pop <st...@po...>
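As a minimal sketch of that second workaround (the line number comes from the
quoted compiler message, which reports the ext2fs.h include at bsdcompat.h
line 14 in this particular tree; treat that as an assumption about the source
layout):

    # insert '#include <stdio.h>' just above the ext2fs.h include in bsdcompat.h
    sed -i '14i #include <stdio.h>' compat/include/bsdcompat.h
    make clean && make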