From: Brent B. <br...@ke...> - 2008-09-06 04:27:58
I have been restoring a set of ten tapes, and the last one has proven to be in horrible shape. Any tips on making a 'restore -R' keep going even if it wants to die? I am sure many files could be read if it weren't for the tendency it has to "lose sync" and give up. (Just using '-y' doesn't make it determined enough to stop quitting...)

--
+ Brent A. Busby          + "We've all heard that a million monkeys
+ UNIX Systems Admin      + banging on a million typewriters will
+ University of Chicago   + eventually reproduce the entire works of
+ Physical Sciences Div.  + Shakespeare. Now, thanks to the Internet,
+ James Franck Institute  + we know this is not true." -Robert Wilensky
From: Kenneth P. <sh...@se...> - 2008-09-05 23:19:38
Is there a way to resume "restore -C" after a transient hardware read failure?

I'm backing up to an external USB NTFS drive, using compression. The resulting backup is about 150 gigabytes (after compression). After the backup I do a verify with "restore -C". For the last two full backups, I get a read error at sector 268435455 which turns out to be 0xFFFFFFF. I'm guessing this might be an issue either in the ntfs-3g FUSE filesystem or the underlying USB2 subsystem.

I'm able to read the sector fine with dd:

dd if=/dev/sdb of=/tmp/my.sector bs=512 count=10 skip=268435453

(Numbers chosen here to read a few blocks on either side of the claimed failing block.)

The kernel messages look like this:

Sep 5 15:41:37 segw2 kernel: sd 5:0:0:0: Device not ready: <6>: Current: sense key: Not Ready
Sep 5 15:41:37 segw2 ntfs-3g[22860]: ntfs_attr_pread: ntfs_pread failed: Input/output error
Sep 5 15:41:37 segw2 kernel: Add. Sense: Logical unit not ready, cause not reportable
Sep 5 15:41:37 segw2 kernel:
Sep 5 15:41:37 segw2 kernel: end_request: I/O error, dev sdb, sector 268435455
Sep 5 15:41:37 segw2 ntfs-3g[22860]: ntfs_attr_pread error reading '/Shelob/0/root/dump' at offset 132989304832: 4096 <> -1: Input/
Sep 5 15:41:37 segw2 kernel: Buffer I/O error on device sdb1, logical block 33554424
From: Stelian P. <st...@po...> - 2008-08-21 21:13:09
On Thursday, 21 August 2008 at 13:57 -0700, Perry Hutchison wrote:

> > > What would you think of adding some options:
> > >
> > > -D filesystem
> > [...]
> > > -i
> > [...]
> > > -X
> > [...]
> >
> > I'm confused. To what program do you intend to pass those options ?
> > (Linux) dump manpage or (my local copy of) losetup manpage do not
> > mention those.
>
> I'm suggesting adding those capabilities to dump.

Ok, got it now.

> I would expect -D to be fairly easy,

indeed

> and -i also provided it's not necessary to have read the Nth inode
> block in order to find the (N+1)th inode block.

dump uses e2fsprogs' ext2fs_block_iterate2() to iterate over the blocks of one inode. I'm not sure (at least it is not documented) that ext2fs_block_iterate2() allows the behaviour you're searching for.

> -X might be more troublesome, depending on how much block accounting
> dump is already doing.

Dump does not do any block accounting, it relies on e2fsprogs functions to obtain the block numbers it needs to save.

Stelian.

--
Stelian Pop <st...@po...>
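For context, a minimal stand-alone sketch of driving ext2fs_block_iterate2(). It is illustrative only, not dump's code: the program name, option handling and output format are invented for the example, and it assumes the libext2fs headers are installed (build with: cc -o walkblocks walkblocks.c -lext2fs).

/* Sketch only -- not dump's implementation. */
#include <stdio.h>
#include <stdlib.h>
#include <ext2fs/ext2fs.h>

/* Called once per block of the inode; returning 0 continues the walk. */
static int print_block(ext2_filsys fs, blk_t *blocknr, e2_blkcnt_t blockcnt,
                       blk_t ref_blk, int ref_offset, void *priv)
{
        (void)fs; (void)ref_blk; (void)ref_offset; (void)priv;
        printf("logical block %lld -> physical block %u\n",
               (long long) blockcnt, (unsigned) *blocknr);
        return 0;
}

int main(int argc, char **argv)
{
        ext2_filsys fs;
        ext2_ino_t ino;
        errcode_t err;

        if (argc != 3) {
                fprintf(stderr, "usage: %s <device-or-image> <inode-number>\n", argv[0]);
                return 1;
        }
        ino = (ext2_ino_t) strtoul(argv[2], NULL, 10);

        err = ext2fs_open(argv[1], 0, 0, 0, unix_io_manager, &fs);
        if (err) {
                fprintf(stderr, "ext2fs_open failed (error %ld)\n", (long) err);
                return 1;
        }

        /* Walk the data blocks of one inode, read-only. */
        err = ext2fs_block_iterate2(fs, ino, BLOCK_FLAG_READ_ONLY, NULL,
                                    print_block, NULL);
        if (err)
                fprintf(stderr, "block iteration failed (error %ld)\n", (long) err);

        ext2fs_close(fs);
        return err ? 1 : 0;
}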
From: <phu...@wi...> - 2008-08-21 20:57:55
> > What would you think of adding some options:
> >
> > -D filesystem
> [...]
> > -i
> [...]
> > -X
> [...]
>
> I'm confused. To what program do you intend to pass those options ?
> (Linux) dump manpage or (my local copy of) losetup manpage do not
> mention those.

I'm suggesting adding those capabilities to dump. I would expect -D to be fairly easy, and -i also provided it's not necessary to have read the Nth inode block in order to find the (N+1)th inode block. -X might be more troublesome, depending on how much block accounting dump is already doing.
From: Stelian P. <st...@po...> - 2008-08-21 20:42:54
On Thursday, 21 August 2008 at 13:07 -0700, Perry Hutchison wrote:

> > Yes, dump really expects a block device. So you should
> > 'losetup' a loop device on top of the disk file.
>
> What would you think of adding some options:
>
> -D filesystem
[...]
> -i
[...]
> -X
[...]

I'm confused. To what program do you intend to pass those options ? (Linux) dump manpage or (my local copy of) losetup manpage do not mention those.

Stelian.

--
Stelian Pop <st...@po...>
From: <phu...@wi...> - 2008-08-21 20:07:32
> > I had already made a copy using dd with conv=noerror ...
> > - the dd manpage does not say what dd does about the
> > unreadable blocks ... I suspect it skipped them entirely ...
>
> From what I read, you should have been using:
> dd conv=noerror,sync

I considered that, but was not entirely sure from the manpage whether it would have the desired effect.

> Even better, ddrescue seems to be more appropriate:
> http://www.gnu.org/software/ddrescue/ddrescue.html

Definitely.

> > - If I try to have dump read from the image file, it
> > misinterprets this as a request to dump the image
> > file itself:
>
> Yes, dump really expects a block device. So you should
> 'losetup' a loop device on top of the disk file.

What would you think of adding some options:

-D filesystem
        Interpret _filesystem_ as the filesystem to be dumped, even if it is neither a mounted filesystem nor a device (e.g. it is a regular file containing an image of an ext2 filesystem, such as might be produced by ddrescue(8)). If any _files-to-dump_ are specified they are interpreted as names within _filesystem_.

-i
        Ignore read errors in blocks containing inodes as well as in data blocks, treating the inodes in such blocks as unallocated, instead of terminating. This enables recovery of data represented by readable inodes, even when some inodes are not readable.

-X
        Any data blocks which the freelist shows as allocated, but which are not accounted for by any inode, are dumped with made-up names that include the block number. This is mostly useful along with -i, or with a -D image produced by ddrescue(8) from a filesystem which contained unreadable inode blocks, to recover the data associated with the missing inodes.
From: Stelian P. <st...@po...> - 2008-08-20 21:30:08
On Wednesday, 20 August 2008 at 17:34 +0200, Dr. Daniel Eichelsbacher wrote:

> /dev/hde1: File not found by ext2_lookup while translating var/www/foobar

This means that you want to back up the var/www/foobar directory which is supposed to be on /dev/hde1, but it is not there. Most probably you have another filesystem mounted upon var/ or var/www/ and 'foobar' is part of the other filesystem.

However, dump should have found out by itself the correct filesystem (dump looks at /etc/mtab and /etc/fstab, in this order, and searches for a mounted filesystem on /var/www/foobar, then on /var/www, then on /var, and finally on /).

Do you see something strange in /etc/mtab or /etc/fstab that could explain your problem ?

--
Stelian Pop <st...@po...>
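A rough sketch of that kind of mount-table lookup: scan /etc/mtab with getmntent(3) and keep the longest mount directory that prefixes the path. This is illustrative only, not dump's actual code, and it ignores the /etc/fstab fallback and symlink canonicalisation.

/* Sketch only -- not dump's lookup code. */
#include <limits.h>
#include <mntent.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
        FILE *mtab;
        struct mntent *ent;
        char best_dir[PATH_MAX] = "", best_dev[PATH_MAX] = "";
        size_t best_len = 0;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <path>\n", argv[0]);
                return 1;
        }

        mtab = setmntent("/etc/mtab", "r");
        if (!mtab) {
                perror("/etc/mtab");
                return 1;
        }
        while ((ent = getmntent(mtab)) != NULL) {
                size_t len = strlen(ent->mnt_dir);
                /* Longest match on whole path components:
                 * "/var/www" matches "/var/www/foobar" but not "/var/wwwx". */
                if (strncmp(argv[1], ent->mnt_dir, len) == 0 &&
                    (len == 1 || argv[1][len] == '/' || argv[1][len] == '\0') &&
                    len > best_len) {
                        best_len = len;
                        snprintf(best_dir, sizeof(best_dir), "%s", ent->mnt_dir);
                        snprintf(best_dev, sizeof(best_dev), "%s", ent->mnt_fsname);
                }
        }
        endmntent(mtab);

        if (best_len)
                printf("%s lives on %s (mounted on %s)\n", argv[1], best_dev, best_dir);
        else
                printf("no matching mount entry found for %s\n", argv[1]);
        return 0;
}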
From: Stelian P. <st...@po...> - 2008-08-20 21:22:20
On Wednesday, 20 August 2008 at 12:12 -0700, Perry Hutchison wrote:

> I had already made a copy using dd with conv=noerror, and have
> encountered two problems (one definite, the other suspected):
>
> - If I try to have dump read from the image file, it misinterprets
> this as a request to dump the image file itself:

Yes, dump really expects a block device. So you should 'losetup' a loop device on top of the disk file.

e2fsck works with a block device or a regular file (it just issues a warning on startup iirc).

[...]

> - the dd manpage does not say what dd does about the unreadable
> blocks. Ideally it would have filled the corresponding parts
> of the image file with zeros, but I suspect it skipped them
> entirely (so everything beyond the first unreadable block is
> probably at the wrong offset for either e2fsck or dump).

From what I read, you should have been using:

dd conv=noerror,sync

Even better, ddrescue seems to be more appropriate:
http://www.gnu.org/software/ddrescue/ddrescue.html

--
Stelian Pop <st...@po...>
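A minimal sketch of that recovery workflow: copy the failing disk with ddrescue, loop-attach the image, then dump the loop device. Device names, file names and dump options here are illustrative, not taken from this thread.

# Sketch only: adjust device names, paths and dump options to your setup.

# 1. Copy the failing device; readable data is kept at its correct offset,
#    and unreadable areas are recorded in the log so they can be retried later.
ddrescue /dev/hda7 hda7.img hda7.log

# 2. Optionally check/repair the copied filesystem image.
e2fsck -f hda7.img

# 3. Expose the image as a block device so dump sees a real device.
losetup /dev/loop0 hda7.img

# 4. Dump the loop device instead of the damaged disk.
dump -0 -f hda7.dump /dev/loop0

# 5. Detach the loop device when finished.
losetup -d /dev/loop0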
From: <phu...@wi...> - 2008-08-20 19:12:14
> > I am attempting to dump a filesystem which has developed a bad
> > block, and the bad block happens to contain some of the inodes.
> > Even though I specified a large -I number, dump is terminating
> > upon reaching the unreadable inode block.
> >
> > How do I get it to just skip over unreadable blocks, including
> > unreadable inode blocks, and dump whatever can be read?
>
> I'm afraid you can't.
>
> At this point you should make a low level copy of the filesystem
> (using dd for example), and try to repair the filesystem using
> e2fsck ...

I had already made a copy using dd with conv=noerror, and have encountered two problems (one definite, the other suspected):

- If I try to have dump read from the image file, it misinterprets this as a request to dump the image file itself:

  $ ls -l hda7-ddImage
  -r--r----- 1 phutchis wrs 12707980800 Aug 8 21:51 hda7-ddImage
  $ file hda7-ddImage
  hda7-ddImage: Linux rev 1.0 ext3 filesystem data (errors)
  $ /sbin/dump -0 -f hda7-dump-from-ddImage hda7-ddImage
  DUMP: Date of this level 0 dump: Wed Aug 20 11:58:43 2008
  DUMP: Dumping /dev/mapper/VolGroup00-LogVol00 (/ (dir pdx-punster1/cricket_backups/hda7-ddImage)) to hda7-dump-from-ddImage
  DUMP: Cannot open /dev/mapper/VolGroup00-LogVol00
  DUMP: The ENTIRE dump is aborted.

- the dd manpage does not say what dd does about the unreadable blocks. Ideally it would have filled the corresponding parts of the image file with zeros, but I suspect it skipped them entirely (so everything beyond the first unreadable block is probably at the wrong offset for either e2fsck or dump).
From: Dr. D. E. <dan...@de...> - 2008-08-20 16:55:55
Hi Stelian,

first of all thanks a lot for your reply. Meanwhile I have invested a lot of time in solving the problem, but at the moment I have no solution.

I'm using the following versions:

dump 0.4b41 (using libext2fs 1.40.2 of 12-Jul-2007)
restore 0.4b41 (using libext2fs 1.40.8 of 13-Mar-2008)

Meanwhile I reproduce the following error:

/dev/hde1: File not found by ext2_lookup while translating var/www/foobar

But the file I want to dump is DEFINITELY there! Here are the last complete commands I used:

ssh root@foo "dump -0 -af - /var/www/foobar" | gzip -5 | dd of=/tmp/foobar/foo.gz
DUMP: Date of this level 0 dump: Tue Aug 19 16:17:11 2008
DUMP: Dumping /dev/hde1 (/ (dir var/www/foobar)) to standard output
DUMP: Label: none
DUMP: Writing 10 Kilobyte records
DUMP: mapping (Pass I) [regular files]
/dev/hde1: File not found by ext2_lookup while translating var/www/foobar

The filesystem from which I want to dump the missing files is ext3. I already ran 'sudo fsck.ext3 -n /dev/hde1 -v -c' but it finds no bad blocks. I really have no idea why dump isn't finding the files. The error message talks about ext2_lookup but the fs is ext3. Do you think that is the problem? As I read on the web, dump is of course able to dump ext3 filesystems.

I will be very thankful for every hint. Cheers.

Daniel

Stelian Pop wrote:
> On Wednesday, 6 August 2008 at 14:30 +0200, Dr. Daniel Eichelsbacher wrote:
>> Hi,
>>
>> I'm using the command
>>
>> sudo dump -0f - /root | restore -ruf -
>>
>> to dump and restore all files and directories of /root. For some strange
>> reason only some files are restored. dot-files like .bashrc are
>> completly missing. The filesystem ist ext3.
>>
>> Does someone knows, why not all files will be restored?
>
> Hi Daniel,
>
> Sorry for the delay. I am unable to reproduce the problem here.
>
> What version of dump/restore are you using ? What is your target
> filesystem ?
>
> Stelian.

With kind regards,

Dr. Daniel Eichelsbacher
Systems Development Manager
Claranet GmbH
--
Tel +49 (0)69 -408018 -254
Fax -229
E-Mail dan...@de...
--
Claranet GmbH - Managed Services Provider
DE | UK | FR | NL | ES | PT | US
Hanauer Landstraße 196
60314 Frankfurt am Main
Management: Olaf Fischer
HRB 50381 AG Frankfurt am Main
VAT ID DE 812918694
www.clara.net
--
Claranet honoured by the Association of the German Internet Industry:
1st place at the ECO AWARD 2006 & 2007 as "Best Business Customer Provider"
From: Stelian P. <st...@po...> - 2008-08-20 15:01:11
Hi Perry,

On Sunday, 10 August 2008 at 15:09 -0700, Perry Hutchison wrote:

> I am attempting to dump a filesystem which has developed a bad
> block, and the bad block happens to contain some of the inodes.
> Even though I specified a large -I number, dump is terminating
> upon reaching the unreadable inode block.
>
> How do I get it to just skip over unreadable blocks, including
> unreadable inode blocks, and dump whatever can be read?

I'm afraid you can't.

At this point you should make a low level copy of the filesystem (using dd for example), and try to repair the filesystem using e2fsck. Once the filesystem is clean, you should be able to use dump again...

Stelian.

--
Stelian Pop <st...@po...>
From: Stelian P. <st...@po...> - 2008-08-20 14:59:11
On Wednesday, 6 August 2008 at 14:30 +0200, Dr. Daniel Eichelsbacher wrote:

> Hi,
>
> I'm using the command
>
> sudo dump -0f - /root | restore -ruf -
>
> to dump and restore all files and directories of /root. For some strange
> reason only some files are restored. dot-files like .bashrc are
> completly missing. The filesystem ist ext3.
>
> Does someone knows, why not all files will be restored?

Hi Daniel,

Sorry for the delay. I am unable to reproduce the problem here.

What version of dump/restore are you using ? What is your target filesystem ?

Stelian.

--
Stelian Pop <st...@po...>
From: Daneel Y. <rtf...@gm...> - 2008-08-17 09:26:23
From: <phu...@wi...> - 2008-08-10 22:10:01
I am attempting to dump a filesystem which has developed a bad block, and the bad block happens to contain some of the inodes. Even though I specified a large -I number, dump is terminating upon reaching the unreadable inode block.

How do I get it to just skip over unreadable blocks, including unreadable inode blocks, and dump whatever can be read?

(Apologies if the answer is in the archives; when I attempted to search them I got "Unable to connect to Search Server".)

# /sbin/dump -0 -f - -I 32767 /dev/hda7 | rsh ...
DUMP: Date of this level 0 dump: Sun Aug 10 14:39:51 2008
DUMP: Dumping /dev/hda7 (an unlisted file system) to standard output
DUMP: Added inode 8 to exclude list (journal inode)
DUMP: Added inode 7 to exclude list (resize inode)
DUMP: Label: /cricket1
DUMP: mapping (Pass I) [regular files]
/dev/hda7: Can't read next inode while scanning inode #212576

# tune2fs -l /dev/hda7
tune2fs 1.32 (09-Nov-2002)
Filesystem volume name:   /cricket1
Last mounted on:          <not available>
Filesystem UUID:          55ba4ec9-762f-4df8-beb5-d608a59cc7bb
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal filetype sparse_super
Default mount options:    (none)
Filesystem state:         clean with errors
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              1553440
Block count:              3102545
Reserved block count:     155127
Free blocks:              1526138
Free inodes:              1491331
First block:              0
Block size:               4096
Fragment size:            4096
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         16352
Inode blocks per group:   511
Filesystem created:       Fri Dec 19 14:13:35 2003
Last mount time:          Mon Jul 28 10:46:18 2008
Last write time:          Mon Aug 4 10:20:01 2008
Mount count:              8
Maximum mount count:      20
Last checked:             Mon May 5 10:04:49 2008
Check interval:           15552000 (6 months)
Next check after:         Sat Nov 1 10:04:49 2008
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal UUID:             <none>
Journal inode:            8
Journal device:           0x0000
First orphan inode:       0

This is on Red Hat 9 (yes, kind of old), and the filesystem is not mounted. The dump version appears to be 0.4b28:

$ rpm -qf /sbin/dump
dump-0.4b28-7
From: Dr. D. E. <dan...@de...> - 2008-08-06 12:31:04
Hi,

I'm using the command

sudo dump -0f - /root | restore -ruf -

to dump and restore all files and directories of /root. For some strange reason only some files are restored. Dot-files like .bashrc are completely missing. The filesystem is ext3.

Does someone know why not all files are restored?

Thanks for every answer.

Cheers,
Daniel
From: Stelian P. <st...@po...> - 2008-06-30 09:09:00
On Friday, 27 June 2008 at 14:09 -0700, Kenneth Porter wrote:

> The missing file case would need to walk up the path to the mountpoint of
> the filesystem being verified, looking for a node that exists,

What you need is a routine that, given a pathname, returns the filesystem on which this file would be stored. So for a pathname of the form:

/a/b/c/d/node

you need to check:

if '/a/b/c/d' is a mountpoint
    return the filesystem mounted on '/a/b/c/d/.'
else if '/a/b/c' is a mountpoint
    return the filesystem mounted on '/a/b/c/.'
else if '/a/b' is a mountpoint
    return the filesystem mounted on '/a/b/.'
else if '/a' is a mountpoint
    return the filesystem mounted on '/a/.'
else
    return the filesystem mounted on '/'

> and either
> report the current message if the node is in the right filesystem, or the
> "masked by mountpoint" message if not.

exactly. I guess it is unnecessary to even continue the compare for a given inode if it is masked.

> Is the runtime cost of doing that
> prohibitive?

If you build a table of mountpoints before starting the compare (like dump does for example), and you limit the runtime checks to the strcmp() calls walking up the chain of mountpoints, the cost should not be prohibitive, and restore isn't performance driven anyway.

BTW, maybe you can reuse the code existing in dump for building the mountpoint table, but I haven't looked at that.

> If not, I can look into constructing a patch to do this.

This would be perfect.

Stelian.

--
Stelian Pop <st...@po...>
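A rough C sketch of such a routine (not restore's actual code; the function and program names are invented for illustration). It walks up the prefixes of a possibly-missing path until one exists, then compares that prefix's st_dev against the root of the filesystem being compared.

/* Sketch only -- not restore's implementation. */
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Return the device id of the filesystem that would hold 'path', even if
 * 'path' itself no longer exists, by walking up its existing prefixes. */
static int fs_of_path(const char *path, dev_t *dev_out)
{
        char buf[PATH_MAX];
        struct stat st;

        if (snprintf(buf, sizeof(buf), "%s", path) >= (int) sizeof(buf))
                return -1;

        for (;;) {
                char *slash;

                if (lstat(buf, &st) == 0) {
                        *dev_out = st.st_dev;
                        return 0;
                }
                slash = strrchr(buf, '/');
                if (slash == NULL)
                        return -1;              /* relative path exhausted */
                if (slash == buf) {             /* fall back to "/" itself */
                        if (lstat("/", &st) != 0)
                                return -1;
                        *dev_out = st.st_dev;
                        return 0;
                }
                *slash = '\0';                  /* strip the last component */
        }
}

int main(int argc, char **argv)
{
        struct stat root;
        dev_t dev;

        if (argc != 3 || stat(argv[1], &root) != 0) {
                fprintf(stderr, "usage: %s <fs-root> <path>\n", argv[0]);
                return 1;
        }
        if (fs_of_path(argv[2], &dev) != 0) {
                fprintf(stderr, "cannot resolve %s\n", argv[2]);
                return 1;
        }
        if (dev == root.st_dev)
                printf("%s belongs to the filesystem rooted at %s\n", argv[2], argv[1]);
        else
                printf("%s is masked by a mount point (different st_dev)\n", argv[2]);
        return 0;
}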
From: Kenneth P. <sh...@se...> - 2008-06-27 21:06:19
On Friday, June 27, 2008 11:47 AM +0200 Stelian Pop <st...@po...> wrote:

> This would work if the target file exists, but how to deal with a
> missing target file ? (for example, supposed that you dumped /dev/hda,
> but when comparing you don't found this file).
>
> What we need when comparing is to find out if a whole path is masked my
> a mount point and then stop comparing all the leafs of this path. But
> since the comparing is done in the inode number order (not in the
> directory / contents order), it's not that simple.

Good point. I don't tend to see a lot of missing files as my dumps and verifies are done in the middle of the night when the system is quiet, and I haven't masked any large hierarchies with a mount point. Legitimate missing files in my verifies tend to be a result of a tmpwatch sweep.

The missing file case would need to walk up the path to the mountpoint of the filesystem being verified, looking for a node that exists, and either report the current message if the node is in the right filesystem, or the "masked by mountpoint" message if not. Is the runtime cost of doing that prohibitive? If not, I can look into constructing a patch to do this.
From: Stelian P. <st...@po...> - 2008-06-27 09:47:44
On Friday, 20 June 2008 at 17:32 -0700, Kenneth Porter wrote:

> On Friday, June 06, 2008 2:32 PM -0700 Kenneth Porter
> <sh...@se...> wrote:
>
> > It would be desirable if these miscompares were instead reported with a
> > different message to identify the true nature of the miscompare. For
> > example, "<path> masked by mount point <mount point>".
>
> > (Alas, I don't know which API to use to query a path's filesystem. I use
> > "df <path>" from the command line, so I'd go digging through the df
> > source for that.)
>
> It looks like the filesystem is identified by the st_dev member returned by
> stat(2). stat() is invoked on the root of the compared filesystem at the
> start of the compare operation, and for each node compared, so in principle
> one could compare the st_dev of the two before comparing anything else.

This would work if the target file exists, but how to deal with a missing target file ? (for example, suppose that you dumped /dev/hda, but when comparing you don't find this file).

What we need when comparing is to find out if a whole path is masked by a mount point and then stop comparing all the leaves of this path. But since the comparing is done in inode number order (not in the directory / contents order), it's not that simple.

Stelian.

--
Stelian Pop <st...@po...>
From: Kenneth P. <sh...@se...> - 2008-06-22 11:13:49
On Friday, June 06, 2008 2:32 PM -0700 Kenneth Porter <sh...@se...> wrote:

> It would be desirable if these miscompares were instead reported with a
> different message to identify the true nature of the miscompare. For
> example, "<path> masked by mount point <mount point>".

> (Alas, I don't know which API to use to query a path's filesystem. I use
> "df <path>" from the command line, so I'd go digging through the df
> source for that.)

It looks like the filesystem is identified by the st_dev member returned by stat(2). stat() is invoked on the root of the compared filesystem at the start of the compare operation, and for each node compared, so in principle one could compare the st_dev of the two before comparing anything else.
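A minimal sketch of that st_dev check (illustrative only; the hard-coded paths stand in for the compared filesystem's root and one node under it).

/* Sketch only: a node whose st_dev differs from the filesystem root's
 * st_dev lives on another, mounted-over filesystem. */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
        struct stat fs_root, node;

        if (stat("/", &fs_root) != 0 || stat("/proc/cpuinfo", &node) != 0) {
                perror("stat");
                return 1;
        }

        if (node.st_dev != fs_root.st_dev)
                printf("node is masked by a mount point\n");
        else
                printf("node is on the same filesystem\n");
        return 0;
}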
From: Kenneth P. <sh...@se...> - 2008-06-06 21:30:18
I see lots of miscompares in my root filesystem dump that result from a mount point on the filesystem masking the matching underlying structures in the dump file. (Examples include /misc, /dev, /proc, /sys, /selinux, and of course /mnt/*.) Most of the miscompares are for attributes (both mode and selinux labels).

It would be desirable if these miscompares were instead reported with a different message to identify the true nature of the miscompare. For example, "<path> masked by mount point <mount point>". My intuition is that these reports should not be counted towards the miscompare count, since mount points are intentional and not visible to dump.

(Alas, I don't know which API to use to query a path's filesystem. I use "df <path>" from the command line, so I'd go digging through the df source for that.)
From: Stelian P. <st...@po...> - 2008-06-04 19:30:39
On Tuesday, 3 June 2008 at 11:22 -0700, Kenneth Porter wrote:

> --On Monday, June 02, 2008 10:08 PM +0200 Stelian Pop <st...@po...>
> wrote:
>
> > Keep me informed, I haven't commited the change in CVS yet.
>
> Looks good.

Thanks for the testing, fix committed. I guess it's time to do a new release since there are quite a few changes pending. I'll probably do this in a couple of days.

Stelian.

--
Stelian Pop <st...@po...>
From: Kenneth P. <sh...@se...> - 2008-06-03 18:22:57
--On Monday, June 02, 2008 10:08 PM +0200 Stelian Pop <st...@po...> wrote:

> Keep me informed, I haven't commited the change in CVS yet.

Looks good. I dumped 152 Gbytes, and see no duplicate inodes in the QFA file.

I did an interactive restore:

restore -l -i -b 64 -f /mnt/Backup/0/root/dump -a -Q /mnt/Backup/0/root/qfa -vv

I had about 180,000 entries in the QFA file, so I picked one at about 100,000 and found its name:

find . -xdev -inum 102019570 -ls

I then added this in restore and extracted it. (Quite fast.) Turned out to be an animated GIF file for a SolidWorks project, and it looks reasonable in Firefox, so I got the right content.
From: Stelian P. <st...@po...> - 2008-06-02 20:08:49
On Monday, 2 June 2008 at 13:06 -0700, Kenneth Porter wrote:

> On Wednesday, May 28, 2008 2:46 PM +0200 Stelian Pop <st...@po...>
> wrote:
>
> > So I guess the best way is still to make the change in dump, and wait
> > for people to upgrade. See the patch below. Comments ?
>
> So essentially QFA isn't usable on SELinux systems before this patch.

Yes. (well, you may be lucky and not see any problems, but it may happen)

> I've applied the patch to my build and it will go into effect tonight on my
> nightly backup. I'll eyeball the resulting QFA file tomorrow and try a test
> restore from some file in the middle.

Keep me informed, I haven't committed the change in CVS yet.

Thanks,
Stelian

--
Stelian Pop <st...@po...>
From: Kenneth P. <sh...@se...> - 2008-06-02 20:03:41
On Wednesday, May 28, 2008 2:46 PM +0200 Stelian Pop <st...@po...> wrote:

> So I guess the best way is still to make the change in dump, and wait
> for people to upgrade. See the patch below. Comments ?

So essentially QFA isn't usable on SELinux systems before this patch.

I've applied the patch to my build and it will go into effect tonight on my nightly backup. I'll eyeball the resulting QFA file tomorrow and try a test restore from some file in the middle.
From: Stelian P. <st...@po...> - 2008-05-28 12:46:31
On Wednesday, 28 May 2008 at 12:00 +0200, Stelian Pop wrote:

> This means that the duplicates appear only in some cases. For
> example, if an x count for 8kb, a and b are inodes, A and B are the
> extended attributes of a and b):
>
> xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx
> aaaaaaaA Abbbbbbb bbbbBcCd dddddddD
>
> QFA: ^        ^        ^        ^
>
> In the example above, only b/B will generate duplicates.
>
> The simplest way to correct this would be to modify dump in order to
> forbid the creation of QFA positions for EAs, only for real inodes. But
> I need to think a bit if there isn't something we can do to cope with
> those duplicates in restore (to handle QFA files generated by an older
> version of dump).
>
> Let me think a bit about this and I'll propose you a patch.

On second thought, this doesn't look like it's feasible.

For the example above, I could modify restore so as to use the second QFA position instead of the third. But there will still be a problem with the fourth position, if you want to restore 'd/D'.

We could modify restore so that the search for a QFA position will return the position of the previous inode (so a search for b/B will return the first mark, a search for d/D will return the third mark etc), but this makes the whole QFA feature less precise: in the case you have a big file before the file you want to restore, you will not be able to position the tape on the needed file, but on the precedent one, and wait for the tape to advance until the needed one.

So I guess the best way is still to make the change in dump, and wait for people to upgrade. See the patch below. Comments ?

Index: dump/tape.c
===================================================================
RCS file: /cvsroot/dump/dump/dump/tape.c,v
retrieving revision 1.89
diff -u -r1.89 tape.c
--- dump/tape.c    20 Aug 2005 21:00:48 -0000    1.89
+++ dump/tape.c    28 May 2008 12:45:52 -0000
@@ -1310,7 +1310,8 @@
         if ((spclptr->c_magic == NFS_MAGIC) &&
             (spclptr->c_type == TS_INODE) &&
             (spclptr->c_date == gThisDumpDate) &&
-            !(spclptr->c_dinode.di_mode & S_IFDIR)
+            !(spclptr->c_dinode.di_mode & S_IFDIR) &&
+            !(spclptr->c_flags & DR_EXTATTRIBUTES)
             ) {
                 foundone = 1;
                 /* if (cntntrecs >= maxntrecs) {    only write every maxntrecs amount of data */

--
Stelian Pop <st...@po...>