From: Stelian P. <st...@po...> - 2008-05-28 10:00:17
|
Le mardi 27 mai 2008 à 16:57 -0700, Kenneth Porter a écrit :

> On Tuesday, May 27, 2008 4:47 PM -0700 Kenneth Porter
> <sh...@se...> wrote:
>
> > Note the repeated inode. I'm running a find right now to find the file
> > with that inode.
>
> The file is a PNG in some doxygen output. All of the other files in that
> directory have identical labeling (as listed by "ls --scontext").

[...]

> All the duplicates have 2 copies, no more.

Normal: there is one copy for the inode, and another one for the extended
attributes header.

> So almost 9000 dups in 551790 files.

And it is also normal to have duplicates only for some inodes. The QFA file
does not contain the position for _all_ the inodes, only some of them (the
position is saved once per ntrec - meaning once per 64 * 1024 bytes -, when
a start header - inode _or_ EA - is seen in the output stream). This means
that the duplicates appear only in some cases. For example (each x counts
for 8 kB, a and b are inodes, A and B are the extended attributes of a and
b):

	xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx aaaaaaaA Abbbbbbb bbbbBcCd dddddddD
	QFA:                                ^        ^        ^        ^

In the example above, only b/B will generate duplicates.

The simplest way to correct this would be to modify dump in order to forbid
the creation of QFA positions for EAs, and create them only for real
inodes. But I need to think a bit about whether there isn't something we
can do to cope with those duplicates in restore (to handle QFA files
generated by an older version of dump). Let me think a bit about this and
I'll propose you a patch.

> I'm running SELinux in Permissive mode. Could dump get confused if I have
> it on but not enforcing?

No, this is not related.

Stelian.

--
Stelian Pop <st...@po...>
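Until such a patch exists, a possible user-side stopgap is to strip the duplicate entries before handing the QFA file to restore. This is only a sketch, and it rests on an assumption drawn from the example above: that for a duplicated inode the first QFA entry points at the real inode header and the later one at its extended-attribute header.

```shell
# Keep only the first QFA entry for each inode (column 1 of the file).
# ASSUMPTION (not confirmed by dump's authors): the first entry for a
# duplicated inode is the inode header, the second is its EA header.
awk '!seen[$1]++' /mnt/Backup/0/root/qfa > /mnt/Backup/0/root/qfa.dedup
```

The paths are the ones used earlier in this thread; substitute your own QFA file.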
From: Kenneth P. <sh...@se...> - 2008-05-27 23:55:35
|
On Tuesday, May 27, 2008 4:47 PM -0700 Kenneth Porter <sh...@se...>
wrote:

> Note the repeated inode. I'm running a find right now to find the file
> with that inode.

The file is a PNG in some doxygen output. All of the other files in that
directory have identical labeling (as listed by "ls --scontext").

> (I'm thinking it should be easy to write a Perl script or Perl + uniq to
> find all the duplicate inodes in the QFA file.)

[root@segw2 /]# wc /mnt/Backup/0/root/qfa
  183933  551790 4402128 /mnt/Backup/0/root/qfa
[root@segw2 /]# awk '{print $1;}' /mnt/Backup/0/root/qfa | uniq -d -c | wc
   4490    8980   77899

All the duplicates have 2 copies, no more. So almost 9000 dups in 551790
files.

I'm running SELinux in Permissive mode. Could dump get confused if I have
it on but not enforcing?
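The pipeline above only counts the duplicates. To see which entries they are (inode, tape number, offset), a two-pass awk over the same file works and, unlike uniq, does not depend on the duplicates being adjacent — a sketch, reusing the path from this thread:

```shell
# First pass (NR==FNR) counts occurrences of each inode in column 1;
# second pass prints every line whose inode appeared more than once.
awk 'NR==FNR { count[$1]++; next } count[$1] > 1' \
    /mnt/Backup/0/root/qfa /mnt/Backup/0/root/qfa
```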
From: Kenneth P. <sh...@se...> - 2008-05-27 23:44:43
|
On Wednesday, May 28, 2008 12:10 AM +0200 Stelian Pop <st...@po...>
wrote:

> Looks like the selinux extended attributes got restored instead of the
> actual file. Do you have two entries (for the same inode number) in your
> QFA file ?

I've since recycled that backup, but looking at the file mounted today, I
see this towards the end:

119078913 1 141329956864
119078915 1 141330022400
119078916 1 141330087936
119078916 1 141330153472
119078919 1 141330219008
119111682 1 141330284544

Note the repeated inode. I'm running a find right now to find the file
with that inode.

(I'm thinking it should be easy to write a Perl script or Perl + uniq to
find all the duplicate inodes in the QFA file.)
From: Stelian P. <st...@po...> - 2008-05-27 22:11:08
|
Le lundi 19 mai 2008 à 17:03 -0700, Kenneth Porter a écrit :

> I'm dumping and generating a QFA file, and today tried to restore a large
> mbox file using it. The result was a gibberish 4k binary file that
> appeared to have selinux stuff in it. Using "restore -i" the file is
> shown to be 107 Mbytes.

Looks like the selinux extended attributes got restored instead of the
actual file. Do you have two entries (for the same inode number) in your
QFA file ?

--
Stelian Pop <st...@po...>
From: Stelian P. <st...@po...> - 2008-05-27 21:55:19
|
Hi Eric,

> restore -t -b 64 -f /dev/nst0
>
> I get an error saying that the block size is wrong, and I should try
> "-b 64" instead!

[...]

> But if instead I do this:
>
> dd if=/dev/nst0 bs=64k | restore -t -b 64 -f -
>
> everything works fine. That is, restore can read 64k blocks from
> standard input just fine, but doesn't seem to want to read them
> directly from the tape (or somehow the tape drive isn't delivering
> them correctly to restore when called directly).

I checked the source code and didn't find anything obviously wrong which
could explain what is happening. I suggest adding -d and -v to your
restore command line (debug and verbose enabled) to see if this generates
more information.

You can also try to instrument the findtapeblksize() function in
restore/tape.c, which is probably the place where something goes wrong.

Thanks,

Stelian.

--
Stelian Pop <st...@po...>
From: Eric J. <eje...@sw...> - 2008-05-22 20:52:47
|
Sorry for multiple messages in the thread, but for completeness, here's
the exact error message:

[root ~]# restore -t -b 64 -f /dev/nst0
restore: Tape blocksize is too large, use '-b 64'

As I noted before, reading that dump with 'dd bs=64k' then piping to
restore works fine.
From: Eric J. <eje...@sw...> - 2008-05-22 15:43:20
|
Hi all,

I'm using dump and restore (0.4b41) to do some dumps from a remote machine
to a local tape drive (which is device /dev/nst0). All seems to run fine;
I do the dump using:

ssh remote_machine dump -0ua -b 64 -f - / | dd of=/dev/nst0 bs=64k

starting from the machine that has the local tape drive at /dev/nst0.
Dump runs fine, no problem as far as I can see.

When I went to double-check that the dump file on tape seemed OK, I tried
to do this (again, locally on the machine where the tape drive resides)
just to list the contents:

restore -t -b 64 -f /dev/nst0

I get an error saying that the block size is wrong, and I should try
"-b 64" instead! (Sorry I don't have the exact language here; I'm doing
another dump right now and can't re-test it.) Of course, since I'm already
using "-b 64", that message doesn't help much. (And removing the "-b 64"
just generates the same error again, as it probably should.)

But if instead I do this:

dd if=/dev/nst0 bs=64k | restore -t -b 64 -f -

everything works fine. That is, restore can read 64k blocks from standard
input just fine, but doesn't seem to want to read them directly from the
tape (or somehow the tape drive isn't delivering them correctly to restore
when called directly).

Note that a dump done from the local machine itself (i.e. for a local
disk), without ever having 'dd' in the mix, works fine with 64k blocks. I
can dump locally with -b 64 and then restore with -b 64 and I get no
errors.

Any thoughts about why restore can't read the dump from tape itself, and
has to go through dd instead? Could the remote dump protocol via SSH be
writing something different as the initial block somehow? (Note: I'm
repositioning the tape to block 0 of the dumpfile between each of these
commands, so that's not the problem.)

Thanks in advance for any help.

Eric
From: Kenneth P. <sh...@se...> - 2008-05-20 00:30:11
|
On Monday, May 19, 2008 5:03 PM -0700 Kenneth Porter <sh...@se...>
wrote:

> I'm now attempting the restore the hard way, without QFA, and expect it
> to take quite awhile as the dump is huge (140 Gbytes on a USB drive).

The restore completed successfully without the QFA file, and I got a
reasonable-looking 55 megabyte mbox file.
From: Kenneth P. <sh...@se...> - 2008-05-20 00:01:32
|
I'm dumping and generating a QFA file, and today tried to restore a large
mbox file using it. The result was a gibberish 4k binary file that
appeared to have selinux stuff in it. Using "restore -i" the file is shown
to be 107 Mbytes.

I'm now attempting the restore the hard way, without QFA, and expect it to
take quite awhile as the dump is huge (140 Gbytes on a USB drive). I'm
pretty confident in the dump file, as I always do a -C verify and see the
expected log files and more volatile user mailboxes miscompare.

I'm guessing that the QFA file is somehow wrong, but the short file size
also worries me.

Dump command:

$DUMP 0u -h 0 -b 64 -f $DUMPSUBDIR/root/dump -Q $DUMPSUBDIR/root/qfa \
    -B 200000000 / -E ${EXCLUDE_FILE}

My attempted restore command:

restore -i -l -b 64 -f /mnt/Backup/0/root/dump -Q /mnt/Backup/0/root/qfa -vv

Version is dump-0.4b41-2.fc6.kp2. Package changelog:

* Wed Mar 12 2008 Kenneth Porter <shi...@se...> 0.4b41-2.fc6.kp2
- expectedinode patch to suppress error from unexpected inode on tape when
  comparing

* Tue Jan 08 2008 Kenneth Porter <shi...@se...> 0.4b41-2.fc6.kp
- hugemodefile patch to handle very large file of mode settings

* Mon Aug 07 2006 Jindrich Novy <jn...@re...> 0.4b41-2.fc6
- fix miscompares detected by restore -C caused by SELinux (#189845)
- link properly against device-mapper and selinux libraries
- add autoconf BuildRequires
- use %{?dist}
From: Stelian P. <st...@po...> - 2008-05-12 21:16:58
|
Le samedi 10 mai 2008 à 11:24 -0700, Todd and Margo Chester a écrit :

> Is there a way I can dump both sda1 and sda3
> onto the same backup set (-f file)?

No. Dump references the files in a dump set by their inode number, so only
one filesystem can be dumped at once.

--
Stelian Pop <st...@po...>
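Since dump works on one filesystem at a time, the usual workaround is one dump invocation (and one archive file) per filesystem. A minimal sketch, assuming hypothetical device names and a backup directory like the ones mentioned in this thread:

```shell
# One dump run per filesystem; the archive name is derived from the
# device node. Devices and the /mnt/Backup path are assumptions.
for dev in /dev/sda1 /dev/sda3; do
    name=$(basename "$dev")    # e.g. "sda1"
    dump -0ua -f "/mnt/Backup/$name.dump" "$dev"
done
```

Each archive is then restored independently, which also keeps inode numbers unambiguous within each dump set.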
From: Kenneth P. <sh...@se...> - 2008-05-12 17:51:23
|
--On Saturday, May 10, 2008 11:24 AM -0700 Todd and Margo Chester
<Tod...@ve...> wrote:

> Is there a way I can dump both sda1 and sda3
> onto the same backup set (-f file)? Or, do I have
> to (continue to) dump them as separate sets?

Why do you want to do that?
From: Todd a. M. C. <Tod...@ve...> - 2008-05-12 16:32:26
|
Hi All,

I have three partitions off my root drive ("/"): they are /dev/sda1, sda3,
and sdb1. I back up sda1 and sda3, but not sdb1.

Is there a way I can dump both sda1 and sda3 onto the same backup set
(-f file)? Or, do I have to (continue to) dump them as separate sets? In
other words, is there a way to give dump TWO mount points?

Many thanks,
-T
From: Kenneth P. <sh...@se...> - 2008-03-18 20:38:08
|
--On Friday, July 27, 2007 4:04 PM +0200 Stelian Pop <st...@po...> wrote:

> I'm afraid I haven't had time to look at ext4 yet.
>
> But dump does almost all filesystem access using libext2fs, so if
> libext2fs will support ext4, dump will follow.

An update: an interview with ext4 developer Eric Sandeen about the
inclusion of ext4 in Fedora 9:

<http://fedoraproject.org/wiki/Interviews/EricSandeen>

It looks like the extent format should have an effect on dump:

> Probably the biggest "feature," which is not something end-users care
> about directly, is the new on-disk extent format. This allows the
> filesystem to keep track of file data in [offset, length] pairs rather
> than block-by-block, and this is much more efficient than the ext3
> mechanism. Deletion of large files should be noticeably faster, for
> example.

Also:

> Other features that will be nice for users include finer-grained
> timestamps, and a larger maximum subdirectory limit (now 65k subdirs).
> ext4 will also make use of in-inode extended attributes, which should
> make things like SELinux, beagle, and samba acls more efficient.
From: Todd a. M. C. <Tod...@ve...> - 2008-03-17 04:02:09
|
Hi All,

I collected up some test data on an Ecrix/Exabyte/Tandberg/company-of-the-
week VXA 320 vs. a SATA hard drive. I think you all will find it
interesting. I think I am going to switch to removable hard drives for
medium business servers. Nine times faster, and I never have to call tech
support.

--T

# vxaTool /dev/sg1
vxaTool V4.62 -- Copyright (c) 1996-2006, Exabyte Corp.
Tape Drive identified as VXA320
/dev/sg1 - Vendor    : EXABYTE
/dev/sg1 - Product ID: VXA-3
/dev/sg1 - Firmware  : 3221
/dev/sg1 - Serialnum : 0088124249
/dev/sg1 - Cleaning  : 22 Tape Motion minutes ago
/dev/sg1 - This tape : 26 times loaded into a drive

# cat /sys/block/sda/queue/scheduler
noop [anticipatory] deadline cfq

# uname -r
2.6.18-53.1.14.el5 (CentOS 5.1)

#! /bin/bash -x
mt -f /dev/nst0 rewind
dump -0a -f /dev/nst0 /root
mt -f /dev/nst0 offline

DUMP: Closing /dev/nst0
DUMP: Volume 1 completed at: Fri Mar 14 18:16:38 2008
DUMP: Volume 1 152390 blocks (148.82MB)
DUMP: Volume 1 took 0:01:18
DUMP: Volume 1 transfer rate: 1953 kB/s
DUMP: 152390 blocks (148.82MB) on 1 volume(s)
DUMP: finished in 77 seconds, throughput 1979 kBytes/sec

#! /bin/bash -x
dump -0a -f /export/eraseme.dump /root

DUMP: Closing /export/eraseme.dump
DUMP: Volume 1 completed at: Fri Mar 14 18:07:51 2008
DUMP: Volume 1 162650 blocks (158.84MB)
DUMP: Volume 1 took 0:00:09
DUMP: Volume 1 transfer rate: 18072 kB/s
DUMP: 162650 blocks (158.84MB) on 1 volume(s)

2.6.9-67.EL (CentOS 4.6 rescue mode)

dump -0a -f /dev/nst0 /mnt/sysimage/root

DUMP: Closing /dev/nst0
DUMP: Volume 1 completed at: Sat Mar 15 02:47:57 2008
DUMP: Volume 1 156740 blocks (153.07MB)
DUMP: Volume 1 took 0:01:14
DUMP: Volume 1 transfer rate: 2118 kB/s
DUMP: 156740 blocks (153.07MB) on 1 volume(s)
DUMP: finished in 73 seconds, throughput 2147 kBytes/sec
From: Kenneth P. <sh...@se...> - 2008-03-13 15:39:14
|
--On Thursday, March 13, 2008 12:49 PM +0100 Stelian Pop <st...@po...>
wrote:

> The only cause I can think of is that you had an earlier comparison
> error causing the current file to be skipped instead of comparing, and
> this is why curfile.action == SKIP. Did restore report some compare
> errors before asking for the next volume ?

I think so. I suspect tmpwatch ran between the backup and verify, and so a
file in /tmp was missing that had been backed up just at the end of one of
the multi-files. Here's what I'm seeing:

/sbin/restore: unable to stat ./tmp/Transfer.h: No such file or directory
/sbin/restore: unable to stat ./tmp/rstdir1204185728-iRW4Q2: No such file or directory
/sbin/restore: unable to stat ./tmp/rstmode1204185728-iixqxV: No such file or directory
You have read volumes: 2, 3, 4, 5, 6
Specify next volume # (none if no more volumes): Dump date: Sun Mar  9 00:17:08 2008

(I should add /tmp to the exclude list with chattr. But I can see this
happening with other files.)
From: Stelian P. <st...@po...> - 2008-03-13 11:49:14
|
Le mercredi 12 mars 2008 à 16:52 -0700, Kenneth Porter a écrit :

> On Wednesday, March 12, 2008 10:44 PM +0100 Stelian Pop
> <st...@po...> wrote:
>
> > Well, it is supposed to work this way when -M is used..
>
> I added -a and it now runs without the prompt.

Hmm, the only place where it does matter is in tape.c:

	if (aflag || curfile.action != SKIP) {
		... do not ask for volume number ...

Normally, curfile.action is never SKIP when comparing, since all the files
are extracted and compared with the original ones.

The only cause I can think of is that you had an earlier comparison error
causing the current file to be skipped instead of comparing, and this is
why curfile.action == SKIP. Did restore report some compare errors before
asking for the next volume ?

--
Stelian Pop <st...@po...>
From: Kenneth P. <sh...@se...> - 2008-03-12 23:53:06
|
On Wednesday, March 12, 2008 10:44 PM +0100 Stelian Pop <st...@po...>
wrote:

> Well, it is supposed to work this way when -M is used..

I added -a and it now runs without the prompt.
From: Stelian P. <st...@po...> - 2008-03-12 21:46:54
|
Le mercredi 12 mars 2008 à 11:57 -0700, Kenneth Porter a écrit :

> When comparing, this is treated as a compare error, and counted against
> the error limit. The comments in the code suggest that this isn't an
> error:
>
> 	/*
> 	 * If we find files on the tape that have no corresponding
> 	 * directory entries, then we must have found a file that
> 	 * was created while the dump was in progress. Since we have
> 	 * no name for it, we discard it knowing that it will be
> 	 * on the next incremental tape.
> 	 */
> 	if (first != curfile.ino) {
> 		fprintf(stderr, "expected next file %ld, got %lu\n",
> 			(long)first, (unsigned long)curfile.ino);
> 		do_compare_error;
> 		skipfile();
> 		goto next;
> 	}
>
> Is there a reason to treat this as a hard error? I'd suggest removing
> the do_compare_error.

Seems reasonable indeed...

--
Stelian Pop <st...@po...>
From: Stelian P. <st...@po...> - 2008-03-12 21:45:08
|
Le mardi 11 mars 2008 à 15:24 -0700, Kenneth Porter a écrit :

> I'm doing a multi-file dump (-B 1000000 -Mf) to an external USB drive. I
> then do a verify (restore -C) to check the backup. My last verify
> stopped with this prompt:
>
> You have read volumes: 2, 3, 4, 5, 6
> Specify next volume # (none if no more volumes): 7
> resync restore, skipped 230 blocks
>
> Normally I run this from a script at midnight Saturday, so the prompt
> fails the backup with no input.

Do you mean that you get prompted for the next volume number each night ?

> Here I ran the verify by hand from a shell window and was able to let
> it continue by entering the next file number.
>
> It would be nice if the operation would continue without prompting in
> this situation, reading until no more files were available.

Well, it is supposed to work this way when -M is used...

--
Stelian Pop <st...@po...>
From: Kenneth P. <sh...@se...> - 2008-03-12 18:58:27
|
When comparing, this is treated as a compare error, and counted against
the error limit. The comments in the code suggest that this isn't an
error:

	/*
	 * If we find files on the tape that have no corresponding
	 * directory entries, then we must have found a file that
	 * was created while the dump was in progress. Since we have
	 * no name for it, we discard it knowing that it will be
	 * on the next incremental tape.
	 */
	if (first != curfile.ino) {
		fprintf(stderr, "expected next file %ld, got %lu\n",
			(long)first, (unsigned long)curfile.ino);
		do_compare_error;
		skipfile();
		goto next;
	}

Is there a reason to treat this as a hard error? I'd suggest removing the
do_compare_error.
From: Kenneth P. <sh...@se...> - 2008-03-11 22:25:32
|
I'm doing a multi-file dump (-B 1000000 -Mf) to an external USB drive. I
then do a verify (restore -C) to check the backup. My last verify stopped
with this prompt:

You have read volumes: 2, 3, 4, 5, 6
Specify next volume # (none if no more volumes): 7
resync restore, skipped 230 blocks

Normally I run this from a script at midnight Saturday, so the prompt
fails the backup with no input. Here I ran the verify by hand from a shell
window and was able to let it continue by entering the next file number.

It would be nice if the operation would continue without prompting in this
situation, reading until no more files were available.

What would cause restore with -Mf to prompt like this?

BLOCKING=64
MAXMISCOMPARES=10000
SIZE=1000000
DUMPSUBDIR=/mnt/Backup/0

$DUMP 0u -h 0 -b $BLOCKING -Mf $DUMPSUBDIR/root/dump \
    -Q $DUMPSUBDIR/root/qfa -B $SIZE /
/bin/mount -o remount,noatime /
$RESTORE -C -l -L $MAXMISCOMPARES -b $BLOCKING -Mf $DUMPSUBDIR/root/dump
/bin/mount -o remount,atime /
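For unattended runs, one option discussed elsewhere in this thread is restore's -a flag, which tells restore not to prompt. A sketch of the verify line amended that way (a script fragment; $RESTORE and the other variables are the ones defined above, and whether -a fully suppresses the volume prompt in your case should be confirmed by testing):

```shell
# Same verify as above, with -a added so restore does not stop to ask
# for the next volume when running from cron.
$RESTORE -C -a -l -L $MAXMISCOMPARES -b $BLOCKING -Mf $DUMPSUBDIR/root/dump
```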
From: Stelian P. <st...@po...> - 2008-02-07 16:35:56
|
Le mercredi 06 février 2008 à 15:25 -0800, Kenneth Porter a écrit :

> On Wednesday, January 23, 2008 5:26 PM +0100 Stelian Pop
> <st...@po...> wrote:
>
> > I'm not sure if this is a problem in dump or in restore, it would be
> > nice if you could try and see if the problem can be reproduced with
> > 0.4b41 (both dump and restore), or even give a try to the CVS version.
>
> This is from 0.4b41, from the CentOS 5.1 distro.
>
> Where can I find out about the correct format of EA blocks and what
> magic should be in them?

Ahem. To my knowledge, the EA format is only documented in the kernel
source... (fs/ext?/xattr*).

Dump merely saves the EA block as it is found on the disk; the block is
then analysed at restore time.

Stelian.

--
Stelian Pop <st...@po...>
From: Kenneth P. <sh...@se...> - 2008-02-06 23:26:27
|
On Wednesday, January 23, 2008 5:26 PM +0100 Stelian Pop <st...@po...>
wrote:

> I'm not sure if this is a problem in dump or in restore, it would be
> nice if you could try and see if the problem can be reproduced with
> 0.4b41 (both dump and restore), or even give a try to the CVS version.

This is from 0.4b41, from the CentOS 5.1 distro.

Where can I find out about the correct format of EA blocks and what magic
should be in them?