From: Bryson L. <Lee...@ss...> - 2012-01-19 01:08:41
|
When attempting to restore an ext3 filesystem from tape on an IBM Power7 system running Fedora 12, I get a continuous stream of the following messages: error in EA block 1 magic = 10002ea ... restoring a different ext2 filesystem from another dump on the same tape does not produce any of these messages. I've used 0.4b42 as packaged for Fedora 12, and rebuilt the 0.4b43 SRPM from Fedora 14, with the same results. I've looked at the source for 0.4b44 and don't see much difference in the restore xattrs handling, but I haven't explicitly tried it yet. The message originates from restore/xattrs.c, and I note that the reported magic value appears to be a byte-swapped representation of the EXT2_XATTR_MAGIC2 value defined as 0xEA020001. There's an "#ifdef BIG_ENDIAN" stanza immediately preceding the code that reports the error, and AFAICT the compiler option -D _BSD_SRC should cause <features.h> to define __USE_BSD, which should in turn result in <bits/endian.h> eventually setting BIG_ENDIAN correctly on this platform. One thing I'm wondering is why the endianness is fiddled on the restore side of the house when there's no comparable fiddling on the dump side. Can anyone shed any light on what's happening? Many thanks, -Bryson |
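The byte-swap hypothesis in the message above is easy to sanity-check (a sketch in Python, using only the two constants cited in the message): reinterpreting the little-endian bytes of 0xEA020001 as a big-endian word yields exactly the reported 0x10002ea.

```python
import struct

EXT2_XATTR_MAGIC2 = 0xEA020001  # constant cited in the message
REPORTED_MAGIC = 0x010002EA     # "magic = 10002ea" from the error output

# Pack the constant in little-endian byte order, then read those same
# bytes back as a big-endian word -- this models reading a little-endian
# on-disk value on a big-endian host without byte-order conversion.
swapped = struct.unpack(">I", struct.pack("<I", EXT2_XATTR_MAGIC2))[0]

print(hex(swapped))  # -> 0x10002ea, matching the reported magic
```

This supports the reading that the EA block magic is being compared without the endianness conversion taking effect on this big-endian platform.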
From: Bob B. Jr. <bl...@da...> - 2012-01-12 21:52:33
|
Well it seems to be all in the block size. On both systems I was using 1024k block size, which we find is optimal for writing to tape. dump doesn't like this. I started experimenting with different block sizes. I found the sweet spot at 64k blocks. Using 64k block size yields the fastest dump to tape throughput at about 50 MB/s (still slower than older dump, but only marginally).

root@appserver-wn:~# dump 0uf /dev/st0 -b 64 /opt
DUMP: Date of this level 0 dump: Thu Jan 12 15:57:58 2012
DUMP: Dumping /dev/mapper/vg1-opt (/opt) to /dev/st0
DUMP: Label: none
DUMP: Writing 64 Kilobyte records
DUMP: mapping (Pass I) [regular files]
DUMP: mapping (Pass II) [directories]
DUMP: estimated 4999550 blocks.
DUMP: Volume 1 started with block 1 at: Thu Jan 12 15:57:59 2012
DUMP: dumping (Pass III) [directories]
DUMP: dumping (Pass IV) [regular files]
DUMP: Closing /dev/st0
DUMP: Volume 1 completed at: Thu Jan 12 16:00:06 2012
DUMP: Volume 1 4990336 blocks (4873.38MB)
DUMP: Volume 1 took 0:02:07
DUMP: Volume 1 transfer rate: 39293 kB/s
DUMP: 4990336 blocks (4873.38MB) on 1 volume(s)
DUMP: finished in 96 seconds, throughput 51982 kBytes/sec
DUMP: Date of this level 0 dump: Thu Jan 12 15:57:58 2012
DUMP: Date this dump completed: Thu Jan 12 16:00:06 2012
DUMP: Average transfer rate: 39293 kB/s
DUMP: DUMP IS DONE |
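The figures in that log are internally consistent; a small sketch (assuming dump's usual 1 kB accounting blocks, which matches the numbers) reproduces both reported rates:

```python
blocks = 4_990_336           # 1 kB blocks written, from the dump log
dump_seconds = 96            # "finished in 96 seconds" (excludes close/rewind)
volume_seconds = 2 * 60 + 7  # "Volume 1 took 0:02:07"

print(blocks // dump_seconds)    # -> 51982 kB/s, the reported throughput
print(blocks // volume_seconds)  # -> 39293 kB/s, the reported transfer rate
```

The gap between the two numbers is just the tape-close time included in the volume elapsed time but not in the dump time.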
From: Bob B. Jr. <bl...@da...> - 2012-01-12 19:04:09
|
On Wed, 2012-01-11 at 14:06 -0800, Steve Bonds wrote:
> On Wed, Jan 11, 2012 at 11:04 AM, Bob Blanchard Jr. <2n2...@sn...> wrote:
> > ... trimmed ...
> > All interfaces are SAS, I can do dd reads from tape at 150
> > MB/s from both systems.
> >
> > Trying to determine if this is kernel related (st module) or
> > dump related or ext3/4 related.
>
> It looks like you're on the right track with your dd read tests to the
> tape drive. What about dd write tests to the tape drive and dd read
> tests from the disk partition? If these are slow, then dump will
> never get any faster.

dd write tests to the tape are between 150 and 160 MB/s. dd reads from disk are around 150-180 MB/s. So with dump "out of the way", both systems are giving me at least 150 MB/s read/write speeds.

> I also recall some performance issues with older versions of dump--
> try the newest release (or even the CVS version) and see if the
> problem persists. You were using 0.4b41 (2006) and current is 0.4b44.

Surprisingly, the "older" 0.4b41 on ubuntu hardy is the blazingly fast dump at 55 MB/s. 0.4b42 is slow. I built 0.4b44 from source, and tried again on the ubuntu lucid machine (where dumps are 10 times slower), and it is an improvement: average throughput is around 15 MB/s instead of 5 MB/s. But this is still nowhere near the 55 MB/s I'm getting with dump 0.4b41. Could it be slower due to ext4? Unfortunately I don't have an ext3 partition on the lucid server to test 0.4b44. I suppose I could try to build 0.4b44 on hardy and try an ext3 dump. -Bob |
From: Steve B. <fb7...@sn...> - 2012-01-11 22:07:00
|
On Wed, Jan 11, 2012 at 11:04 AM, Bob Blanchard Jr. <2n2...@sn...> wrote:
> ... trimmed ...
> All interfaces are SAS, I can do dd reads from tape at 150 MB/s from both
> systems.
>
> Trying to determine if this is kernel related (st module) or dump related
> or ext3/4 related.

It looks like you're on the right track with your dd read tests to the tape drive. What about dd write tests to the tape drive and dd read tests from the disk partition? If these are slow, then dump will never get any faster. I also recall some performance issues with older versions of dump-- try the newest release (or even the CVS version) and see if the problem persists. You were using 0.4b41 (2006) and current is 0.4b44. -- Steve Bonds |
From: Bob B. Jr. <bl...@da...> - 2012-01-11 19:24:15
|
I have an IBM BladeCenter with a SAS connected TS3100 LTO4 tape drive. I have a 6 SAS disk RAID array with 2x300GB file systems, one ext3 and one ext4. Disks are 15k rpm. I have an HS21 bladeserver running 64-bit ubuntu hardy (2.6.24 kernel, 0.4b41 dump, deadline scheduler), which links to the ext3 raid fs, and a newer HS22 bladeserver running 64-bit ubuntu lucid (2.6.32 kernel, 0.4b42 dump, cfq scheduler), which links to the ext4 raid fs. So - same tape drive, same disk array source.

On the older blade (4 cores, 12GB ram), since install, we've had very fast dump speeds. Here is output:

DUMP: Date of this level 0 dump: Tue Jan 10 22:04:58 2012
DUMP: Dumping /dev/sda1 (/cyrus-mail) to /dev/nst0
DUMP: Label: none
DUMP: Writing 1024 Kilobyte records
DUMP: mapping (Pass I) [regular files]
DUMP: mapping (Pass II) [directories]
DUMP: estimated 132498941 blocks.
DUMP: Volume 1 started with block 1 at: Tue Jan 10 22:05:38 2012
DUMP: 0.00% done at 1024 kB/s, finished in 35:56
DUMP: dumping (Pass III) [directories]
DUMP: dumping (Pass IV) [regular files]
DUMP: 17.06% done at 75081 kB/s, finished in 0:24
DUMP: 28.26% done at 62312 kB/s, finished in 0:25
DUMP: 36.86% done at 54198 kB/s, finished in 0:25
DUMP: 46.17% done at 50933 kB/s, finished in 0:23
DUMP: 58.69% done at 51811 kB/s, finished in 0:17
DUMP: 76.66% done at 56396 kB/s, finished in 0:09
DUMP: 93.17% done at 58754 kB/s, finished in 0:02
DUMP: Closing /dev/nst0
DUMP: Volume 1 completed at: Tue Jan 10 22:42:29 2012
DUMP: Volume 1 132504576 blocks (129399.00MB)
DUMP: Volume 1 took 0:36:51
DUMP: Volume 1 transfer rate: 59929 kB/s
DUMP: 132504576 blocks (129399.00MB) on 1 volume(s)
DUMP: finished in 2206 seconds, throughput 60065 kBytes/sec
DUMP: Date of this level 0 dump: Tue Jan 10 22:04:58 2012
DUMP: Date this dump completed: Tue Jan 10 22:42:29 2012
DUMP: Average transfer rate: 59929 kB/s
DUMP: DUMP IS DONE

On the new blade (with 24 cores and 36 GB ram), our dumps are 10 times slower!
Here is output: DUMP: Date of this level 0 dump: Tue Jan 10 22:48:48 2012 DUMP: Dumping /dev/sda1 (/home) to /dev/nst0 DUMP: Label: none DUMP: Writing 1024 Kilobyte records DUMP: mapping (Pass I) [regular files] DUMP: mapping (Pass II) [directories] DUMP: estimated 167349242 blocks. DUMP: Volume 1 started with block 1 at: Tue Jan 10 22:48:55 2012 DUMP: 0.00% done at 512 kB/s, finished in 90:47 DUMP: dumping (Pass III) [directories] DUMP: dumping (Pass IV) [regular files] DUMP: 2.00% done at 11057 kB/s, finished in 4:07 DUMP: 2.60% done at 7239 kB/s, finished in 6:15 DUMP: 3.63% done at 6742 kB/s, finished in 6:38 DUMP: 4.62% done at 6437 kB/s, finished in 6:53 DUMP: 6.32% done at 7036 kB/s, finished in 6:11 DUMP: 8.36% done at 7761 kB/s, finished in 5:29 DUMP: 9.50% done at 7559 kB/s, finished in 5:33 DUMP: 10.38% done at 7231 kB/s, finished in 5:45 DUMP: 11.27% done at 6981 kB/s, finished in 5:54 DUMP: 11.97% done at 6671 kB/s, finished in 6:08 DUMP: 12.71% done at 6439 kB/s, finished in 6:18 DUMP: 13.93% done at 6471 kB/s, finished in 6:10 DUMP: 14.90% done at 6389 kB/s, finished in 6:11 DUMP: 16.03% done at 6382 kB/s, finished in 6:06 DUMP: 17.07% done at 6345 kB/s, finished in 6:04 DUMP: 19.12% done at 6663 kB/s, finished in 5:38 DUMP: 20.33% done at 6669 kB/s, finished in 5:33 DUMP: 21.77% done at 6743 kB/s, finished in 5:23 DUMP: 22.35% done at 6560 kB/s, finished in 5:30 DUMP: 23.35% done at 6510 kB/s, finished in 5:28 DUMP: 25.19% done at 6688 kB/s, finished in 5:11 DUMP: 26.43% done at 6698 kB/s, finished in 5:06 DUMP: 28.09% done at 6785 kB/s, finished in 4:55 DUMP: 30.77% done at 7123 kB/s, finished in 4:31 DUMP: 31.65% done at 7035 kB/s, finished in 4:30 DUMP: 32.38% done at 6920 kB/s, finished in 4:32 DUMP: 33.34% done at 6864 kB/s, finished in 4:30 DUMP: 34.76% done at 6900 kB/s, finished in 4:23 DUMP: 35.39% done at 6784 kB/s, finished in 4:25 DUMP: 36.25% done at 6718 kB/s, finished in 4:24 DUMP: 38.57% done at 6918 kB/s, finished in 4:07 DUMP: 
39.33% done at 6835 kB/s, finished in 4:07 DUMP: 40.55% done at 6834 kB/s, finished in 4:02 DUMP: 41.35% done at 6764 kB/s, finished in 4:01 DUMP: 41.92% done at 6662 kB/s, finished in 4:03 DUMP: 43.19% done at 6674 kB/s, finished in 3:57 DUMP: 43.87% done at 6597 kB/s, finished in 3:57 DUMP: 44.95% done at 6582 kB/s, finished in 3:53 DUMP: 46.37% done at 6616 kB/s, finished in 3:46 DUMP: 47.47% done at 6603 kB/s, finished in 3:41 DUMP: 48.25% done at 6548 kB/s, finished in 3:40 DUMP: 49.08% done at 6501 kB/s, finished in 3:38 DUMP: 50.09% done at 6482 kB/s, finished in 3:34 DUMP: 50.86% done at 6433 kB/s, finished in 3:33 DUMP: 52.45% done at 6486 kB/s, finished in 3:24 DUMP: 54.23% done at 6561 kB/s, finished in 3:14 DUMP: 55.63% done at 6588 kB/s, finished in 3:07 DUMP: 56.33% done at 6532 kB/s, finished in 3:06 DUMP: 57.14% done at 6491 kB/s, finished in 3:04 DUMP: 58.12% done at 6470 kB/s, finished in 3:00 DUMP: 59.53% done at 6497 kB/s, finished in 2:53 DUMP: 60.21% done at 6444 kB/s, finished in 2:52 DUMP: 61.77% done at 6487 kB/s, finished in 2:44 DUMP: 62.71% done at 6464 kB/s, finished in 2:40 DUMP: 64.20% done at 6497 kB/s, finished in 2:33 DUMP: 64.73% done at 6434 kB/s, finished in 2:32 DUMP: 65.41% done at 6388 kB/s, finished in 2:31 DUMP: 66.40% done at 6372 kB/s, finished in 2:27 DUMP: 67.23% done at 6343 kB/s, finished in 2:24 DUMP: 68.12% done at 6320 kB/s, finished in 2:20 DUMP: 68.72% done at 6272 kB/s, finished in 2:19 DUMP: 69.26% done at 6219 kB/s, finished in 2:17 DUMP: 69.92% done at 6179 kB/s, finished in 2:15 DUMP: 70.86% done at 6164 kB/s, finished in 2:11 DUMP: 72.62% done at 6220 kB/s, finished in 2:02 DUMP: 73.15% done at 6171 kB/s, finished in 2:01 DUMP: 73.76% done at 6129 kB/s, finished in 1:59 DUMP: 74.43% done at 6095 kB/s, finished in 1:56 DUMP: 75.10% done at 6061 kB/s, finished in 1:54 DUMP: 76.09% done at 6052 kB/s, finished in 1:50 DUMP: 76.83% done at 6026 kB/s, finished in 1:47 DUMP: 77.63% done at 6004 kB/s, finished in 
1:43 DUMP: 78.76% done at 6008 kB/s, finished in 1:38 DUMP: 79.91% done at 6013 kB/s, finished in 1:33 DUMP: 80.72% done at 5993 kB/s, finished in 1:29 DUMP: 81.31% done at 5958 kB/s, finished in 1:27 DUMP: 81.89% done at 5923 kB/s, finished in 1:25 DUMP: 82.87% done at 5917 kB/s, finished in 1:20 DUMP: 83.80% done at 5907 kB/s, finished in 1:16 DUMP: 84.56% done at 5887 kB/s, finished in 1:13 DUMP: 85.40% done at 5871 kB/s, finished in 1:09 DUMP: 86.66% done at 5886 kB/s, finished in 1:03 DUMP: 88.36% done at 5929 kB/s, finished in 0:54 DUMP: 89.59% done at 5940 kB/s, finished in 0:48 DUMP: 90.92% done at 5957 kB/s, finished in 0:42 DUMP: 92.68% done at 6002 kB/s, finished in 0:34 DUMP: 93.94% done at 6014 kB/s, finished in 0:28 DUMP: 95.53% done at 6046 kB/s, finished in 0:20 DUMP: 96.74% done at 6054 kB/s, finished in 0:15 DUMP: 97.29% done at 6021 kB/s, finished in 0:12 DUMP: 98.09% done at 6004 kB/s, finished in 0:08 DUMP: 98.94% done at 5991 kB/s, finished in 0:04 DUMP: 99.52% done at 5961 kB/s, finished in 0:02 DUMP: Closing /dev/nst0 DUMP: Volume 1 completed at: Wed Jan 11 06:36:00 2012 DUMP: Volume 1 166971392 blocks (163058.00MB) DUMP: Volume 1 took 7:47:05 DUMP: Volume 1 transfer rate: 5957 kB/s DUMP: 166971392 blocks (163058.00MB) on 1 volume(s) DUMP: finished in 28015 seconds, throughput 5960 kBytes/sec DUMP: Date of this level 0 dump: Tue Jan 10 22:48:48 2012 DUMP: Date this dump completed: Wed Jan 11 06:36:00 2012 DUMP: Average transfer rate: 5957 kB/s DUMP: DUMP IS DONE All interfaces are SAS, I can do dd reads from tape at 150 MB/s from both systems. Trying to determine if this is kernel related (st module) or dump related or ext3/4 related. I initially thought it was an issue with the cfq scheduler after seeing bugs like https://bugzilla.redhat.com/show_bug.cgi?id=456181, but switching to deadline leads to similar results (still 10 times slower). I was hoping someone on this list could shed some light, as I've received no answers from IBM support forums. 
Thanks, -Bob |
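The "10 times slower" observation is borne out by the block counts and elapsed times that dump itself printed in the two logs above (a quick check, assuming 1 kB accounting blocks):

```python
# figures from the two logs above: blocks written / elapsed dump seconds
fast = 132_504_576 // 2_206    # hardy / ext3 / 0.4b41 run
slow = 166_971_392 // 28_015   # lucid / ext4 / 0.4b42 run

print(fast)         # -> 60065 kB/s (matches the reported throughput)
print(slow)         # -> 5960 kB/s (matches the reported throughput)
print(fast // slow) # -> 10, i.e. an order of magnitude slower
```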
From: Hendrik H. <ho...@li...> - 2011-08-23 02:34:45
|
Hi, I've got a question about removing directories between incremental backups. Here is a minimal example to explain what I see:

## /tmp/testing is a fresh ext3 or ext4 file system
mkdir /tmp/testing/foobar
dump -u -f /tmp/level0.dump /tmp/testing/
rmdir /tmp/testing/foobar
dump -1 -u -f /tmp/level1.dump /tmp/testing/
restore -rf /tmp/level0.dump
restore -rf /tmp/level1.dump

The second restore fails with "deleteino: out of range 0" and asks me if I want to abort. If I say no, I'm asked again, and after saying "n" again restore starts using 100% CPU forever. If I abort, the foobar directory is gone, but I have a directory called something like RSTTMP01977. My system is Linux x86_64 and I'm using dump-0.4b44. Am I doing anything wrong here or is this indeed a bug? Cheers, Hendrik -- A man without a dream in his heart already has one foot in the grave. |
From: resoli - s. <re...@us...> - 2011-08-18 15:52:20
|
..
> Incorrect block for <file1 path> at 1588999555 blocks
> Incorrect block for <file1 path> at 1588999556 blocks
> *** and so on for the same file1 on many consecutive blocks ***
> Incorrect block for <file1 path> at 1589000042 blocks
> Missing blocks at the end of <file1 path>, assuming hole
> <file2 path>: (inode 122169539) not found on tape
> <file3 path>: (inode 122169540) not found on tape
> ...
> ========
>
> To my surprise, "restore -x -a -Q", using the generated QFA file,
> correctly restores these three files without problems from the third
> volume, swapping directly from the first to the third volume.
>
> I post here also the relevant fragment of the QFA file, near the
> boundary between the second and third volume:
>
> ========
> ...
> 122169536 2 775829
> 122169537 2 775831
> 122169537 3 1
> 122169539 3 3
> 122169540 3 5
> 122169541 3 7
> 122169543 3 8
> ...
> ========

I forgot to add the log fragment from dump:

==========
...
DUMP: Volume 3 started with block 1588986880 at: Tue Aug 16 10:07:05 2011
DUMP: Volume 3 begins with blocks from inode 122169536
...
==========

It seems that the information in the dump log does not agree with the QFA file: the dump log says that part of inode 122169536 is on volume 3, while the QFA file records its position only on volume 2. bye, Roberto Resoli |
From: resoli - s. <re...@us...> - 2011-08-18 08:25:03
|
Hello, I'm using the latest dump_0.4b44 (binary package from Debian Testing); I'm dumping a big (1.6T) and complex (backuppc pool) UNmounted ext4 filesystem on three LTO-4 tapes, using a custom script (-F flag) for dealing with volume swap, and generating QFA files (-Q flag). Dump completes without errors, but when full restoring (restore -r) some files (usually not more than two or three) near the boundary between the second and third volume are not correctly restored. Here's the log fragment (my comments between "*** ***"):

========
...
*** restore invokes custom swap volume script ***
Launching /opt/comunetn/scripts/vol_start.sh
*** log of custom swap volume script follows... ***
2011-08-16-18:32:14 - = swap-volume: Operation vol_start volume 3 on drive 1 =
2011-08-16-18:32:14 - Unloading volume (EFU978L4) from drive 1
2011-08-16-18:32:14 - Unloading cassette "EFU978L4" into slot 17 ...
Unloading drive 1 into Storage Element 17...done
2011-08-16-18:33:08 - Loading volume 3 (EFU979) into drive 1
2011-08-16-18:33:09 - Loading cassette "EFU979" into drive 1 ...
Loading media from Storage Element 18 into drive 1...done
2011-08-16-18:33:21 - Waiting tape drive "/dev/nst1" to be in "BOT ONLINE" status ...
2011-08-16-18:33:36 - OK. Drive acquired "BOT ONLINE" status in 15 seconds.
2011-08-16-18:33:36 - Setting tape drive options ...
Trying to open database '/etc/stinit.def'. Open succeeded.
Mode 1 definition: scsi2logical=1 can-bsr drive-buffering can-partitions auto-lock buffer-writes=0 async-writes=0 read-ahead timeout=800 long-timeout=14400 blocksize=1024k density=0x00 compression=0
stinit, processing tape 1
Mode 1, name '/dev/nst1'
Mode 2, name '/dev/nst1l'
Mode 3, name '/dev/nst1m'
Mode 4, name '/dev/nst1a'
The manufacturer is 'HP', product is 'Ultrium 4-SCSI', and revision 'B45W'.
2011-08-16-18:33:36 - Drive "/dev/nst1" status follows:
SCSI 2 tape drive:
File number=0, block number=0, partition=0.
Tape block size 1048576 bytes. Density code 0x46 (LTO-4).
Soft error count since last status=0
General status bits on (41010000): BOT ONLINE IM_REP_EN
*** end of custom swap volume script log ***
Incorrect block for <file1 path> at 1588999555 blocks
Incorrect block for <file1 path> at 1588999556 blocks
*** and so on for the same file1 on many consecutive blocks ***
Incorrect block for <file1 path> at 1589000042 blocks
Missing blocks at the end of <file1 path>, assuming hole
<file2 path>: (inode 122169539) not found on tape
<file3 path>: (inode 122169540) not found on tape
...
========

To my surprise, "restore -x -a -Q", using the generated QFA file, correctly restores these three files without problems from the third volume, swapping directly from the first to the third volume.

I post here also the relevant fragment of the QFA file, near the boundary between the second and third volume:

========
...
122169536 2 775829
122169537 2 775831
122169537 3 1
122169539 3 3
122169540 3 5
122169541 3 7
122169543 3 8
...
========

My guess (I'm not confident enough with the restore code to verify this directly) is that there is some inconsistency in restore behavior between the "-r" and "-x" modes. Thanks, Roberto Resoli |
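The QFA fragment quoted in this thread is simple to examine programmatically (a sketch; the three-column meaning of inode / volume / tape block is inferred from the thread, not from a published spec):

```python
qfa_fragment = """\
122169536 2 775829
122169537 2 775831
122169537 3 1
122169539 3 3
122169540 3 5
"""

# Map inode -> list of (volume, tape block) positions recorded in the QFA file.
positions = {}
for line in qfa_fragment.splitlines():
    inode, vol, block = (int(f) for f in line.split())
    positions.setdefault(inode, []).append((vol, block))

# Inode 122169537 is recorded on both volumes 2 and 3...
print(positions[122169537])  # -> [(2, 775831), (3, 1)]
# ...but 122169536 only on volume 2, even though the dump log claims
# volume 3 "begins with blocks from inode 122169536".
print(positions[122169536])  # -> [(2, 775829)]
```

This makes the mismatch the poster describes between the dump log and the QFA file easy to see at a glance.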
From: Stelian P. <st...@po...> - 2011-01-27 14:46:02
|
On Wed, Jan 26, 2011 at 08:47:00AM -0800, Kenneth Porter wrote: > I haven't looked closely at the restore code and am wondering if there's > logic in the verify system (option -C) to detect that files on the disk > didn't make it to the backup media? No, there is no such logic. Stelian. -- Stelian Pop <st...@po...> |
From: Kenneth P. <sh...@se...> - 2011-01-26 23:12:54
|
I haven't looked closely at the restore code and am wondering if there's logic in the verify system (option -C) to detect that files on the disk didn't make it to the backup media? I expect the logic would need to load the directories from the tape, scan the tape for any missing files that were in the backup's directory list but missing when dump got to that file (this already seems to be there), and then iterate over the filesystem's directories to compare that to the directory structure recorded in the dump file. |
From: Stelian P. <st...@po...> - 2010-12-27 13:36:34
|
Hi Kenneth, On Thu, Dec 23, 2010 at 05:29:51AM -0800, Kenneth Porter wrote: > I just wanted to follow up to say I haven't seen the problem recur, so I > haven't been able to debug it further. My original backup had gotten far > enough away from the disk contents that the restore was failing from too > many miscompares so I figured I'd put the media back into rotation until I > saw the problem again. Thanks for the update ! Stelian. -- Stelian Pop <st...@po...> |
From: Kenneth P. <sh...@se...> - 2010-12-23 13:30:36
|
I just wanted to follow up to say I haven't seen the problem recur, so I haven't been able to debug it further. My original backup had gotten far enough away from the disk contents that the restore was failing from too many miscompares so I figured I'd put the media back into rotation until I saw the problem again. |
From: Kenneth P. <sh...@se...> - 2010-12-16 07:49:55
|
--On Thursday, December 16, 2010 5:53 AM +0000 Nikhil <nik...@gm...> wrote:

> Can anyone help me with how to do a backup in Linux using the dump
> command? I have tried the commands
>
> 1.) #dump -0u -f /dev/st0 /dev/sda1
> 2.) #dump -0u -f /dev/st0 /home
> 3.) #dump -0u -f /dev/st0 /opt

First, you probably want the non-rewinding tape device, nst. Otherwise subsequent backups will rewind the tape when it closes and overwrite the earlier backups.

> but for the above commands I get "ACL error in inode#XXXX" ......

What version of dump?

> Please help me with how to take a backup of an NFS file system in
> Linux through dump, no other method....

I thought dump only works with ext2 and ext3. To back up a remote ext2 filesystem, you'd use dump on the remote machine to send the output over a network socket to a tape device, using rmt. |
From: Nikhil <nik...@gm...> - 2010-12-16 06:10:22
|
Dear All, can anyone help me with how to do a backup in Linux using the dump command? I have tried the commands

1.) #dump -0u -f /dev/st0 /dev/sda1
2.) #dump -0u -f /dev/st0 /home
3.) #dump -0u -f /dev/st0 /opt

but for the above commands I get "ACL error in inode#XXXX" ...... Please help me with how to take a backup of an NFS file system in Linux through dump, no other method.... |
From: Stelian P. <st...@po...> - 2010-12-07 14:06:17
|
On Mon, Dec 06, 2010 at 03:42:54PM -0800, Kenneth Porter wrote: > --On Monday, December 06, 2010 4:57 PM +0100 Stelian Pop > <st...@po...> wrote: > > >As you wrote in a follow-up, curfile is NULL here, exhibiting > >the bug. > > Minor correction: It's curfile.dip, not curfile, that's NULL. Yes, it is a typo, the rest of the analysis is still valid. > With my workaround patch I was able to get my verify to complete, so > now I'll see if a subsequent backup elicits the error and try your > suggestions. Ok, thanks ! Stelian. -- Stelian Pop <st...@po...> |
From: Kenneth P. <sh...@se...> - 2010-12-06 23:43:17
|
--On Monday, December 06, 2010 4:57 PM +0100 Stelian Pop <st...@po...> wrote: > As you wrote in a follow-up, curfile is NULL here, exhibiting > the bug. Minor correction: It's curfile.dip, not curfile, that's NULL. With my workaround patch I was able to get my verify to complete, so now I'll see if a subsequent backup elicits the error and try your suggestions. |
From: Kenneth P. <sh...@se...> - 2010-12-06 23:07:01
|
--On Sunday, December 05, 2010 10:00 PM -0800 Kenneth Porter <sh...@se...> wrote: > It looks like the skipped test is trying to avoid a buffer overrun, so a > more comprehensive fix that correctly acquires the buffer size is still > needed. Stelian, I'm guessing you're swamped in other work. If you can coach me on what was intended and where to look for how that structure gets populated, I might be able to go back and fix the underlying issue. |
From: Stelian P. <st...@po...> - 2010-12-06 15:57:10
|
Hi Kenneth, I've finally found out a few minutes to look at your reported bug, and I'm not sure what happens here: > #0 readxattr (buffer=0xbfd76bf8 "") at tape.c:1294 > 1294 if (curfile.dip->di_size > XATTR_MAXSIZE) { As you wrote in a follow-up, curfile is NULL here, exhibiting the bug. > (gdb) bt > #0 readxattr (buffer=0xbfd76bf8 "") at tape.c:1294 > #1 0x080548bc in compareattr (name=0x8069ec7 "./var/log/named/queries") > at tape.c:1731 > #2 0x0805789d in comparefile (name=0x8069ec7 "./var/log/named/queries") > at tape.c:1946 ... but it wasn't NULL at the beginning of comparefile(), so we can assume that somehow during the file extraction (getfile(), called from tape.c:1909), restore finds out an inode (in findinode()) which is NOT a TS_INODE (so curfile.dip and curfile.ino are not initialized) but which satisfies the test on line 1718: "spcl.c_flags & DR_EXTATTRIBUTES"... Are you able to reproduce the problem ? If you do, you could use gdb, based on the analysis above, to see what happens... > #3 0x0805025a in compare_entry (ep=0x18633ed0, do_compare=1) at > restore.c:694 > #4 0x0805049f in compareleaves () at restore.c:748 > #5 0x0804ed98 in main (argc=Cannot access memory at address 0x0 > ) at main.c:475 Thanks, Stelian. -- Stelian Pop <st...@po...> |
From: Kenneth P. <sh...@se...> - 2010-12-06 06:00:30
|
Workaround patch added to ticket here: <https://sourceforge.net/tracker/?func=detail&aid=3129314&group_id=1306&atid=101306> It looks like the skipped test is trying to avoid a buffer overrun, so a more comprehensive fix that correctly acquires the buffer size is still needed. |
From: Kenneth P. <sh...@se...> - 2010-12-03 04:58:27
|
--On Sunday, November 28, 2010 4:26 PM -0800 Kenneth Porter <sh...@se...> wrote: ># 0 readxattr (buffer=0xbfd76bf8 "") at tape.c:1294 > 1294 if (curfile.dip->di_size > XATTR_MAXSIZE) { I got a chance to start to look at this and I see this: (gdb) print curfile $1 = {name = 0x805e22d "EA block", ino = 0, dip = 0x0, action = 3 '\003'} Is dip supposed to have a value in this case, when comparing extended attributes? |
From: Kenneth P. <sh...@se...> - 2010-11-29 00:27:25
|
I've put a backtrace of a segfault in restore 0.4b43 here: <http://pastebin.com/kBhdnMxb> I'm verifying a backup of an entire partition from a file on a mounted USB drive. The file being compared exists, but it looks like one that's probably been logrotated since the backup. Summary: Core was generated by `/sbin/restore -C -l -L 10000 -b 64 -f /mnt/Backup/0/root/dump -a'. #0 readxattr (buffer=0xbfd76bf8 "") at tape.c:1294 1294 if (curfile.dip->di_size > XATTR_MAXSIZE) { (gdb) bt #0 readxattr (buffer=0xbfd76bf8 "") at tape.c:1294 #1 0x080548bc in compareattr (name=0x8069ec7 "./var/log/named/queries") at tape.c:1731 #2 0x0805789d in comparefile (name=0x8069ec7 "./var/log/named/queries") at tape.c:1946 #3 0x0805025a in compare_entry (ep=0x18633ed0, do_compare=1) at restore.c:694 #4 0x0805049f in compareleaves () at restore.c:748 #5 0x0804ed98 in main (argc=Cannot access memory at address 0x0 ) at main.c:475 |
From: Tom Y. <mad...@te...> - 2010-06-10 13:04:57
|
On Thu, 10 Jun 2010, Stelian Pop wrote: > Hi, > > On Tue, Jun 08, 2010 at 07:45:41PM +0100, Tom Yates wrote: > >> i'm getting this when i try to restore a level 1 dump on top of a level 0. > [...] >> am i right in thinking that the error is trying to tell me that the >> incremental "pre-dates" the full backup? > > Yes it's extremely useful just to know i'm thinking about the right error - thank you for that. > : (in restore.c: > if (hdr.dumpdate != dumptime) > errx(1, "Incremental tape too %s", > (hdr.dumpdate < dumptime) ? "low" : "high"); [...] > I'm not sure you can do this without modifying the source code > (commenting out the above test). i would be very happy to do that, and if this crops up again i may well do that. i thought about it myself, but couldn't be sure i wasn't breaking some other important test at the same time, which uncertainty you have removed for me. > The next time I suggest you use the -T option of dump to set the > dump time to the date when the mirrors were broken. the man page is seriously unclear that -T can be used in that way, but now i know that it can, i'm completely sorted for next time. in the event, i did some find-based trickery to confirm that there existed no file which (a) had been modified in the eight-minute window between breaking and dumping, and (b) had not since been re-modified, so i decided it was safe for me to revert /etc/dumpdates and do a new incremental. but if my window had been longer or i hadn't thought to do that test, your input would have completely saved me. thank you! -- Tom Yates - http://www.teaparty.net |
From: Stelian P. <st...@po...> - 2010-06-10 12:00:35
|
Hi, On Tue, Jun 08, 2010 at 07:45:41PM +0100, Tom Yates wrote: > i'm getting this when i try to restore a level 1 dump on top of a level 0. [...] > am i right in thinking that the error is trying to tell me that the > incremental "pre-dates" the full backup? Yes: (in restore.c: if (hdr.dumpdate != dumptime) errx(1, "Incremental tape too %s", (hdr.dumpdate < dumptime) ? "low" : "high"); > if so, is there any way to tell > restore to ignore that and just do the restore? I'm not sure you can do this without modifying the source code (commenting out the above test). The next time I suggest you use the -T option of dump to set the dump time to the date when the mirrors were broken. Stelian. -- Stelian Pop <st...@po...> |
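A note on the -T suggestion above: per the dump man page, -T takes the dump date in the same format that ctime(3) produces (the format used in /etc/dumpdates). A sketch of producing such a timestamp for a known wall-clock time, using the mirror-break time from this thread (the calendar date is assumed from the message dates):

```python
import time

# Build a ctime(3)-style timestamp, the format /etc/dumpdates uses and
# that dump's -T option expects (check the man page for your version).
# Example time: the moment the mirrors were broken in this thread.
broken = time.strptime("2010-06-08 09:26:53", "%Y-%m-%d %H:%M:%S")
stamp = time.asctime(broken)

print(stamp)  # -> 'Tue Jun  8 09:26:53 2010'
# Usage would then be roughly: dump -1 -u -T "$stamp" -f /dev/nst0 /fs
```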
From: Tom Y. <mad...@te...> - 2010-06-08 19:02:40
|
i'm getting this when i try to restore a level 1 dump on top of a level 0. i have a server (CentOS linux) with software RAID, which i want to replace with a newer server. to get a clean dump, i sync'ed the discs then broke the mirrors (mdadm -f, mdadm -r), e2fsck'ed the detached mirror, and did a level 0 dump of it with the "-u" flag. once the dump had finished, i edited /etc/dumpdates on the live mirror to reflect the time at which the mirrors were split (09:26:53), rather than the slightly later time the dump started (09:34:43), as i needed a later incremental to reflect all changes since the mirrors were broken, rather than since the start of the dump. once the mirrors had resynced, i did a level 1 incremental. "restore tvf" confirms that the level 0 dates from 09:34:43, while the level 1 dates from 09:26:53. when i restore the level 0 dump to the new box, which runs OK, then try to restore the level 1 on top, i get the error "restore: Incremental tape too high". am i right in thinking that the error is trying to tell me that the incremental "pre-dates" the full backup? if so, is there any way to tell restore to ignore that and just do the restore? thanks for any light anyone can shed. -- Tom Yates - http://www.teaparty.net |
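For the record, the window between breaking the mirrors and starting the level 0 dump described above works out to just under eight minutes (a quick check; the calendar date is assumed from the thread's timestamps):

```python
from datetime import datetime

split  = datetime(2010, 6, 8, 9, 26, 53)  # mirrors broken
dumped = datetime(2010, 6, 8, 9, 34, 43)  # level 0 dump started

gap = dumped - split
print(gap)  # -> 0:07:50, i.e. 470 seconds
```

Any file modified only inside that window (and never again afterwards) is what the later find-based check needs to rule out.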
From: Brian K. <br...@kr...> - 2009-07-14 23:40:11
|
Actually good point. This was later, after disabling then enabling selinux. I also noticed that before I embarked on this whole dump/restore NFS EA issue, that some files were labeled while others of the same age weren't. - Brian On Jul 14, 2009, at 3:58 PM, Kenneth Porter wrote: > --On Tuesday, July 14, 2009 4:19 PM -0700 Brian Krusic <br...@kr... > > wrote: > >> Actually, in my version of centos 5.3 (kernal 2.6.18-128.1.16), I saw >> everything get labeled upon selinux activation. >> >> Perhaps your behavior was due to an earlier kernel version? > > Was this on initial installation or later? > > I didn't initially install dovecot (IMAP server) and it was failing > to access individual mail folders until I manually relabeled them. > This was about 2 months ago, after updating the whole system first. > I did a minimal install, then used yum to pull all the packages I > really needed, to get the latest of everything. > > Oh, I'd copied all the home folders from an old FC2 system, so they > naturally lacked any labeling at all. So perhaps disabling and re- > enabling SELinux with the right magic command would have relabeled > those files. > > |