From: Delian K. <kr...@kr...> - 2004-01-30 14:23:44
|
Hello,

Here's the test I've performed: I'm making a level 0 dump and saving the dump time to a different file, since I prefer updating it rather than /etc/dumpdates. I'm also saving the list of files that are dumped, and trying to use that list later:

  sync
  dump -0 -f - /dev/hda1 -A archive_file |restore -uvxf -
  date "+%a %b %e %H:%M:%S %Y" > last_dump

Later I'm doing a level 1 dump and writing over the level 0 dump files:

  dump -1 -T "`cat last_dump`" -f - /dev/hda1 -A archive_file.new \
    |restore -uvxf -

The archive is kept on a separate partition which is supposed to be a mirror of the original. The dumps are taken from an LVM snapshot, avoiding the errors that a live filesystem might cause.

The idea is to always execute only the second command, and just after it to do:

  date "+%a %b %e %H:%M:%S %Y" > last_dump

This way I was hoping to keep the mirror partition up to date. The problem is that if a file is removed from the original partition, it is impossible to detect that and remove the file from the mirror as well. That way the mirror is always growing in size and getting bigger and bigger than the original.

A solution with, say, "find" is not appropriate, since there are several hundred thousand files and stat()ing each one takes too much time.

I might get the list of files from the level 0 dump by parsing the output of:

  cat archive_file |restore -tf -

Unfortunately archive_file.new, created during the level 1 dump, contains only the added/changed files and nothing about the removed ones. It would be sufficient if I could generate a list of all the files on the original partition, like the one from the level 0 dump, while doing the level 1 dump. If a file is on the first list but not on the second, it has been removed, and I could remove it from the mirror partition.

Any ideas?

Regards,
Delian Kurstev

p.s. I've searched the archives thoroughly and found nothing about a similar issue. I doubt no one has faced this; maybe I'm missing something obvious? Thanks for the patience to read such a long post. |
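The list-comparison step the poster describes (a file on the old list but not the new one has been removed) can be sketched with comm(1). The listings and file names below are illustrative stand-ins for real `restore -tf -` output:

```shell
#!/bin/sh
# Sketch of the proposed deletion detection: given two complete, sorted
# file listings (e.g. parsed out of `restore -tf -` for two level 0
# archives), print the paths present only in the older listing.
removed_files() {
    # $1 = old listing, $2 = new listing; comm(1) requires sorted input.
    comm -23 "$1" "$2"
}

# Illustrative data standing in for two full-dump listings:
printf './a\n./b\n./c\n' | sort > old.list
printf './a\n./c\n./d\n' | sort > new.list
removed_files old.list new.list   # prints ./b
```

Each path this prints could then be deleted from the mirror. The catch, as the post notes, is that a level 1 archive does not yield a complete listing, so both inputs would have to come from full listings.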
From: Florian Z. <fl...@gm...> - 2004-01-30 20:42:19
|
Hi,

> sync
> dump -0 -f - /dev/hda1 -A archive_file |restore -uvxf -
> date "+%a %b %e %H:%M:%S %Y" > last_dump
>
> Later I'm doing a level 1 dump and writing over the level 0 dump
> files:
> dump -1 -T "`cat last_dump`" -f - /dev/hda1 -A archive_file.new \
> |restore -uvxf -

[files deleted on the source are not deleted on the mirror in that
second step above]

> Unfortunately the archive_file.new, created during the level 1 dump contains
> only the added/changed files and nothing about the removed ones. It will be

It _does_ contain information about the removed ones - though indirectly. Dump is not about backing up files (= directory entries and their associated data) but about backing up inodes. By dumping all changed inodes (including directories) relative to some previous backup, you automatically also record which inodes from the previous backup are no longer referenced and thus can be deleted upon restore of the incremental backup.

The problem here is that inode numbers are not preserved by restore, which is why restoring an incremental backup on top of a filesystem that was filled by any means other than restoring a level 0 dump does not work - incremental restore only works because restore keeps track of the corresponding inode numbers in /restoresymtable on the filesystem being restored.

This is how I understand it - if I'm wrong, please correct me ...

So, after all, it should work if you initially create the mirror using dump -0 | restore.

As an alternative to dump I'd recommend rsync, which is made for exactly this kind of task. However, take care to disable checksumming for local copying - maybe rsync does that automatically, though ...

Cyas, Florian |
From: Delian K. <kr...@kr...> - 2004-02-03 20:05:30
|
On Friday 30 January 2004 17:05, Florian Zumbiehl wrote:
> Hi,
>
> > sync
> > dump -0 -f - /dev/hda1 -A archive_file |restore -uvxf -
> > date "+%a %b %e %H:%M:%S %Y" > last_dump
> >
> > Later I'm doing a level 1 dump and writing over the level 0 dump
> > files:
> > dump -1 -T "`cat last_dump`" -f - /dev/hda1 -A archive_file.new \
> > |restore -uvxf -
>
> [files deleted on the source are not deleted on the mirror in that
> second step above]
>
> > Unfortunately the archive_file.new, created during the level 1 dump
> > contains only the added/changed files and nothing about the removed ones.
>
> It _does_ contain information about the removed ones - though indirectly.
> Dump is not about backing up files (=directory entries and their associated
> data) but about backing up inodes. By dumping all changed inodes (including
> directories) relative to some previous backup you automatically also record
> which inodes from a previous backup are not referenced any longer and thus
> can be deleted upon restore of the incremental backup.
>
> The problem here is that inode numbers are not preserved by restore, which
> is why restoring an incremental backup on top of a filesystem that was
> filled by other means than by restoring a level 0 dump does not work -
> incremental restore only works because restore keeps track of corresponding
> inode numbers in /restoresymtable on the filesystem being restored.
>
> This is how I understand it - if I should be wrong, please correct me ...
>
> So, after all it should work if you initially create the mirror using
> dump -0 | restore.

You probably mean restore -r here. I've tried that, and you're right: it works. However, the documentation states that -r should be used only on a fresh filesystem. Isn't it dangerous if it's used on an already populated one?
Here's what I've done:

step 1:
  dump -0uf - /dev/hda1 |restore -ruf -

step 2:
  dump -1uf - /dev/hda1 |restore -ruf -
  - edit /etc/dumpdates: delete the line for the level 0 dump of hda1,
    and turn the level 1 line into the level 0 line
  - delete a file
  dump -1uf - /dev/hda1 |restore -ruf -

Step 2 could then be repeated any number of times.

> As an alternative to dump I'd recommend rsync which is made exactly for
> this kind of task. However, take care to disable checksumming for
> local copying - maybe that rsync does that automatically, though ...

I'm currently using rsync. However, even

  find . |wc -l

takes a huge amount of time when we're talking about, say, one million files. find/rsync stat()s each file. I'm not sure whether the kernel keeps a copy of the FAT (I don't know what it's called for ext2/3) in memory, but even in that case the context switches between the kernel and userspace for each stat seem expensive and unnecessary, when we could read that information directly from the FAT. That's what dump does, right?

> Cyas, Florian

Thanks for the response,
Delian Krustev |
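The /etc/dumpdates edit in step 2 can be scripted. A minimal sketch - the device name, file path, and the exact dumpdates line layout shown here are illustrative, so check your own file's format before using anything like this:

```shell
#!/bin/sh
# Promote the level 1 entry for a device to level 0 in a dumpdates-style
# file: drop the old level 0 line, then relabel the level 1 line.
# Assumed line layout: "<device> <level> <date>".
promote_level() {
    dev=$1 file=$2
    sed -e "\%^$dev 0 %d" -e "s%^$dev 1 %$dev 0 %" "$file" > "$file.tmp" \
        && mv "$file.tmp" "$file"
}

# Demonstration on a stand-in file (NOT the real /etc/dumpdates):
cat > dumpdates.test <<'EOF'
/dev/hda1 0 Thu Feb  5 21:27:28 2004
/dev/hda1 1 Fri Feb  6 12:15:22 2004
EOF
promote_level /dev/hda1 dumpdates.test
cat dumpdates.test   # now one level 0 line carrying the level 1 date
```

The `%` delimiter is used in the sed expressions so that the `/` characters in the device path need no escaping.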
From: Florian Z. <fl...@gm...> - 2004-02-04 01:41:01
|
Hi,

[...]
> > So, after all it should work if you initially create the mirror using
> > dump -0 | restore.
>
> Probably You mean restore -r here.

Yep, something like that ...

> I've tried that. You're right. It works. However the documentation states
> that -r should be used only on a fresh filesystem. Isn't it dangerous
> if it's used on already populated one ?

Dunno, UTSL - or maybe some developer will be able to help you out with that one?! But can't you use some separate filesystem for that mirror?

> > As an alternative to dump I'd recommend rsync which is made exactly for
> > this kind of task. However, take care to disable checksumming for
> > local copying - maybe that rsync does that automatically, though ...
>
> I'm currently using rsync. However even
> find . |wc -l
> takes huge amount of time when we talk about let's say one million files.
> Find/rsync uses stat for each file. [...] we could read that information
> directly from the FAT. That's what dump does, right ?

The information needed to decide whether to dump a particular file is stored in the inode on ext2/3, and in the directory entry/ies on FAT filesystems. The FAT itself is needed only when actually dumping a file (and for finding directory blocks), since it only indicates which blocks ("clusters") belong together - not any other properties of that group of blocks, not even its exact used size or whether it's a file or a directory, both of which are stored in the corresponding directory entry. On ext2/3 this information is stored with the inode (at least for smaller files), thus eliminating any need for an additional read.
However, Linux caches inodes in main memory just like any other filesystem information, so for caching purposes it shouldn't matter where the needed information is stored.

BTW, how do you think dump should read blocks from the filesystem without context switches? The only thing dump does not use is the kernel's filesystem code. The block device drivers and the block buffering mechanisms are just the same as with the kernel's filesystem driver. However, I don't know whether dump optimizes block device access by sorting requests or something.

Maybe you could provide some more detailed information on the problem you want to solve/the kind of data you have to back up?

Cyas, Florian

PS: Could you please limit your quoting to the parts needed? |
From: Delian K. <kr...@kr...> - 2004-02-04 22:21:16
|
On Wednesday 04 February 2004 03:40, Florian Zumbiehl wrote:
> Dunno, UTSL - or maybe some developer will be able to help you out with
> that one?!
>
> But can't you use some separate filesystem for that mirror?

It is a separate filesystem. But I'm going to restore over it every time, since I'm going to use only level 1 dump|restore.

> BTW, how do you think should dump read blocks from the filesystem
> without context switches? The only thing dump does not use is the kernel's
> filesystem code. The block device drivers and the block buffering
> mechanisms are just the same as with the kernel's filesystem driver.

Read the FAT (inode tables) from the block device, parse it and see which inodes have changed.

> However, I don't know whether dump optimizes block device access by
> sorting requests or something.

This is not needed; the inode table locations are well known and located on adjacent blocks.

> Maybe you could provide some more detailled information on the problem
> you want to solve/the kind of data you have to backup?

It is quite simple. I've got ext3 over LVM. The filesystem is 30GB, 13GB full. There are about 1 million inodes used on that filesystem. The hard drive is IDE.

I decided to perform the test before writing back to you. Unfortunately, the test failed. The dump processes were quite small, 1-2 MB; restore took about 92 MB.
I've created the snapshot:

  lvcreate -s -L1G -n snaphome /dev/vg1/home

I've created a fresh filesystem, mounted it, and cd'd to its top-level directory.

[root@smtp0 bkp]# cat /root/bin/tmp/dump.sh
#!/bin/sh
dump -0uf - /dev/vg1/snaphome |restore -ruf -

[root@smtp0 bkp]# time /root/bin/tmp/dump.sh
  DUMP: Date of this level 0 dump: Wed Feb  4 14:58:19 2004
  DUMP: Dumping /dev/vg1/snaphome (an unlisted file system) to standard output
  DUMP: Added inode 8 to exclude list (journal inode)
  DUMP: Added inode 7 to exclude list (resize inode)
  DUMP: Label: none
  DUMP: mapping (Pass I) [regular files]
  DUMP: mapping (Pass II) [directories]
  DUMP: estimated 12540221 tape blocks.
  DUMP: Volume 1 started with block 1 at: Wed Feb  4 14:58:48 2004
  DUMP: dumping (Pass III) [directories]
  DUMP: dumping (Pass IV) [regular files]
  DUMP: 2.41% done at 289 kB/s, finished in 11:43
  DUMP: 4.48% done at 418 kB/s, finished in 7:57
  DUMP: 6.32% done at 480 kB/s, finished in 6:46
  DUMP: 8.07% done at 519 kB/s, finished in 6:09
  DUMP: 9.88% done at 551 kB/s, finished in 5:41
  DUMP: 11.59% done at 570 kB/s, finished in 5:23
  DUMP: 13.31% done at 586 kB/s, finished in 5:09
  DUMP: 15.05% done at 599 kB/s, finished in 4:56
  DUMP: 16.94% done at 615 kB/s, finished in 4:41
  DUMP: 18.72% done at 626 kB/s, finished in 4:31
  DUMP: 20.56% done at 636 kB/s, finished in 4:20
  DUMP: 22.40% done at 645 kB/s, finished in 4:11
  DUMP: 24.25% done at 654 kB/s, finished in 4:02
  DUMP: 26.22% done at 664 kB/s, finished in 3:52
  DUMP: 27.96% done at 668 kB/s, finished in 3:45
  DUMP: 29.85% done at 674 kB/s, finished in 3:37
  DUMP: 31.74% done at 680 kB/s, finished in 3:29
  DUMP: 33.67% done at 686 kB/s, finished in 3:21
  DUMP: 35.65% done at 693 kB/s, finished in 3:13
  DUMP: 37.67% done at 699 kB/s, finished in 3:06
  DUMP: 39.61% done at 704 kB/s, finished in 2:59
  DUMP: 41.51% done at 707 kB/s, finished in 2:52
  DUMP: 43.23% done at 708 kB/s, finished in 2:47
  DUMP: 45.72% done at 720 kB/s, finished in 2:37
  DUMP: 47.96% done at 728 kB/s, finished in 2:29
  DUMP: 49.86% done at 730 kB/s, finished in 2:23
  DUMP: 51.85% done at 734 kB/s, finished in 2:17
  DUMP: 53.74% done at 736 kB/s, finished in 2:11
  DUMP: 55.96% done at 742 kB/s, finished in 2:03
  DUMP: 57.90% done at 744 kB/s, finished in 1:58
  DUMP: 59.73% done at 745 kB/s, finished in 1:52
  DUMP: 61.65% done at 746 kB/s, finished in 1:47
  DUMP: 63.68% done at 749 kB/s, finished in 1:41
  DUMP: 65.53% done at 750 kB/s, finished in 1:36
  DUMP: 67.60% done at 753 kB/s, finished in 1:29
  DUMP: 69.53% done at 754 kB/s, finished in 1:24
  DUMP: 71.51% done at 756 kB/s, finished in 1:18
  DUMP: 73.49% done at 758 kB/s, finished in 1:13
  DUMP: 75.48% done at 759 kB/s, finished in 1:07
  DUMP: 77.40% done at 760 kB/s, finished in 1:02
  DUMP: 79.35% done at 762 kB/s, finished in 0:56
  DUMP: 81.28% done at 763 kB/s, finished in 0:51
  DUMP: 83.15% done at 763 kB/s, finished in 0:46
  DUMP: 85.14% done at 764 kB/s, finished in 0:40
  DUMP: 87.09% done at 765 kB/s, finished in 0:35
  DUMP: 89.14% done at 767 kB/s, finished in 0:29
  DUMP: 91.05% done at 768 kB/s, finished in 0:24
  DUMP: 92.96% done at 768 kB/s, finished in 0:19
  DUMP: 94.87% done at 769 kB/s, finished in 0:13
  DUMP: 96.85% done at 770 kB/s, finished in 0:08
  DUMP: 98.35% done at 767 kB/s, finished in 0:04
  DUMP: 99.82% done at 765 kB/s, finished in 0:00
  DUMP: 100.00% done at 762 kB/s, finished in 0:00
  DUMP: 100.00% done at 759 kB/s, finished in 0:00
  DUMP: 100.00% done at 756 kB/s, finished in 0:00
  DUMP: 100.00% done at 753 kB/s, finished in 0:00
  DUMP: 100.00% done at 751 kB/s, finished in 0:00
  DUMP: 100.00% done at 749 kB/s, finished in 0:00
  DUMP: 100.00% done at 746 kB/s, finished in 0:00
  DUMP: 100.00% done at 744 kB/s, finished in 0:00
  DUMP: Broken pipe
  DUMP: The ENTIRE dump is aborted.

real	324m1.259s
user	1m21.260s
sys	3m44.360s
[root@smtp0 bkp]#

I tried the same on a smaller partition; it worked just fine. The snapshot was NOT exhausted and removed by the kernel - I removed it manually later.
Btw, as you can see, the times reported by dump are not accurate, at least if they are meant to be real time.

> PS: Could you please limit your quoting to the parts needed?

No problem.

p.s. Any ideas on this topic, Stelian ? ;)) |
From: Stelian P. <st...@po...> - 2004-02-05 10:48:43
|
On Thu, Feb 05, 2004 at 12:21:05AM +0200, Delian Krustev wrote:
> On Wednesday 04 February 2004 03:40, Florian Zumbiehl wrote:
> > Dunno, UTSL - or maybe some developer will be able to help you out with
> > that one?!

You can use -r over an already used filesystem, but pay attention to the fact that if a file/dir has the same name as the one you try to restore, it will be overwritten.

> > But can't you use some separate filesystem for that mirror?
>
> It is a separate filesystem. But I'm going to restore over it everytime,
> since I'm going to use only level 1 dump|restore.

You have to restore the level 0 dump every time too. Remember, level 1 dumps contain the differences between the 0-level dump and the 1-level dump, so you will have to do something like:

	dump 0uf /tmp/0-level.dump /fs
	dump 1uf /tmp/1-level.dump /fs
	...
		cd /mynewfs
		restore rf /tmp/0-level.dump
		restore rf /tmp/1-level.dump
	...
	dump 1uf /tmp/1-level.dump /fs
	...
		cd /mynewfs
		rm -rf *
		restore rf /tmp/0-level.dump
		restore rf /tmp/1-level.dump
	...
	etc...

> > BTW, how do you think should dump read blocks from the filesystem
> > without context switches? The only thing dump does not use is the kernel's
> > filesystem code. The block device drivers and the block buffering
> > mechanisms are just the same as with the kernel's filesystem driver.

This is correct.

> It is quite simple. I've got an ext3 over lvm. The filesystem is 30GB,
> 13GB full. There are about 1 million inodes used on that filesystem.
> The hard drive is ide. I've decided to perform the test before
> writing back to you. Unfortunately, the test failed.
[...]
> lvcreate -s -L1G -n snaphome /dev/vg1/home
[...]
> dump -0uf - /dev/vg1/snaphome |restore -ruf -
[...]
> DUMP: 100.00% done at 744 kB/s, finished in 0:00
> DUMP: Broken pipe
> DUMP: The ENTIRE dump is aborted.

Hmmm, strange. dump version? kernel version?
Try separating the two steps in order to see which one exactly is failing (dump or restore):

	dump -0uf - /dev/vg1/snaphome > /dev/null

first, then:

	dump -0uf - /dev/vg1/snaphome |restore -rufdv -

(note the addition of the (d)ebug and (v)erbose flags).

Stelian.

-- 
Stelian Pop <st...@po...> |
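When the two halves of a pipeline cannot easily be separated, the producer's exit status can still be captured from inside the pipeline under plain sh. A generic sketch - `false` and `cat` stand in for the actual producer and consumer (e.g. dump and restore), and the status file name is illustrative:

```shell
#!/bin/sh
# Attribute a pipeline failure to producer or consumer: run the producer
# in a group, record its exit status to a file, then inspect it after
# the pipeline finishes.
{ false; echo $? > producer.status; } | cat > /dev/null
read prod_status < producer.status
echo "producer exited with status $prod_status"
```

Here the consumer's status is the pipeline's own `$?`, so both sides end up observable.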
From: Delian K. <kr...@kr...> - 2004-02-05 15:43:31
|
On Thursday 05 February 2004 12:48, Stelian Pop wrote:
> You have to restore the level 0 dump every time too. Remember, level 1
> dumps contain the differences between the 0-level dump and the 1-level
> dump, so you will have to do something like:
> 	dump 0uf /tmp/0-level.dump /fs
> 	dump 1uf /tmp/1-level.dump /fs
> 	...
> 		cd /mynewfs
> 		restore rf /tmp/0-level.dump
> 		restore rf /tmp/1-level.dump
> 	...
> 	dump 1uf /tmp/1-level.dump /fs
> 	...
> 		cd /mynewfs
> 		rm -rf *
> 		restore rf /tmp/0-level.dump
> 		restore rf /tmp/1-level.dump
> 	...
> 	etc...

I don't want to restore the level 0 dump each time - it would be terribly slow. I prefer dumping with levels 0-9 in this case. This way I'll dump/restore level 0 only once every 10 backups.

What does the restoresymtable contain? Isn't it the info for the inodes which have already been restored? I've tried restoring only level 1 dumps, one after another, and it seemed to work just fine.

> Hmmm, strange.
>
> dump version ? kernel version ?

dump-0.4b27-3 - from rh7.3
kernel 2.4.24 with patches from Sistina, needed for snapshotting ext3

> Try separating the two steps in order to see who exactly is failing
> (dump or restore):
> 	dump -0uf - /dev/vg1/snaphome > /dev/null
> first, then:
> 	dump -0uf - /dev/vg1/snaphome |restore -rufdv -
> (note the addition of (d)ebug and (v)erbose flag).

I'll do that. First I would like to upgrade dump/e2fsprogs to the latest versions. I've succeeded with e2fsprogs{,-devel} 1.34. However, dump-0.4b35 gives me a compilation error:

make[1]: Entering directory `/usr/src/redhat/BUILD/dump-0.4b35/dump'
i386-redhat-linux-gcc -c -D_BSD_SOURCE -D_USE_BSD_SIGNAL -O2 -march=i386 -mcpu=i686 -pipe -I.. -I../compat/include -I../dump -DRDUMP -DRRESTORE -DLINUX_FORK_BUG -DHAVE_LZO -D_PATH_DUMPDATES=\"/etc/dumpdates\" -D_DUMP_VERSION=\"0.4b35\" main.c -o main.o
In file included from /usr/include/linux/types.h:5,
                 from /usr/include/linux/fs.h:13,
                 from main.c:69:
/usr/include/asm/types.h:11: warning: redefinition of `__s8'
/usr/include/ext2fs/ext2_types.h:11: warning: `__s8' previously declared here
/usr/include/asm/types.h:12: warning: redefinition of `__u8'
/usr/include/ext2fs/ext2_types.h:10: warning: `__u8' previously declared here
/usr/include/asm/types.h:14: warning: redefinition of `__s16'
/usr/include/ext2fs/ext2_types.h:37: warning: `__s16' previously declared here
/usr/include/asm/types.h:15: warning: redefinition of `__u16'
/usr/include/ext2fs/ext2_types.h:38: warning: `__u16' previously declared here
/usr/include/asm/types.h:17: warning: redefinition of `__s32'
/usr/include/ext2fs/ext2_types.h:45: warning: `__s32' previously declared here
/usr/include/asm/types.h:18: warning: redefinition of `__u32'
/usr/include/ext2fs/ext2_types.h:46: warning: `__u32' previously declared here
/usr/include/asm/types.h:21: warning: redefinition of `__s64'
/usr/include/ext2fs/ext2_types.h:23: warning: `__s64' previously declared here
/usr/include/asm/types.h:22: warning: redefinition of `__u64'
/usr/include/ext2fs/ext2_types.h:27: warning: `__u64' previously declared here
main.c: In function `do_exclude_ino':
main.c:1285: parse error before `unsigned'
main.c:1286: `j' undeclared (first use in this function)
main.c:1286: (Each undeclared identifier is reported only once
main.c:1286: for each function it appears in.)
make[1]: *** [main.o] Error 1
make[1]: Leaving directory `/usr/src/redhat/BUILD/dump-0.4b35/dump'
make: *** [all] Error 1

I've looked at the source of main.c and I'm seeing variable definitions in the middle of a function, which I believe is not ANSI C (it's main.c, not main.cpp) ..

The build system is rh7.3.
I'm using the dump.spec from the src.rpm from http://dump.sourceforge.net/

Thanks,
Delian Krustev |
From: Stelian P. <st...@po...> - 2004-02-05 15:58:39
|
On Thu, Feb 05, 2004 at 05:43:19PM +0200, Delian Krustev wrote:
> On Thursday 05 February 2004 12:48, Stelian Pop wrote:
> > You have to restore the level 0 dump every time too. Remember, level 1
> > dumps contain the differences between the 0-level dump and the 1-level
> > dump, so you will have to do something like:
[quoted dump/restore sequence trimmed]
>
> I don't want to restore level 0 dump each time. It will be terribly
> slow. I prefer dumping from levels 0-9 in this case. This way I'll
> dump/restore level 0 only once on every 10 backups.

Indeed, it is better to use levels 0-9 in this case.

> What does the restoresymtable contain. Isn't it the info for inodes
> which have already been restored ? I've tried restoring only level one
> dumps, one after another and it seemed to work just fine.

It will be used in order to detect renames/deletions etc.

> > dump version ? kernel version ?
>
> dump-0.4b27-3 - from rh7.3

You need to update it, as you tried below.

> kernel 2.4.24 with patches from sistina, needed for snapshotting ext3
>
> > Try separating the two steps in order to see who exactly is failing
> > (dump or restore):
> > 	dump -0uf - /dev/vg1/snaphome > /dev/null
> > first, then:
> > 	dump -0uf - /dev/vg1/snaphome |restore -rufdv -
> > (note the addition of (d)ebug and (v)erbose flag).
>
> I'll do that. First I would like to upgrade dump/e2fsprogs to the latest
> versions. I've succeeded with e2fsprogs{,-devel} 1.34. However dump-0.4b35
> gives me a compilation error:
>
> make[1]: Entering directory `/usr/src/redhat/BUILD/dump-0.4b35/dump'
> i386-redhat-linux-gcc -c -D_BSD_SOURCE -D_USE_BSD_SIGNAL -O2 -march=i386
> -mcpu=i686 -pipe -I.. -I../compat/include -I../dump -DRDUMP -DRRESTORE
> -DLINUX_FORK_BUG -DHAVE_LZO -D_PATH_DUMPDATES=\"/etc/dumpdates\"
> -D_DUMP_VERSION=\"0.4b35\" main.c -o main.o
[quoted __s8/__u8/.../__u64 redefinition warnings trimmed]
> main.c: In function `do_exclude_ino':
> main.c:1285: parse error before `unsigned'
> main.c:1286: `j' undeclared (first use in this function)
> main.c:1286: (Each undeclared identifier is reported only once
> main.c:1286: for each function it appears in.)
> make[1]: *** [main.o] Error 1
> make[1]: Leaving directory `/usr/src/redhat/BUILD/dump-0.4b35/dump'
> make: *** [all] Error 1
>
> I've looked at the source of main.c and I'm seeing variable definitions
> in the middle of a function, which I think is not an ANSI C (main.c not
> main.cpp) ..
Yup, known problem: http://cvs.sourceforge.net/viewcvs.py/dump/dump/dump/main.c?r1=1.88&r2=1.89 Stelian. -- Stelian Pop <st...@po...> |
From: Delian K. <kr...@kr...> - 2004-02-06 13:55:02
|
I've updated dump/e2fsprogs to the latest versions and performed the tests.

This went fine:

  time /usr/sbin/dump -0uf - /dev/vg1/snaphome |gzip |split -b 1024m

This also:

  time cat xaa xab |gunzip |restore -rdvuf - >/home/out.log 2>/home/err.log

However, this has failed:

  time /usr/sbin/dump -1uf - /dev/vg1/snaphome |restore -rdvuf - >/home/out1.log 2>/home/err1.log

  DUMP: Date of this level 1 dump: Fri Feb  6 12:15:22 2004
  DUMP: Date of last level 0 dump: Thu Feb  5 21:27:28 2004
  DUMP: Dumping /dev/vg1/snaphome (an unlisted file system) to standard output
  DUMP: Label: none
  DUMP: Writing 10 Kilobyte records
  DUMP: mapping (Pass I) [regular files]
  DUMP: mapping (Pass II) [directories]
  DUMP: estimated 1496288 blocks.
  DUMP: Volume 1 started with block 1 at: Fri Feb  6 12:20:27 2004
  DUMP: dumping (Pass III) [directories]
  DUMP: dumping (Pass IV) [regular files]
  DUMP: 26.14% done at 1304 kB/s, finished in 0:14
  DUMP: 32.57% done at 812 kB/s, finished in 0:20
  DUMP: 39.55% done at 657 kB/s, finished in 0:22
  DUMP: 52.02% done at 648 kB/s, finished in 0:18
  DUMP: 63.38% done at 631 kB/s, finished in 0:14
  DUMP: 71.66% done at 595 kB/s, finished in 0:11
  DUMP: 79.98% done at 569 kB/s, finished in 0:08
  DUMP: 89.24% done at 555 kB/s, finished in 0:04
  DUMP: 100.00% done at 555 kB/s, finished in 0:00
  DUMP: Broken pipe
  DUMP: The ENTIRE dump is aborted.

real	54m42.457s
user	0m25.730s
sys	0m36.370s

[root@smtp0 tmp]# tail /home/out1.log
extract file ./ttt/domains/ttt.biz/t/index.html
extract file ./ttt/domains/ttt.biz/webcam-privees/webcam-privees.html
extract file ./ttt/domains/ttt.biz/webcam-privees/index.html
extract file ./ttt/domains/ttt.biz/x-video/x-video.html
extract file ./ttt/domains/ttt.biz/x-video/index.html
Add links
Set directory mode, owner, and times.
Check the symbol table.
Verify the directory structure
Check pointing the restore
[root@smtp0 tmp]# tail /home/err1.log
File header, ino 3918076
File header, ino 3918077
File header, ino 3918078
File header, ino 3918079
File header, ino 3918080
File header, ino 3918081
File header, ino 3918082
File header, ino 3918083
File header, ino 3918084
Warning: missing name ./vpopmail/domains/1/ttt.com/ttt/Maildir/new/1076014788.27968.smtp0.ttt.com,S=32899
[root@smtp0 tmp]#

One more question: what are these times that dump reports?

Regards,
Delian Krustev |
From: Stelian P. <st...@po...> - 2004-02-06 14:40:11
|
On Fri, Feb 06, 2004 at 03:54:51PM +0200, Delian Krustev wrote:
> I've updated dump/e2fsprogs to the latest versions and performed the tests.

Are you perfectly sure you updated dump? You use /usr/sbin/dump below, but my RPMs try to install /sbin/dump... Make sure you don't have several versions of dump on your disk.

> This went fine:
> time /usr/sbin/dump -0uf - /dev/vg1/snaphome |gzip |split -b 1024m
>
> This also:
> time cat xaa xab |gunzip|restore -rdvuf - >/home/out.log 2>/home/err.log
>
> However this has failed:
> time /usr/sbin/dump -1uf - /dev/vg1/snaphome | restore -rdvuf - >/home/out1.log 2>/home/err1.log

Hmm, really strange, the direct pipe should be the same as when going through cat/gzip. It also seems to crash after the full dump and restore have finished :(

I cannot see what is causing this. Try running the same command but with dump under strace -ff (strace -ff /usr/sbin/dump -1u...). Does this show something about the crash?

Second thought: compile dump with debug enabled and run it through gdb, then when it crashes use commands like "bt" to see where it happened.

> real	54m42.457s
> user	0m25.730s
> sys	0m36.370s
[...]
> One more question. What are these times that dump reports ?

54 minutes of real execution (clock time), 25 seconds of effective execution in algorithms, 36 seconds spent doing syscalls. Most of the time (53 minutes) was spent waiting for the disk to be ready for reads/writes.

See man 1 time and man 2 times for details.

Stelian.

-- 
Stelian Pop <st...@po...> |
From: Delian K. <kr...@kr...> - 2004-02-06 18:03:11
|
On Friday 06 February 2004 16:39, Stelian Pop wrote:
> Are you perfectly sure you updated dump ? You use
> /usr/sbin/dump below, but my RPMs try to install
> /sbin/dump... Make sure you don't have several versions of dump
> on your disk.

[root@smtp0 tmp]# which dump
/usr/sbin/dump
[root@smtp0 tmp]# whereis dump
dump: /usr/sbin/dump /usr/share/man/man8/dump.8.gz
[root@smtp0 tmp]# rpm -q dump
dump-0.4b35-1

I've used rpm -Uvh; the old version is removed ...

> Hmm, really strange, the direct pipe should be the same as when
> going through cat/gzip. It also seems to crash after the full dump
> and restore have finished :(
>
> I cannot see what is causing this. Try running the same command
> but with dump under strace -ff (strace -ff /usr/sbin/dump -1u...).
> Does this show something about the crash ?

You probably mean -fF here. I suspect the logs will be terribly big if I'm tracing all the syscalls (maybe I'll shorten the output with e.g. -e trace=...), but maybe I'll try that too.

> Second thought: compile dump with debug enabled and run it through
> gdb, then when it crashes use commands like "bt" to see where
> it happened.

OK, I'll try that too.

> > One more question. What are these times that dump reports ?
>
> 54 minutes of real execution (clock time), 25 seconds of effective
> execution in algorithms, 36 seconds spent doing syscalls. Most of the
> time (53 minutes) was spent waiting for the disk to be ready for
> reads/writes.
>
> See man 1 time and man 2 times for details.

I was not asking about the times "time" reports. I was asking about:

  DUMP: 32.57% done at 812 kB/s, finished in 0:20
  DUMP: 39.55% done at 657 kB/s, finished in 0:22
  DUMP: 52.02% done at 648 kB/s, finished in 0:18

Aren't these seconds? Seconds of what?

Regards,
Delian |
From: Stelian P. <st...@po...> - 2004-02-07 09:02:23
|
On Fri, Feb 06, 2004 at 08:02:59PM +0200, Delian Krustev wrote:
> [root@smtp0 tmp]# which dump
> /usr/sbin/dump
> [root@smtp0 tmp]# whereis dump
> dump: /usr/sbin/dump /usr/share/man/man8/dump.8.gz
> [root@smtp0 tmp]# rpm -q dump
> dump-0.4b35-1
>
> I've used rpm -Uvh, the old version is removed ...

Yup, you're correct, I was looking at the dump.static package, sorry.

> You probably mean -fF here. I suspect the logs will be terribly
> big if I'm tracing all the syscalls (maybe I'll shorten the output with
> e.g. -e trace=...), but maybe I'll try that too.

strace -f is sufficient indeed.

> I was not asking about the times "time" reports. I was asking about:
> DUMP: 32.57% done at 812 kB/s, finished in 0:20
> DUMP: 39.55% done at 657 kB/s, finished in 0:22
> DUMP: 52.02% done at 648 kB/s, finished in 0:18
>
> Aren't these seconds? Seconds of what?

Ah, I didn't understand that you were referring to those times. Those are dump's estimates of the remaining time, and the units are hours:minutes.

Stelian.

-- 
Stelian Pop <st...@po...> |
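The hours:minutes estimates can in fact be reproduced from the progress log earlier in the thread. One plausible reading (dump's internal formula is not guaranteed to be exactly this, but it matches the log): remaining kilobytes = estimated blocks × (1 − fraction done), divided by the current rate, floored to whole minutes. A sketch:

```shell
#!/bin/sh
# Recompute dump's "finished in H:MM" estimate from a progress line.
# Assumes tape blocks of 1 kB and estimate = remaining-kB / rate;
# this reproduces the log in this thread, but is an inference, not
# dump's documented formula.
eta() {
    # $1 = estimated total blocks, $2 = percent done, $3 = rate in kB/s
    awk -v t="$1" -v p="$2" -v r="$3" 'BEGIN {
        mins = int(t * (1 - p / 100) / r / 60)
        printf "%d:%02d\n", int(mins / 60), mins % 60
    }'
}

# The level 1 dump above estimated 1496288 blocks:
eta 1496288 32.57 812   # prints 0:20, matching the log
eta 1496288 39.55 657   # prints 0:22
eta 1496288 52.02 648   # prints 0:18
```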