From: Stelian P. <st...@po...> - 2005-06-01 13:47:31

On Tuesday, May 31, 2005 at 11:00 -0400, Eric Jensen wrote:

> If you think it's appropriate, here are a few draft sentences for
> inclusion in the man page description of the "-m" switch to dump, to
> help others avoid the mistake I made:
>
> "If you use this option, be aware that many programs that unpack
> files from archives (e.g. tar, rpm, unzip, apt) may set files'
> mtimes to dates in the past. Files installed in this way may not be
> dumped correctly using "dump -m" if the modified mtime is earlier
> than the most recent level 0 dump."

I've put this in the manpage, replacing apt with dpkg and 'most recent
level 0 dump' with 'previous level dump'. Thanks.

Stelian.
--
Stelian Pop <st...@po...>

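For anyone who wants to reproduce the gotcha the new manpage paragraph
describes, here is a minimal shell sketch (assuming GNU tar and
coreutils stat; the /tmp paths are made up for illustration):

    # Archive a file, then extract it later: tar restores the stored
    # (old) mtime, so the extracted copy appears to predate extraction.
    touch /tmp/demo.txt
    tar -cf /tmp/demo.tar -C /tmp demo.txt
    sleep 5
    mkdir /tmp/unpacked
    tar -xf /tmp/demo.tar -C /tmp/unpacked
    # mtime keeps the archived value; ctime reflects extraction time.
    stat -c 'mtime: %y  ctime: %z' /tmp/unpacked/demo.txt

A file unpacked this way carries an mtime older than the extraction,
which is exactly the case where a later 'dump -m' incremental can miss
it.
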
From: Kenneth P. <sh...@se...> - 2005-05-31 22:59:08

--On Tuesday, May 31, 2005 11:00 AM -0400 Eric Jensen <eje...@sw...> wrote:

> Changes, additions, or deletions are welcome. I'm not positive that
> this is true of apt (since I don't use it), though it is true for tar,
> rpm, and unzip.

I like the wording. I don't use -m, but just in case I ever do, I'm
glad you blazed the trail and found the gotchas for me. ;)

From: Eric J. <eje...@sw...> - 2005-05-31 15:00:06

Stelian Pop <st...@po...> wrote:

> > One more related question - if an RPM package installed after a level 0
> > installs a new file and sets its date to a date *before* the level 0
> > date, what will happen with that file in an incremental dump using -m?
> > That does in fact appear to be the case for some (if not all) of the
> > "metadata only" files in the level 2 - they were installed after the
> > level 0, but have dates (as listed with 'ls -l') that are before the
> > level 0. If this is the explanation for the "metadata only" problem, I
> > see why (and won't use -m in the future),
>
> I think you did indeed find the issue here. Instead of 'ls -l' do a
> 'stat' on the incriminated files and check the 'mtime'. Regular dump
> looks at both 'ctime' and 'mtime', dump -m looks only at 'mtime'.

Thanks for clearing this up - it's a wrinkle that I hadn't anticipated,
though clearly dump is functioning exactly as advertised.

If you think it's appropriate, here are a few draft sentences for
inclusion in the man page description of the "-m" switch to dump, to
help others avoid the mistake I made:

"If you use this option, be aware that many programs that unpack files
from archives (e.g. tar, rpm, unzip, apt) may set files' mtimes to
dates in the past. Files installed in this way may not be dumped
correctly using "dump -m" if the modified mtime is earlier than the
most recent level 0 dump."

Changes, additions, or deletions are welcome. I'm not positive that
this is true of apt (since I don't use it), though it is true for tar,
rpm, and unzip.

Thanks, Eric

From: Stelian P. <st...@po...> - 2005-05-31 09:24:14

On Tuesday, May 31, 2005 at 10:39 +0200, Eros Albertazzi wrote:

> With dump-0.4b40 I get
>
> DUMP: dumping (Pass III) [directories]
> /usr/sbin/dump: symbol lookup error: /usr/sbin/dump: undefined symbol:
> ext2fs_read_inode_full
>
> on my Suse 9.1 box as well

You will get these errors on every distribution which hasn't upgraded
to e2fsprogs >= 1.36.

Once again, you should *ALWAYS* rebuild the RPMs from source. I provide
the binary rpms only as a convenience; they are built on my Red Hat
machine, which happens to be a Fedora Core 3 right now, and binary
compatibility with other distributions/versions is not guaranteed.

I may even stop providing binary rpms in the future since, at least on
my laptop, I've switched to Ubuntu, which is Debian based.

Stelian.
--
Stelian Pop <st...@po...>

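Rebuilding locally is quick; a sketch assuming the rpm-build package is
installed (the src.rpm filename and output path follow Red Hat defaults
and may differ on other systems):

    # Rebuild dump against the locally installed e2fsprogs, then
    # upgrade to the freshly built binary package.
    rpmbuild --rebuild dump-0.4b40-1.src.rpm
    rpm -Uvh /usr/src/redhat/RPMS/i386/dump-0.4b40-1.i386.rpm
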
From: Eros A. <ph...@bo...> - 2005-05-31 08:39:32

With dump-0.4b40 I get

DUMP: dumping (Pass III) [directories]
/usr/sbin/dump: symbol lookup error: /usr/sbin/dump: undefined symbol:
ext2fs_read_inode_full

on my Suse 9.1 box as well.

From: Stelian P. <st...@po...> - 2005-05-31 05:11:08

On Mon, 2005-05-30 at 18:14 -0700, Anthony Ewell wrote:

> Hi All,
>
> I am running White Box Enterprise Linux 4 (WBEL4), which is the same
> thing as Red Hat Enterprise Linux 4 (RHEL4), only cheaper. WBEL4 is
> based on Fedora Core 3.
>
> I am getting the following errors when I run dump from my root drive
> (LABEL=/ or /dev/sdb5) to my image backup drive (/images or /dev/sda2).
> Both partitions are ext3.
>
> /sbin/dump.static -v -0a -z -f /images/linuxDump.gz /dev/sdb5
>
> DUMP: dumping EA (block) in inode #2273373
> DUMP: dumping directory inode 2273383

These are not errors, just normal verbose messages. If you don't want
them, just don't put -v on the command line.

Stelian.
--
Stelian Pop <st...@po...>

From: Stelian P. <st...@po...> - 2005-05-31 04:54:34

On Mon, 2005-05-30 at 16:03 -0700, Anthony Ewell wrote:

> Hi All,
>
> I am running White Box Enterprise Linux 4 (WBEL4), which is the same
> thing as Red Hat Enterprise Linux 4 (RHEL4), only cheaper. WBEL4 is
> based on Fedora Core 3.
>
> I am getting the following error when I run dump from my root drive
> (LABEL=/ or /dev/sdb5) to my image backup drive (/images or /dev/sda2).
> Both partitions are ext3.
>
> /usr/sbin/dump: symbol lookup error: /usr/sbin/dump: undefined symbol:
> ext2fs_read_inode_full

Known issue: the binary RPM is linked with a newer version of e2fsprogs
(and I forgot to put 'Requires: e2fsprogs > xxxx' in the spec file).
The correct way to deal with it is to rebuild the RPM locally against
the installed version of e2fsprogs.

> I upgraded from dump-0.4b37-1.i386.rpm, as it would only give me
> millions of inode errors, to dump-0.4b40-1.i386.rpm. I tried to
> recompile the src rpm but could not locate rpmbuild on any of my four
> CDs.

It must be there, in the rpm-build package.

Stelian.
--
Stelian Pop <st...@po...>

From: Anthony E. <ae...@gb...> - 2005-05-31 01:16:44

Hi All,

I am running White Box Enterprise Linux 4 (WBEL4), which is the same
thing as Red Hat Enterprise Linux 4 (RHEL4), only cheaper. WBEL4 is
based on Fedora Core 3.

I am getting the following errors when I run dump from my root drive
(LABEL=/ or /dev/sdb5) to my image backup drive (/images or /dev/sda2).
Both partitions are ext3. I am running dump-static-0.4b40-1.i386.rpm.

Does anyone know what is going on here?

Many thanks,
--Tony

p.s. I test for the return error code. I get back a zero.

/sbin/dump.static -v -0a -z -f /images/linuxDump.gz /dev/sdb5

DUMP: dumping EA (block) in inode #2273373
DUMP: dumping directory inode 2273383
DUMP: dumping EA (block) in inode #2273383
DUMP: dumping directory inode 2273399
DUMP: dumping EA (block) in inode #2273399
DUMP: dumping directory inode 2273404
DUMP: dumping EA (block) in inode #2273404

and so on and so forth (millions and millions and ...)

From: Anthony E. <ae...@gb...> - 2005-05-30 23:05:36

Hi All,

I am running White Box Enterprise Linux 4 (WBEL4), which is the same
thing as Red Hat Enterprise Linux 4 (RHEL4), only cheaper. WBEL4 is
based on Fedora Core 3.

I am getting the following error when I run dump from my root drive
(LABEL=/ or /dev/sdb5) to my image backup drive (/images or /dev/sda2).
Both partitions are ext3.

I upgraded from dump-0.4b37-1.i386.rpm, as it would only give me
millions of inode errors, to dump-0.4b40-1.i386.rpm. I tried to
recompile the src rpm but could not locate rpmbuild on any of my four
CDs.

Does anyone know what is going on here?

Many thanks,
--Tony

p.s. I test for the return error code. I get back a zero.

/usr/sbin/dump -v -0a -z -f /images/linuxDump.gz /dev/sdb5

start time = Mon May 30 15:22:39 PDT 2005
/dev/sda2 on /images type ext3 (rw)
DUMP: Date of this level 0 dump: Mon May 30 15:22:40 2005
DUMP: Dumping /dev/sdb5 (/) to /images/linuxDump.gz
DUMP: Excluding inode 8 (journal inode) from dump
DUMP: Excluding inode 7 (resize inode) from dump
DUMP: Label: /
DUMP: Writing 10 Kilobyte records
DUMP: Compressing output at compression level 2 (zlib)
DUMP: mapping (Pass I) [regular files]
DUMP: mapping (Pass II) [directories]
DUMP: estimated 3394102 blocks.
DUMP: Volume 1 started with block 1 at: Mon May 30 15:23:02 2005
DUMP: dumping (Pass III) [directories]
/usr/sbin/dump: symbol lookup error: /usr/sbin/dump: undefined symbol:
ext2fs_read_inode_full
DUMP: Bad return code from dump: 127
DUMP: Broken pipe
DUMP: error reading command pipe: Connection reset by peer

From: Stelian P. <st...@po...> - 2005-05-28 18:53:28

On Thu, 2005-05-26 at 15:35 -0400, Eric Jensen wrote:

> To restore, we did the following:
>
> 1- reboot using Redhat EL AS (ver.3) CD in rescue mode
> 2- Used fdisk (v. 2.11y) to setup the partitions
>    Note: The partitioning of the drive changed between the dump and
>    restore (we got a bigger replacement drive). The root partition is
>    on the same device (/dev/sda2) and is the same size as before, but
>    it's not on the same sectors of the disk due to other partitions
>    changing size.

This doesn't matter at all for restore. Restore works in regular
filesystem space, meaning it uses regular syscalls to restore the
files. So it doesn't matter if the target filesystem has a different
size, or is located on another partition. All that matters is that it
must have enough space to contain the files to restore.

> 3- Used mkfs.ext3 (v. 1.32 09-Nov-2002) to make the filesystems. Only
>    option used was -L (label the filesystem).
> 4- Mounted /dev/sda2 on /mnt/root. Listing shows only lost+found
>    directory.
> 5- cd into /mnt/root, use mt to advance to the proper tape file.
> 6- run restore.static (v. 0.4b40):
>    restore.static -r -b 64 -f user@tapehost:/dev/nst0
> 7- restoresymtable is created by the restore.
> 8- load the level 2 tape and advance to the right file (there is no
>    level 1 dump for this partition)
> 9- run restore.static again:
>    restore.static -r -b 64 -v -f user@tapehost:/dev/nst0
>    (output from stdout and stderr are attached)

All this seems correct.

> Problem: many files are missing after the level 2 restore. stdout from
> the level 2 restore shows many "file ... is metadata only", yet in
> some cases that file is not in the level 0 backup (i.e., it was
> created between the time of the level 0 and the level 2 dump). There
> are also many files being renamed where the original and final names
> have little to do with one another.
>
> Example output lines:
>
> rename ./usr/share/config/language.codes to ./var/lib/ntp/drift
> rename ./var/lib/xkb/README to ./var/lib/xkb/RSTTMP01273294
> rename ./usr/X11R6/lib/X11/xkb/symbols/sun/us to ./var/lib/xkb/README
> rename ./var/lib/menu/kde/Applications/.directory to ./var/lib/menu/kde/Applications/RSTTMP082279
> rename ./usr/X11R6/lib/X11/fonts/100dpi/timR24.pcf.gz to ./var/lib/menu/kde/Applications/.directory
>
> and so on.

'Final' names? Do you mean that at the end of restore you end up with
files/dirs called 'RSTTMP...'? If this is the case, then this is
clearly a malfunction. If this is not the case, then the messages are
part of the normal restore process.

Remember that restore works with inode *numbers*, and it is very
possible that the inode *number* destination changed between the 2
dumps (the inode was freed then reallocated to a new file). This is
what you see as a 'rename' in the output.

Example: in the 0-level dump you had ino #42 containing
./usr/X11R6/lib/X11/xkb/symbols/sun/us. For some reason this file was
removed, and ino #42 was allocated to ./var/lib/xkb/README. So the
2-level dump contains information about this 'rename'. Check the
contents of /var/lib/xkb/README. If it is correct then all is good.

> Having gone through this, I belatedly thought of the "-m" flag to dump
> (to optimize for backing up only metadata if that's all that has
> changed). I *know* that the level 2 was run with -m, since it is run
> from a script. But I ran the level 0 by hand, and I may not have used
> the -m flag for that; in fact, I suspect I did not. My naive
> understanding is that it wouldn't matter, since the level 0 has got to
> contain all the files anyway. Would that cause such a problem?

No. The level 0 contains all the files.

> A related scenario - if there are multiple level 2 dumps (again
> assuming no level 1's as in my case), all with -m, does one need to
> restore all of them (i.e. does only the first one have the real file
> data, and the rest metadata)? I wouldn't have thought so, but again,
> there's clearly something I'm not getting here.

No, each subsequent level 2 dump replaces the information contained in
the previous one.

> One more related question - if an RPM package installed after a level
> 0 installs a new file and sets its date to a date *before* the level 0
> date, what will happen with that file in an incremental dump using -m?
> That does in fact appear to be the case for some (if not all) of the
> "metadata only" files in the level 2 - they were installed after the
> level 0, but have dates (as listed with 'ls -l') that are before the
> level 0. If this is the explanation for the "metadata only" problem, I
> see why (and won't use -m in the future),

I think you did indeed find the issue here. Instead of 'ls -l' do a
'stat' on the incriminated files and check the 'mtime'. Regular dump
looks at both 'ctime' and 'mtime'; dump -m looks only at 'mtime'.

> but I still don't fully understand the confusing renaming output above
> - it must be inode-related at some level, but I'd like to understand
> it better.

See above.

Stelian.
--
Stelian Pop <st...@po...>

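The mtime/ctime distinction Stelian describes is easy to check
directly; a one-line sketch with GNU stat, using one of the files from
the rename output above as an illustrative target:

    # dump -m keys on mtime (%y); regular dump also checks ctime (%z),
    # which a package install bumps even when it sets mtime in the past.
    stat -c 'mtime: %y  ctime: %z  %n' /var/lib/xkb/README

A file whose ctime postdates the level 0 but whose mtime predates it is
picked up by a regular incremental, while 'dump -m' may treat it as
metadata-only, exactly as this thread describes.
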
From: Eric J. <eje...@sw...> - 2005-05-26 19:36:03

Hi all,

Well, after six years of using dump every day, it finally paid off -
had a full disk failure, and we were able to restore almost everything
relatively straightforwardly. So, thanks to Stelian for a nice program,
and to people on the list for useful support.

But... we seem to be having problems restoring the incremental backups
on top of the level 0. It may be that there's something I don't
understand about how this works, but perhaps someone can help out.

We have a level 0 and a level 2 (no intervening level 1) for the
partition in question; these dumps were made with dump 0.4b28-7 from
Redhat. (I don't know if that "-7" after the 28 represents Redhat's
patches; they have a tendency to do things like that, though. We've
since upgraded to 0.4b40 for the restore.)

To restore, we did the following:

1- reboot using Redhat EL AS (ver.3) CD in rescue mode
2- Used fdisk (v. 2.11y) to setup the partitions
   Note: The partitioning of the drive changed between the dump and
   restore (we got a bigger replacement drive). The root partition is
   on the same device (/dev/sda2) and is the same size as before, but
   it's not on the same sectors of the disk due to other partitions
   changing size.
3- Used mkfs.ext3 (v. 1.32 09-Nov-2002) to make the filesystems. Only
   option used was -L (label the filesystem).
4- Mounted /dev/sda2 on /mnt/root. Listing shows only lost+found
   directory.
5- cd into /mnt/root, use mt to advance to the proper tape file.
6- run restore.static (v. 0.4b40):
   restore.static -r -b 64 -f user@tapehost:/dev/nst0
7- restoresymtable is created by the restore.
8- load the level 2 tape and advance to the right file (there is no
   level 1 dump for this partition)
9- run restore.static again:
   restore.static -r -b 64 -v -f user@tapehost:/dev/nst0
   (output from stdout and stderr are attached)

Problem: many files are missing after the level 2 restore. stdout from
the level 2 restore shows many "file ... is metadata only", yet in some
cases that file is not in the level 0 backup (i.e., it was created
between the time of the level 0 and the level 2 dump). There are also
many files being renamed where the original and final names have little
to do with one another.

Example output lines:

rename ./usr/share/config/language.codes to ./var/lib/ntp/drift
rename ./var/lib/xkb/README to ./var/lib/xkb/RSTTMP01273294
rename ./usr/X11R6/lib/X11/xkb/symbols/sun/us to ./var/lib/xkb/README
rename ./var/lib/menu/kde/Applications/.directory to ./var/lib/menu/kde/Applications/RSTTMP082279
rename ./usr/X11R6/lib/X11/fonts/100dpi/timR24.pcf.gz to ./var/lib/menu/kde/Applications/.directory

and so on.

Having gone through this, I belatedly thought of the "-m" flag to dump
(to optimize for backing up only metadata if that's all that has
changed). I *know* that the level 2 was run with -m, since it is run
from a script. But I ran the level 0 by hand, and I may not have used
the -m flag for that; in fact, I suspect I did not. My naive
understanding is that it wouldn't matter, since the level 0 has got to
contain all the files anyway. Would that cause such a problem?

A related scenario - if there are multiple level 2 dumps (again
assuming no level 1's as in my case), all with -m, does one need to
restore all of them (i.e. does only the first one have the real file
data, and the rest metadata)? I wouldn't have thought so, but again,
there's clearly something I'm not getting here.

One more related question - if an RPM package installed after a level 0
installs a new file and sets its date to a date *before* the level 0
date, what will happen with that file in an incremental dump using -m?
That does in fact appear to be the case for some (if not all) of the
"metadata only" files in the level 2 - they were installed after the
level 0, but have dates (as listed with 'ls -l') that are before the
level 0. If this is the explanation for the "metadata only" problem, I
see why (and won't use -m in the future), but I still don't fully
understand the confusing renaming output above - it must be
inode-related at some level, but I'd like to understand it better.

Thanks in advance for any answers and advice that anyone has.

Thanks, Eric

From: Stelian P. <ste...@fr...> - 2005-05-26 08:26:01

On Wednesday, May 25, 2005 at 16:24 -0700, Shane Kerschtien wrote:

> I currently stage all my dumps to disk before I copy them to tape (via
> the dd command). However, I can't span multiple tapes with dd. I have
> been trying to think of a way to use restore/dump so I can copy the
> dump files to tape. The problem I am coming across is how I can
> maintain the "label" (I know it isn't the label, but I am uncertain
> what to call it).
>
> For example:
> new-fs dump file (little endian), This dump Fri Jan 28 14:47:42 2005,
> Previous dump Wed Dec 31 16:00:00 1969, Volume 1, Level zero, type:
> tape header, Label /, Filesystem /, Device /dev/sda6, Host xxx, Flags 1
>
> Does anyone have any idea how I can maintain this data? The only thing
> I have come up with is to restore to disk, then do a dump to the tape
> (not a very nice way of doing it but I can't think of anything else).
> I am using dump 0.4b37.

I don't understand what you're trying to achieve.

Do you want to split a dump file into several chunks, write each chunk
on a separate tape, then be able to restore directly from the tape? In
this case you can't run 'restore' directly on the tapes; you will have
to concatenate the chunks together before launching restore on the
whole file.

Or is it something else?

Stelian.
--
Stelian Pop <ste...@fr...>

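If splitting into chunks is acceptable, the round trip Stelian
describes can be sketched like this (tape device, chunk size and names
are illustrative; the key point is that restore only ever sees the
reassembled stream):

    # Split the staged dump into tape-sized pieces and write each to
    # its own tape (swap tapes between dd invocations).
    split -b 35000m /backups/sda6.dump chunk.
    dd if=chunk.aa of=/dev/nst0 bs=64k
    # To restore: read every chunk back from tape, then feed the
    # concatenation to restore on stdin.
    cat chunk.aa chunk.ab chunk.ac | restore -rf -
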
From: Shane K. <ske...@en...> - 2005-05-25 23:24:50

I currently stage all my dumps to disk before I copy them to tape (via
the dd command). However, I can't span multiple tapes with dd. I have
been trying to think of a way to use restore/dump so I can copy the
dump files to tape. The problem I am coming across is how I can
maintain the "label" (I know it isn't the label, but I am uncertain
what to call it).

For example:

new-fs dump file (little endian), This dump Fri Jan 28 14:47:42 2005,
Previous dump Wed Dec 31 16:00:00 1969, Volume 1, Level zero, type:
tape header, Label /, Filesystem /, Device /dev/sda6, Host xxx, Flags 1

Does anyone have any idea how I can maintain this data? The only thing
I have come up with is to restore to disk, then do a dump to the tape
(not a very nice way of doing it, but I can't think of anything else).
I am using dump 0.4b37.

Cheers,
Shane Kerschtien

From: Kolev, N. <NK...@tr...> - 2005-05-18 13:29:10

I am trying to run dump (0.4b40, using libext2fs 1.32 of 09-Nov-2002)
and am getting the following error:

dump: relocation error: dump: undefined symbol: ext2fs_read_inode_full

Here's where the references to libext2fs are, and both /usr/lib and
/lib are on my LD_LIBRARY_PATH:

locate libext2fs
/usr/share/info/libext2fs.info.gz
/usr/lib/libext2fs.a
/usr/lib/libext2fs.so
/lib/libext2fs.so.2.4
/lib/libext2fs.so.2

What am I doing wrong?

Nik

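As the other threads this month show, this usually means the dump
binary was built against a newer e2fsprogs (>= 1.36) than the one
installed. Two quick checks with standard tools, using the library
paths from the locate output above:

    # Which libext2fs does the dynamic loader actually resolve for dump?
    ldd /usr/sbin/dump | grep ext2fs
    # Does that library export the missing symbol at all?
    nm -D /lib/libext2fs.so.2 | grep ext2fs_read_inode_full

If nm finds nothing, the installed e2fsprogs is too old for this
binary; rebuilding dump from source against it (or upgrading e2fsprogs)
resolves the mismatch.
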
From: Eric J. <eje...@sw...> - 2005-04-13 01:38:28

> I use SATA in a USB2 case connected to a Linksys NSLU2 unit.

This looks like an interesting unit (running Linux under the hood),
with an active developer community. I wonder - is anyone using this the
other way around, i.e. mounting NSLU2-attached disks as part of the
regular filesystem, which then need to be backed up? Since dump only
handles local devices, dump would have to be running on the NSLU2
itself, I guess. I didn't find anything like this on the nslu2-linux
list or wiki, but people have compiled many other Linux packages on it,
so I suppose it could be done. Thoughts?

Eric

From: Christopher B. <ch...@wx...> - 2005-04-12 13:33:59

On Tue, Apr 12, 2005 at 09:04:00AM +0200, Helmut Jarausch wrote:

> On 11 Apr, Anthony Ewell wrote:
> > With the cost of a decent small server tape drive (vxa2 $1100.00,
> > decent scsi card and cable $300.00, plus $80.00 per tape), I am
> > thinking of using removable hard SATA drives instead.

Also, when you consider the physical durability of tape vs disk, it
makes it very difficult to make a compelling argument for the use of
tape. Disk can withstand higher temperatures and greater accelerations,
for starters.

> I am using an IDE drive in a USB2 case. You can use SATA drives in
> such a case also.

I use SATA in a USB2 case connected to a Linksys NSLU2 unit.

> > 4) any gotchas I am missing? Will dump work properly with this
> > scheme?

Definitely. Best of luck too!

-c

--
WeatherNet Observations for station: home
Temperature: 46.40F Pressure: 30.04in; Dew Point: 21.30F (37%)
Wind: 96 at 0 mph Recorded: 07:40:02 04/12/05
(http://wsdl.wxnet.org/home/binding.wsdl)

From: Helmut J. <jar...@ig...> - 2005-04-12 07:04:06

On 11 Apr, Anthony Ewell wrote:

> Hi All,
>
> With the cost of a decent small server tape drive (vxa2 $1100.00,
> decent scsi card and cable $300.00, plus $80.00 per tape), I am
> thinking of using removable hard SATA drives instead.
>
> CRU makes a hot swappable carrier:
>
> http://www.cruinc.com/htmldocs/products/SATAdp5plus.htm
>
> that comes well recommended.
>
> Question:
>
> 1) as long as the backup drive is unmounted, will Linux freak out when
> the drive gets jerked out of its carrier?
>
> 2) will Linux get annoyed at the new hard drive being inserted (mount
> /dev/hdb1)? (It will have a new serial number, etc.)

I am using an IDE drive in a USB2 case. You can use SATA drives in such
a case also. Hotswapping works just fine (assuming hotplug is installed
correctly). I'd guess this should work with a SATA drive as well, if
CONFIG_HOTPLUG_PCI is selected in the kernel configuration. Note,
however, I had lots of trouble even with non-swappable SATA drives with
Linux before 2.6.12-rc1 (might be my motherboard).

> 3) would it be faster to leave compression on or off when doing a
> dump? (I will have lots of room on the backup drive.)

dump supports 3 compression modes. Only -y (lzop) gives good speed,
more than 10 MB/sec depending on the CPU. Furthermore, dump uses
several threads, so on an SMP machine it compresses in parallel - quite
noticeable!

> 4) any gotchas I am missing? Will dump work properly with this scheme?

Yes.

--
Helmut Jarausch
Lehrstuhl fuer Numerische Mathematik
RWTH - Aachen University
D 52056 Aachen, Germany

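Putting Helmut's points together with the removable-disk scheme, a
level 0 to the swappable drive might look like the sketch below (the
mount point and file name are illustrative):

    # lzop-compressed (-y) level 0 of the root filesystem, recorded in
    # /etc/dumpdates (-u) so later incrementals have a reference point.
    dump -0u -y -f /mnt/backupdisk/root-level0.dump /
    # Unmount before pulling the carrier so the filesystem is clean.
    umount /mnt/backupdisk
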
From: Kenneth P. <sh...@se...> - 2005-04-11 22:46:12

--On Monday, April 11, 2005 11:53 AM -0700 Anthony Ewell <ae...@gb...> wrote:

> CRU makes a hot swappable carrier:
> http://www.cruinc.com/htmldocs/products/SATAdp5plus.htm
> that comes well recommended.

Looks nice. What do those cost? I've been buying ATAPI drives and
mounting them in USB2 enclosures, which run around $50. I've been
mounting the drive to my Win2k system as NTFS, then sharing the
partition over SMB or CIFS on gigabit Ethernet. (See my earlier posts
for what's needed to get decent throughput.)

> 1) as long as the backup drive is unmounted, will Linux freak out when
> the drive gets jerked out of its carrier?
>
> 2) will Linux get annoyed at the new hard drive being inserted (mount
> /dev/hdb1)? (It will have a new serial number, etc.)

Probably best to look for the kernel storage guys and see what they
have to say. The question is how the SATA driver associates hot-swapped
drives with /dev names. You may also need to modprobe the driver if
this is the only SATA drive. The whole issue of how to handle
hot-swapped disks is ongoing.

> 3) would it be faster to leave compression on or off when doing a
> dump? (I will have lots of room on the backup drive.)

I'm using 300 GB to store two copies of my server, and because it's
over the network, compression might improve throughput, but only if
there's enough CPU to keep the buffers full so you don't get start-stop
hiccups.

> 4) any gotchas I am missing? Will dump work properly with this scheme?

Since I'm using NTFS over Samba, I had to use the -M option to break
the backup into many 1 GByte files. I then do a verify pass and that
seems to go fine. "Live" files like logs and some mailboxes get
reported in the verify, which tells me that it's working right. (It's
also a good idea to do a "fire drill" and restore random files from the
backup to insure that you know how to do that and that your backup
files are good.)

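Kenneth's -M arrangement, reduced to a sketch (the paths are
illustrative; -B is in kilobytes, so 1000000 gives roughly 1 GB
volumes, and his full command line appears later in this archive):

    # Multi-volume (-M) dump: output goes to dump001, dump002, ... on
    # the SMB-mounted share, each volume capped at ~1 GB.
    dump -0u -b 1024 -M -f /mnt/Backup/sda1/dump -B 1000000 /mnt/sda1
    # Verify pass: compare the volumes against the live filesystem.
    restore -C -b 1024 -M -f /mnt/Backup/sda1/dump
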
From: Anthony E. <ae...@gb...> - 2005-04-11 18:53:32

Hi All,

With the cost of a decent small server tape drive (vxa2 $1100.00,
decent scsi card and cable $300.00, plus $80.00 per tape), I am
thinking of using removable hard SATA drives instead.

CRU makes a hot swappable carrier:

http://www.cruinc.com/htmldocs/products/SATAdp5plus.htm

that comes well recommended.

Questions:

1) as long as the backup drive is unmounted, will Linux freak out when
the drive gets jerked out of its carrier?

2) will Linux get annoyed at the new hard drive being inserted (mount
/dev/hdb1)? (It will have a new serial number, etc.)

3) would it be faster to leave compression on or off when doing a dump?
(I will have lots of room on the backup drive.)

4) any gotchas I am missing? Will dump work properly with this scheme?

Many thanks,
--Tony
ae...@gb...

From: Kenneth P. <sh...@se...> - 2005-04-08 18:25:19

--On Friday, April 08, 2005 10:49 AM +0200 Stelian Pop <st...@po...> wrote:

> I think this is the same as
> https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=149299

Thanks, I'll pull down the new RPM and try that. I don't recall seeing
anything about that development on the list here, and I don't think
it's been released as a RH update. It's probably sitting in Rawhide
(the RH development tree).

From: Stelian P. <st...@po...> - 2005-04-08 08:48:04

On Thu, Apr 07, 2005 at 07:47:59PM -0700, Kenneth Porter wrote:

> Using dump-0.4b39-1.FC2 with the EA patch.
>
> During "restore -C" on a large (128 GB) filesystem I'm seeing "error
> in EA block" perhaps 100 times. Do I need to worry? Is there a way to
> get more details about this?

I think this is the same as
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=149299

Stelian.
--
Stelian Pop <st...@po...>

From: Stelian P. <st...@po...> - 2005-04-08 08:45:42

On Thu, Apr 07, 2005 at 07:44:41PM -0700, Kenneth Porter wrote:

> I tried to add -j2 (compression) to the dump command line but this
> kicks the expected completion time to over 48 hours, which suggests
> that the network buffers are starving. Is there some way to avoid
> this? I thought the megabyte block size would be sufficient but it's
> not working when compression is enabled.

First of all you should determine if the problem comes from the CPU
(look at system load) or the I/O.

If the problem comes from the CPU, you can, as Helmut says, use a
different compression algorithm.

If the problem comes from the I/O, it could be because you're now
writing in smaller chunks than before (because -b sets the uncompressed
chunk size). In this case, since you're dumping to a file, you could
pipe the output of dump into dd and set a new blocksize for the output:

dump ... | dd obs=1024k > ${OUTDIR}/sda1/dump

Stelian.
--
Stelian Pop <st...@po...>

From: Helmut J. <jar...@ig...> - 2005-04-08 08:18:50

On 7 Apr, Kenneth Porter wrote:

> I'm trying to dump a large (128 GB) partition to a Samba-mounted share
> on a Win2k box over gigabit Ethernet. This is reasonably fast provided
> that I mount the share with large buffers:
>
> FSOPTIONS="sockopt=SO_RCVBUF=65536,sockopt=SO_SNDBUF=65536"
> mount //${SERVER}/BigBackup /mnt/Backup -t ${FSTYPE} -o ${FSOPTIONS}
>
> (Username and password are passed in environment variables here, but
> could also be done by credential file, to hide from ps.)
>
> The dump command line:
>
> BLOCKSIZE=1024
> OUTDIR=/mnt/Backup/Newred
> FILESIZE=1000000
> ERRORLIMIT=500
> COMPRESS=
> dump 0u -b ${BLOCKSIZE} ${COMPRESS} -Mf ${OUTDIR}/sda1/dump -B ${FILESIZE} /mnt/sda1 -Q ${OUTDIR}/sda1/qfa
>
> Performance stats reported:
>
> DUMP: 127629312 blocks (124638.00MB) on 128 volume(s)
> DUMP: finished in 19082 seconds, throughput 6688 kBytes/sec
>
> That's about 5.25 hours.
>
> I tried to add -j2 (compression) to the dump command line but this
> kicks the expected completion time to over 48 hours, which suggests
> that the network buffers are starving. Is there some way to avoid
> this? I thought the megabyte block size would be sufficient but it's
> not working when compression is enabled.

There are 3 compression methods. You used:

-j ==> bzip2: best compression but extremely slow

There are also:

-z ==> gzip: good compression and quite a bit faster
-y ==> lzo: moderate compression (still a factor of 2 in most cases)
       but extremely fast

Depending on your hardware you can expect a lot more than 10 MB/sec. If
you have lzop on your machine, you can test the speed. Otherwise the
sources are free.

--
Helmut Jarausch
Lehrstuhl fuer Numerische Mathematik
RWTH - Aachen University
D 52056 Aachen, Germany

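Helmut's suggestion to benchmark lzop can be done without involving
dump at all; a rough sketch (the sample size and source partition are
illustrative):

    # Grab a 500 MB sample from the partition, then time the candidate
    # compressors on it to see which one can keep the pipe fed.
    dd if=/dev/sda1 of=/tmp/sample bs=1M count=500
    time lzop -c /tmp/sample > /dev/null
    time gzip -2 -c /tmp/sample > /dev/null
    time bzip2 -2 -c /tmp/sample > /dev/null
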
From: Kenneth P. <sh...@se...> - 2005-04-08 02:48:08

Using dump-0.4b39-1.FC2 with the EA patch.

During "restore -C" on a large (128 GB) filesystem I'm seeing "error in
EA block" perhaps 100 times. Do I need to worry? Is there a way to get
more details about this?

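Pending the fix Stelian points to in his reply above, one way to get
more context is to rerun the compare verbosely and keep the output (a
sketch; -v is restore's ordinary verbose flag, and the path is
illustrative):

    # -v names each file as it is compared, so the 'error in EA block'
    # messages land next to the files that trigger them.
    restore -C -v -b 1024 -M -f /mnt/Backup/sda1/dump > /tmp/compare.log 2>&1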