From: Andrea A. <an...@su...> - 2000-12-19 16:28:35
On Tue, Dec 19, 2000 at 05:04:21PM +0100, Stelian Pop wrote:
> On Tue, Dec 19, 2000 at 04:32:09PM +0100, Andrea Arcangeli wrote:
>
> > On Tue, Dec 19, 2000 at 07:17:34AM -0800, ty...@va... wrote:
> > > If your application is known to be 64-bit clean, and won't get surprised
> > > by large files, it should use the ext2fs_block_iterate2 interface.
> > > Hence, a largefile-knowledgeable version of dump should not require a
> > > special version of e2fsprogs/libext2fs. The ext2fs_block_iterate2
> > > interface has been around since version 1.12.
> >
> > Great. So in short we don't need to touch e2fsprogs to fix dump ;))
>
> So, if I got all this right:
> - we don't touch e2fsprogs
> - your patch for dump remains correct
> - we use ext2fs_block_iterate2 instead of ext2fs_block_iterate

yes.

>   (probably needs some little cosmetic fixes here in the
>   callback function since the block number can be > 2^32)
> - we forget about the -D_FILE_OFFSET_BITS=64, in both dump
>   and e2fsprogs

No! We forget about -D_FILE_OFFSET_BITS=64 _only_ in e2fsprogs.

> With all this, we get one dump binary which will be able to deal with
> large files on both LFS and non-LFS systems (LFS system accessed from
> a non-LFS system).

No, we simply can't generate largefiles on a non-LFS system during restore.
Dump could even work without largefile support from glibc, but not restore
as far as I can tell. We have to restore those files through the fs, not via
the blockdevice through e2fsprogs.

> Of course, restore will truncate the result if run on non-LFS systems,
> but that's not very important IMHO...

And that's why we need to use glibc LFS support on the LFS systems, so we
can actually restore the largefiles ;)

> Andrea, could you make those changes to your patch and test on your
> test files ?

The only change is "not touch e2fsprogs" and use ext2fs_block_iterate2
instead of ext2fs_block_iterate, and that's what I'm doing right now.

Andrea
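
The callback change Andrea mentions is small: ext2fs_block_iterate2() hands
its callback an e2_blkcnt_t (a 64-bit logical block count) where the older
ext2fs_block_iterate() passed a plain int. A minimal sketch of such a
callback, assuming the libext2fs declarations as shipped in later releases
(exact prototypes varied slightly over time; the helper names here are made
up for illustration):

#include <ext2fs/ext2fs.h>

/* 64-bit-clean callback: blockcnt is an e2_blkcnt_t (long long), not the
 * int that the old ext2fs_block_iterate() callback received. */
static int count_data_block(ext2_filsys fs, blk_t *blocknr,
                            e2_blkcnt_t blockcnt,
                            blk_t ref_blk, int ref_offset, void *priv_data)
{
	unsigned long long *ndata = priv_data;

	(void) fs; (void) ref_blk; (void) ref_offset;
	if (blockcnt >= 0 && *blocknr != 0)	/* negative = indirect blocks */
		(*ndata)++;
	return 0;				/* keep iterating */
}

static errcode_t count_data_blocks(ext2_filsys fs, ext2_ino_t ino,
                                   unsigned long long *ndata)
{
	*ndata = 0;
	return ext2fs_block_iterate2(fs, ino, 0, NULL, count_data_block, ndata);
}
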
From: Stelian P. <ste...@al...> - 2000-12-19 16:13:18
On Tue, Dec 19, 2000 at 05:04:04PM +0100, Andrea Arcangeli wrote:
> On Tue, Dec 19, 2000 at 07:39:40AM -0800, ty...@va... wrote:
> > If you do things right, you don't need glibc's LFS support. This is
>
> We strictly need some kind of glibc LFS support to make restore work
> with largefiles. Think how to restore a file with only an 8G hole in it
> (not one single data block allocated). We cannot create the hole with
> lseek+write, we need truncate64.

Dump and restore are completely different.

Dump sees the filesystem only through the e2fs libraries. So Ted's comments
apply.

Restore however uses the glibc functions directly, so we must indeed use
LFS here. I don't know what is the best way to do that however (explicit
support vs. implicit support)...

Two different patches...

Stelian.

--
Stelian Pop <ste...@al...>
|------------- Ingénieur Informatique Libre -------------|
| Alcôve - http://www.alcove.fr - Tel: +33 1 49 22 68 00 |
|----------- Alcôve, l'informatique est libre -----------|
From: Stelian P. <ste...@al...> - 2000-12-19 16:04:48
On Tue, Dec 19, 2000 at 04:32:09PM +0100, Andrea Arcangeli wrote:
> On Tue, Dec 19, 2000 at 07:17:34AM -0800, ty...@va... wrote:
> > If your application is known to be 64-bit clean, and won't get surprised
> > by large files, it should use the ext2fs_block_iterate2 interface.
> > Hence, a largefile-knowledgeable version of dump should not require a
> > special version of e2fsprogs/libext2fs. The ext2fs_block_iterate2
> > interface has been around since version 1.12.
>
> Great. So in short we don't need to touch e2fsprogs to fix dump ;))

So, if I got all this right:
- we don't touch e2fsprogs
- your patch for dump remains correct
- we use ext2fs_block_iterate2 instead of ext2fs_block_iterate
  (probably needs some little cosmetic fixes here in the
  callback function since the block number can be > 2^32)
- we forget about the -D_FILE_OFFSET_BITS=64, in both dump
  and e2fsprogs

With all this, we get one dump binary which will be able to deal with
large files on both LFS and non-LFS systems (LFS system accessed from
a non-LFS system). Even better than we thought... :)

Of course, restore will truncate the result if run on non-LFS systems,
but that's not very important IMHO...

Andrea, could you make those changes to your patch and test on your
test files ?

Thanks.

Stelian.

--
Stelian Pop <ste...@al...>
|------------- Ingénieur Informatique Libre -------------|
| Alcôve - http://www.alcove.fr - Tel: +33 1 49 22 68 00 |
|----------- Alcôve, l'informatique est libre -----------|
From: Andrea A. <an...@su...> - 2000-12-19 16:04:32
On Tue, Dec 19, 2000 at 07:39:40AM -0800, ty...@va... wrote:
> If you do things right, you don't need glibc's LFS support. This is

We strictly need some kind of glibc LFS support to make restore work with
largefiles. Think how to restore a file with only an 8G hole in it (not one
single data block allocated). We cannot create the hole with lseek+write,
we need truncate64.

Andrea
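
To illustrate Andrea's point about restore, a hedged sketch of how a
hole-only largefile could be created once glibc's explicit 64-bit calls
(open64/ftruncate64, or plain open/ftruncate under -D_FILE_OFFSET_BITS=64)
are available; the function name and the 8G figure are made up for
illustration:

#define _LARGEFILE64_SOURCE
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

int make_sparse_8g(const char *path)
{
	off64_t size = (off64_t) 8 * 1024 * 1024 * 1024;	/* 8G */
	int fd = open64(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);

	if (fd < 0)
		return -1;
	/* ftruncate64() extends the file without allocating any data
	 * blocks, which is exactly what restoring a hole-only file needs;
	 * a 32-bit off_t could never reach this offset. */
	if (ftruncate64(fd, size) < 0) {
		close(fd);
		return -1;
	}
	return close(fd);
}
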
From: <ty...@va...> - 2000-12-19 15:40:06
   Date: Tue, 19 Dec 2000 12:34:13 +0100
   From: Andrea Arcangeli <an...@su...>

   On Tue, Dec 19, 2000 at 11:58:42AM +0100, Stelian Pop wrote:
   > Does it hurt to use this patch on a non-LFS system ? Even with
   > this patch, it will continue to run correctly on a non-LFS system
   > I suppose...

   Strictly speaking there's the case of a non-LFS system with a largefile
   in it. restore won't work (think the truncate for the filesize). I think
   the `dump` stage could even work for a non-LFS system as e2fsprogs
   probably only accesses the blockdevice via read/write/lseek64.

The question is whether a binary compiled with LFS support will work on a
non-LFS system. Also note that if you have a LFS filesystem mounted on a
non-LFS system, ideally something sane should happen. (Note that this is
true for e2fsprogs/libext2fs; we use the llseek() system call on devices,
which exists even on non-LFS systems.)

   > -D_FILE_OFFSET_BITS=64 run correctly on a Linux 2.0/libc5 system
   > for example ?

   Certainly not, only glibc provides LFS support. glibc will try however
   to work also on a non-LFS kernel correctly (but you need the same glibc
   major revision and that's of course the usual requirement regardless of
   LFS issues given it's dynamically linked to it).

If you do things right, you don't need glibc's LFS support. This is true
for e2fsprogs, and if you use the unix_io interfaces, you should be fine.

						- Ted
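
For the dump-side path Ted describes, a hedged sketch of opening the block
device through libext2fs' unix_io manager, which hides the 64-bit seeking
from the application; ext2fs_open() and unix_io_manager are the real
libext2fs names, the wrapper is made up:

#include <ext2fs/ext2fs.h>

/* Open the filesystem read-only through unix_io.  The io_manager does its
 * own 64-bit seeks on the device, so the application itself does not need
 * -D_FILE_OFFSET_BITS=64 on this path. */
static errcode_t open_fs_readonly(const char *device, ext2_filsys *ret_fs)
{
	return ext2fs_open(device, 0 /* read-only */, 0, 0,
			   unix_io_manager, ret_fs);
}
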
From: Andrea A. <an...@su...> - 2000-12-19 15:32:43
On Tue, Dec 19, 2000 at 07:17:34AM -0800, ty...@va... wrote:
> If your application is known to be 64-bit clean, and won't get surprised
> by large files, it should use the ext2fs_block_iterate2 interface.
> Hence, a largefile-knowledgeable version of dump should not require a
> special version of e2fsprogs/libext2fs. The ext2fs_block_iterate2
> interface has been around since version 1.12.

Great. So in short we don't need to touch e2fsprogs to fix dump ;))

Andrea
From: <ty...@va...> - 2000-12-19 15:17:44
   Date: Tue, 19 Dec 2000 11:58:42 +0100
   From: Stelian Pop <ste...@al...>

   > Since I work only on purely userspace and kernel LFS systems
   > I didn't care to make it dynamic with an
   > autoconf check for LFS, but it should be easy to add for you.

   Does it hurt to use this patch on a non-LFS system ? Even with
   this patch, it will continue to run correctly on a non-LFS system
   I suppose...

I *really* don't like usage of -D_FILE_OFFSET_BITS=64. This kind of libc
magic scares me, and there's the compatibility issue (especially with
regards to future versions of glibc, since the glibc developers have proven
themselves less than trustworthy about maintaining backwards compatibility
in my eyes). In fact, it's not necessary with e2fsprogs; it already
supports 64 bit i/o, and it's abstracted away behind the unix_io and llseek
interfaces in libext2fs.

   > And this is the patch against e2fsprogs-1.19.

   Ted, will you include this in e2fsprogs-1.20 ? If yes, when do you plan
   to release the new version ?

No, I won't include the patch, because the patch is bogus. There's a reason
why ext2fs_block_iterate passes BLOCK_FLAG_NO_LARGE to
ext2fs_block_iterate2; that's to avoid surprising old binaries which were
linked against the libraries, and weren't expecting to be able to deal with
large filesystems. In addition, the interface for ext2fs_block_iterate uses
a 32 bit value for the block count variable, which isn't enough for files
which have more than 2**32 blocks. So ext2fs_block_iterate() will return
EXT2_ET_FILE_TOO_BIG for largefiles, just as the LFS specification requires
that open/read/write return EFBIG unless the file was opened using the
O_LARGE flag (or the open64/read64/write64 interfaces are used).

I treat backwards compatibility very seriously, and that means not making
changes that might break old binaries.

If your application is known to be 64-bit clean, and won't get surprised by
large files, it should use the ext2fs_block_iterate2 interface. Hence, a
largefile-knowledgeable version of dump should not require a special
version of e2fsprogs/libext2fs. The ext2fs_block_iterate2 interface has
been around since version 1.12.

						- Ted
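
Ted's point about old binaries can be made concrete: a caller that stays
with the old interface simply gets EXT2_ET_FILE_TOO_BIG back for a
largefile and can decide what to do about it. A hedged sketch under that
assumption (the fall-back policy shown is purely illustrative, not what
dump actually does):

#include <ext2fs/ext2fs.h>

/* Old-style callback: blockcnt is a plain int. */
static int old_cb(ext2_filsys fs, blk_t *blocknr, int blockcnt, void *priv)
{
	(void) fs; (void) blocknr; (void) blockcnt; (void) priv;
	return 0;
}

/* 64-bit-clean callback: blockcnt is an e2_blkcnt_t. */
static int new_cb(ext2_filsys fs, blk_t *blocknr, e2_blkcnt_t blockcnt,
                  blk_t ref_blk, int ref_offset, void *priv)
{
	(void) fs; (void) blocknr; (void) blockcnt;
	(void) ref_blk; (void) ref_offset; (void) priv;
	return 0;
}

static errcode_t walk_inode(ext2_filsys fs, ext2_ino_t ino)
{
	errcode_t err = ext2fs_block_iterate(fs, ino, 0, NULL, old_cb, NULL);

	/* Old binaries never silently mis-handle a largefile: the library
	 * refuses with EXT2_ET_FILE_TOO_BIG instead.  A 64-bit-clean caller
	 * can then use the newer interface. */
	if (err == EXT2_ET_FILE_TOO_BIG)
		err = ext2fs_block_iterate2(fs, ino, 0, NULL, new_cb, NULL);
	return err;
}
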
From: Andrea A. <an...@su...> - 2000-12-19 14:41:18
On Tue, Dec 19, 2000 at 03:25:15PM +0100, Stelian Pop wrote:
> Anyway, I'll apply your patch, and I'll make an option to the
> configure (--enable-largefile) which will trigger the addition
> of -D_FILE_OFFSET_BITS=64 to the compile line.

I'd prefer it if it was autodetected. If glibc supports LFS you could
enable it automatically if you want.

> Sometime in the future, when all distributions will be 2.4 based,

Some distributions have LFS features by default even if 2.2.x based. So
some of the userbase would probably like to have it enabled by default,
but ok, it's not a big deal to add --enable-largefile by hand :).

Andrea
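
The autodetection Andrea asks for boils down to a small compile test: build
a throwaway program with -D_FILE_OFFSET_BITS=64 and see whether off_t
became 64 bits wide. A hedged sketch of such a test program (later autoconf
releases package this logic as AC_SYS_LARGEFILE; at this time it had to be
hand-rolled):

#include <sys/types.h>

/* Compile-time check: the typedef is invalid (negative array size) unless
 * off_t is at least 64 bits wide, so "cc -D_FILE_OFFSET_BITS=64 conftest.c"
 * succeeds only when the libc offers LFS. */
typedef char off_t_is_wide_enough[sizeof(off_t) >= 8 ? 1 : -1];

int main(void)
{
	return 0;
}
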
From: Stelian P. <ste...@al...> - 2000-12-19 14:25:26
On Tue, Dec 19, 2000 at 12:34:13PM +0100, Andrea Arcangeli wrote:
> > Does it hurt to use this patch on a non-LFS system ? Even with
> > this patch, it will continue to run correctly on a non-LFS system
> > I suppose...
>
> Strictly speaking there's the case of a non-LFS system with a largefile
> in it. restore won't work (think the truncate for the filesize). I think
> the `dump` stage could even work for a non-LFS system as e2fsprogs
> probably only accesses the blockdevice via read/write/lseek64.
>
> But you can't do much about that case (other than maybe print a warning,
> and I was too lazy to care about that ;).
>
> So I'd say yes.

Anyway, I'll apply your patch, and I'll make an option to the configure
(--enable-largefile) which will trigger the addition of
-D_FILE_OFFSET_BITS=64 to the compile line.

This configure flag will default to no for now. It will be up to each
distribution to enable or disable it depending on whether they build a LFS
system or not. Sometime in the future, when all distributions will be 2.4
based, and all systems will have LFS features by default, I'll transform
the flag to default to yes.

> Stelian, do you have a regression test suite for dump?

I should have, but... I don't. Generally I do several dump/restore -C runs
in order to test, but it isn't an automatic process...

> Does my patch work flawlessly for you too?

It works for me, but I don't have a LFS system, so I didn't really test the
new features, just non-regression for 'small' files.

> I tested it on a 5G largefile without holes and a 8G largefile with holes
> and a few small files. It worked fine for me but as said I haven't tested
> it very extensively (nor did I test that e2fsck keeps working, but that
> should really keep working ;). If you run a regression test suite against
> it please keep me posted (I need some extensive testing on it). Thanks!

As far as I can tell, your changes are valid and are exactly what I had
planned to do some time ago but never had the time to. But of course, this
is only a source code level test, not an executable level test :)

Stelian.

--
Stelian Pop <ste...@al...>
|------------- Ingénieur Informatique Libre -------------|
| Alcôve - http://www.alcove.fr - Tel: +33 1 49 22 68 00 |
|----------- Alcôve, l'informatique est libre -----------|
From: Andrea A. <an...@su...> - 2000-12-19 11:34:39
On Tue, Dec 19, 2000 at 11:58:42AM +0100, Stelian Pop wrote:
> Does it hurt to use this patch on a non-LFS system ? Even with
> this patch, it will continue to run correctly on a non-LFS system
> I suppose...

Strictly speaking there's the case of a non-LFS system with a largefile in
it. restore won't work (think the truncate for the filesize). I think the
`dump` stage could even work for a non-LFS system as e2fsprogs probably
only accesses the blockdevice via read/write/lseek64.

But you can't do much about that case (other than maybe print a warning,
and I was too lazy to care about that ;).

So I'd say yes.

> There must be some configure option in order to enable large file
> support, but only in order to pass -D_FILE_OFFSET_BITS=64 on the
> compile line ? No dynamic compile-time checking for LFS system.
> Am I wrong here ?

See above, I'd say yes.

> What about binary compatibility ? Will a binary built with

It will be binary compatible, say, with glibc 2.2.

> -D_FILE_OFFSET_BITS=64 run correctly on a Linux 2.0/libc5 system
> for example ?

Certainly not, only glibc provides LFS support. glibc will try however to
work also on a non-LFS kernel correctly (but you need the same glibc major
revision and that's of course the usual requirement regardless of LFS
issues given it's dynamically linked to it).

Stelian, do you have a regression test suite for dump? Does my patch work
flawlessly for you too? I tested it on a 5G largefile without holes and a
8G largefile with holes and a few small files. It worked fine for me but as
said I haven't tested it very extensively (nor did I test that e2fsck keeps
working, but that should really keep working ;). If you run a regression
test suite against it please keep me posted (I need some extensive testing
on it). Thanks!

Andrea
From: Stelian P. <ste...@al...> - 2000-12-19 10:59:07
On Mon, Dec 18, 2000 at 04:20:02PM +0100, Andrea Arcangeli wrote:
> This patch makes dump work with largefiles.

Thanks.

> Since I work only on purely userspace and kernel LFS systems
> I didn't care to make it dynamic with an
> autoconf check for LFS, but it should be easy to add for you.

Does it hurt to use this patch on a non-LFS system ? Even with this patch,
it will continue to run correctly on a non-LFS system I suppose...

There must be some configure option in order to enable large file support,
but only in order to pass -D_FILE_OFFSET_BITS=64 on the compile line ? No
dynamic compile-time checking for LFS system. Am I wrong here ?

What about binary compatibility ? Will a binary built with
-D_FILE_OFFSET_BITS=64 run correctly on a Linux 2.0/libc5 system for
example ?

> And this is the patch against e2fsprogs-1.19.

Ted, will you include this in e2fsprogs-1.20 ? If yes, when do you plan to
release the new version ?

Stelian.

--
Stelian Pop <ste...@al...>
|------------- Ingénieur Informatique Libre -------------|
| Alcôve - http://www.alcove.fr - Tel: +33 1 49 22 68 00 |
|----------- Alcôve, l'informatique est libre -----------|
From: Andrea A. <an...@su...> - 2000-12-18 15:20:19
Dump is currently not able to work with largefiles (this is not LFS news,
because 64bit archs in 2.2.x can generate largefiles just now). This patch
makes dump work with largefiles.

Since I work only on purely userspace and kernel LFS systems I didn't care
to make it dynamic with an autoconf check for LFS, but it should be easy to
add for you.

This is the patch against dump CVS:

Index: dump//compat/include/bsdcompat.h
===================================================================
RCS file: /cvsroot/dump/dump/compat/include/bsdcompat.h,v
retrieving revision 1.12
diff -u -r1.12 bsdcompat.h
--- dump//compat/include/bsdcompat.h	2000/12/04 15:43:16	1.12
+++ dump//compat/include/bsdcompat.h	2000/12/18 02:35:11
@@ -102,6 +102,7 @@
 #define di_rdev		di_db[0]
 /* #define di_ouid	di_uid */
 /* #define di_ogid	di_gid */
+#define di_size_high	di_dir_acl
 
 /*
  * This is the ext2_dir_entry structure but the fields have been renamed
Index: dump//dump/traverse.c
===================================================================
RCS file: /cvsroot/dump/dump/dump/traverse.c,v
retrieving revision 1.24
diff -u -r1.24 traverse.c
--- dump//dump/traverse.c	2000/12/05 16:57:38	1.24
+++ dump//dump/traverse.c	2000/12/18 02:35:13
@@ -149,6 +149,7 @@
 blockest(struct dinode const *dp)
 {
 	long	blkest, sizeest;
+	u_quad_t i_size;
 
 	/*
 	 * dp->di_size is the size of the file in bytes.
@@ -164,19 +165,20 @@
 	 * dump blocks (sizeest vs. blkest in the indirect block
 	 * calculation).
 	 */
-	blkest = howmany(dbtob(dp->di_blocks), TP_BSIZE);
-	sizeest = howmany(dp->di_size, TP_BSIZE);
+	blkest = howmany((u_quad_t)dp->di_blocks*fs->blocksize, TP_BSIZE);
+	i_size = dp->di_size + ((u_quad_t) dp->di_size_high << 32);
+	sizeest = howmany(i_size, TP_BSIZE);
 	if (blkest > sizeest)
 		blkest = sizeest;
 #ifdef	__linux__
-	if (dp->di_size > fs->blocksize * NDADDR) {
+	if (i_size > fs->blocksize * NDADDR) {
 		/* calculate the number of indirect blocks on the dump tape */
 		blkest += howmany(sizeest -
 			NDADDR * fs->blocksize / TP_BSIZE,
 			NINDIR(sblock) * EXT2_FRAGS_PER_BLOCK(fs->super));
 	}
 #else
-	if (dp->di_size > sblock->fs_bsize * NDADDR) {
+	if (i_size > sblock->fs_bsize * NDADDR) {
 		/* calculate the number of indirect blocks on the dump tape */
 		blkest += howmany(sizeest -
 			NDADDR * sblock->fs_bsize / TP_BSIZE,
@@ -715,7 +718,7 @@
 void
 dumpino(struct dinode *dp, ino_t ino)
 {
-	int cnt;
+	unsigned long cnt;
 	fsizeT size;
 	char buf[TP_BSIZE];
 	struct old_bsd_inode obi;
@@ -724,6 +727,7 @@
 #else
 	int ind_level;
 #endif
+	u_quad_t i_size = dp->di_size + ((u_quad_t) dp->di_size_high << 32);
 
 	if (newtape) {
 		newtape = 0;
@@ -735,7 +739,7 @@
 		obi.di_mode = dp->di_mode;
 		obi.di_uid = dp->di_uid;
 		obi.di_gid = dp->di_gid;
-		obi.di_qsize.v = (u_quad_t)dp->di_size;
+		obi.di_qsize.v = i_size;
 		obi.di_atime = dp->di_atime;
 		obi.di_mtime = dp->di_mtime;
 		obi.di_ctime = dp->di_ctime;
@@ -744,7 +748,7 @@
 	obi.di_flags = dp->di_flags;
 	obi.di_gen = dp->di_gen;
 	memmove(&obi.di_db, &dp->di_db, (NDADDR + NIADDR) * sizeof(daddr_t));
-	if (dp->di_file_acl || dp->di_dir_acl)
+	if (dp->di_file_acl)
 		warn("ACLs in inode #%ld won't be dumped", (long)ino);
 	memmove(&spcl.c_dinode, &obi, sizeof(obi));
 #else	/* __linux__ */
@@ -771,8 +775,8 @@
 	 * Check for short symbolic link.
 	 */
 #ifdef	__linux__
-	if (dp->di_size > 0 &&
-	    dp->di_size < EXT2_N_BLOCKS * sizeof (daddr_t)) {
+	if (i_size > 0 &&
+	    i_size < EXT2_N_BLOCKS * sizeof (daddr_t)) {
 		spcl.c_addr[0] = 1;
 		spcl.c_count = 1;
 		writeheader(ino);
@@ -800,7 +804,7 @@
 	case S_IFDIR:
 #endif
 	case S_IFREG:
-		if (dp->di_size > 0)
+		if (i_size)
 			break;
 		/* fall through */
 
@@ -815,16 +819,16 @@
 		msg("Warning: undefined file type 0%o\n", dp->di_mode & IFMT);
 		return;
 	}
-	if (dp->di_size > NDADDR * sblock->fs_bsize)
+	if (i_size > NDADDR * sblock->fs_bsize)
 #ifdef	__linux__
 		cnt = NDADDR * EXT2_FRAGS_PER_BLOCK(fs->super);
 #else
 		cnt = NDADDR * sblock->fs_frag;
 #endif
 	else
-		cnt = howmany(dp->di_size, sblock->fs_fsize);
+		cnt = howmany(i_size, sblock->fs_fsize);
 	blksout(&dp->di_db[0], cnt, ino);
-	if ((size = dp->di_size - NDADDR * sblock->fs_bsize) <= 0)
+	if ((quad_t) (size = i_size - NDADDR * sblock->fs_bsize) <= 0)
 		return;
 #ifdef	__linux__
 	bc.max = NINDIR(sblock) * EXT2_FRAGS_PER_BLOCK(fs->super);
@@ -953,7 +957,7 @@
 	obi.di_flags = dp->di_flags;
 	obi.di_gen = dp->di_gen;
 	memmove(&obi.di_db, &dp->di_db, (NDADDR + NIADDR) * sizeof(daddr_t));
-	if (dp->di_file_acl || dp->di_dir_acl)
+	if (dp->di_file_acl)
 		warn("ACLs in inode #%ld won't be dumped", (long)ino);
 	memmove(&spcl.c_dinode, &obi, sizeof(obi));
 #else	/* __linux__ */

And this is the patch against e2fsprogs-1.19:

diff -urN e2fsprogs-1.19/lib/ext2fs/block.c e2fsprogs-1.19-lfs/lib/ext2fs/block.c
--- e2fsprogs-1.19/lib/ext2fs/block.c	Thu May 18 15:20:33 2000
+++ e2fsprogs-1.19-lfs/lib/ext2fs/block.c	Mon Dec 18 01:42:19 2000
@@ -468,7 +468,7 @@
 	xl.real_private = priv_data;
 	xl.func = func;
 
-	return ext2fs_block_iterate2(fs, ino, BLOCK_FLAG_NO_LARGE | flags,
+	return ext2fs_block_iterate2(fs, ino, flags,
				     block_buf, xlate_func, &xl);
 }
 

Both e2fsprogs and dump need to be ./configured with:

	--with-ccopts=-D_FILE_OFFSET_BITS=64

Then dump seems to be happy about largefiles... (it's not extremely well
tested though). Comments are welcome.

Andrea
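
The core trick in the patch above is that ext2 stores the high 32 bits of a
regular file's size in the inode field otherwise used for the directory
ACL, which is why bsdcompat.h aliases di_size_high to di_dir_acl and why
di_dir_acl is dropped from the ACL warning. A hedged restatement in
isolation, with simplified types:

#include <stdint.h>

/* On-disk ext2 keeps i_size_high in the slot named i_dir_acl; for regular
 * files it holds bits 32..63 of the file size, for directories it is
 * really the ACL block and the size stays 32-bit. */
static inline uint64_t ext2_inode_size(uint32_t di_size,
                                       uint32_t di_size_high)
{
	return (uint64_t) di_size | ((uint64_t) di_size_high << 32);
}
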
From: Rob C. <ce...@im...> - 2000-10-09 15:54:19
> Maybe you should contact him and discuss this matter. His name is
> Bdale Garbee and he can be reached at <bdale AT gag.com>.

He automatically submits the source to an auto-compilation queue and
several architectures are packaged. No need to do a sparc binary for this.
I'll just have to find time now to focus on the code.

Rob
From: Stelian P. <po...@cy...> - 2000-10-08 15:18:08
On Fri, Oct 06, 2000 at 01:56:07PM -0400, Rob Cermak wrote:
> The RH binaries on the njlug.rutgers.edu website will be the last ones
> for RedHat versions. From this point on, I'll be making sparc binaries
> for the Debian/Sparc distribution when new versions of dump surface.

Just make sure you don't duplicate the work of the Debian dump maintainer,
who is packaging the new versions of dump rather quickly (just a couple of
days after the release of a new version of dump).

Maybe you should contact him and discuss this matter. His name is Bdale
Garbee and he can be reached at <bdale AT gag.com>.

Stelian.

--
  /\
 /  \     Stelian Pop
/ DS \    Email: po...@cy...
\____/
From: Michael J. T. <mj...@tl...> - 2000-10-07 01:23:27
Hello, folks! I'm new to this list... :)

I use an EA/ACL (Extended Attributes/Access Control List) implementation
for Linux (acl.bestbits.at). And there is currently _no_ tool available for
backing up/restoring those EAs. Support in dump/restore, among some other
tools (tar/pax/cpio), is essential to live with those useful features.

Currently, EAs (and ACLs that live on top of EAs) are implemented for Linux
only on ext2fs (SGI XFS will also have 'em). I want this support in
dump/restore. And I want to contribute and/or help in the development of
this.

So, the question: are there already some thoughts/efforts on supporting EAs
in dump? Can someone comment on a format for those attributes in dump
archives (to be portable, as, e.g., Solaris, HP/UX and Irix all have ACLs
and/or EAs already)?

(This question about format is rather interesting. Solaris supports only
ACLs, and has a defined/used format for them in dump. Irix should support
EAs and ACLs on top of them, and Linux follows it. So, if we choose to have
a separate format specially for ACLs, then we should also add a format for
general EAs; but if we leave ACLs aside and support EAs only, we will not
be compatible with Solaris.)

Also, mostly offtopic, but still: can someone offer formats for other tools
for this? Or at least give some pointers? (I have not contacted the
pax/tar/cpio maintainers yet; there is also the Austin group,
http://www.opengroup.org/austin/, that works on materials related to
POSIX.1e, but their drafts are not available to the public.)

Comments?

Regards,
Michael.
From: Rob C. <ce...@im...> - 2000-10-06 18:05:27
The RH binaries on the njlug.rutgers.edu website will be the last ones for
RedHat versions. From this point on, I'll be making sparc binaries for the
Debian/Sparc distribution when new versions of dump surface.
From: Stelian P. <ste...@al...> - 2000-09-11 08:55:02
On Sun, Sep 10, 2000 at 07:53:45PM -0400, Rob Cermak wrote:
> Not sure how long it will be before the change works itself into the
> sparc tree. Since it's such a small change I might just put it in a
> SPARC.FAQ file or in the TODO list for monitoring.

I've already changed this in the CVS because this was causing problems
with libc5 too.

...
	struct sigaction sa;

	memset(&sa, 0, sizeof sa);
	sigemptyset(&sa.sa_mask);
	sa.sa_handler = dumpabort;
...

does the job, and should do it on Sparc/Linux also...

> NOTE: to build the programs successfully from SRPMS you need the latest
> greatest rpm utilities.

I know, there was a discussion about this on dump-devel (or was it
dump-users) some time ago...

Stelian.

--
Stelian Pop <ste...@al...>
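
A hedged sketch of the complete, portable registration this initialization
belongs to; on_interrupt stands in for dump's dumpabort() and SIGINT is
only an example signal:

#include <signal.h>
#include <string.h>

/* Hypothetical handler standing in for dumpabort(). */
static void on_interrupt(int sig)
{
	(void) sig;	/* clean up and bail out */
}

static int install_handler(void)
{
	struct sigaction sa;

	/* Zeroing the whole struct avoids touching sa_sigaction, which is
	 * not present in every architecture's struct sigaction. */
	memset(&sa, 0, sizeof sa);
	sigemptyset(&sa.sa_mask);
	sa.sa_flags = 0;
	sa.sa_handler = on_interrupt;
	return sigaction(SIGINT, &sa, NULL);
}
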
From: Rob C. <ce...@im...> - 2000-09-11 00:02:47
Not sure how long it will be before the change works itself into the sparc
tree. Since it's such a small change I might just put it in a SPARC.FAQ
file or in the TODO list for monitoring.

Binaries to appear Monday in the same location.

Here is the small patch for the sparc linux version (dump/tape.c):

--- tape.c.orig	Sun Sep 10 18:27:40 2000
+++ tape.c	Sun Sep 10 15:44:45 2000
@@ -895,7 +895,11 @@
 	master = getpid();
 	{
 		struct sigaction sa;
+/* Not defined for SPARC Linux - yet; might be better as a
+   check in configure HAVE_SA_SIGACTION */
+#if !defined(__linux__) && !defined(__sparc__)
 		sa.sa_sigaction = NULL;
+#endif
 		sigemptyset(&sa.sa_mask);
 		sa.sa_flags = 0;
 		sa.sa_handler = dumpabort;

NOTE: to build the programs successfully from SRPMS you need the latest
greatest rpm utilities.
From: Rob C. <ce...@im...> - 2000-09-10 19:52:04
Hiya,

I'm working on the next binary release (quite overdue). Looks like the i386
signal.h now differs from the latest asm-sparc/signal.h. [At least through
2.2.16-3 (rpm kernel)/e2fsprogs-devel-1.18-5]

[If you upgrade the kernel with headers and compile from source, you will
have to fix the /usr/include/asm link to point to the asm-sparc directory.]

tape.c: In function `enslave':
tape.c:898: structure has no member named `sa_sigaction'
make[1]: *** [tape.o] Error 1

I'll compare code between b16 and b19 to see what changed and submit a
patch for the sparc.

Rob
From: Stelian P. <ste...@al...> - 2000-09-04 12:23:31
On Mon, Sep 04, 2000 at 11:44:53AM +0200, Juergen Vollmer wrote:
>
> is there a version of dump/restore which is able to deal with the Reiser
> filesystem (reiserfs)?

No.

> If not has anybody started to develop such a thing?

I thought a bit about that, but never really started coding.

Stelian.

--
Stelian Pop <ste...@al...>
From: Juergen V. <vo...@co...> - 2000-09-04 12:06:06
Hi everybody,

is there a version of dump/restore which is able to deal with the Reiser
filesystem (reiserfs)? If not, has anybody started to develop such a thing?

With best regards,
Jürgen

--
Dr.rer.nat. Juergen Vollmer, Viktoriastrasse 15, D-76133 Karlsruhe
office: ju...@in..., vo...@co...
www.informatik-vollmer.de
Tel: +49(721) 9204871  Fax: +49(721) 24874
private: Jue...@ac...
From: Stelian P. <ste...@al...> - 2000-08-23 10:35:40
On Tue, Aug 22, 2000 at 03:46:00AM +0200, Christer Ekholm wrote:
>
> $ restore ivf /dev/cdrom
>
> I would also like to be able to put more than one of my smaller
> filesystems on the same CDRW, using multisession. That works fine, but
> then restore must know how to reach other sessions on the CD, similar
> to the -s option for tapes.

I have no idea about how to use a multi-session CD... How does one read,
let's say, the third session on a CD ?

> Is it possible to add support for this?

Maybe... I really don't know :)

Stelian.

--
Stelian Pop <ste...@al...>
From: Christer E. <ch...@ch...> - 2000-08-22 01:46:04
Hi!

I am using cdrecord to dump my filesystems to CDRW like this:

$ mknod /tmp/tape p
$ dump 0uBf 665000 /tmp/tape /

and in another shell:

$ cdrecord -v -v blank=fast fs=10m speed=2 dev=4,0 /tmp/tape

This way I can dump filesystems larger than what fits on a single CDRW,
and still be able to restore very easily with:

$ restore ivf /dev/cdrom

I would also like to be able to put more than one of my smaller
filesystems on the same CDRW, using multisession. That works fine, but
then restore must know how to reach other sessions on the CD, similar to
the -s option for tapes.

Is it possible to add support for this?
From: Kenneth P. <sh...@we...> - 2000-04-18 00:03:41
I diff'd your b16 spec file against Red Hat's b15 to see if Red Hat had
done any interesting customizations and found the following. Don't know if
you want to incorporate any of this.

9,10d8
< # XXX feeble attempt to force build with next version of e2fs
< BuildRequires: e2fsprogs >= 1.15, e2fsprogs-devel >= 1.15
97,98c95
< %doc CHANGES COPYRIGHT KNOWNBUGS MAINTAINERS README REPORTING-BUGS THANKS TODO
< %doc dump.lsm
---
> %doc CHANGES COPYRIGHT KNOWNBUGS MAINTAINERS README REPORTING-BUGS THANKS TODO dump.lsm
104,107c101,104
< %{_prefix}/man/man8/dump.*
< %{_prefix}/man/man8/rdump.*
< %{_prefix}/man/man8/restore.*
< %{_prefix}/man/man8/rrestore.*
---
> %{_prefix}/man/man8/dump.8
> %{_prefix}/man/man8/rdump.8
> %{_prefix}/man/man8/restore.8
> %{_prefix}/man/man8/rrestore.8
113c110
< %{_prefix}/man/man8/rmt.*
---
> %{_prefix}/man/man8/rmt.8

Ken
mailto:sh...@we... http://www.sewingwitch.com/ken/
http://www.harrybrowne2000.org/
From: Stelian P. <st...@ca...> - 2000-03-08 11:33:14
On Wed, 8 Mar 2000, Dejan Muhamedagic wrote:

> Hello,
>
> I wrote a small script to assist my Mac users to recover files from
> dumps. However, they are going wild when giving names to files (all
> kinds of funny stuff). I've found this to be almost impossible to handle
> with a shell script. So, here's a small update to restore(8) which
> extends its functionality so that a list of files to be extracted/listed
> comes from a named file (like the '-T' option for gnutar). I've chosen
> the '-X' option for this.

Thanks for your patch, I found it very useful. It has been included in the
CVS and will be part of the next release.

> stdin is not supported because I was not sure how to
> handle the following case
>
> # restore -tf - -X -

I agree. This should not be very important though.

Stelian.

--
Stelian Pop <sp...@ca...>      | Too many things happened today
Captimark                     | Too many words I don't wanna say
Paris, France                 | I wanna be cool but the heat's coming up
                              | I'm ready to kill 'cause enough is enough
PGP key available on request  | (Accept - "Up To The Limit")