From: PeterKorman <pk...@ei...> - 2003-01-13 17:06:03

On Mon, Jan 13, 2003 at 11:09:44AM +0100, Stelian Pop wrote:
> On Fri, Jan 10, 2003 at 01:48:06PM -0500, PeterKorman wrote:
> > So if you have 2 60G drives, and the first one
> > is 15% transient junk, then you can be pretty
> > safe doing compressed dumps to the second drive.
> > Big exclusion lists become pretty important because
> > there's LOTZ of small transient files. A fast
> > search is especially important if you have say
> > 500000 files in a L0 dump, of which you
> > want to exclude 170000.
>
> But very often (if not always) all this junk is located in
> only a few directories (~/.netscape/cache for example, etc).
>
> And you can use chattr to set the nodump attribute on these
> directories, and the attribute will be automatically
> inherited by all the files below in the hierarchy.

Doh! I didn't know about this.

> > Does anyone think the maintainer would be
> > interested in a patch like the one I describe?
>
> I'm not sure, I still think it would be overkill.

Even so, I had great fun making it work.

> However, if you come with a nice, little implementation of this
> I might still apply it.

Cool.

> > If so, where would the person who generates
> > the patch send it?
>
> Here, or use the Sourceforge patch system or you can even send it
> directly to me.

Easiest to send it here. I've attached it to this message.

> > How much time might pass between submission
> > of the patch and its inclusion in the
> > distribution assuming the patch is worth
> > a damn? Thanks.
>
> Well, if I like the patch it might get applied in the CVS instantly.
> It will then be in the next release (releases are scheduled depending
> on what features/bugs get added/fixed, but I try to get a new version
> out every 2 months or so).

Thanks for the feedback.

Cheers,
JPK

From: Stelian P. <ste...@fr...> - 2003-01-13 10:09:48

On Fri, Jan 10, 2003 at 01:48:06PM -0500, PeterKorman wrote:
> With reasonably small changes, a
> dump exclusion list of nearly unlimited size
> could be supported.
[...]
> So if you have 2 60G drives, and the first one
> is 15% transient junk, then you can be pretty
> safe doing compressed dumps to the second drive.
> Big exclusion lists become pretty important because
> there's LOTZ of small transient files. A fast
> search is especially important if you have say
> 500000 files in a L0 dump, of which you
> want to exclude 170000.

But very often (if not always) all this junk is located in
only a few directories (~/.netscape/cache for example, etc).

And you can use chattr to set the nodump attribute on these
directories, and the attribute will be automatically
inherited by all the files below in the hierarchy.

> My 3 questions are these:
>
> Does anyone think the maintainer would be
> interested in a patch like the one I describe?

I'm not sure, I still think it would be overkill.

However, if you come with a nice, little implementation of this
I might still apply it.

> If so, where would the person who generates
> the patch send it?

Here, or use the Sourceforge patch system or you can even send it
directly to me.

> How much time might pass between submission
> of the patch and its inclusion in the
> distribution assuming the patch is worth
> a damn? Thanks.

Well, if I like the patch it might get applied in the CVS instantly.
It will then be in the next release (releases are scheduled depending
on what features/bugs get added/fixed, but I try to get a new version
out every 2 months or so).

Stelian.
--
Stelian Pop <ste...@fr...>
Alcove - http://www.alcove.com

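A minimal sketch of doing the same thing programmatically instead of
via chattr(1): the "no dump" flag lives in the inode flags and can be
toggled through the inode-flags ioctl. This is an illustration only,
not dump code; it assumes a Linux system whose <linux/fs.h> provides
FS_IOC_GETFLAGS, FS_IOC_SETFLAGS and FS_NODUMP_FL (older ext2-only
trees spell these EXT2_IOC_GETFLAGS and EXT2_NODUMP_FL in
<linux/ext2_fs.h>).

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

/* Set the "no dump" attribute (what `chattr +d` does) on each path
 * given on the command line, e.g. ~/.netscape/cache. */
static int set_nodump(const char *path)
{
	int fd = open(path, O_RDONLY | O_NONBLOCK);
	int flags;

	if (fd < 0) {
		perror(path);
		return -1;
	}
	if (ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0) {
		perror("FS_IOC_GETFLAGS");
		close(fd);
		return -1;
	}
	flags |= FS_NODUMP_FL;
	if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0) {
		perror("FS_IOC_SETFLAGS");
		close(fd);
		return -1;
	}
	return close(fd);
}

int main(int argc, char **argv)
{
	int i;

	for (i = 1; i < argc; i++)
		set_nodump(argv[i]);
	return 0;
}

Note that, if memory serves, dump only honours the nodump flag at or
above the level given by its -h option (default 1), so a level 0 dump
needs -h 0 for the flagged directories to actually be skipped.
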
From: PeterKorman <cal...@ei...> - 2003-01-10 18:48:41

With reasonably small changes, a dump exclusion list of nearly
unlimited size could be supported. It would mean an internal binary
search to interrogate the exclusion list during file write, and a
reasonably good sort routine to put the exclusion list in order so the
binary search is possible. A heap sort would give O(N log N) worst-case
performance for the sort, with each lookup then costing O(log N). After
the sort, a quick walk through the list to trim any duplicate entries
would be a low-cost operation as well.

Price/performance on tape drives is awful. OTOH a big ATA disk is fast
and comparably cheap. In many cases, systems never need to hold more
than 3 L0 dumps, each 19 days apart.

The thing about backing up to a non-removable drive is that you really
don't want to back up anything you don't have to back up. Web browser
cache directories, for instance, never need backing up. Mailing lists
often don't need backing up. You can partition file systems so that the
junk goes to those not backed up, but that also limits flexibility of
storage utilization.

So if you have 2 60G drives, and the first one is 15% transient junk,
then you can be pretty safe doing compressed dumps to the second drive.
Big exclusion lists become pretty important because there's LOTZ of
small transient files. A fast search is especially important if you
have, say, 500000 files in a L0 dump, of which you want to exclude
170000.

My 3 questions are these:

Does anyone think the maintainer would be interested in a patch like
the one I describe?

If so, where would the person who generates the patch send it?

How much time might pass between submission of the patch and its
inclusion in the distribution, assuming the patch is worth a damn?

Thanks.

Cheers,
JPK

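A rough sketch of the exclusion-list idea described above, assuming the
paths have already been read into an array; the exclude_* names are
illustrative and are not symbols from dump's sources. qsort() stands in
for the heap sort mentioned (either gives an O(N log N) preparation
pass), duplicates are trimmed in one linear walk, and each file looked
at during the dump costs one O(log N) bsearch().

#include <stdlib.h>
#include <string.h>

static char **exclude_list;	/* sorted, de-duplicated absolute paths */
static size_t exclude_count;

static int cmp_path(const void *a, const void *b)
{
	return strcmp(*(char * const *)a, *(char * const *)b);
}

/* Sort the raw list once and drop duplicate entries. */
void exclude_prepare(char **paths, size_t n)
{
	size_t i, out = 0;

	qsort(paths, n, sizeof *paths, cmp_path);
	for (i = 0; i < n; i++)
		if (out == 0 || strcmp(paths[i], paths[out - 1]) != 0)
			paths[out++] = paths[i];
	exclude_list = paths;
	exclude_count = out;
}

/* Called once per file while dumping: is this path excluded? */
int exclude_match(const char *path)
{
	return bsearch(&path, exclude_list, exclude_count,
		       sizeof *exclude_list, cmp_path) != NULL;
}
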
From: Stelian P. <st...@po...> - 2002-11-18 20:38:38

On Mon, Nov 18, 2002 at 10:09:39AM -0800, Chris de Vidal wrote:
> --- Stelian Pop <ste...@fr...> wrote:
> > What version of dump are you using ? If it is not
> > the latest one (0.4b32), please retry with it.
>
> DOH! That should have been the first thing to try. I
> upgraded to dump.static 0.4b32 and it worked. I was
> using the RedHat 6.2 version which was updated, but
> only to dump-0.4b19. Even RedHat 8.0 comes with
> only 0.4b28!

I know, I've let them know that, I hope they will update their version
for the next Red Hat.

> Now I have to either link dump to dump.static or
> compile a regular version for RedHat 6.2 (Glib 2.1).
> Given that the former is far easier, I will probably
> do that (:

The simplest is to grab the .src.rpm and

	rpm --rebuild dump-0.4b32-1.src.rpm

It should compile just fine and generate a perfectly valid binary rpm
for RH 6.2.

Stelian.
--
Stelian Pop <st...@po...>

From: Chris de V. <cde...@ya...> - 2002-11-18 18:10:15

--- Stelian Pop <ste...@fr...> wrote:
> What version of dump are you using ? If it is not
> the latest one (0.4b32), please retry with it.

DOH! That should have been the first thing to try. I
upgraded to dump.static 0.4b32 and it worked. I was
using the RedHat 6.2 version which was updated, but
only to dump-0.4b19. Even RedHat 8.0 comes with
only 0.4b28!

Now I have to either link dump to dump.static or
compile a regular version for RedHat 6.2 (Glib 2.1).
Given that the former is far easier, I will probably
do that (:

Thanks for your help!

/dev/idal

From: Stelian P. <ste...@fr...> - 2002-11-18 09:14:49

On Mon, Nov 18, 2002 at 10:03:41AM +0100, jar...@ig... wrote:
> >> This setup, a web-based GUI backup, only allows me to
> >> execute a pre and post backup script. The actual
> >> backup uses dump and can't be altered to also include
> >> ssh, so I had been testing out dumping through a FIFO.

[...]

> I don't understand what's the FIFO for.
>
> What about dump -0 -b 64 -f - | ssh user@host "cat >test.dump"

Reread the above statement :-)

Stelian.
--
Stelian Pop <ste...@fr...>
Alcove - http://www.alcove.com

From: <jar...@ig...> - 2002-11-18 09:08:06

On 18 Nov, Stelian Pop wrote:
> On Sun, Nov 17, 2002 at 11:30:56AM -0800, Chris de Vidal wrote:
>
>> > > By the way, this:
>> > > dump 0bdsf 64 100000 100000 - /tmp | ssh user@remote
>> > > "dd bs=64k of=test.dump"
>> > > was already thought of. Won't work with this setup.
>> >
>> > Why ? It should work and it is definitely the
>> > recommended way <snip>
>>
>> And it has worked well for me in the past.
>>
>> This setup, a web-based GUI backup, only allows me to
>> execute a pre and post backup script. The actual
>> backup uses dump and can't be altered to also include
>> ssh, so I had been testing out dumping through a FIFO.
>
> Ok.
>
>> As I said, it _almost_ works. I don't get it.
>>
>> So, do you have an idea why dump hangs when the output
>> file is a FIFO? The file system is ext2 and kernel is
>> 2.2.16. I'd really appreciate it.
>>
>> Do you suppose it matters that the FIFO is being
>> copied over SSH?
>
> Well, I just tried it here and it works correctly for me:
> 	mkfifo /tmp/FIFO
> 	dd bs=64k if=/tmp/FIFO | ssh user@host "dd bs=64k of=test.dump" &
> 	dump -0 -b 64 -f /tmp/FIFO /

I don't understand what's the FIFO for.

What about
	dump -0 -b 64 -f - | ssh user@host "cat >test.dump"

--
Helmut Jarausch
Lehrstuhl fuer Numerische Mathematik
Aachen University
D 52056 Aachen, Germany

From: Stelian P. <ste...@fr...> - 2002-11-18 08:33:44

On Sun, Nov 17, 2002 at 11:30:56AM -0800, Chris de Vidal wrote:
> > > By the way, this:
> > > dump 0bdsf 64 100000 100000 - /tmp | ssh user@remote
> > > "dd bs=64k of=test.dump"
> > > was already thought of. Won't work with this setup.
> >
> > Why ? It should work and it is definitely the
> > recommended way <snip>
>
> And it has worked well for me in the past.
>
> This setup, a web-based GUI backup, only allows me to
> execute a pre and post backup script. The actual
> backup uses dump and can't be altered to also include
> ssh, so I had been testing out dumping through a FIFO.

Ok.

> As I said, it _almost_ works. I don't get it.
>
> So, do you have an idea why dump hangs when the output
> file is a FIFO? The file system is ext2 and kernel is
> 2.2.16. I'd really appreciate it.
>
> Do you suppose it matters that the FIFO is being
> copied over SSH?

Well, I just tried it here and it works correctly for me:

	mkfifo /tmp/FIFO
	dd bs=64k if=/tmp/FIFO | ssh user@host "dd bs=64k of=test.dump" &
	dump -0 -b 64 -f /tmp/FIFO /

What version of dump are you using ? If it is not the latest one
(0.4b32), please retry with it.

Also, it could be useful to run strace (using strace -f dump...), it
will show all open/read/write calls on the fifo, maybe this will show
what happens.

Stelian.
--
Stelian Pop <ste...@fr...>
Alcove - http://www.alcove.com

From: Chris de V. <cde...@ya...> - 2002-11-17 19:31:32

--- Stelian Pop <st...@po...> wrote:
> Well, the straightforward way is the one you specify
> below:
>
> > By the way, this:
> > dump 0bdsf 64 100000 100000 - /tmp | ssh user@remote
> > "dd bs=64k of=test.dump"
> > was already thought of. Won't work with this setup.
>
> Why ? It should work and it is definitely the
> recommended way <snip>

And it has worked well for me in the past.

This setup, a web-based GUI backup, only allows me to
execute a pre and post backup script. The actual
backup uses dump and can't be altered to also include
ssh, so I had been testing out dumping through a FIFO.

As I said, it _almost_ works. I don't get it.

So, do you have an idea why dump hangs when the output
file is a FIFO? The file system is ext2 and kernel is
2.2.16. I'd really appreciate it.

Do you suppose it matters that the FIFO is being
copied over SSH?

/dev/idal

P.S. Thanks for letting me know those flags have been
deprecated; I'll stop using them.

From: Stelian P. <st...@po...> - 2002-11-17 15:37:51

On Sat, Nov 16, 2002 at 02:09:58PM -0800, Chris de Vidal wrote:
> I need to be able to create a connection through SSH
> to a remote dump site before an (unalterable) script
> starts a dump. I'm using RedHat 6.2.
>
> I create a fifo, then I use
> 	dd bs=64k if=test.fifo | ssh user@remote "dd 64k of=test.dump" &
>
> The dump then runs, which is something like
> 	dump 0bdsf 64 100000 100000 test.fifo /tmp
>
> The dump runs, closes the fifo, I see the dd output
> 	X blocks read
> 	X blocks written
> but dump hangs.
>
> I tried the same test with tar; it passed with flying colors.
>
> Is the problem that dump needs to be able to read back
> from the file but can't?

No, dump will not try to read back any data.

> Can anyone think of another
> way to pass the data with SSH?

Well, the straightforward way is the one you specify below:

> By the way, this:
> 	dump 0bdsf 64 100000 100000 - /tmp | ssh user@remote "dd bs=64k of=test.dump"
> was already thought of. Won't work with this setup.

Why ? It should work and it is definitely the recommended way (except
the d and s arguments, which are now obsolete, try:

	dump -0 -b 64 -a -f - /tmp | ssh ...

instead).

Stelian.
--
Stelian Pop <st...@po...>

From: Chris de V. <cde...@ya...> - 2002-11-16 22:10:33

I need to be able to create a connection through SSH to a remote dump
site before an (unalterable) script starts a dump. I'm using RedHat 6.2.

I create a fifo, then I use

	dd bs=64k if=test.fifo | ssh user@remote "dd 64k of=test.dump" &

The dump then runs, which is something like

	dump 0bdsf 64 100000 100000 test.fifo /tmp

The dump runs, closes the fifo, I see the dd output

	X blocks read
	X blocks written

but dump hangs.

I tried the same test with tar; it passed with flying colors.

Is the problem that dump needs to be able to read back from the file
but can't? Can anyone think of another way to pass the data with SSH?

By the way, this:

	dump 0bdsf 64 100000 100000 - /tmp | ssh user@remote "dd bs=64k of=test.dump"

was already thought of. Won't work with this setup.

Thanks,
/dev/idal

From: Stelian P. <ste...@al...> - 2000-12-23 09:51:32

On Fri, Dec 22, 2000 at 05:45:45PM +0100, Andrea Arcangeli wrote:
> On Thu, Dec 21, 2000 at 12:17:56PM +0100, Stelian Pop wrote:
> > Could you just take a quick look at the CVS and see if it looks
> > ok to you ?
>
> merged without rejects, looks fine to me. thanks.

Ok, will be part then of the next version, 0.4b21, to be released
somewhere at the beginning of 2001.

(I'm going in vacation right now...). Happy Christmas!

Stelian.
--
Stelian Pop <ste...@al...>
|------------- Ingénieur Informatique Libre -------------|
| Alcôve - http://www.alcove.fr - Tel: +33 1 49 22 68 00 |
|----------- Alcôve, l'informatique est libre -----------|

From: Andrea A. <an...@su...> - 2000-12-22 16:46:00

On Thu, Dec 21, 2000 at 12:17:56PM +0100, Stelian Pop wrote:
> Could you just take a quick look at the CVS and see if it looks
> ok to you ?

merged without rejects, looks fine to me. thanks.

Andrea

From: Stelian P. <ste...@al...> - 2000-12-21 11:18:09

On Tue, Dec 19, 2000 at 07:20:58PM +0100, Andrea Arcangeli wrote:
> On Tue, Dec 19, 2000 at 07:17:44PM +0100, Andrea Arcangeli wrote:
> > looks like it's just working here, I'm using stock binary e2fs libs installed
> > in my system. I've only added an ugly -D_FILE_OFFSET_BITS=64 in the
> > restore/Makefile.in for my local testing.
>
> Works with my quite dumb regression test:
>
> Index: dump/compat/include/bsdcompat.h
> ===================================================================
> RCS file: /cvsroot/dump/dump/compat/include/bsdcompat.h,v
> [...]

I've checked in your patch, along with my modifications for
--enable-largefile configure option.

Could you just take a quick look at the CVS and see if it looks
ok to you ?

Thanks.

Stelian.
--
Stelian Pop <ste...@al...>
|------------- Ingénieur Informatique Libre -------------|
| Alcôve - http://www.alcove.fr - Tel: +33 1 49 22 68 00 |
|----------- Alcôve, l'informatique est libre -----------|

From: <ty...@va...> - 2000-12-20 02:03:33

   Date: Tue, 19 Dec 2000 22:18:09 +0100
   From: Andrea Arcangeli <an...@su...>

   I dislike to see the explicit LFS interface. I like not having to
   write O_LARGEFILE and truncate64 in the sources for example. This
   because on 64bit platform I can avoid to compile with
   --enable-largefile and the source code remains the same: without
   alien things like O_LARGEFILE and truncate64 inside.

At least at the source level, I can easily hide the need for O_LARGEFILE
and lseek64 in specific library routines, and it's easy enough to have
autoconf macros that only use O_LARGEFILE and use lseek64 if present
(and even then, I only use it when necessary).

   About trustlevel of glibc folks I believe they're forced to get this
   interface right on the long run (unfortunately). Now that we have
   this stuff supported I can at least take advantage of it at the
   source level.

I'm worried about ABI compatibility for glibc's both with and without
LFS support (i.e., glibc 2.1 and glibc 2.2; never mind glibc 2.0
compatibility).

   About the librarians issue I'm not sure (what matters for the ABI is
   the interface [not if the library internally recalls truncate or
   truncate64 in its way while doing its work] and none librarians
   other than glibc is supposed to provide those VFS calls so interface
   shouldn't break that easily). But as said I'm not sure about this...

The problem with using the #define method is that it changes the size
of off_t. So libraries built with and without -D_LARGE_FILE_SIZE=64
have ABI changes if any of their function prototypes or structures
contain off_t. This is bad, bad, bad, bad, bad. *Especially* for
shared libraries.

						- Ted

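A tiny illustration of Ted's point; struct archive_entry is a made-up
stand-in for any exported library type that embeds an off_t. Compile
the same file twice, with and without -D_FILE_OFFSET_BITS=64, and on a
32-bit glibc system the two builds print different sizes, so a library
and a caller built with different settings disagree about the layout.

#include <stdio.h>
#include <sys/types.h>

struct archive_entry {		/* hypothetical exported library type */
	off_t size;
	mode_t mode;
};

int main(void)
{
	printf("sizeof(off_t) = %u, sizeof(struct archive_entry) = %u\n",
	       (unsigned)sizeof(off_t),
	       (unsigned)sizeof(struct archive_entry));
	return 0;
}
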
From: Andrea A. <an...@su...> - 2000-12-20 02:03:29

On Tue, Dec 19, 2000 at 02:19:19PM -0800, ty...@va... wrote:
> At least at the source level, I can easily hide the need for O_LARGEFILE
> and lseek64 in specific library routines, and it's easy enough to have
> autoconf macros that only use O_LARGEFILE and use lseek64 if present
> (and even then, I only use it when necessary).

Using O_LARGEFILE and lseek64 only if present is necessary for
supporting systems without those implemented, but it's not the way to
cleanup the sourcecode.

> The problem with using the #define method is that it changes the size of
> off_t. So libraries built with and without -D_LARGE_FILE_SIZE=64
> have ABI changes if any of their function prototypes or structures
> contain off_t. This is bad, bad, bad, bad, bad. *Especially* for
> shared libraries.

I see, off_t redefined will definitely break the interface. I agree
that for libs explicit LFS coding is necessary.

Andrea

From: Andrea A. <an...@su...> - 2000-12-19 21:18:18

On Tue, Dec 19, 2000 at 12:48:30PM -0800, ty...@va... wrote:
> programs, it's probably more of a personal comfort issue of which you
> think feels better to you.

I dislike to see the explicit LFS interface. I like not having to write
O_LARGEFILE and truncate64 in the sources for example. This because on
64bit platform I can avoid to compile with --enable-largefile and the
source code remains the same: without alien things like O_LARGEFILE and
truncate64 inside.

About trustlevel of glibc folks I believe they're forced to get this
interface right on the long run (unfortunately). Now that we have this
stuff supported I can at least take advantage of it at the source
level.

About the librarians issue I'm not sure (what matters for the ABI is
the interface [not if the library internally recalls truncate or
truncate64 in its way while doing its work] and none librarians other
than glibc is supposed to provide those VFS calls so interface
shouldn't break that easily). But as said I'm not sure about this...

Andrea

From: <ty...@va...> - 2000-12-19 20:48:59

   Date: Tue, 19 Dec 2000 21:29:34 +0100
   From: Andrea Arcangeli <an...@su...>

   Please reject the above one liner (you had to reject it anyways :)
   becase also `dump` (not only restore) needs to be compiled with
   -D_FILE_OFFSET_BITS=64 so that the output file is opened with
   O_LARGEFILE (otherwise max dump output filesize is 2G).

   I really dislike having to change all open() to make them to use
   O_LARGEFILE... I much prefer the ./configure --enable-largefile way
   that just adds the -D_FILE_OFFSET_BITS=64 param all over the dump
   package (dump and restore) as discussed previously. That's safe and
   cleaner.

Shrug. I suspect you only need to add the O_LARGEFILE flag in one place
for dump, and it makes the resulting binary much more likely to be
portable.

The question of which is cleaner is probably mostly a religious issue,
but I feel very strongly that it's a bad-bad-bad idea for libraries,
since it can end up changing the binary ABI of the library without any
kind of warning. For programs, it's a matter of how much you trust
glibc to maintain the proper backwards compatibility. YMMV. So for
programs, it's probably more of a personal comfort issue of which you
think feels better to you.

						- Ted

From: Andrea A. <an...@su...> - 2000-12-19 20:29:50

On Tue, Dec 19, 2000 at 07:20:58PM +0100, Andrea Arcangeli wrote:
> Index: dump/restore/Makefile.in
> ===================================================================
> RCS file: /cvsroot/dump/dump/restore/Makefile.in,v
> retrieving revision 1.5
> diff -u -r1.5 Makefile.in
> --- dump/restore/Makefile.in	2000/05/29 14:17:37	1.5
> +++ dump/restore/Makefile.in	2000/12/19 18:19:16
> @@ -5,7 +5,7 @@
>
>  @MCONFIG@
>
> -CFLAGS=	@CCOPTS@ -pipe $(OPT) $(DEFS) $(GINC) $(INC) @RESTOREDEBUG@
> +CFLAGS=	@CCOPTS@ -pipe $(OPT) $(DEFS) $(GINC) $(INC) @RESTOREDEBUG@ -D_FILE_OFFSET_BITS=64
>  LDFLAGS:=	$(LDFLAGS) @STATIC@
>  LIBS=	$(GLIBS) -le2p @READLINE@
>  DEPLIBS=	../compat/lib/libcompat.a

Please reject the above one liner (you had to reject it anyways :)
because also `dump` (not only restore) needs to be compiled with
-D_FILE_OFFSET_BITS=64 so that the output file is opened with
O_LARGEFILE (otherwise max dump output filesize is 2G).

I really dislike having to change all open() to make them use
O_LARGEFILE... I much prefer the ./configure --enable-largefile way
that just adds the -D_FILE_OFFSET_BITS=64 param all over the dump
package (dump and restore) as discussed previously. That's safe and
cleaner.

Other part of the patch looks fine, I'm finishing right now to restore
a 5G dump, but I don't expect troubles.

Thanks Stelian and Ted for the help!

Andrea

From: Andrea A. <an...@su...> - 2000-12-19 18:21:22
|
On Tue, Dec 19, 2000 at 07:17:44PM +0100, Andrea Arcangeli wrote: > looks like it's just working here, I'm using stock binary e2fs libs installed > in my system. I've only added an ugly -D_FILE_OFFSET_BITS=64 in the > restore/Makefile.in for my local testing. Works with my quite dumb regression test: Index: dump/compat/include/bsdcompat.h =================================================================== RCS file: /cvsroot/dump/dump/compat/include/bsdcompat.h,v retrieving revision 1.12 diff -u -r1.12 bsdcompat.h --- dump/compat/include/bsdcompat.h 2000/12/04 15:43:16 1.12 +++ dump/compat/include/bsdcompat.h 2000/12/19 18:19:13 @@ -102,6 +102,7 @@ #define di_rdev di_db[0] /* #define di_ouid di_uid */ /* #define di_ogid di_gid */ +#define di_size_high di_dir_acl /* * This is the ext2_dir_entry structure but the fields have been renamed Index: dump/dump/traverse.c =================================================================== RCS file: /cvsroot/dump/dump/dump/traverse.c,v retrieving revision 1.24 diff -u -r1.24 traverse.c --- dump/dump/traverse.c 2000/12/05 16:57:38 1.24 +++ dump/dump/traverse.c 2000/12/19 18:19:16 @@ -149,6 +149,7 @@ blockest(struct dinode const *dp) { long blkest, sizeest; + u_quad_t i_size; /* * dp->di_size is the size of the file in bytes. @@ -164,19 +165,20 @@ * dump blocks (sizeest vs. blkest in the indirect block * calculation). */ - blkest = howmany(dbtob(dp->di_blocks), TP_BSIZE); - sizeest = howmany(dp->di_size, TP_BSIZE); + blkest = howmany((u_quad_t)dp->di_blocks*fs->blocksize, TP_BSIZE); + i_size = dp->di_size + ((u_quad_t) dp->di_size_high << 32); + sizeest = howmany(i_size, TP_BSIZE); if (blkest > sizeest) blkest = sizeest; #ifdef __linux__ - if (dp->di_size > fs->blocksize * NDADDR) { + if (i_size > fs->blocksize * NDADDR) { /* calculate the number of indirect blocks on the dump tape */ blkest += howmany(sizeest - NDADDR * fs->blocksize / TP_BSIZE, NINDIR(sblock) * EXT2_FRAGS_PER_BLOCK(fs->super)); } #else - if (dp->di_size > sblock->fs_bsize * NDADDR) { + if (i_size > sblock->fs_bsize * NDADDR) { /* calculate the number of indirect blocks on the dump tape */ blkest += howmany(sizeest - NDADDR * sblock->fs_bsize / TP_BSIZE, @@ -682,7 +684,8 @@ * Dump a block to the tape */ static int -dumponeblock(ext2_filsys fs, blk_t *blocknr, int blockcnt, void *private) +dumponeblock(ext2_filsys fs, blk_t *blocknr, e2_blkcnt_t blockcnt, + blk_t ref_block, int ref_offset, void * private) { struct block_context *p; int i; @@ -715,7 +718,7 @@ void dumpino(struct dinode *dp, ino_t ino) { - int cnt; + unsigned long cnt; fsizeT size; char buf[TP_BSIZE]; struct old_bsd_inode obi; @@ -724,6 +727,7 @@ #else int ind_level; #endif + u_quad_t i_size = dp->di_size + ((u_quad_t) dp->di_size_high << 32); if (newtape) { newtape = 0; @@ -735,7 +739,7 @@ obi.di_mode = dp->di_mode; obi.di_uid = dp->di_uid; obi.di_gid = dp->di_gid; - obi.di_qsize.v = (u_quad_t)dp->di_size; + obi.di_qsize.v = i_size; obi.di_atime = dp->di_atime; obi.di_mtime = dp->di_mtime; obi.di_ctime = dp->di_ctime; @@ -744,7 +748,7 @@ obi.di_flags = dp->di_flags; obi.di_gen = dp->di_gen; memmove(&obi.di_db, &dp->di_db, (NDADDR + NIADDR) * sizeof(daddr_t)); - if (dp->di_file_acl || dp->di_dir_acl) + if (dp->di_file_acl) warn("ACLs in inode #%ld won't be dumped", (long)ino); memmove(&spcl.c_dinode, &obi, sizeof(obi)); #else /* __linux__ */ @@ -771,8 +775,8 @@ * Check for short symbolic link. 
*/ #ifdef __linux__ - if (dp->di_size > 0 && - dp->di_size < EXT2_N_BLOCKS * sizeof (daddr_t)) { + if (i_size > 0 && + i_size < EXT2_N_BLOCKS * sizeof (daddr_t)) { spcl.c_addr[0] = 1; spcl.c_count = 1; writeheader(ino); @@ -800,7 +804,7 @@ case S_IFDIR: #endif case S_IFREG: - if (dp->di_size > 0) + if (i_size) break; /* fall through */ @@ -815,16 +819,16 @@ msg("Warning: undefined file type 0%o\n", dp->di_mode & IFMT); return; } - if (dp->di_size > NDADDR * sblock->fs_bsize) + if (i_size > NDADDR * sblock->fs_bsize) #ifdef __linux__ cnt = NDADDR * EXT2_FRAGS_PER_BLOCK(fs->super); #else cnt = NDADDR * sblock->fs_frag; #endif else - cnt = howmany(dp->di_size, sblock->fs_fsize); + cnt = howmany(i_size, sblock->fs_fsize); blksout(&dp->di_db[0], cnt, ino); - if ((size = dp->di_size - NDADDR * sblock->fs_bsize) <= 0) + if ((quad_t) (size = i_size - NDADDR * sblock->fs_bsize) <= 0) return; #ifdef __linux__ bc.max = NINDIR(sblock) * EXT2_FRAGS_PER_BLOCK(fs->super); @@ -833,7 +837,7 @@ bc.ino = ino; bc.next_block = NDADDR; - ext2fs_block_iterate (fs, ino, 0, NULL, dumponeblock, (void *)&bc); + ext2fs_block_iterate2(fs, ino, 0, NULL, dumponeblock, (void *)&bc); if (bc.cnt > 0) { blksout (bc.buf, bc.cnt, bc.ino); } @@ -953,7 +957,7 @@ obi.di_flags = dp->di_flags; obi.di_gen = dp->di_gen; memmove(&obi.di_db, &dp->di_db, (NDADDR + NIADDR) * sizeof(daddr_t)); - if (dp->di_file_acl || dp->di_dir_acl) + if (dp->di_file_acl) warn("ACLs in inode #%ld won't be dumped", (long)ino); memmove(&spcl.c_dinode, &obi, sizeof(obi)); #else /* __linux__ */ Index: dump/restore/Makefile.in =================================================================== RCS file: /cvsroot/dump/dump/restore/Makefile.in,v retrieving revision 1.5 diff -u -r1.5 Makefile.in --- dump/restore/Makefile.in 2000/05/29 14:17:37 1.5 +++ dump/restore/Makefile.in 2000/12/19 18:19:16 @@ -5,7 +5,7 @@ @MCONFIG@ -CFLAGS= @CCOPTS@ -pipe $(OPT) $(DEFS) $(GINC) $(INC) @RESTOREDEBUG@ +CFLAGS= @CCOPTS@ -pipe $(OPT) $(DEFS) $(GINC) $(INC) @RESTOREDEBUG@ -D_FILE_OFFSET_BITS=64 LDFLAGS:= $(LDFLAGS) @STATIC@ LIBS= $(GLIBS) -le2p @READLINE@ DEPLIBS= ../compat/lib/libcompat.a (the change in the restore/Makefile.in should be conditional to the ./configure script of course, the above is ok only for LFS systems, the rest should be ok) Andrea |
From: Andrea A. <an...@su...> - 2000-12-19 18:18:06

On Tue, Dec 19, 2000 at 10:12:14AM -0800, ty...@va... wrote:
> Umm, yes, that's true. It's needed for restore, but not for dump.

looks like it's just working here, I'm using stock binary e2fs libs
installed in my system. I've only added an ugly -D_FILE_OFFSET_BITS=64
in the restore/Makefile.in for my local testing.

Andrea

From: <ty...@va...> - 2000-12-19 18:15:56

   Date: Tue, 19 Dec 2000 17:04:21 +0100
   From: Stelian Pop <ste...@al...>

   - we forget about the -D_FILE_OFFSET_BITS=64, in both dump
     and e2fsprogs

You can't just forget about FILE_OFFSET_BITS=64. On the other hand,
it's fairly simple to simply pass O_LARGEFILE to open(), and then set
up a situation where you only try to call lseek64() if you need to
seek to an offset larger than 32 bits. O_LARGEFILE should be ignored
on non-LFS systems, and if lseek64 returns ENOSYS, you know you're on
a non-LFS system, and you can simply truncate the restore of the large
file at that point, with a warning message.

						- Ted

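A condensed sketch of the fallback Ted outlines (not code from restore
itself): open with O_LARGEFILE where the headers define it, use plain
lseek() while the offset still fits, and only reach for lseek64() when
it does not, degrading with a warning when the system cannot seek that
far. It assumes a glibc that declares lseek64() and off64_t when
_LARGEFILE64_SOURCE is defined.

#define _LARGEFILE64_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

#ifndef O_LARGEFILE
#define O_LARGEFILE 0	/* harmless no-op on non-LFS systems */
#endif

int open_output(const char *path)
{
	return open(path, O_WRONLY | O_CREAT | O_LARGEFILE, 0600);
}

/* Seek to an arbitrary 64-bit offset, falling back gracefully. */
int seek_to(int fd, unsigned long long offset)
{
	if (offset <= 0x7fffffffULL)		/* fits in a 32-bit off_t */
		return lseek(fd, (off_t)offset, SEEK_SET) < 0 ? -1 : 0;

	if (lseek64(fd, (off64_t)offset, SEEK_SET) < 0) {
		if (errno == ENOSYS || errno == EOVERFLOW)
			fprintf(stderr,
				"warning: no large file support, "
				"truncating restore of this file\n");
		return -1;
	}
	return 0;
}
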
From: <ty...@va...> - 2000-12-19 18:12:36

   Date: Tue, 19 Dec 2000 17:04:04 +0100
   From: Andrea Arcangeli <an...@su...>

   On Tue, Dec 19, 2000 at 07:39:40AM -0800, ty...@va... wrote:
   > If you do things right, you don't need glibc's LFS support. This is

   We strictly need some kind of glibc LFS support to make restore to
   work with largefiles. Think how to restore a file with only an 8G
   hole in it (no one single data block allocated). We cannot create
   the hole with lseek+write, we need truncate64.

Umm, yes, that's true. It's needed for restore, but not for dump.

						- Ted

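To make the contrast concrete: with the implicit -D_FILE_OFFSET_BITS=64
style Andrea prefers, the 8G hole needs no truncate64() spelled out in
the source at all; a plain ftruncate() on a 64-bit off_t is enough. A
minimal sketch (the file name and build line are only examples):

/* build: cc -D_FILE_OFFSET_BITS=64 -o mkhole mkhole.c */
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* 8G sparse file: overflows a 32-bit off_t, fine with a 64-bit one */
	off_t size = (off_t)8 * 1024 * 1024 * 1024;
	int fd = open("holefile", O_WRONLY | O_CREAT | O_TRUNC, 0600);

	if (fd < 0 || ftruncate(fd, size) < 0) {
		perror("holefile");
		return 1;
	}
	close(fd);
	printf("created a %.0f-byte hole\n", (double)size);
	return 0;
}

The explicit style would call truncate64()/ftruncate64() here instead
and leave off_t alone, which is exactly the trade-off discussed above.
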
From: Stelian P. <ste...@al...> - 2000-12-19 16:32:14

On Tue, Dec 19, 2000 at 05:28:17PM +0100, Andrea Arcangeli wrote:
> > (probably needs some little cosmetic fixes here in the
> > callback function since the block number is > 2^32)
> > - we forget about the -D_FILE_OFFSET_BITS=64, in both dump
> > and e2fsprogs
>
> No! We forget about -D_FILE_OFFSET_BITS=64 _only_ in e2fsprogs.

I was talking only from dump's point of view.

> > With all this, we get one dump binary which will be able to deal with
          ______________________________^^^^ :)
> > large files on both LFS and non-LFS systems (LFS system accessed from
> > a non-LFS system).
>
> No, we simply can't generate largefiles in a non-LFS system during restore.
>
> Dump could even work without largefile support from glibc, but not restore
> as far I can tell.

We all agree then :)

> The only change is "not touch e2fsprogs" and use ext2fs_block_iterate2
> instead of ext2fs_block_iterate, and that's what I'm doing right now.

Great.

Stelian.
--
Stelian Pop <ste...@al...>
|------------- Ingénieur Informatique Libre -------------|
| Alcôve - http://www.alcove.fr - Tel: +33 1 49 22 68 00 |
|----------- Alcôve, l'informatique est libre -----------|

From: Andrea A. <an...@su...> - 2000-12-19 16:31:20

On Tue, Dec 19, 2000 at 05:12:59PM +0100, Stelian Pop wrote:
> Dump see the filesystem only through the e2fs libraries. So Ted
> comments apply.

Yes.

> Restore however uses directly the glibc functions so we must indeed
> use the LFS here. I don't know what is the best way to do that
> however (explicit support vs. implicit support)...

I prefer the implicit one (-D_...=64), it has a simpler interface.
Performance doesn't really matter. gcc breaks more easily with the
long long but then fix gcc if it breaks since we rely on it
everywhere ;)

Andrea

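For reference, the only source-visible difference in the switch Andrea
mentions is the callback signature: ext2fs_block_iterate2() hands the
callback an e2_blkcnt_t logical block count (negative for indirect
blocks) plus the referring block and offset. A small sketch against the
libext2fs headers; count_data_blocks() is an invented helper, not dump
code, and mirrors the callback shape used in the traverse.c patch
earlier in this thread.

#include <ext2fs/ext2fs.h>

static int count_block(ext2_filsys fs, blk_t *blocknr,
		       e2_blkcnt_t blockcnt, blk_t ref_block,
		       int ref_offset, void *private)
{
	unsigned long *nblocks = private;

	if (blockcnt >= 0)	/* negative blockcnt marks indirect blocks */
		(*nblocks)++;
	return 0;		/* keep iterating */
}

unsigned long count_data_blocks(ext2_filsys fs, ext2_ino_t ino)
{
	unsigned long nblocks = 0;

	ext2fs_block_iterate2(fs, ino, 0, NULL, count_block, &nblocks);
	return nblocks;
}
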