From: Paul M. <Pau...@us...> - 2001-11-02 05:20:41
> On Thu, Nov 01, 2001 at 05:30:19PM +0100, Andrea Arcangeli wrote:
> > don't want to risk to destabilize the kernel. The rmb() are needed in two
> > places: between the read of the max_fds/max_fdset, the read of the array
> > pointer, and the read of the contents of the array. thanks!

Good point!  The original patch was designed with wmbdd() in mind, and
needs to be updated to allow for read_barrier_depends(), which was the
name Linus preferred to rmbdd().  I put out an RFC patch for this, and
saw no response, so will send it out again.  Silence means assent?  ;-)

My guess is that the read-side code would be greatly simplified if the
size of the array/bitflag were in the same memory block as the
array/bitflag, so that there would only be the dependency on the
pointer.  Maneesh, thoughts?

					Thanx, Paul

> I was trying to fix the read side and I now found tons of other races
> also on the read side (not related to the alpha: there's no dependency
> between ->max_fds and the ->fd, same on the bitflag array; a real rmb() is
> needed there for x86 too, not only for the CPU ordering but also to
> enforce compiler ordering). Here is a partial fix, mainly to show a
> few of those bugs: the max_fdset must be read _before_ the oldf->fd and
> an rmb() must be put in between (also on x86, ppc, ia64 etc., not only
> on alpha, at the very least for the compiler). And please, as said in the
> other email, I suggest putting the rmb() also on read dependencies, so
> between the read of the fd and the read of the data in the array, not
> only the obviously necessary rmb() between reading max_fds/max_fdset and
> starting to use the arrays. It shouldn't be too bad and it makes the code
> better documented as well; if you want to mark the ones serializing
> implicit read ordering with a comment, that's fine too, so we can optimize
> them away later if we add some common code-ordering API.
>
> The open_files calculation isn't obvious either: we must find the guarantee
> that it's coherent with "size", so the open_fds/close_on_exec part
> should be checked closely too. And that's nearly the only part I checked
> on the read side, so I'd expect more bugs there.
>
> Since the read side at the moment is a buggy can of worms, I have to
> back out the rcu file locking patch until a corrected version is
> released. I assume Maneesh will take care of that. Thanks!
>
> --- ./include/linux/file.h.~1~	Thu Nov  1 17:43:44 2001
> +++ ./include/linux/file.h	Thu Nov  1 17:58:49 2001
> @@ -31,8 +31,13 @@
>  {
>  	struct file * file = NULL;
>  
> -	if (fd < files->max_fds)
> -		file = files->fd[fd];
> +	int max_fds = files->max_fds;
> +	rmb();
> +	if (fd < max_fds) {
> +		struct file ** fd_array = files->fd;
> +		rmb();
> +		file = fd_array[fd];
> +	}
>  	return file;
>  }
>  
> @@ -44,8 +49,13 @@
>  	struct file * file = NULL;
>  	struct files_struct *files = current->files;
>  
> -	if (fd < files->max_fds)
> -		file = files->fd[fd];
> +	int max_fds = files->max_fds;
> +	rmb();
> +	if (fd < max_fds) {
> +		struct file ** fd_array = files->fd;
> +		rmb();
> +		file = fd_array[fd];
> +	}
>  	return file;
>  }
>  
> --- ./kernel/fork.c.~1~	Thu Nov  1 17:43:44 2001
> +++ ./kernel/fork.c	Thu Nov  1 18:29:53 2001
> @@ -420,6 +420,8 @@
>  	struct file **old_fds, **new_fds;
>  	int open_files, nfds, size, i, error = 0;
>  
> +	size = oldf->max_fdset;
> +	rmb();
>  	/*
>  	 * A background process may not have any files ...
>  	 */
> @@ -450,7 +452,6 @@
>  
>  	/* We don't yet have the oldf readlock, but even if the old
>  	   fdset gets grown now, we'll only copy up to "size" fds */
> -	size = oldf->max_fdset;
>  	if (size > __FD_SETSIZE) {
>  		newf->max_fdset = 0;
>  		spin_lock(&newf->file_lock);
>
> Andrea
From: Paul M. <Pau...@us...> - 2001-11-02 14:19:34
> On Thu, Nov 01, 2001 at 06:49:24PM +0100, Andrea Arcangeli wrote:
> > I was trying to fix the read side and I now found tons of other races
> > also on the read side (not related to the alpha: there's no dependency
> > between ->max_fds and the ->fd, same on the bitflag array; a real rmb() is
> > needed there for x86 too, not only for the CPU ordering but also to
> > enforce compiler ordering). Here is a partial fix, mainly to show a
>
> I changed the update code in expand_fd_array and expand_fdset so that it
> first expands the array or bitmap and then updates max_fds or max_fdset. I
> was not aware that the compiler changes this order, and AFAIK on x86 the
> write-side program order is maintained. So, I thought it would never be the
> case that max_fds or max_fdset is updated before actually expanding the
> arrays, provided the compiler also maintains the write order.

The CPU can change the read order, also, but only as long as there are
no data dependencies.  Since there are separate fields for the size and
the pointer to the array, both the CPU and the compiler are free to
change the ordering.  The wmb()s guarantee the write order (in absence
of "volatile" declarations, the compiler could change these, also).

> > few of those bugs: the max_fdset must be read _before_ the oldf->fd and
> > an rmb() must be put in between (also on x86, ppc, ia64 etc., not only
> > on alpha, at the very least for the compiler). And please, as said in the
> > other email, I suggest putting the rmb() also on read dependencies, so
>
> If I understand correctly, the problem (as in fcheck_files) here is that
> 	file = files->fd[fd]
> could happen before the check
> 	if (fd < files->max_fds)
> and rmb() will prevent that from happening.

Yes.

> > The open_files calculation isn't obvious either: we must find the guarantee
> > that it's coherent with "size", so the open_fds/close_on_exec part
> > should be checked closely too. And that's nearly the only part I checked
> > on the read side, so I'd expect more bugs there.
>
> Can we take advantage of the fact that nobody does shrink_fd_array or
> shrink_fdset except while "exit"ing, when there is only one user for the
> files_struct, i.e. the ref count is 1?

Yes, this fact does simplify things.

> There are places (writers) where we are taking the lock and we read
> max_fds or the array; I hope the lock will take care of the ordering.

Yes, the lock does prevent anyone else from changing these fields (as
the code is written), and the lock is guaranteed to take care of
ordering between different CPUs that use the lock.

> > Since the read side at the moment is a buggy can of worms, I have to
> > back out the rcu file locking patch until a corrected version is
> > released. I assume Maneesh will take care of that. Thanks!
>
> I have started modifying the patch and hope to get it back in soon.

I look forward to seeing the new patch!

					Thanx, Paul
From: Paul E. M. <pmc...@us...> - 2001-11-07 02:07:31
> BTW, Paul, the last read_barrier_depends() patch you sent me last week
> seems just fine. Even if in the longer run we end up doing the IPI way
> because it's potentially faster in the fast path, the
> read_barrier_depends() way still remains nice documentation of the
> ordering requirements, and we can define read_barrier_depends to a noop
> anytime. Many thanks to Dipankar, Paul and Maneesh for looking into
> those race conditions. I look forward to the next update! :)

Hello, Andrea,

Here is an updated read_barrier_depends() patch against 2.4.14.  I will
be posting a patch including memory_barrier(), read_barrier(), and
write_barrier() as well as read_barrier_depends() as an RFC to lkml.
And I will certainly not discard the IPI code.  ;-)

					Thanx, Paul

diff -urN -X /home/mckenney/dontdiff linux-2.4.14/include/asm-alpha/system.h linux-2.4.14.read_barrier_depends/include/asm-alpha/system.h
--- linux-2.4.14/include/asm-alpha/system.h	Thu Oct  4 18:47:08 2001
+++ linux-2.4.14.read_barrier_depends/include/asm-alpha/system.h	Tue Nov  6 16:15:50 2001
@@ -148,16 +148,21 @@
 #define rmb() \
 __asm__ __volatile__("mb": : :"memory")
 
+#define read_barrier_depends() \
+__asm__ __volatile__("mb": : :"memory")
+
 #define wmb() \
 __asm__ __volatile__("wmb": : :"memory")
 
 #ifdef CONFIG_SMP
 #define smp_mb() mb()
 #define smp_rmb() rmb()
+#define smp_read_barrier_depends() read_barrier_depends()
 #define smp_wmb() wmb()
 #else
 #define smp_mb() barrier()
 #define smp_rmb() barrier()
+#define smp_read_barrier_depends() barrier()
 #define smp_wmb() barrier()
 #endif
diff -urN -X /home/mckenney/dontdiff linux-2.4.14/include/asm-arm/system.h linux-2.4.14.read_barrier_depends/include/asm-arm/system.h
--- linux-2.4.14/include/asm-arm/system.h	Mon Nov 27 17:07:59 2000
+++ linux-2.4.14.read_barrier_depends/include/asm-arm/system.h	Tue Nov  6 16:15:50 2001
@@ -38,6 +38,7 @@
 #define mb() __asm__ __volatile__ ("" : : : "memory")
 #define rmb() mb()
+#define read_barrier_depends() do { } while(0)
 #define wmb() mb()
 #define nop() __asm__ __volatile__("mov\tr0,r0\t@ nop\n\t");
@@ -67,12 +68,14 @@
 #define smp_mb() mb()
 #define smp_rmb() rmb()
+#define smp_read_barrier_depends() read_barrier_depends()
 #define smp_wmb() wmb()
 #else
 #define smp_mb() barrier()
 #define smp_rmb() barrier()
+#define smp_read_barrier_depends() do { } while(0)
 #define smp_wmb() barrier()
 
 #define cli() __cli()
diff -urN -X /home/mckenney/dontdiff linux-2.4.14/include/asm-cris/system.h linux-2.4.14.read_barrier_depends/include/asm-cris/system.h
--- linux-2.4.14/include/asm-cris/system.h	Mon Oct  8 11:43:54 2001
+++ linux-2.4.14.read_barrier_depends/include/asm-cris/system.h	Tue Nov  6 16:15:50 2001
@@ -149,15 +149,18 @@
 #define mb() __asm__ __volatile__ ("" : : : "memory")
 #define rmb() mb()
+#define read_barrier_depends() do { } while(0)
 #define wmb() mb()
 
 #ifdef CONFIG_SMP
 #define smp_mb() mb()
 #define smp_rmb() rmb()
+#define smp_read_barrier_depends() read_barrier_depends()
 #define smp_wmb() wmb()
 #else
 #define smp_mb() barrier()
 #define smp_rmb() barrier()
+#define smp_read_barrier_depends() do { } while(0)
 #define smp_wmb() barrier()
 #endif
diff -urN -X /home/mckenney/dontdiff linux-2.4.14/include/asm-i386/system.h linux-2.4.14.read_barrier_depends/include/asm-i386/system.h
--- linux-2.4.14/include/asm-i386/system.h	Mon Nov  5 12:42:13 2001
+++ linux-2.4.14.read_barrier_depends/include/asm-i386/system.h	Tue Nov  6 16:20:06 2001
@@ -288,6 +288,7 @@
 #define mb() __asm__ __volatile__ ("lock; addl $0,0(%%esp)": : :"memory")
 #define rmb() mb()
+#define read_barrier_depends() do { } while(0)
 
 #ifdef CONFIG_X86_OOSTORE
 #define wmb() __asm__ __volatile__ ("lock; addl $0,0(%%esp)": : :"memory")
@@ -298,10 +299,12 @@
 #ifdef CONFIG_SMP
 #define smp_mb() mb()
 #define smp_rmb() rmb()
+#define smp_read_barrier_depends() read_barrier_depends()
 #define smp_wmb() wmb()
 #else
 #define smp_mb() barrier()
 #define smp_rmb() barrier()
+#define smp_read_barrier_depends() do { } while(0)
 #define smp_wmb() barrier()
 #endif
diff -urN -X /home/mckenney/dontdiff linux-2.4.14/include/asm-ia64/system.h linux-2.4.14.read_barrier_depends/include/asm-ia64/system.h
--- linux-2.4.14/include/asm-ia64/system.h	Tue Jul 31 10:30:09 2001
+++ linux-2.4.14.read_barrier_depends/include/asm-ia64/system.h	Tue Nov  6 16:15:50 2001
@@ -85,6 +85,9 @@
  *		stores and that all following stores will be
  *		visible only after all previous stores.
  *   rmb():	Like wmb(), but for reads.
+ *   read_barrier_depends():	Like rmb(), but only for pairs
+ *		of loads where the second load depends on the
+ *		value loaded by the first.
  *   mb():	wmb()/rmb() combo, i.e., all previous memory
  *		accesses are visible before all subsequent
  *		accesses and vice versa.  This is also known as
@@ -98,15 +101,18 @@
  */
 #define mb() __asm__ __volatile__ ("mf" ::: "memory")
 #define rmb() mb()
+#define read_barrier_depends() do { } while(0)
 #define wmb() mb()
 
 #ifdef CONFIG_SMP
 # define smp_mb() mb()
 # define smp_rmb() rmb()
+# define smp_read_barrier_depends() read_barrier_depends()
 # define smp_wmb() wmb()
 #else
 # define smp_mb() barrier()
 # define smp_rmb() barrier()
+# define smp_read_barrier_depends() do { } while(0)
 # define smp_wmb() barrier()
 #endif
diff -urN -X /home/mckenney/dontdiff linux-2.4.14/include/asm-m68k/system.h linux-2.4.14.read_barrier_depends/include/asm-m68k/system.h
--- linux-2.4.14/include/asm-m68k/system.h	Thu Oct 25 13:53:55 2001
+++ linux-2.4.14.read_barrier_depends/include/asm-m68k/system.h	Tue Nov  6 16:15:50 2001
@@ -80,12 +80,14 @@
 #define nop() do { asm volatile ("nop"); barrier(); } while (0)
 #define mb() barrier()
 #define rmb() barrier()
+#define read_barrier_depends() do { } while(0)
 #define wmb() barrier()
 #define set_mb(var, value) do { xchg(&var, value); } while (0)
 #define set_wmb(var, value) do { var = value; wmb(); } while (0)
 
 #define smp_mb() barrier()
 #define smp_rmb() barrier()
+#define smp_read_barrier_depends() do { } while(0)
 #define smp_wmb() barrier()
diff -urN -X /home/mckenney/dontdiff linux-2.4.14/include/asm-mips/system.h linux-2.4.14.read_barrier_depends/include/asm-mips/system.h
--- linux-2.4.14/include/asm-mips/system.h	Sun Sep  9 10:43:01 2001
+++ linux-2.4.14.read_barrier_depends/include/asm-mips/system.h	Tue Nov  6 16:15:50 2001
@@ -150,6 +150,7 @@
 #include <asm/wbflush.h>
 
 #define rmb() do { } while(0)
+#define read_barrier_depends() do { } while(0)
 #define wmb() wbflush()
 #define mb() wbflush()
@@ -166,6 +167,7 @@
 	: /* no input */ \
 	: "memory")
 #define rmb() mb()
+#define read_barrier_depends() do { } while(0)
 #define wmb() mb()
 
 #endif /* CONFIG_CPU_HAS_WB  */
@@ -173,10 +175,12 @@
 #ifdef CONFIG_SMP
 #define smp_mb() mb()
 #define smp_rmb() rmb()
+#define smp_read_barrier_depends() read_barrier_depends()
 #define smp_wmb() wmb()
 #else
 #define smp_mb() barrier()
 #define smp_rmb() barrier()
+#define smp_read_barrier_depends() do { } while(0)
 #define smp_wmb() barrier()
 #endif
diff -urN -X /home/mckenney/dontdiff linux-2.4.14/include/asm-mips64/system.h linux-2.4.14.read_barrier_depends/include/asm-mips64/system.h
--- linux-2.4.14/include/asm-mips64/system.h	Wed Jul  4 11:50:39 2001
+++ linux-2.4.14.read_barrier_depends/include/asm-mips64/system.h	Tue Nov  6 16:15:50 2001
@@ -147,15 +147,18 @@
 	: /* no input */ \
 	: "memory")
 #define rmb() mb()
+#define read_barrier_depends() do { } while(0)
 #define wmb() mb()
 
 #ifdef CONFIG_SMP
 #define smp_mb() mb()
 #define smp_rmb() rmb()
+#define smp_read_barrier_depends() read_barrier_depends()
 #define smp_wmb() wmb()
 #else
 #define smp_mb() barrier()
 #define smp_rmb() barrier()
+#define smp_read_barrier_depends() do { } while(0)
 #define smp_wmb() barrier()
 #endif
diff -urN -X /home/mckenney/dontdiff linux-2.4.14/include/asm-parisc/system.h linux-2.4.14.read_barrier_depends/include/asm-parisc/system.h
--- linux-2.4.14/include/asm-parisc/system.h	Wed Dec  6 11:46:39 2000
+++ linux-2.4.14.read_barrier_depends/include/asm-parisc/system.h	Tue Nov  6 16:15:50 2001
@@ -50,6 +50,7 @@
 #ifdef CONFIG_SMP
 #define smp_mb() mb()
 #define smp_rmb() rmb()
+#define smp_read_barrier_depends() do { } while(0)
 #define smp_wmb() wmb()
 #else
 /* This is simply the barrier() macro from linux/kernel.h but when serial.c
@@ -58,6 +59,7 @@
  */
 #define smp_mb() __asm__ __volatile__("":::"memory");
 #define smp_rmb() __asm__ __volatile__("":::"memory");
+#define smp_read_barrier_depends() do { } while(0)
 #define smp_wmb() __asm__ __volatile__("":::"memory");
 #endif
@@ -122,6 +124,7 @@
 #define mb() __asm__ __volatile__ ("sync" : : :"memory")
 #define wmb() mb()
+#define read_barrier_depends() do { } while(0)
 
 extern unsigned long __xchg(unsigned long, unsigned long *, int);
diff -urN -X /home/mckenney/dontdiff linux-2.4.14/include/asm-ppc/system.h linux-2.4.14.read_barrier_depends/include/asm-ppc/system.h
--- linux-2.4.14/include/asm-ppc/system.h	Tue Aug 28 06:58:33 2001
+++ linux-2.4.14.read_barrier_depends/include/asm-ppc/system.h	Tue Nov  6 16:15:50 2001
@@ -24,6 +24,8 @@
  *
  * mb() prevents loads and stores being reordered across this point.
  * rmb() prevents loads being reordered across this point.
+ * read_barrier_depends() prevents data-dependent loads being reordered
+ *	across this point (nop on PPC).
  * wmb() prevents stores being reordered across this point.
  *
  * We can use the eieio instruction for wmb, but since it doesn't
@@ -32,6 +34,7 @@
  */
 #define mb() __asm__ __volatile__ ("sync" : : : "memory")
 #define rmb() __asm__ __volatile__ ("sync" : : : "memory")
+#define read_barrier_depends() do { } while(0)
 #define wmb() __asm__ __volatile__ ("eieio" : : : "memory")
 
 #define set_mb(var, value) do { var = value; mb(); } while (0)
@@ -40,10 +43,12 @@
 #ifdef CONFIG_SMP
 #define smp_mb() mb()
 #define smp_rmb() rmb()
+#define smp_read_barrier_depends() read_barrier_depends()
 #define smp_wmb() wmb()
 #else
 #define smp_mb() __asm__ __volatile__("": : :"memory")
 #define smp_rmb() __asm__ __volatile__("": : :"memory")
+#define smp_read_barrier_depends() do { } while(0)
 #define smp_wmb() __asm__ __volatile__("": : :"memory")
 #endif /* CONFIG_SMP */
diff -urN -X /home/mckenney/dontdiff linux-2.4.14/include/asm-s390/system.h linux-2.4.14.read_barrier_depends/include/asm-s390/system.h
--- linux-2.4.14/include/asm-s390/system.h	Wed Jul 25 14:12:02 2001
+++ linux-2.4.14.read_barrier_depends/include/asm-s390/system.h	Tue Nov  6 16:15:50 2001
@@ -117,9 +117,11 @@
 # define SYNC_OTHER_CORES(x) eieio()
 #define mb() eieio()
 #define rmb() eieio()
+#define read_barrier_depends() do { } while(0)
 #define wmb() eieio()
 #define smp_mb() mb()
 #define smp_rmb() rmb()
+#define smp_read_barrier_depends() read_barrier_depends()
 #define smp_wmb() wmb()
 #define smp_mb__before_clear_bit() smp_mb()
 #define smp_mb__after_clear_bit() smp_mb()
diff -urN -X /home/mckenney/dontdiff linux-2.4.14/include/asm-s390x/system.h linux-2.4.14.read_barrier_depends/include/asm-s390x/system.h
--- linux-2.4.14/include/asm-s390x/system.h	Wed Jul 25 14:12:03 2001
+++ linux-2.4.14.read_barrier_depends/include/asm-s390x/system.h	Tue Nov  6 16:15:50 2001
@@ -130,9 +130,11 @@
 # define SYNC_OTHER_CORES(x) eieio()
 #define mb() eieio()
 #define rmb() eieio()
+#define read_barrier_depends() do { } while(0)
 #define wmb() eieio()
 #define smp_mb() mb()
 #define smp_rmb() rmb()
+#define smp_read_barrier_depends() read_barrier_depends()
 #define smp_wmb() wmb()
 #define smp_mb__before_clear_bit() smp_mb()
 #define smp_mb__after_clear_bit() smp_mb()
diff -urN -X /home/mckenney/dontdiff linux-2.4.14/include/asm-sh/system.h linux-2.4.14.read_barrier_depends/include/asm-sh/system.h
--- linux-2.4.14/include/asm-sh/system.h	Sat Sep  8 12:29:09 2001
+++ linux-2.4.14.read_barrier_depends/include/asm-sh/system.h	Tue Nov  6 16:15:50 2001
@@ -88,15 +88,18 @@
 #define mb() __asm__ __volatile__ ("": : :"memory")
 #define rmb() mb()
+#define read_barrier_depends() do { } while(0)
 #define wmb() __asm__ __volatile__ ("": : :"memory")
 
 #ifdef CONFIG_SMP
 #define smp_mb() mb()
 #define smp_rmb() rmb()
+#define smp_read_barrier_depends() read_barrier_depends()
 #define smp_wmb() wmb()
 #else
 #define smp_mb() barrier()
 #define smp_rmb() barrier()
+#define smp_read_barrier_depends() do { } while(0)
 #define smp_wmb() barrier()
 #endif
diff -urN -X /home/mckenney/dontdiff linux-2.4.14/include/asm-sparc/system.h linux-2.4.14.read_barrier_depends/include/asm-sparc/system.h
--- linux-2.4.14/include/asm-sparc/system.h	Tue Oct 30 15:08:11 2001
+++ linux-2.4.14.read_barrier_depends/include/asm-sparc/system.h	Tue Nov  6 16:15:50 2001
@@ -277,11 +277,13 @@
 /* XXX Change this if we ever use a PSO mode kernel. */
 #define mb() __asm__ __volatile__ ("" : : : "memory")
 #define rmb() mb()
+#define read_barrier_depends() do { } while(0)
 #define wmb() mb()
 #define set_mb(__var, __value) do { __var = __value; mb(); } while(0)
 #define set_wmb(__var, __value) set_mb(__var, __value)
 #define smp_mb() __asm__ __volatile__("":::"memory");
 #define smp_rmb() __asm__ __volatile__("":::"memory");
+#define smp_read_barrier_depends() do { } while(0)
 #define smp_wmb() __asm__ __volatile__("":::"memory");
 #define nop() __asm__ __volatile__ ("nop");
diff -urN -X /home/mckenney/dontdiff linux-2.4.14/include/asm-sparc64/system.h linux-2.4.14.read_barrier_depends/include/asm-sparc64/system.h
--- linux-2.4.14/include/asm-sparc64/system.h	Fri Sep  7 11:01:20 2001
+++ linux-2.4.14.read_barrier_depends/include/asm-sparc64/system.h	Tue Nov  6 16:15:50 2001
@@ -99,6 +99,7 @@
 #define mb() \
 	membar("#LoadLoad | #LoadStore | #StoreStore | #StoreLoad");
 #define rmb() membar("#LoadLoad")
+#define read_barrier_depends() do { } while(0)
 #define wmb() membar("#StoreStore")
 #define set_mb(__var, __value) \
 	do { __var = __value; membar("#StoreLoad | #StoreStore"); } while(0)
@@ -108,10 +109,12 @@
 #ifdef CONFIG_SMP
 #define smp_mb() mb()
 #define smp_rmb() rmb()
+#define smp_read_barrier_depends() read_barrier_depends()
 #define smp_wmb() wmb()
 #else
 #define smp_mb() __asm__ __volatile__("":::"memory");
 #define smp_rmb() __asm__ __volatile__("":::"memory");
+#define smp_read_barrier_depends() do { } while(0)
 #define smp_wmb() __asm__ __volatile__("":::"memory");
 #endif
From: Paul M. <Pau...@us...> - 2001-11-07 02:34:13
> Do we have a guide to linux kernel memory barrier interfaces?
> Is this something that should go to Rusty's locking howto or a new
> lock-free howto? Looking at the code, I can figure out the barrier
> characteristics only for one or two CPUs that I might be familiar with;
> that is not good enough to write common code. A definitive guide will
> be very helpful here.

I thought it already was in Rusty's locking howto -- he submitted a
patch to a howto as a result of discussions on wmb() and Alpha, IIRC.

					Thanx, Paul
From: Maneesh S. <ma...@in...> - 2001-11-02 06:19:03
On Thu, Nov 01, 2001 at 09:11:20PM -0800, Paul McKenney wrote:
> > > On Thu, Nov 01, 2001 at 05:30:19PM +0100, Andrea Arcangeli wrote:
> > > > don't want to risk to destabilize the kernel. The rmb() are needed in two
> > > > places: between the read of the max_fds/max_fdset, the read of the array
> > > > pointer, and the read of the contents of the array. thanks!
>
> Good point! The original patch was designed with wmbdd() in mind, and
> needs to be updated to allow for read_barrier_depends(), which was the
> name Linus preferred to rmbdd(). I put out an RFC patch for this, and
> saw no response, so will send it out again. Silence means assent? ;-)
>
> My guess is that the read-side code would be greatly simplified if the
> size of the array/bitflag were in the same memory block as the
> array/bitflag, so that there would only be the dependency on the
> pointer. Maneesh, thoughts?

Right now the size fields for the array and bitmap are not pointers;
are you saying that we should make them integer pointers and allocate
them with the array? Also, I could not see how doing that will ensure
ordering. Actually, putting rmb() on the read side is not very
complicated, as we have inline functions (fcheck() and fcheck_files()),
and putting rmb() in those two should do for most of the cases.

Maneesh

-- 
Maneesh Soni
IBM Linux Technology Center,
IBM India Software Lab, Bangalore.
Phone: +91-80-5262355 Extn. 3999
email: ma...@in...
http://lse.sourceforge.net/locking/rcupdate.html