|
From: Catherine M. <Cat...@jp...> - 2006-09-20 22:00:35
|
Hi, I just upgraded to valgrind 3.2.1 (running Fedora Core 5 on an AMD Opteron, version 6.1 of the Portland Group F90 compilers), and just got the following "impossible" error:

==27491== Warning: set address range perms: large range 106048304 (undefined)

valgrind: m_scheduler/scheduler.c:996 (vgPlain_scheduler): the 'impossible' happened.
valgrind: VG_(scheduler), phase 3: run_innerloop detected host state invariant failure

==27491==    at 0x380178D3: report_and_quit (m_libcassert.c:136)
==27491==    by 0x38017C36: vgPlain_assert_fail (m_libcassert.c:200)
==27491==    by 0x38037987: vgPlain_scheduler (scheduler.c:994)
==27491==    by 0x38051CC9: run_a_thread_NORETURN (syswrap-linux.c:87)

sched status: running_tid=1

Thread 1: status = VgTs_Runnable
==27491==    at 0x76D3A3: pgf90_subchk (in /data/L2TC/cmm/linux_code/stereo_cmm/bin/read_swath_stereo.debug.exe)
==27491==    by 0x532E49: swath_utility_init_data_ (/data/L2TC/cmm/linux_code/stereo_cmm/src_pgf90/swath_utility.f:202)
==27491==    by 0x406FF1: MAIN_ (/data/L2TC/cmm/linux_code/stereo_cmm/src_pgf90/read_swath_stereo.f:336)
==27491==    by 0x40325D: main (in /data/L2TC/cmm/linux_code/stereo_cmm/bin/read_swath_stereo.debug.exe)

I'm running a *large* Fortran code, so I don't know how much luck I'd have in trying to boil it down to a subset. The error line in question is simply an assignment of a single value to a large array in one step, i.e.:

    array(1:nsmp, 1:nline, 1:ncams, 1:nblocks) = x

Thanks for any help. I hope I can get past this problem. This is a large code that's misbehaving and the compiler isn't picking up any array bounds errors.

Catherine
|
From: Julian S. <js...@ac...> - 2006-09-20 22:14:11
|
> valgrind: m_scheduler/scheduler.c:996 (vgPlain_scheduler): the
> 'impossible' happened.
> valgrind: VG_(scheduler), phase 3: run_innerloop detected host state
> invariant failure
Sheesh. That's an extremely obscure error. I've never seen one of
those in the wild (outside of V development land). It indicates
a bug in Valgrind's amd64 floating point code handling.
Are you doing something strange with floating point rounding, or
other IEEE control word stuff (precision, exception handling) ?
What version of valgrind were you using prior to 3.2.1 ?
If you do find a simple way to reproduce it I would appreciate knowing.
> I'm running a *large* Fortran code, so I don't know how much luck I'd
> have
> in trying to boil it down to a subset. The error line in question is
> simply
> an assignation of a single value to a large array in one step, i.e.:
>
> array(1:nsmp, 1:nline, 1:ncams, 1:nblocks) = x
>
> Thanks for any help. I hope I can get past this problem. This is a
> large code that's
> misbehaving and the compiler isn't picking up any array bounds errors.
Ok, try this kludge. I don't know if it will help, but worth a try.
In coregrind/m_dispatch/dispatch-amd64-linux.S, find this (line 230):
run_innerloop_exit:
/* We're leaving. Check that nobody messed with
%mxcsr or %fpucw. We can't mess with %rax here as it
holds the tentative return value, but any other is OK. */
Change it by adding one line, to read:
run_innerloop_exit:
jmp run_innerloop_exit_REALLY
/* We're leaving. Check that nobody messed with
%mxcsr or %fpucw. We can't mess with %rax here as it
holds the tentative return value, but any other is OK. */
The new line probably needs to be indented by tab, not spaces.
Rebuild and try again. No guarantees this won't mess up something/everything
else to do with floating point ..
J
|
|
From: Catherine M. <Cat...@jp...> - 2006-09-20 22:31:00
|
Made the fix you suggested and reran. It got past that point but died soon after trying to open an HDF file. The only errors that valgrind saw were:

"Conditional jump or move depends on uninitialised value"
"Syscall param write(buf) points to uninitialised byte(s)"

This is making me wonder if maybe the problem isn't my code, but something to do with the interaction of the Portland Group compiler plus HDF. Right now I'd be very very happy to find a simple 'array-bounds' error lurking in my code, but neither Valgrind nor the compiler seem to be picking up anything like that.

I upgraded from Valgrind 3.2.0.

I'll see if I can boil down my code to reproduce that innerloop error. It happened very early on, before the code started to do anything interesting.

Catherine
|
From: Julian S. <js...@ac...> - 2006-09-20 22:40:40
|
> Made the fix you suggested and reran. It got past that point but
> died soon after trying to open an HDF file. The only
> errors that valgrind saw were:
> "Conditional jump or move depends on uninitialised value"

Using 3.2.1, that's probably a reliable report. If it isn't in your code, it might help a PGI or HDF (whatever that is) person track down issues in their code. The PGI folks are Valgrind-literate, so to speak.

> This is making me wonder if maybe the problem isn't my code,
> but something to do with the interaction of the Portland group
> compiler plus HDF. Right now I'd be very very happy to find a
> simple 'array-bounds' error lurking in my code,

>> array(1:nsmp, 1:nline, 1:ncams, 1:nblocks) = x

Maybe you can write this assignment out longhand, as four nested loops? Assuming that's what it is ..

> I upgraded from Valgrind 3.2.0.

In that case I have a nasty feeling that I broke something when fixing other amd64 FP problems in the 3.2.0 - 3.2.1 transition.

> I'll see if I can boil down my code to reproduce that innerloop error.

Thanks.

J
|
From: Tom H. <to...@co...> - 2006-09-20 22:53:31
|
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
> > valgrind: m_scheduler/scheduler.c:996 (vgPlain_scheduler): the
> > 'impossible' happened.
> > valgrind: VG_(scheduler), phase 3: run_innerloop detected host state
> > invariant failure
>
> Sheesh. That's an extremely obscure error. I've never seen one of
> those in the wild (outside of V development land). It indicates
> a bug in Valgrind's amd64 floating point code handling.
>
> Are you doing something strange with floating point rounding, or
> other IEEE control word stuff (precision, exception handling) ?
We've been seeing it on amd64 machines - see my postings on the
developer list a few months ago for details.
What I see is the FPU control word changing precision from 64 bit
to 80 bit if I recall correctly. I'm pretty sure it's a kernel
bug though as I did manage to reproduce spontaneous changes in
the FPU control word in a test program without valgrind involved.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Julian S. <js...@ac...> - 2006-09-20 23:21:49
|
> > Are you doing something strange with floating point rounding, or
> > other IEEE control word stuff (precision, exception handling) ?
>
> We've been seeing it on amd64 machines - see my postings on the
> developer list a few months ago for details.
>
> What I see is the FPU control word changing precision from 64 bit
> to 80 bit if I recall correctly. I'm pretty sure it's a kernel
> bug though

Ah, well remembered. Now you point it out that does sound vaguely familiar.

Catherine, what kernel version are you using? It would be interesting to know if it's in the same ballpark as the ones Tom saw this problem on.

J
|
From: Catherine M. <Cat...@jp...> - 2006-09-20 23:24:34
|
Here's my kernel version number:

cm...@si...:/data/L2TC/cmm/stereo_special_runs/larry [80]>uname -rv
2.6.16-1.2096_FC5 #1 SMP Wed Apr 19 05:14:26 EDT 2006

Catherine
|
From: Julian S. <js...@ac...> - 2006-09-20 23:29:27
|
Hmm. 2.6.16 isn't exactly what you'd call an ancient and buggy kernel.

But .. [thinks] isn't this a red herring? If 3.2.0 does not fail on your box when running your Fortran app but 3.2.1 does, then it has to be a regression in 3.2.1. That's what you're saying happened, right?

Hmm.

J
|
From: Catherine M. <Cat...@jp...> - 2006-09-20 23:32:19
|
No, I got the error with both 3.2.0 and 3.2.1. I originally tried 3.2.0 and that failed so I upgraded to 3.2.1. Same error.

Catherine
|
From: Julian S. <js...@ac...> - 2006-09-20 23:37:40
|
Ah, ok. Sorry for the confusion. So either:

- the kernel-bug explanation applies, or
- both 3.2.0 and 3.2.1 have the same, previously unknown bug

J
|
From: Tom H. <to...@co...> - 2006-09-20 23:35:43
|
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
> Hmm. 2.6.16 isn't exactly what you'd call an ancient and buggy kernel.
I've seen it on a range of recent 2.6 kernels.
> But .. [thinks] isn't this a red herring? If 3.2.0 does
> not fail on your box when running your Fortran app but 3.2.1 does,
> then it has to be a regression in 3.2.1. That's what you're
> saying happened, right?
We were definitely seeing it in 3.2.0 although it is relatively
rare - the longer valgrind is running the more likely it is to
trigger. We first saw it doing long callgrind runs, but I have
seen it from memcheck as well.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Tom H. <to...@co...> - 2006-09-21 15:40:40
|
In message <30c...@lo...>
Tom Hughes <to...@co...> wrote:
> In message <200...@ac...>
> Julian Seward <js...@ac...> wrote:
>
>> Hmm. 2.6.16 isn't exactly what you'd call an ancient and buggy kernel.
>
> I've seen it on a range of recent 2.6 kernels.
I've just reproduced it on a 2.6.17-1.2157_FC5 kernel.
>> But .. [thinks] isn't this a red herring? If 3.2.0 does
>> not fail on your box when running your Fortran app but 3.2.1 does,
>> then it has to be a regression in 3.2.1. That's what you're
>> saying happened, right?
>
> We were definitely seeing it in 3.2.0 although it is relatively
> rare - the longer valgrind is running the more likely it is to
> trigger. We first saw it doing long callgrind runs, but I have
> seen it from memcheck as well.
I can reproduce it outside valgrind - here is my test program:
#include <assert.h>
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    uint16_t cw = 0x27f;

    asm("fldcw %0" : : "m" (cw));

    while ( 1 )
    {
        asm("fstcw %0" : : "m" (cw));
        printf("cw = %x\n", cw);
        if (cw != 0x27f)
        {
            fprintf(stderr, "cw = 0x%x\n", cw);
            abort();
        }
        sched_yield();
    }

    exit( 0 );
}
I ran that and after a few hours (during which I tried to load the
machine up as much as possible) it failed with the control word having
changed from 0x27f to 0x37f.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Julian S. <js...@ac...> - 2006-09-21 21:14:00
|
> I can reproduce it outside valgrind - here is my test program:

Wow.

Have you filed a kernel bug report? Presumably it would be worth doing so.

J
|
From: Tom H. <to...@co...> - 2006-09-21 23:13:13
|
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
> > I can reproduce it outside valgrind - here is my test program:
>
> Wow.
>
> Have you filed a kernel bug report? Presumably it would be worth
> doing so.
So far I've only reproduced it on my workstation at work which has a
tainted kernel thanks to those lovely people at NVidia, so I can't
really report it yet.
I'm going to try and reproduce it on a machine without a tainted kernel
so that I can report it.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Julian S. <js...@ac...> - 2007-01-04 16:19:12
|
On Wednesday 20 September 2006 23:53, Tom Hughes wrote:
> We've been seeing it on amd64 machines - see my postings on the
> developer list a few months ago for details.

I just got struck by this one again, on openSUSE 10.2 on amd64. I remember you identified a kernel bug report associated with it, but now I can't find it. Any ideas? I recall that the kernel people said they fixed it.

This is with

Linux xxxxx 2.6.18.2-34-default #1 SMP Mon Nov 27 11:46:27 UTC 2006 x86_64 x86_64 x86_64 GNU/Linux

as supplied on an online-update'd openSUSE 10.2, untainted, AFAICS.

J
|
From: Tom H. <to...@co...> - 2007-01-04 16:26:01
|
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
> On Wednesday 20 September 2006 23:53, Tom Hughes wrote:
>> In message <200...@ac...>
>>
>> Julian Seward <js...@ac...> wrote:
>> > > valgrind: m_scheduler/scheduler.c:996 (vgPlain_scheduler): the
>> > > 'impossible' happened.
>> > > valgrind: VG_(scheduler), phase 3: run_innerloop detected host state
>> > > invariant failure
>> >
>> > Sheesh. That's an extremely obscure error. I've never seen one of
>> > those in the wild (outside of V development land). It indicates
>> > a bug in Valgrind's amd64 floating point code handling.
>> >
>> > Are you doing something strange with floating point rounding, or
>> > other IEEE control word stuff (precision, exception handling) ?
>>
>> We've been seeing it on amd64 machines - see my postings on the
>> developer list a few months ago for details.
>
> I just got struck by this one again, on openSUSE 10.2 on amd64.
> I remember you identified a kernel bug report associated with it,
> but now I can't find it. Any ideas? I recall that the kernel people
> said they fixed it.
The bug is http://bugzilla.kernel.org/show_bug.cgi?id=7223.
> This is with
>
> Linux xxxxx 2.6.18.2-34-default #1 SMP Mon Nov 27 11:46:27 UTC 2006 x86_64
> x86_64 x86_64 GNU/Linux
>
> as supplied on an online-update'd openSUSE 10.2, untainted, AFAICS.
It should be fixed in 2.6.19 but I'm still on 2.6.18 as RH haven't
issued any 2.6.19 updates yet so I haven't actually tested it.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Sergey V. <vs...@al...> - 2007-01-04 17:33:28
|
On Thu, 04 Jan 2007 16:25:46 +0000 Tom Hughes wrote:
> The bug is http://bugzilla.kernel.org/show_bug.cgi?id=7223.
>
> It should be fixed in 2.6.19 but I'm still on 2.6.18 as RH haven't
> issued any 2.6.19 updates yet so I haven't actually tested it.

The patch from that bugzilla entry is also in the 2.6.18.3 stable release, so newer 2.6.18 packages might already contain the fix.
|
From: Dirk M. <dm...@gm...> - 2007-01-07 21:10:48
|
On Thursday, 4. January 2007 17:29, Julian Seward wrote:
> I just got struck by this one again, on openSUSE 10.2 on amd64.
> I remember you identified a kernel bug report associated with it,
> but now I can't find it. Any ideas? I recall that the kernel people
> said they fixed it.

Are you sure this is still an issue with an updated kernel? This FPU state leak is supposed to be fixed by a kernel security update.

Dirk
|
From: Tom H. <to...@co...> - 2007-01-07 23:51:15
|
In message <200...@gm...>
Dirk Mueller <dm...@gm...> wrote:
> On Thursday, 4. January 2007 17:29, Julian Seward wrote:
>
> > I just got struck by this one again, on openSUSE 10.2 on amd64.
> > I remember you identified a kernel bug report associated with it,
> > but now I can't find it. Any ideas? I recall that the kernel people
> > said they fixed it.
>
> Are you sure this is still an issue with an updated kernel? this FPU state
> leak is supposed to be fixed by a kernel security update.
We know that - Julian's question was what kernel version fixed it...
The answer is 2.6.18.3 or 2.6.19.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Dirk M. <dm...@gm...> - 2007-01-08 12:28:11
|
On Monday, 8. January 2007 00:51, Tom Hughes wrote:
> > Are you sure this is still an issue with an updated kernel? This FPU
> > state leak is supposed to be fixed by a kernel security update.
>
> We know that - Julian's question was what kernel version fixed it...

Sure. But openSUSE 10.2's kernel version doesn't change - fixes are just backported to it. This was the actual question.

> The answer is 2.6.18.3 or 2.6.19.

Dirk
|
From: Tom H. <to...@co...> - 2007-01-08 12:38:21
|
In message <200...@gm...>
Dirk Mueller <dm...@gm...> wrote:
> On Monday, 8. January 2007 00:51, Tom Hughes wrote:
>
>> > Are you sure this is still an issue with an updated kernel? this FPU
>> > state leak is supposed to be fixed by a kernel security update.
>> We know that - Julian's question was what kernel version fixed it...
>
> Sure. But openSUSE 10.2's kernel version doesn't change - fixes are just
> backported to it. This was the actual question.
Ah well I know nothing about SuSE. So 10.2 will never have a newer
kernel then?
Presumably the release number of the package changes... Otherwise it
would be very confusing.
BTW I have updated to a 2.6.18 kernel package from Fedora that is
based on 2.6.18.6 now so I shouldn't see the problem anymore. We
shall have to see what happens.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|