From: Konrad H. <hi...@cn...> - 2000-10-16 15:52:52

> > I'd like to have at least the option of raising an exception in that
> > case. Note that this is not what NumPy does today.
>
> Does NumPy use the fpectl module? I suppose not. LLNL contributed that

No. Perhaps it should, but it doesn't make sense unless fpectl works on
a large number of platforms. I confess I have never looked at it. I want
my code to be portable, so I don't even consider using packages that
aren't.

> > For the same reason that makes 2/3 return zero instead of a float
> > division result. Like C or Fortran, Python treats integers, floats,
> > and complex numbers as different data types.
>
> You know I'm in general agreement with you on this one, but I have to
> draw a distinction here: Guido thinks that 2/3 returning 0 was a design
> mistake, but not that math.sqrt(-1) raising an exception is a mistake.
> Most Python users won't know what to do with a complex number, so it's
> "an error" to them.

Well, that would be a good occasion to learn about complex numbers! I
remember having learned about generalized inverses by using APL in high
school (in a very similar way: I was amazed that it could invert
non-square matrices), and that turned out to be very useful knowledge
later on. Perhaps that's a topic for the EDU-SIG...

Anyway, I don't care what math.sqrt(-1) does, but I would definitely
prefer Numeric.sqrt(-1) to return a complex result. And I think that
someone who uses NumPy has probably heard about complex numbers.

> I would like to view this in P3K (if not earlier) as being akin to 754
> exceptions: some people are delighted to have 1e300**2 return +Inf,
> while others want to see OverflowError instead, and still others want
> to see +Inf *and* have a sticky flag set saying "an overflow occurred".
> We could treat f(x) (for f == sqrt and everything else) the same way
> wrt a new ComplexResultFromNonComplexInputsError: define "non-stop"
> complex results, let the user choose whether to do nonstop or raise an
> exception, and supply a sticky flag saying whether or not any complex
> results were generated from non-complex inputs.

There is, however, one difference: in properly debugged production code,
there should be no overflow situations, so it doesn't matter much how
they are treated. Complex numbers can (do!) occur in production code,
but there could also be production code relying on exceptions for
sqrt(-1) (e.g. for input error checking). Therefore a program using
several libraries might be impossible to run with either setting. Since
this concerns only the math module, I'd prefer to keep separate module
versions for the two cases, which can be used in parallel.

> > The "right" solution, in my opinion, would be to have a single
> > "number" type of which integers, floats, and complex numbers are
> > simply different internal representations. But such a change cannot
> > be introduced without breaking a lot of code.
>
> The current distinction between ints and longs (unbounded ints) should
> also get swallowed by this. A way to get from here to there is
> especially irksome at the C API level, since, e.g., many C API
> functions pass Python ints as C longs directly. A smaller number pass
> Python floats as C doubles directly.

I don't see the problem in that direction; it's rather C API functions
that *return* numbers in C data types that would be difficult to adapt.
But then, why should the C API not be allowed to change in P3K?

it's-wonderful-that-anything-may-be-changed-in-p3k'ly
Konrad.
--
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------
From: Jon S. <js...@wm...> - 2000-10-16 15:45:54

Monday, 10/16/2000

Hello, all. We are making the first announcement of the first stable
release (1.0) of our package pyclimate, which presents some tools used
for climate variability analysis and which makes extensive use of
Numerical Python and C. It is released under the GNU General Public
License.

Changes from the previous release
---------------------------------
Version 1.0--October, 2000.

1) Improved tests. They are more accurate, reliable, informative,
   comprehensive and use less disk space.
2) The package compiles using distutils. This feature has been checked
   on FreeBSD, Linux and OSF platforms.
3) Some minor typos corrected in the documentation.
4) Added KPDF.c, an extension module to estimate univariate and
   multivariate kernel-based probability density functions.
5) Added a class to compute the vertical component of the curl of a
   vectorial field in diffoperators.py.
6) DCDFLIB.C is currently distributed with the package.
7) KZFilter.py has been converted into a general-purpose
   LinearFilter.py which holds the basic operations of any linear
   filter. There are currently two different subclasses, the Lanczos
   filter and the previous Kolmogorov-Zurbenko filter, KZFilter.py.
   The user can define new filters just by redefining the filter
   coefficients, subclassing LinearFilter.

The package also contains (from release 0.0):

IO functions
------------
- ASCII files (simple, but useful)
- ncstruct.py: netCDF structure copier. From a COARDS compliant netCDF
  file, this module creates a COARDS compliant file, copying the needed
  attributes, comments, and so on in one call.

Time handling routines
----------------------
* JDTime.py -> Some C/Python functions to convert from date to
  Scaliger's Julian Day and from Julian Day to date. We are not trying
  to replace mxDate, but addressing a different problem. In particular,
  this module contains a routine especially suited to handling monthly
  time steps for climatological use.
* JDTimeHandler.py -> Python module which parses the units attribute of
  the time variable in a COARDS file and which offsets and scales
  adequately the time values to read/save date fields.

Interface to DCDFLIB.C
----------------------
A C/Python interface to the free DCDFLIB.C library is provided. This
library allows direct and inverse computations of parameters for
several probability distribution functions like Chi^2, normal,
binomial, F, noncentral F, and many many more.

EOF analysis
------------
Empirical Orthogonal Function analysis based on the SVD decomposition
of the data matrix and related functions to test the
reliability/degeneracy of eigenvalues (truncation rules). Monte Carlo
test of the stability of eigenvectors to temporal subsampling.

SVD decomposition
-----------------
SVD decomposition of the correlation matrix of two datasets, functions
to compute the expansion coefficients, the squared cumulative
covariance fraction and the homogeneous and heterogeneous correlation
maps. Monte Carlo test of the stability of singular vectors to
temporal subsampling.

Multivariate digital filter
---------------------------
Multivariate digital filter (high and low pass) based on the
Kolmogorov-Zurbenko filter.

Differential operators on the sphere
------------------------------------
Some classes to compute differential operators (gradient and
divergence) on a regular latitude/longitude grid.

LICENSE
=======
GNU General Public License Version 2.

PREREQUISITES
=============
To be able to use it, you will need:
1. Python ;-)
2. netCDF library 3.4 or later
3. Scientific Python, by Konrad Hinsen

IF AND ONLY IF you really want to change the C code (JDTime.[hc] and
pycdf.[hc]), then you will also need SWIG.

COMPILATION
===========
Now we use distutils. The installation is simpler.

DOCUMENTATION
=============
Postscript and PDF versions of the manual are included in the
distribution. We are preparing an even better version of the
documentation.

AVAILABILITY
============
http://lcdx00.wm.lc.ehu.es/~jsaenz/pyclimate (Europe)
http://pyclimate.zubi.net/ (USA)
http://starship.python.net/crew/~jsaenz (USA)

Any feedback from the users of the package will be really appreciated
by the authors. Enjoy.

Jon Saenz, js...@wm...
Juan Zubillaga, wmp...@lg...
Jesus Fernandez, ch...@wm...

<P><A HREF="http://starship.python.net/crew/jsaenz/pyclimate">PyClimate
1.0</A> - Several routines for the analysis of climate variability.
(16-Oct-00)
From: Tim P. <ti...@em...> - 2000-10-15 20:36:45

[cb...@jp...]
>> I've come to Python from MATLAB for numerics, and I really appreciated
>> MATLAB's way of handling all this. I don't think MATLAB has true 754
>> ...

[Konrad Hinsen]
> MATLAB is fine for simple interactive data processing, and its
> behaviour is adapted to this task. I don't think anyone would use
> MATLAB for developing complex algorithms, its programming language
> isn't strong enough for that. Python is, so complex algorithms have to
> be considered as well. And for that kind of application, continuing a
> calculation with Infs and NaNs is a debugging nightmare.

A non-constructive (because futile) speculation: the first time I saw
what 754 was intending to do, my immediate reaction was "hmm -- the
exponent field is too narrow!". With a max (double) val in the ballpark
of 1e300, you get an infinity as soon as you square something as small
as 1e150, and once you get a few infinities, NaNs are sure to follow
(due to Inf-Inf and Inf/Inf). The dynamic range for 754 singles is much
smaller still.

There's no doubt about Cray arithmetic being hard to live with, but
while Mr. Cray didn't worry about proper rounding, he did devote 15
bits to Cray's exponent field (vs. 11 for 754). As a result, overflows
were generally a non-issue on Cray boxes, and *nobody* complained (in
my decade there) about Cray HW raising a fatal exception if one
occurred. In return, you got only 48 bits of precision (vs. 53 for
754). But, for most physical problems, how accurate are the inputs? 10
bits on a good day, 20 bits on a great day? Crays worked despite their
sloppy numerics because, for most problems to which they were applied,
they carried more than twice the precision *and* dynamic range than the
final results needed.

>> I have not yet heard a decent response to the question of what to do
>> when a single value in a large array is bad, and causes an exception.

I'd usually trace back and try to figure out how it got "bad" to begin
with ...

> I'd like to have at least the option of raising an exception in that
> case. Note that this is not what NumPy does today.

Does NumPy use the fpectl module? I suppose not. LLNL contributed that
code, but we hear very little feedback on it. It arranges to (in
platform-dependent ways) convert the 754 overflow, divide-by-0 and
invalid-operation signals into Python exceptions. In core Python, this
is accomplished at the extraordinary expense of doing a setjmp before,
and a function call + double->int conversion after, every single fp
operation. A chief problem is that SIGFPE is the only handle C gives us
on fp "errors", and the C std does not allow returning from a SIGFPE
handler (the result of trying to is undefined, and indeed varies wildly
across platforms); so if you want to regain control, you have to
longjmp out of the handler. The NumPy implementation could use the
PyFPE_START_PROTECT and PyFPE_END_PROTECT macros to bracket entire
array operations, though, and so pay for the setjmp etc only once per
array op. This is difficult stuff, but doable.

>> >> sqrt(-1)
>> ans =
>>    0 + 1.0000i
>>
>> Hey! I like that! Python is dynamic, why can't we just get the actual
>> answer to sqrt(-1), without using cmath, that always returns a complex?

> For the same reason that makes 2/3 return zero instead of a float
> division result. Like C or Fortran, Python treats integers, floats,
> and complex numbers as different data types.

You know I'm in general agreement with you on this one, but I have to
draw a distinction here: Guido thinks that 2/3 returning 0 was a design
mistake, but not that math.sqrt(-1) raising an exception is a mistake.
Most Python users won't know what to do with a complex number, so it's
"an error" to them.

I would like to view this in P3K (if not earlier) as being akin to 754
exceptions: some people are delighted to have 1e300**2 return +Inf,
while others want to see OverflowError instead, and still others want
to see +Inf *and* have a sticky flag set saying "an overflow occurred".
We could treat f(x) (for f == sqrt and everything else) the same way
wrt a new ComplexResultFromNonComplexInputsError: define "non-stop"
complex results, let the user choose whether to do nonstop or raise an
exception, and supply a sticky flag saying whether or not any complex
results were generated from non-complex inputs.

> ...
> The "right" solution, in my opinion, would be to have a single
> "number" type of which integers, floats, and complex numbers are
> simply different internal representations. But such a change cannot
> be introduced without breaking a lot of code.

The current distinction between ints and longs (unbounded ints) should
also get swallowed by this. A way to get from here to there is
especially irksome at the C API level, since, e.g., many C API
functions pass Python ints as C longs directly. A smaller number pass
Python floats as C doubles directly.

it's-wonderful-that-p3k-will-solve-everything<wink>-ly y'rs - tim
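Tim's "non-stop results plus sticky flag" scheme can be sketched in a
few lines of modern Python. The names here (`sqrt`, `nonstop`,
`complex_result_flag`) are purely illustrative; no such flags exist in
the math module or in Numeric:

```python
import cmath
import math

nonstop = True               # False => raise instead of going complex
complex_result_flag = False  # sticky: any complex-from-real result yet?

def sqrt(x):
    """sqrt with user-selectable behaviour for negative real input."""
    global complex_result_flag
    if isinstance(x, complex):
        return cmath.sqrt(x)
    if x >= 0:
        return math.sqrt(x)
    if not nonstop:
        raise ValueError("complex result from non-complex input")
    complex_result_flag = True   # record the event, but keep computing
    return cmath.sqrt(x)

print(sqrt(4.0))            # 2.0
print(sqrt(-1.0))           # 1j
print(complex_result_flag)  # True
```

A caller can inspect (and clear) the sticky flag once at the end of a
whole computation, which is the cheap alternative to stopping at the
first negative argument.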
From: Konrad H. <hi...@cn...> - 2000-10-15 19:04:34

> I've come to Python from MATLAB for numerics, and I really appreciated
> MATLAB's way of handling all this. I don't think MATLAB has true 754

MATLAB is fine for simple interactive data processing, and its
behaviour is adapted to this task. I don't think anyone would use
MATLAB for developing complex algorithms; its programming language
isn't strong enough for that. Python is, so complex algorithms have to
be considered as well. And for that kind of application, continuing a
calculation with Infs and NaNs is a debugging nightmare.

> I have not yet heard a decent response to the question of what to do
> when a single value in a large array is bad, and causes an exception.

I'd like to have at least the option of raising an exception in that
case. Note that this is not what NumPy does today.

> >> sqrt(-1)
> ans =
>    0 + 1.0000i
>
> Hey! I like that! Python is dynamic, why can't we just get the actual
> answer to sqrt(-1), without using cmath, that always returns a complex?

For the same reason that makes 2/3 return zero instead of a float
division result. Like C or Fortran, Python treats integers, floats, and
complex numbers as different data types. And the return type of a
function should depend only on the types, but not the values, of its
parameters (for consistency, not because of any implementational
limitation). So sqrt(a) for a of type float can either always return a
float (math, Numeric) and crash for negative arguments, or always
return a complex (cmath).

The "right" solution, in my opinion, would be to have a single "number"
type of which integers, floats, and complex numbers are simply
different internal representations. But such a change cannot be
introduced without breaking a lot of code.

Konrad.
--
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------
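Konrad's consistency rule -- the return type depends on the argument
types, never on their values -- is exactly what the math/cmath split
implements, as a quick check in modern Python shows:

```python
import cmath
import math

# math.sqrt stays in the float domain and refuses negative input:
try:
    math.sqrt(-1)
except ValueError as err:
    print("math.sqrt(-1) raises:", err)

# cmath.sqrt always returns a complex, even for a positive argument,
# so the result type depends only on which function was chosen:
print(cmath.sqrt(-1))   # 1j
print(cmath.sqrt(4))    # (2+0j)
```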
From: Nick B. <ni...@ni...> - 2000-10-15 05:10:50

> Does anyone know what other array-oriented languages use? I know what
> MATLAB does:

I'm an Interactive Data Language (IDL) user (the Univ of Wisconsin's
Space Science Center is split down the middle between this and MatLab,
but python/numpy is definitely on the increase). As you can see from
the results below, like MatLab, over/under-flows in IDL are reported
but do not stop execution. This is the best (only) possible scenario
for an interactive array and visualization environment.

IDL> print, exp(-777)
      0.00000
% Program caused arithmetic error: Floating underflow
IDL> print, exp(777)
          Inf
% Program caused arithmetic error: Floating overflow
IDL> print, sqrt(-1)
         -NaN
% Program caused arithmetic error: Floating illegal operand
IDL> print, 54/0
          54
% Program caused arithmetic error: Integer divide by 0
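For comparison, core Python splits the difference (at least in modern
CPython; in 2000 the behaviour varied by platform, which is what this
thread is about): underflow is silent, while overflow raises instead
of returning Inf:

```python
import math

# Underflow quietly flushes to zero, as in the IDL session above:
print(math.exp(-777))        # 0.0

# Overflow, by contrast, raises in core Python instead of returning Inf:
try:
    math.exp(777)
except OverflowError as err:
    print("OverflowError:", err)

# An infinity can still be created explicitly when one is wanted:
print(float("inf") > 1e308)  # True
```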
From: Janko H. <jh...@if...> - 2000-10-14 18:33:14

What is the difference between putmask and where? As it seems, the only
difference is the in-place behavior. This becomes more and more
complicated, as we have this subtle difference in many places (ravel
vs. flat) and in future also the augmented assignment stuff, which also
works for arrays now, although I do not know if it's really an in-place
assignment. What are the next functions for which a space-saving
variant is introduced? Would it be better to come to another convention
for this type of optimization?

As a side note, the order of arguments is different for putmask and
where.

__Janko

Paul F. Dubois writes:
> There is (in CVS) a new function, putmask:
>
>   c = greater(x, 0)
>   putmask(y, c, v)
>   putmask(z, c, u+2)
>
> The documentation is now online. Briefly:
> putmask(a, m, v) sets a to v where m is true.
>
> a must be a contiguous array
> m must be the same total size as a (shape ignored)
> v will be repeated as needed to that size
>
> The underlying work is done in C.
>
> -----Original Message-----
> From: num...@li...
> [mailto:num...@li...]On Behalf Of
> Daehyok Shin
> Sent: Friday, October 13, 2000 5:26 PM
> To: Numpy Discussion
> Subject: [Numpy-discussion] [Q]Best way for an array operation?
>
> What is the best Numpy way for the following work?
>
>   for i in range(len(x)):
>       if x[i] > 0:
>           y[i] = v[i]
>           z[i] = u[i]+2
>
> Daehyok Shin (Peter)
>
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> http://lists.sourceforge.net/mailman/listinfo/numpy-discussion
From: Paul F. D. <pau...@ho...> - 2000-10-14 16:01:36

There is (in CVS) a new function, putmask:

  c = greater(x, 0)
  putmask(y, c, v)
  putmask(z, c, u+2)

The documentation is now online. Briefly:
putmask(a, m, v) sets a to v where m is true.

a must be a contiguous array
m must be the same total size as a (shape ignored)
v will be repeated as needed to that size

The underlying work is done in C.

-----Original Message-----
From: num...@li...
[mailto:num...@li...]On Behalf Of Daehyok Shin
Sent: Friday, October 13, 2000 5:26 PM
To: Numpy Discussion
Subject: [Numpy-discussion] [Q]Best way for an array operation?

What is the best Numpy way for the following work?

  for i in range(len(x)):
      if x[i] > 0:
          y[i] = v[i]
          z[i] = u[i]+2

Daehyok Shin (Peter)
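For readers without the CVS version at hand, the semantics Paul
describes can be modelled in pure Python. This is a simplified sketch
working on lists; the real putmask operates on contiguous Numeric
arrays and does the work in C:

```python
def putmask(a, m, v):
    # Model of putmask(a, m, v): wherever m[i] is true, set a[i] from
    # v, repeating v as needed to a's size; a is modified in place.
    for i in range(len(a)):
        if m[i]:
            a[i] = v[i % len(v)]

x = [3, -1, 2, -5]
c = [xi > 0 for xi in x]      # the greater(x, 0) step
y = [0, 0, 0, 0]
putmask(y, c, [10, 20, 30, 40])
print(y)   # [10, 0, 30, 0]
```

Note the repetition rule: a one-element v such as [7] would be
broadcast to every selected position.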
From: Charles G W. <cg...@fn...> - 2000-10-14 02:44:46

Chris Barker writes:
> Hey! I like that! Python is dynamic, why can't we just get the actual
> answer to sqrt(-1), without using cmath, that always returns a complex?

You have to import the sqrt function from somewhere. Either you import
it from math or from cmath, depending on which flavor you need. Anybody
sophisticated enough to know what complex numbers are, and
sophisticated enough to want to get complex values as a result of a
calculation, should be sophisticated enough to be able to type "cmath".
From: Chris B. <cb...@jp...> - 2000-10-14 01:14:14

Incomplete vs. not-at-all IEEE 754 support is a non-argument. If you
have full IEEE support (which it seems everyone here thinks would be
good, but difficult), it is clear what you have. If not, you are not
consistent with a standard, and therefore making up your own. That
being the case, it's a matter of what we want the Python standard to
be. I, for one, think that NaN and Inf are fantastic!!! I think the
best argument for them here is that it is almost impossible to do a
lot of array-based calculations without them, and you can't do serious
number crunching in Python without array-based calculations.

I've come to Python from MATLAB for numerics, and I really appreciated
MATLAB's way of handling all this. I don't think MATLAB has true 754
support, as I don't think you can change the behaviour, but you do get
a consistent result across platforms (including non-IEEE machines like
the Cray? -- I have no idea).

I have not yet heard a decent response to the question of what to do
when a single value in a large array is bad, and causes an exception.
This really does render Python essentially useless for numerics for
me. I suspect all those number crunchers that want an exception rather
than an Inf are NOT using array-oriented languages. My goal is to dump
MATLAB for Python, but this may prevent me from doing that.

Does anyone know what other array-oriented languages use? I know what
MATLAB does:

>> exp(-777)
ans =
     0
>> exp(777)
ans =
   Inf
>> sqrt(-1)
ans =
        0 + 1.0000i

Hey! I like that! Python is dynamic, why can't we just get the actual
answer to sqrt(-1), without using cmath, which always returns a
complex? (Sorry, other subject, not meant to be raised at the moment.)

>> 54/0
Warning: Divide by zero.
ans =
   Inf

Here we get a warning, but also a result that won't crash the
computation.

>> a = 0/0
Warning: Divide by zero.
a =
   NaN
>> b = a
b =
   NaN
>> a == b
ans =
     0

So comparing two NaNs yields a false, as it should, and though Python
won't do it now, it could. One thing that a numerical routine should
NEVER do is give an incorrect answer. No answer is often bad, but an
incorrect one is much worse. NaN == NaN must return false.

Does anyone know what FORTRAN 90 specifies (if anything)? What other
array-oriented languages are there? I think what MATLAB does is what
Tim is calling "bad *and* incomplete 754 support". Incomplete is
surely true; "bad" is clearly a matter of opinion.

It seems we have a variety of users of numerics in Python, that I
break into three broad categories:

Casual users: mostly doing non-numeric stuff, with the occasional
calculation. This group could use any non-cryptic handling of
over/underflow; it just doesn't matter.

Mid-level number crunchers: This is the group I put myself in. We
crunch a lot of numbers, really like the array semantics (and speed)
of NumPy, and are doing things like graphing functions, statistical
calculations, etc. We have probably only taken one or two (at most)
numerical analysis courses, and many don't know what the heck IEEE 754
is. (The one numerical analysis course I did take happened to be with
Kahan, which is why I know what it is.) For this group, the incomplete
IEEE support is probably the best way to go. We're more likely to be
annoyed by our whole computation stopping because of one bad data
point than we are to be pissed off that it didn't stop when it started
to compute garbage.

Hard-core number crunchers: These are the folks that do things like
write optimised routines for particular architectures, and teach
numerical analysis courses. These folks want either FULL IEEE, or
plain old "classic" overflow and underflow errors that they can handle
themselves. Do these folks really use Python as other than a glue
language? Are they really doing massive number crunching in Python
itself, rather than in C (or whatever) extensions? If so, I'd be
surprised if they didn't find Huaiyu's argument compelling when doing
array-based calculations.

Tim pointed out that Python was not designed with 754 in mind, so for
2.0 what he is doing is probably our best bet, but it seems to not be
the best ultimate solution. I hope using NaN and Inf will be
considered for future versions, even if it is incomplete 754
compliance.

Note: if the ultimate goal is REAL IEEE 754 compliance, is it possible
without custom re-compiling your own interpreter? If so, is there any
chance that it will come any time soon (2.1?), so we don't have to
have this discussion at all?

Thanks for all of your work on this, Python just keeps getting
better!!

-Chris

--
Christopher Barker, Ph.D.
cb...@jp...
http://www.jps.net/cbarker
Water Resources Engineering
Coastal and Fluvial Hydrodynamics
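The NaN and Inf behaviour Chris demonstrates in MATLAB can be
reproduced with plain Python floats today (in the Python of 2000 this
was platform-dependent, which is the point of the thread):

```python
nan = float("nan")
inf = float("inf")

print(nan == nan)   # False -- a NaN never compares equal, even to itself
print(nan != nan)   # True
print(inf > 1e308)  # True
print(inf - inf)    # nan, the Inf-Inf case that breeds more NaNs
```

So "NaN == NaN must return false" is exactly what modern Python floats
deliver.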
From: Janko H. <jh...@if...> - 2000-10-13 23:14:14

You do not define what values are in z, y if x < 0, so I presume they
keep the current value.

>>> true = greater(x, 0.)
>>> z = where(true, u+2., z)
>>> y = where(true, v, y)

HTH,
__Janko

Daehyok Shin writes:
> What is the best Numpy way for the following work?
>
>   for i in range(len(x)):
>       if x[i] > 0:
>           y[i] = v[i]
>           z[i] = u[i]+2
>
> Daehyok Shin (Peter)
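For readers unfamiliar with Numeric's where(), the solution above can
be modelled in pure Python on lists. Unlike putmask, where() returns a
new sequence and leaves its inputs untouched, which is the difference
this thread turns on:

```python
def where(cond, a, b):
    # List-based model of Numeric's where(cond, a, b): a new sequence
    # with a[i] wherever cond[i] is true, else b[i].
    return [ai if ci else bi for ci, ai, bi in zip(cond, a, b)]

x = [3, -1, 2, -5]
u = [1, 2, 3, 4]
v = [9, 8, 7, 6]
y = [0, 0, 0, 0]
z = [0, 0, 0, 0]

true = [xi > 0 for xi in x]
z = where(true, [ui + 2 for ui in u], z)
y = where(true, v, y)
print(y)   # [9, 0, 7, 0]
print(z)   # [3, 0, 5, 0]
```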
From: Daehyok S. <sd...@em...> - 2000-10-13 21:24:18

What is the best Numpy way for the following work?

for i in range(len(x)):
    if x[i] > 0:
        y[i] = v[i]
        z[i] = u[i]+2

Daehyok Shin (Peter)
From: <tb...@pa...> - 2000-10-13 20:59:55

On Fri, 13 Oct 2000 num...@li... wrote:
> having Python you indeed have bad *and* incomplete 754 support on
> every 754 platform it runs on.

Well, it depends on what you mean by "bad *and* incomplete 754
support". IEEE recommends specific semantics for comparison operators
in language bindings, but I think their recommendations are undesirable
from a programming language and software engineering point of view. If
you follow IEEE semantics, you prohibit optimizers from optimizing in
many cases, and it contradicts type system semantics in some languages.
Worst of all, programs that rely on IEEE semantics (in particular, of
comparison operators) give no lexical indication that they do so; they
will simply do something wrong, like go into an infinite loop or
terminate a loop prematurely.

I think it's best to raise an exception for any expression involving a
NaN, unless the programmer explicitly indicated that he wants a NaN
result and IEEE semantics. Another good approach is to provide separate
IEEE operators and leave the behavior of the built-in operators on IEEE
special values explicitly undefined. This keeps people from relying on
one behavior or the other. To me, the worst possible choice is to
"implement IEEE semantics correctly", i.e., in the way the IEEE authors
envisioned it. (Of course, IEEE is reasonably nice from a numerical
point of view; I just think they overextended themselves when talking
about language bindings.)

> (*) Quiz: *if* you can manage to trick Python into creating a NaN on
> your particular 754 platform, does the Python expression NaN == 1.0
> return true, false, or raise an exception? Answer before you try it.
> Then try it on enough 754 platforms until you give up trying to guess
> in advance. NaN == NaN is predictable in Python, and is the one 754
> feature Python guarantees won't work correctly on any 754 platform
> (although I've heard that it loses this predictability when run using
> NumPy's flavor of comparisons instead of core Python's).

This is just the sort of issue I was referring to. In many dynamic
languages, the assumption is that pointer equality implies object
equality. That's a perfectly reasonable semantic choice. While Python
may not implement "754 correctly" in the sense of the (probably)
Fortran-thinking IEEE floating point designers, I think Python's choice
is correct and should not be altered.

People who really want IEEE floating point comparison operations can
use something like ieee.equal(a, b). That alerts the reader that
something special is going on, it will fail on non-IEEE platforms
(indicating that the code wouldn't work correctly there anyway), and it
makes only the code that needs strict IEEE conformance pay the price
for it.

Cheers, Thomas.
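Thomas's hypothetical ieee.equal can be sketched as a small helper. The
module and function names are his suggestion, not an existing library;
the isnan check is spelled out to make the special case visible, even
though Python's == already treats NaN operands as unequal:

```python
import math

def ieee_equal(a, b):
    """Explicit IEEE-style equality: a NaN compares unequal to
    everything, itself included."""
    if math.isnan(a) or math.isnan(b):
        return False
    return a == b

print(ieee_equal(1.0, 1.0))                    # True
print(ieee_equal(float("nan"), float("nan")))  # False
```

The point is lexical: a reader of ieee_equal(a, b) is warned that NaN
semantics matter here, which a bare a == b never conveys.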
From: <hi...@di...> - 2000-10-13 15:05:37

> This may or may not be related, but there are definitely problems with
> arrays of complex numbers sometimes being unnecessarily promoted to
> type "Object" - for example:

Interesting. I have been fighting against a strangely version-dependent
bug in which similar unwanted promotions to Object arrays occur when
ufuncs are called, and now I wonder if there is any relation. I have
tried a few times to track this down, but the code is so, well,
complicated that I never get far enough before I run out of time.

> And we also know that there are a lot of problems with arrays of type
> "Object". So it's possible that somehow your Complex array is getting
> promoted to "Object" and running against a problem there.

LinearAlgebra certainly doesn't (and won't) work for Object arrays; it
can handle only the data types supported by LAPACK.

Konrad.
--
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------
From: Jonathan M. G. <jon...@va...> - 2000-10-12 21:48:48

This is a bit off-topic, but I noticed that there are lots of experts
here, and I have a question about IEEE-754 that has been bugging me for
some time and which I was not able to get a good answer for on the
USENET newsgroups. I am hoping that one of the experts on this list
will take pity on my ignorance and answer what should be a simple
question.

Under Microsoft Visual C++, the Standard C++ library's
numeric_limits<float>::signaling_NaN() and
numeric_limits<double>::signaling_NaN() both return -INF values. The
numeric_limits<float>::quiet_NaN() and
numeric_limits<double>::quiet_NaN() both return -IND. I have not been
able to determine whether this is "proper" behavior or whether it is
another example of Microsoft ignoring standards. I would have thought
that one of the two should return NaN. I certainly can't for the life
of me figure out why someone would call INF a NaN, but as I have said,
I'm pretty ignorant.

Right now, if I want to use NaNs in my C++ code (e.g., to initialize
newly allocated memory blocks), I build NaNs thus:

> static long iNaN[2] =
> {
>     0xFFFFFFFF, 0xFFFFFFFF
> };
>
> static double dNaN(
>     *reinterpret_cast<double *>(iNaN));

but would prefer something a little less bit-tweaky.

Microsoft's documentation that this is the way they intend their
library to work may be found at:
http://msdn.microsoft.com/library/devprods/vs6/visualc/vclang/sample_Members_of_the_numeric_limits_Class_(STL_Sample).htm#_sample_stl_numeric_limits_class

Thanks,
Jonathan

=============================================================================
Jonathan M. Gilligan                            jon...@va...
The Robert T. Lagemann Assistant Professor      Office: 615 343-6252
of Living State Physics                         Lab (FEL) 343-7580
Dept. of Physics and Astronomy, Box 1807-B      Fax: 343-7263
6823 Stevenson Center                           Dep't Office: 322-2828
Vanderbilt University, Nashville, TN 37235
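The same bit-pattern trick can be written a little less tweakily in
Python with the struct module, assuming IEEE-754 doubles. An all-ones
pattern (sign bit set, exponent all ones, non-zero mantissa) encodes a
quiet NaN, which is what Jonathan's C++ cast builds:

```python
import math
import struct

# Reinterpret eight 0xFF bytes as a little-endian IEEE-754 double:
bits = b"\xff" * 8
(nan_from_bits,) = struct.unpack("<d", bits)
print(math.isnan(nan_from_bits))   # True

# The float('nan') literal produces a quiet NaN without bit-twiddling:
print(math.isnan(float("nan")))    # True
```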
From: Tim P. <ti...@em...> - 2000-10-12 20:33:56
|
[Steven D. Majewski]
> ...
> I also mostly agree with Tim, except that I'm not sure that bad or
> incomplete ieee support is always better than none at all.

This is true, because having Python is better than not having Python, and in
having Python you indeed have bad *and* incomplete 754 support on every 754
platform it runs on. Even better, it's a subset of damaged 754 support
unique to each platform whose details can't be guessed or documented in
advance of trying it exhaustively to see what happens(*), and a subset that
can change delightfully across releases, compiler upgrades, library upgrades
and seemingly irrelevant minor config changes. So if bad or incomplete IEEE
support is good, Python is-- here as elsewhere --the King of Languages
<wink>.

Every step of this dance is thoroughly predictable. In this case, I'm doing
my darnedest to nudge Python its very first step towards *real* 754 support,
and getting dumped on for it by a 754 fan. Simultaneously, the "old guard"
defends their lifestyle against new-fangled ideas <wink>, asking for
protections unaware that 754 *requires* they get a better form of the
protections they seek than they've dreamed of so far. It appears that nobody
on either side has actually read the std, and I've become the very 754 Storm
Trooper I used to despise. Computer life is great <wink>.

all-it's-missing-is-variety-ly y'rs  - tim

(*) Quiz: *if* you can manage to trick Python into creating a NaN on your
particular 754 platform, does the Python expression NaN == 1.0 return true,
false, or raise an exception? Answer before you try it. Then try it on
enough 754 platforms until you give up trying to guess in advance. NaN ==
NaN is predictable in Python, and is the one 754 feature Python guarantees
won't work correctly on any 754 platform (although I've heard that it loses
this predictability when run using NumPy's flavor of comparisons instead of
core Python's).
|
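For the curious reader: on a present-day CPython with a 754 libm, Tim's quiz
has a definite answer. A quick check (float("nan") is a modern spelling that
did not exist when this message was written):

```python
nan = float("nan")

# All ordered comparisons involving a NaN are false under IEEE-754 rules,
# and CPython now follows them: no exception is raised.
print(nan == 1.0)   # False
print(nan < 1.0)    # False
print(nan > 1.0)    # False

# NaN is not even equal to itself -- the behavior Tim's footnote says
# old Python could not guarantee on any platform.
print(nan == nan)   # False
```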
From: Steven D. M. <sd...@vi...> - 2000-10-12 18:25:58
|
On Thu, 12 Oct 2000, hi...@di... wrote:

> > The idea that your calculation should blow up and you should
> > check it and resubmit your job sounds just so ancient-20th-century-
> > Fortran-JCL-and-punched-cards-technology!
>
> Long-running jobs are still with us, even if there's neither Fortran nor
> JCL in them. And for these applications, stopping is better than going
> on with nonsense values. On the other hand, as you point out, exceptions
> for math errors are a bit of a pain for interactive work.
>
> So how about making this a run-time option? I'd choose exceptions by
> default and Infs and NaNs by specifying a command-line option, but
> there are certainly others who would prefer it the other way round.
> What matters most to me is that the choice is possible somehow.

I agree entirely! Maybe I was being a bit too glib, but I didn't mean to
imply that wanting it to halt or throw an exception on errors is
wrongheaded. I just wanted to make sure the counter-case to what Paul was
saying also got heard: Yes-- underflows or infinities where they aren't
expected are usually a sign that something is very wrong somewhere. But in
the case where the vector holds independent observations or data points,
then usually what it means is that there's something wrong with *that* data
point -- miscalibrated or mislabeled -- but no reason not to complete the
calculations for all of the other points. Scaling or doing projections for
interactive graphics is another case where bad points are often better than
throwing an exception. (And it's a pain to have to remember to lambda-wrap
all the function calls with some sort of guard when you'd be happy to get
NaNs.)

I also mostly agree with Tim, except that I'm not sure that bad or
incomplete ieee support is always better than none at all.

---|  Steven D. Majewski   (804-982-0831)  <sd...@Vi...>     |---
---|  Department of Molecular Physiology and Biological Physics  |---
---|  University of Virginia Health Sciences Center              |---
---|  P.O. Box 10011 Charlottesville, VA 22906-0011              |---

"All operating systems want to be unix,
 All programming languages want to be lisp."
|
From: Charles G W. <cg...@fn...> - 2000-10-12 18:25:03
|
Aureli Soria Frisch writes:
> I have been working with the function (implementing the Moore-Penrose
> generalized inverse) : of the module LinearAlgebra. It seems to
> present a bug when operating on matrices of complex numbers.

This may or may not be related, but there are definitely problems with
arrays of complex numbers sometimes being unnecessarily promoted to type
"Object" - for example:

Python 2.0b1 (#2, Sep  8 2000, 12:10:17)
[GCC 2.95.2 19991024 (release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> from Numeric import *
>>> a = array([1,2,3], Complex)
>>> a
array([ 1.+0.j,  2.+0.j,  3.+0.j])
>>> a % 2.0
array([(1+0j) , 0j , (1+0j) ],'O')

And we also know that there are a lot of problems with arrays of type
"Object". So it's possible that somehow your Complex array is getting
promoted to "Object" and running against a problem there.

Just a guess,

-cgw
|
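A rough modern rendering of the same check in NumPy, Numeric's successor
(the promotion behavior shown is an assumption about today's library, not
about 2000-era Numeric; complex `%` itself is simply rejected in modern
NumPy rather than promoted to object):

```python
import numpy as np

a = np.array([1, 2, 3], dtype=complex)

# Supported operations keep the complex dtype instead of silently
# promoting to dtype=object the way Numeric's '%' did above.
b = a + 2.0
print(b.dtype)  # complex128

# A cheap guard against silent promotion: fail fast on object arrays
# before handing data to a LAPACK-backed routine.
assert b.dtype != np.dtype(object)
```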
From: Harrison, R. J <Rob...@pn...> - 2000-10-12 17:39:33
|
I watch but rarely contribute to these discussions, but I feel compelled to
wholeheartedly support Tim Peters' comments regarding full 754 support with
consistent cross-platform behaviour. Like Tim, I've been doing numerical
computing for over two decades, and IEEE is an indispensable standard. Yes,
we usually disable most exception handling within performance-critical
kernels, but within robust solvers or modern linear algebra packages, full
IEEE exception handling is vital. As Tim has said, full 754 will satisfy all
users to the maximum extent possible.

Robert

-----Original Message-----
From: Tim Peters [mailto:ti...@em...]
Sent: Wednesday, October 11, 2000 7:44 PM
To: Huaiyu Zhu
Cc: pyt...@py...; PythonDev; Numpy-Discussion; du...@us...
Subject: [Numpy-discussion] RE: Possible bug (was Re: numpy, overflow, inf,
ieee, and rich comparison)

[Huaiyu Zhu]
> On the issue of whether Python should ignore over/underflow on
> IEEE-enabled platforms:
>
> It can be argued that on IEEE enabled systems the proper thing to do for
> overflow is simply return Inf. Raising exception is WRONG. See below.

Python was not designed with IEEE-754 in mind, and *anything* that happens
wrt Infs, NaNs, and all other 754 features in Python is purely an accident
that can and does vary wildly across platforms. And, as you've discovered in
this case, can vary wildly also across even a single library, depending on
config options. We cannot consider this to be a bug since Python has had no
*intended* behavior whatsoever wrt 754 gimmicks. We can and do consider
gripes about these accidents to be feature requests.

[Guido]
> Incidentally, math.exp(800) returns inf in 1.5, and raises
> OverflowError in 2.0. So at least it's consistent.

[Huaiyu]
> That is not consistent at all.

Please read with an assumption of good faith. Guido was pointing out that--
all in the specific case of gcc+glibc on Linux (these don't hold on other
platforms) --exp(800) returning Inf in 1.5 and OverflowError in 2.0 is
consistent *with* that exp(-800) returns 0 in 1.5 and raises an exception in
2.0. He's right; indeed, he is in part agreeing with you.

[Guido]
> 1.5.2 links with -lieee while 2.0 doesn't. Removing -lieee from the
> 1.5.2 link line makes it raise OverflowError too. Adding it to the
> 2.0 link line makes it return 0.0 for exp(-1000) and inf for
> exp(1000).

[Huaiyu]
> If using ieee is as simple as setting such a flag, there is no
> reason at all not to use it.

The patch to stop setting -lieee was contributed by a Python user who
claimed it fixed bugs on *their* platform. That's "the reason". We don't
want to screw them either.

> Here are some more examples:
> ...

I understand that 754 semantics can be invaluable. So does Guido. There's no
argument about that. But Python doesn't yet support them, and wishing it did
doesn't make it so. If linking with -lieee happens to give you the semantics
you want on your platform today, you're welcome to build with that switch.
It appears to be a bad choice for *most* Python users, though (more below),
so we don't want to do it in the std 2.0 distro.

> ...
> In practice this simply means Python would not be suitable for numerical
> work at all.

Your biggest obstacle in getting Python support for 754 will in fact be
opposition from number-crunchers. I've been slinging fp for 21 years, 15 of
those writing compilers and math libraries for "supercomputer" vendors. 754
is a Very Good Thing, but is Evil in subset form (see Kahan's (justified!)
vilification of Java on this point); ironically, 754 is hardest to sell to
those who could benefit from it the most.

> What about the other way round? No problem. It is easy to write
> functions like isNaN, isInf, etc.

It's easy for a platform expert to write such functions for their specific
platform of expertise, but it's impossible to write them in a portable way
before C99 is implemented (C99 requires that library suppliers provide them,
rendering the question moot).

> ...
> The language should never take over or subvert decisions based on
> numerical analysis.

Which is why a subset of 754 is evil. 754 requires that the user be able to
*choose* whether or not, e.g., overflow signals an exception. Your crusade
to insist that it never raise an exception is at least as bad (I think it's
much worse) as Python's most common accidental behavior (where overflow from
libm usually raises an exception). One size does not fit all.

[Tim]
> Ignoring ERANGE entirely is not at all the same behavior as 1.5.2, and
> current code certainly relies on detecting overflows in math functions.

[Huaiyu]
> As Guido observed ERANGE is not generated with ieee, even for
> overflow. So it is the same behavior as 1.5.2.

You've got a bit of a case of tunnel vision here, Huaiyu. Yes, in the
specific case of gcc+glibc+Linux, ignoring ERANGE returned from exp() is
what 1.5.2 acted like. But you have not proposed to ignore it merely from
ERANGE returned from exp() in the specific case of gcc+glibc+Linux. This
code runs on many dozens of platforms, and, e.g., as I suggested before, it
looks like HPUX has always set errno on both overflow and underflow. MS
Windows sets it on overflow but not underflow. Etc. We have to worry about
the effects on all platforms.

> Besides, no correct numerical code should depend on exceptions like
> this unless the machine is incapable of handling Inf and NaN.

Nonsense. 754 provides the option to raise an exception on overflow, or not,
at the user's discretion, precisely because exceptions are sometimes more
appropriate than Infs or NaNs. Kahan himself isn't happy with Infs and NaNs
because they're too often too gross a clue (see his writings on
"presubstitution" for a more useful approach).

[Tim]
> In no case can you expect to see overflow ignored in 2.0.

[Huaiyu]
> You are proposing a dramatic change from the behavior of 1.5.2.
> This looks to me like it needs a PEP and a big debate. It would break
> a LOT of numerical computations.

I personally doubt that, but realize it may be true. That's why we have beta
releases. So far yours is the only feedback we've heard (thanks!), and as a
result we're going to change 2.0 to stop griping about underflow, and do so
in a way that will actually work across all platforms. We're probably going
to break some HPUX programs as a result; but they were relying on accidents
too.

[Guido]
> No, the configure patch is right. Tim will check in a change that
> treats ERANGE with a return value of 0.0 as underflow (returning 0.0,
> not raising OverflowError).

[Huaiyu]
> What is the reason to do this? It looks like intentionally subverting
> ieee even when it is available. I thought Tim meant that only logistical
> problems prevent using ieee.

Python does not support 754 today. Period. I would like it to, but not in
any half-assed, still platform-dependent, still random crap-shoot, still
random subset of 754 features, still rigidly inflexible, way. When it does
support it, Guido & I will argue that it should enable (what 754 calls) the
overflow, divide-by-0 and invalid operation exceptions by default, and
disable the underflow and inexact exceptions by default. This ensures that,
under the default, an infinity or NaN can never be created from
non-exceptional inputs without triggering an exception. Not only is that
best for newbies, you'll find it's the *only* scheme for a default that can
be sold to working number-crunchers (been there, done that, at Kendall
Square Research). It also matches Guido's original, pre-754, *intent* for
how Python numerics should work by default (it is, e.g., no accident that
Python has always had an OverflowError exception but never an UnderflowError
one).

And that corresponds to the change Guido says <wink> I'm going to make in
mathmodule.c: suppress complaints about underflow, but let complaints about
overflow go through. This is not a solution, it's a step on a path towards a
solution. The next step (which will not happen for 2.0!) is to provide an
explicit way to, from Python, disable overflow checking, and that's simply
part of providing the control and inquiry functions mandated by 754. Then
code that would rather deal with Infs than exceptions can, at its explicit
discretion.

> If you do choose this route, please please please ignore ERANGE entirely,
> whether return value is zero or not.

It should be clear that isn't going to happen. I realize that triggering
overflow is going to piss you off, but you have to realize that not
triggering overflow is going to piss more people off, and *especially* your
fellow number-crunchers. Short of serious 754 support, picking "a winner" is
the best we can do for now. You have the -lieee flag on your platform du
jour if you can't bear it.

[Paul Dubois]
> I don't have time to follow in detail this thread about changed behavior
> between versions.

What makes you think we do <0.5 wink>?

> These observations are based on working with hundreds of code authors. I
> offer them as is.

FWIW, they exactly match my observations from 15 years on the HW vendor
side.

> a. Nobody runs a serious numeric calculation without setting underflow-to-
> zero, in the hardware. You can't even afford the cost of software checks.
> Unfortunately there is no portable way to do that that I know of.

C allows libm implementations a lot of discretion in whether to set errno to
ERANGE in case of underflow. The change we're going to make is to ignore
ERANGE in case of underflow, ensuring that math.* functions will *stop*
raising underflow exceptions on all platforms (they'll return a zero
instead; whether +0 or -0 will remain a platform-dependent crap shoot for
now). Nothing here addresses underflow exceptions that may be raised by fp
hardware, though; this has solely to do with the platform libm's treatment
of errno. So in this respect, we're removing Python's current
unpredictability, and in the way you favor.

> b. Some people use Inf but most people want the code to STOP so they can
> find out where the INFS started. Otherwise, two hours later you have big
> arrays of Infs and no idea how it happened.

Apparently Python on gcc+glibc+Linux never raised OverflowError from math.*
functions in 1.5.2 (although it did on most other platforms I know about). A
patch checked in to fix some other complaint on some other Unixish platform
had the side-effect of making Python on gcc+glibc+Linux start raising
OverflowError from math.* functions in 2.0, but in both overflow and
underflow cases. We intend to (as above) suppress the underflow exceptions,
but let the overflow cases continue to raise OverflowError. Huaiyu's
original complaint was about the underflow cases, but (as such things often
do) expanded beyond that when it became clear he would get what he asked for
<wink>. Again we're removing Python's current unpredictability, and again in
the way you favor -- although this one is still at the mercy of whether the
platform libm sets errno correctly on overflow (but it seems that most do
these days).

> Likewise sqrt(-1.) needs to stop, not put a zero and keep going.

Nobody has proposed changing anything about libm domain (as opposed to
range) errors (although Huaiyu probably should if he's serious about his
flavor of 754 subsetism -- I have no idea what gcc+glibc+Linux did here on
1.5.2, but it *should* have silently returned a NaN (not a zero) without
setting errno if it was *self*-consistent -- anyone care to check that under
-lieee?:

    import math
    math.sqrt(-1)

NaN or ValueError? 2.0 should raise ValueError regardless of what 1.5.2 did
here.).

just-another-day-of-universal-python-harmony-ly y'rs - tim

_______________________________________________
Numpy-discussion mailing list
Num...@li...
http://lists.sourceforge.net/mailman/listinfo/numpy-discussion
|
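Many of the behaviors argued over in this thread later settled into today's
defaults. A small sketch of how they look on a modern CPython (results
assume a 754 platform with the now-standard math-module behavior):

```python
import cmath
import math

# "It is easy to write functions like isNaN, isInf" -- portably so, even
# pre-C99: NaN is the only value unequal to itself, and the infinities
# compare equal to float("inf") / float("-inf").
def is_nan(x):
    return x != x

def is_inf(x):
    return x == float("inf") or x == float("-inf")

# Overflow in math.* raises OverflowError (the default Tim argues for)...
try:
    math.exp(800)
except OverflowError:
    print("overflow raised")

# ...while underflow is silently flushed to zero:
print(math.exp(-800))    # 0.0

# Domain errors raise ValueError, as Tim predicted 2.0 should;
# cmath gives the complex result for those who want it.
try:
    math.sqrt(-1)
except ValueError:
    print("domain error raised")
print(cmath.sqrt(-1))    # 1j
```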
From: Konrad H. <hi...@cn...> - 2000-10-12 17:26:21
|
> I have been working with the function (implementing the Moore-Penrose
> generalized inverse):
>
>     generalized_inverse
>
> of the module LinearAlgebra. It seems to present a bug when operating on
> matrices of complex numbers.

That is quite possible. I use this function a lot, but always on real
matrices. I am not sure I even tested it on complex matrices when I wrote it
ages ago. If you have a fix, it is certainly appreciated.

Konrad.
|
From: <hi...@di...> - 2000-10-12 17:24:15
|
> The idea that your calculation should blow up and you should
> check it and resubmit your job sounds just so ancient-20th-century-
> Fortran-JCL-and-punched-cards-technology!

Long-running jobs are still with us, even if there's neither Fortran nor JCL
in them. And for these applications, stopping is better than going on with
nonsense values. On the other hand, as you point out, exceptions for math
errors are a bit of a pain for interactive work.

So how about making this a run-time option? I'd choose exceptions by default
and Infs and NaNs by specifying a command-line option, but there are
certainly others who would prefer it the other way round. What matters most
to me is that the choice is possible somehow.

Konrad.
|
From: Nick B. <ni...@ni...> - 2000-10-12 15:31:27
|
> So, in about 1 in ten runs (an hour each), the code
> would crash for no obvious reason. It was a debugging
> nightmare. If python catches underflows, I'm going
> back to FORTRAN.

hear hear. or is that here here ;) but it is a scary concept isn't it?

i've pieced together an interactive python visualization environment (a
clone of RSI's IDL in fact - nickbower.com/computer/pydl) and although i
didn't write the plot engine, surely there's no way to "make the algorithm
better" if you have no idea if the user will try to graph asymptotic curves
for example. it's just not realistic to expect the compiler to bomb out and
force the user to tweak the calculation limits.

> On less crucial topics, I'm strongly in favor of
> preserving NaN and Inf. If I want my code to crash
> when some computation goes awry, I'll use assert,
> and crash it myself.

i'm in favour of this too. i think favouring people who don't want to come
back after hours of code execution to find NaN and Inf arrays may shaft a
great deal of people (be it a minority or not).

nick
|
From: Paul F. D. <pau...@ho...> - 2000-10-12 15:20:50
|
> If I may ask, what kind of platforms are there where people do math
> where the hardware *won't* support IEEE-754?

The problem isn't that the hardware doesn't support it, although there used
to be some important machines that didn't. (Burton Smith once told me that
adding this to a supercomputer architecture slows down the cycle time by a
very considerable amount; he guessed at least 20% if I recall correctly.)
The problem is that access to control the hardware has no standard. Usually
you have to get out the Fortran manual and look in the back if you're lucky.
I've had computers where I couldn't find this information at all. Probably
things have improved in the last few years, but this situation is still not
great. Worse, it needs to be a runtime control, not a compile option.

As everyone who has tried it found out, actually using the Infs and NaNs in
any portable way is pretty difficult. I prefer algorithmic vigor to prevent
their appearance. While my opinion is a minority one, I wish the "standard"
had never been born and I had my cycles back. It isn't really a standard, is
it?
|
From: Greg K. <gp...@be...> - 2000-10-12 15:02:03
|
Python has *got* to ignore underflow exceptions. Otherwise, virtually any
numeric calculation that subtracts numbers will fail for no good reason.
Worse, it will fail in a very sporadic, data-dependent manner. With double
precision numbers, the failures will be rare, but anyone doing computations
with floats will see unwanted exceptions with noticeable frequency.

I once did a numeric calculation (an SVD classifier) on a system with
IEEE-754 underflow exceptions turned on, and I lost a good square inch of
hair and a solid week of life because of it. At one point, the numbers were
converted from doubles to floats, and about 1 in every 10,000,000 of the
numbers were too small to represent as a float. So, in about 1 in ten runs
(an hour each), the code would crash for no obvious reason. It was a
debugging nightmare. If python catches underflows, I'm going back to
FORTRAN.

On less crucial topics, I'm strongly in favor of preserving NaN and Inf. If
I want my code to crash when some computation goes awry, I'll use assert,
and crash it myself. Any serious numeric code should be loaded with
assertions (any code at all, in fact). Asking the interpreter to do it for
you is a crude and blunt instrument.

If I may ask, what kind of platforms are there where people do math where
the hardware *won't* support IEEE-754?
|
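The double-to-float hazard described above is easy to reproduce today; a
small sketch using modern NumPy (the thresholds shown are standard IEEE-754
single-precision limits, not anything specific to the system in the story):

```python
import numpy as np

x = 1e-300                  # comfortably representable as a 64-bit double
y = np.float32(x)           # far below float32's smallest subnormal

print(float(y))             # 0.0 -- the conversion silently underflows
print(np.finfo(np.float32).tiny)  # ~1.1754944e-38, smallest normal float32
```

With hardware underflow traps enabled, that conversion is exactly the kind
of rare, data-dependent event that crashed one run in ten.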
From: Aureli S. F. <Aur...@ip...> - 2000-10-12 12:30:05
|
Hi,

I have been working with the function (implementing the Moore-Penrose
generalized inverse):

    generalized_inverse

of the module LinearAlgebra. It seems to present a bug when operating on
matrices of complex numbers. I want someone to confirm this point before
submitting the source.

Thanks in advance,

Aureli

#################################
Aureli Soria Frisch
Fraunhofer IPK
Dept. Pattern Recognition

post: Pascalstr. 8-9, 10587 Berlin, Germany
e-mail: au...@ip...
fon: +49 30 39 00 61 50
fax: +49 30 39 17 517
#################################
|
From: Thomas W. <th...@xs...> - 2000-10-12 08:13:18
|
On Wed, Oct 11, 2000 at 10:44:20PM -0400, Tim Peters wrote:

> > Likewise sqrt(-1.) needs to stop, not put a zero and keep going.
>
> Nobody has proposed changing anything about libm domain (as opposed to
> range) errors (although Huaiyu probably should if he's serious about his
> flavor of 754 subsetism -- I have no idea what gcc+glibc+Linux did here on
> 1.5.2, but it *should* have silently returned a NaN (not a zero) without
> setting errno if it was *self*-consistent -- anyone care to check that
> under -lieee?:
>
>     import math
>     math.sqrt(-1)

>>> import math
>>> math.sqrt(-1)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
OverflowError: math range error

The same under both 1.5.2 and 2.0c1 with -lieee. Without -lieee, both do:

>>> import math
>>> math.sqrt(-1)
Traceback (innermost last):
  File "<stdin>", line 1, in ?
ValueError: math domain error

Consistency-conschmistency-ly y'rs,
--
Thomas Wouters <th...@xs...>

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!
|