From: Janko H. <jh...@if...> - 2000-10-12 07:37:53
To look for another way out of this: the current problem is the two different wishes for vectorized computations, to have NaN/Inf or exceptions. As the actual computations for NumPy are all done in C, isn't it possible to implement a special signal handling in the NumPy code? As an option, for the people who know that they are then possibly in trouble (different behavior for arrays than for Python scalars).

__Janko

Just for the record:

Python 1.5.2 (#1, May 10 1999, 18:46:39)  [GCC egcs-2.91.60 Debian 2.1 (egcs-1.1. on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
(IPP) Type ? for more help
>>> import math
>>> math.sqrt(-1)
Traceback (innermost last):
  File "<console>", line 1, in ?
OverflowError: math range error
>>> math.exp(-800)
0.0
>>> math.exp(800)
inf
>>> import Numeric
>>> Numeric.exp(Numeric.array([800, -800.]))
array([             inf,  0.00000000e+000])
>>> Numeric.sqrt(Numeric.array([-1]))
array([ nan])
>>> # So the current behavior is already inconsistent on at least one
>>> # platform

# Other platform (DEC Alpha) raising domain error:

Python 1.5.2 (#6, Sep 20 1999, 17:44:09) [C] on osf1V4
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> import math
>>> math.exp(-800)
Traceback (innermost last):
  File "<stdin>", line 1, in ?
OverflowError: math range error
>>> math.exp(800)
Traceback (innermost last):
  File "<stdin>", line 1, in ?
OverflowError: math range error
>>> math.sqrt(-1)
Traceback (innermost last):
  File "<stdin>", line 1, in ?
ValueError: math domain error
>>> import Numeric
>>> Numeric.exp(Numeric.array([800, -800.]))
Traceback (innermost last):
  File "<stdin>", line 1, in ?
OverflowError: math range error
>>> Numeric.sqrt(Numeric.array([-1]))
Traceback (innermost last):
  File "<stdin>", line 1, in ?
ValueError: math domain error
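The inconsistency Janko demonstrates can be papered over at the application level. A minimal, hedged sketch (assuming only the standard math module; sqrt_or_nan is a hypothetical helper name, not part of any API) that normalizes both platform behaviors to the 754-style "return NaN and keep going":

```python
import math

def sqrt_or_nan(x):
    # Normalize platform differences: return NaN where some libm builds
    # would raise, matching the 754 default of quietly producing a NaN.
    try:
        return math.sqrt(x)
    except (ValueError, OverflowError):
        return float('nan')

print(sqrt_or_nan(4.0))    # 2.0
print(sqrt_or_nan(-1.0))   # nan
```

This gives every platform the same observable behavior at the cost of a try/except per call, which is exactly the trade-off the thread is arguing about.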
From: Tim P. <ti...@em...> - 2000-10-12 06:16:40
[Huaiyu Zhu]
> ...
> $ /usr/bin/python
> Python 1.5.2 (#1, Sep 17 1999, 20:15:36)  [GCC egcs-2.91.66 19990314/Linux
> (egcs- on linux-i386
> Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
> >>> from math import *
> >>> exp(777)
> inf
> >>> exp(-777)
> 0.0
> >>> sqrt(-1)
> Traceback (innermost last):
>   File "<stdin>", line 1, in ?
> OverflowError: math range error
>
> This was sane behavior. Are we saying that Python 2.0 has invented
> something better than IEEE 754?

754 allows users to choose the behaviors they want. Any *fixed* policy is not 754. So long as we have to fix a single policy, yes, I believe Guido (& know I) would say that raising OverflowError on exp(777), and silently returning 0 on exp(-777), and raising ValueError on sqrt(-1) (*not* OverflowError, as shown above), is indeed better than the 754 default behaviors. And 2.0 will do all three of those (& I just checked in the code to make that happen).

About the sqrt example above: that's neither 754 behavior nor sane behavior. Default 754 behavior on sqrt(-1) is to return a NaN and keep on going. I'm pretty sure that's what glibc linked w/ -lieee actually does, too (if it doesn't, glibc -lieee is violating 754). That 1.5.2 raised an OverflowError instead is insane, and appears purely due to an accident of how gcc compiles code to compare Infs to NaNs (Python's mathmodule.c's CHECK() macro was fooled by this into setting errno to ERANGE itself). Python should raise a ValueError there instead (corresponding to libm setting errno to EDOM -- this is a domain error, not a range error) -- which it does now under CVS Python.

754-doesn't-work-unless-you've-got-all-of-it-ly y'rs  - tim
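Tim's point about comparisons fooling the CHECK() macro rests on a 754 rule worth seeing directly: a NaN is unordered, so every ordered comparison involving one is false. A quick sketch (assuming a Python recent enough that float('nan') and float('inf') parse):

```python
nan = float('nan')
inf = float('inf')

# Under IEEE-754, NaN compares unequal to everything, including itself.
# All of these are False -- exactly the kind of result that can fool a
# naive C range check into misclassifying a domain error.
print(nan < inf)    # False
print(nan > inf)    # False
print(nan == nan)   # False
print(nan <= nan)   # False
```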
From: Steven D. M. <sd...@vi...> - 2000-10-12 03:23:46
First: I haven't followed this thread from the beginning -- only the last ten or so posts. Second: one reason I didn't follow it from the start is that I'm not doing any heavy numerical stuff in Python. I've been using Lisp for that, and use Python more for string/symbolic or GUI hacking.

But if I were going to do the sort of numerical stuff I now do in Lisp in Python, I would agree with Huaiyu Zhu. I do a lot of vectorized operations on what are often independent samples. If some of the numbers overflow or underflow, that just represents an outlier or bad sample. I don't want it to blow off the whole pipeline of operations on the other data points in the vector -- they are independent of the bad points.

In my case, it's not that these are lengthy calculations. It's that they are interactive and tied to immediate graphical representations. If there are strange zeros or infinities in the result, there is (or should be) a way to backtrack and investigate. (That's why people want interactive and graphical regression analysis and modeling tools!) The idea that your calculation should blow up and you should check it and resubmit your job sounds just so ancient-20th-century-Fortran-JCL-and-punched-cards-technology!

---| Steven D. Majewski (804-982-0831) <sd...@Vi...>         |---
---| Department of Molecular Physiology and Biological Physics |---
---| University of Virginia Health Sciences Center             |---
---| P.O. Box 10011 Charlottesville, VA 22906-0011             |---

"All operating systems want to be unix,
 All programming languages want to be lisp."
From: Tim P. <ti...@em...> - 2000-10-12 02:44:36
[Huaiyu Zhu]
> On the issue of whether Python should ignore over/underflow on
> IEEE-enabled platforms:
>
> It can be argued that on IEEE enabled systems the proper thing to do for
> overflow is simply return Inf. Raising exception is WRONG. See below.

Python was not designed with IEEE-754 in mind, and *anything* that happens wrt Infs, NaNs, and all other 754 features in Python is purely an accident that can and does vary wildly across platforms. And, as you've discovered in this case, can vary wildly also across even a single library, depending on config options. We cannot consider this to be a bug since Python has had no *intended* behavior whatsoever wrt 754 gimmicks. We can and do consider gripes about these accidents to be feature requests.

[Guido]
> Incidentally, math.exp(800) returns inf in 1.5, and raises
> OverflowError in 2.0. So at least it's consistent.

[Huaiyu]
> That is not consistent at all.

Please read with an assumption of good faith. Guido was pointing out that -- all in the specific case of gcc+glibc on Linux (these don't hold on other platforms) -- exp(800) returning Inf in 1.5 and OverflowError in 2.0 is consistent *with* that exp(-800) returns 0 in 1.5 and raises an exception in 2.0. He's right; indeed, he is in part agreeing with you.

[Guido]
> 1.5.2 links with -lieee while 2.0 doesn't. Removing -lieee from the
> 1.5.2 link line makes it raise OverflowError too. Adding it to the
> 2.0 link line makes it return 0.0 for exp(-1000) and inf for
> exp(1000).

[Huaiyu]
> If using ieee is as simple as setting such a flag, there is no
> reason at all not to use it.

The patch to stop setting -lieee was contributed by a Python user who claimed it fixed bugs on *their* platform. That's "the reason". We don't want to screw them either.

> Here are some more examples:
> ...

I understand that 754 semantics can be invaluable. So does Guido. There's no argument about that. But Python doesn't yet support them, and wishing it did doesn't make it so.

If linking with -lieee happens to give you the semantics you want on your platform today, you're welcome to build with that switch. It appears to be a bad choice for *most* Python users, though (more below), so we don't want to do it in the std 2.0 distro.

> ...
> In practice this simply means Python would not be suitable for numerical
> work at all.

Your biggest obstacle in getting Python support for 754 will in fact be opposition from number-crunchers. I've been slinging fp for 21 years, 15 of those writing compilers and math libraries for "supercomputer" vendors. 754 is a Very Good Thing, but is Evil in subset form (see Kahan's (justified!) vilification of Java on this point); ironically, 754 is hardest to sell to those who could benefit from it the most.

> What about the other way round? No problem. It is easy to write
> functions like isNaN, isInf, etc.

It's easy for a platform expert to write such functions for their specific platform of expertise, but it's impossible to write them in a portable way before C99 is implemented (C99 requires that library suppliers provide them, rendering the question moot).

> ...
> The language should never take over or subvert decisions based on
> numerical analysis.

Which is why a subset of 754 is evil. 754 requires that the user be able to *choose* whether or not, e.g., overflow signals an exception. Your crusade to insist that it never raise an exception is at least as bad (I think it's much worse) as Python's most common accidental behavior (where overflow from libm usually raises an exception). One size does not fit all.

[Tim]
> Ignoring ERANGE entirely is not at all the same behavior as 1.5.2, and
> current code certainly relies on detecting overflows in math functions.

[Huaiyu]
> As Guido observed ERANGE is not generated with ieee, even for
> overflow. So it is the same behavior as 1.5.2.

You've got a bit of a case of tunnel vision here, Huaiyu. Yes, in the specific case of gcc+glibc+Linux, ignoring ERANGE returned from exp() is what 1.5.2 acted like. But you have not proposed to ignore it merely from ERANGE returned from exp() in the specific case of gcc+glibc+Linux. This code runs on many dozens of platforms, and, e.g., as I suggested before, it looks like HPUX has always set errno on both overflow and underflow. MS Windows sets it on overflow but not underflow. Etc. We have to worry about the effects on all platforms.

> Besides, no correct numerical code should depend on exceptions like
> this unless the machine is incapable of handling Inf and NaN.

Nonsense. 754 provides the option to raise an exception on overflow, or not, at the user's discretion, precisely because exceptions are sometimes more appropriate than Infs or NaNs. Kahan himself isn't happy with Infs and NaNs because they're too often too gross a clue (see his writings on "presubstitution" for a more useful approach).

[Tim]
> In no case can you expect to see overflow ignored in 2.0.

[Huaiyu]
> You are proposing a dramatic change from the behavior of 1.5.2.
> This looks to me like it needs a PEP and a big debate. It would break
> a LOT of numerical computations.

I personally doubt that, but realize it may be true. That's why we have beta releases. So far yours is the only feedback we've heard (thanks!), and as a result we're going to change 2.0 to stop griping about underflow, and do so in a way that will actually work across all platforms. We're probably going to break some HPUX programs as a result; but they were relying on accidents too.

[Guido]
> No, the configure patch is right. Tim will check in a change that
> treats ERANGE with a return value of 0.0 as underflow (returning 0.0,
> not raising OverflowError).

[Huaiyu]
> What is the reason to do this? It looks like intentionally subverting
> ieee even when it is available. I thought Tim meant that only logistical
> problems prevent using ieee.

Python does not support 754 today. Period. I would like it to, but not in any half-assed, still platform-dependent, still random crap-shoot, still random subset of 754 features, still rigidly inflexible, way. When it does support it, Guido & I will argue that it should enable (what 754 calls) the overflow, divide-by-0 and invalid operation exceptions by default, and disable the underflow and inexact exceptions by default. This ensures that, under the default, an infinity or NaN can never be created from non-exceptional inputs without triggering an exception. Not only is that best for newbies, you'll find it's the *only* scheme for a default that can be sold to working number-crunchers (been there, done that, at Kendall Square Research). It also matches Guido's original, pre-754, *intent* for how Python numerics should work by default (it is, e.g., no accident that Python has always had an OverflowError exception but never an UnderflowError one). And that corresponds to the change Guido says <wink> I'm going to make in mathmodule.c: suppress complaints about underflow, but let complaints about overflow go through.

This is not a solution, it's a step on a path towards a solution. The next step (which will not happen for 2.0!) is to provide an explicit way to, from Python, disable overflow checking, and that's simply part of providing the control and inquiry functions mandated by 754. Then code that would rather deal with Infs than exceptions can, at its explicit discretion.

> If you do choose this route, please please please ignore ERANGE entirely,
> whether return value is zero or not.

It should be clear that isn't going to happen. I realize that triggering overflow is going to piss you off, but you have to realize that not triggering overflow is going to piss more people off, and *especially* your fellow number-crunchers. Short of serious 754 support, picking "a winner" is the best we can do for now. You have the -lieee flag on your platform du jour if you can't bear it.

[Paul Dubois]
> I don't have time to follow in detail this thread about changed behavior
> between versions.

What makes you think we do <0.5 wink>?

> These observations are based on working with hundreds of code authors. I
> offer them as is.

FWIW, they exactly match my observations from 15 years on the HW vendor side.

> a. Nobody runs a serious numeric calculation without setting underflow-to-
> zero, in the hardware. You can't even afford the cost of software checks.
> Unfortunately there is no portable way to do that that I know of.

C allows libm implementations a lot of discretion in whether to set errno to ERANGE in case of underflow. The change we're going to make is to ignore ERANGE in case of underflow, ensuring that math.* functions will *stop* raising underflow exceptions on all platforms (they'll return a zero instead; whether +0 or -0 will remain a platform-dependent crap shoot for now). Nothing here addresses underflow exceptions that may be raised by fp hardware, though; this has solely to do with the platform libm's treatment of errno. So in this respect, we're removing Python's current unpredictability, and in the way you favor.

> b. Some people use Inf but most people want the code to STOP so they can
> find out where the INFS started. Otherwise, two hours later you have big
> arrays of Infs and no idea how it happened.

Apparently Python on gcc+glibc+Linux never raised OverflowError from math.* functions in 1.5.2 (although it did on most other platforms I know about). A patch checked in to fix some other complaint on some other Unixish platform had the side-effect of making Python on gcc+glibc+Linux start raising OverflowError from math.* functions in 2.0, but in both overflow and underflow cases. We intend to (as above) suppress the underflow exceptions, but let the overflow cases continue to raise OverflowError.

Huaiyu's original complaint was about the underflow cases, but (as such things often do) expanded beyond that when it became clear he would get what he asked for <wink>. Again we're removing Python's current unpredictability, and again in the way you favor -- although this one is still at the mercy of whether the platform libm sets errno correctly on overflow (but it seems that most do these days).

> Likewise sqrt(-1.) needs to stop, not put a zero and keep going.

Nobody has proposed changing anything about libm domain (as opposed to range) errors (although Huaiyu probably should if he's serious about his flavor of 754 subsetism -- I have no idea what gcc+glibc+Linux did here on 1.5.2, but it *should* have silently returned a NaN (not a zero) without setting errno if it was *self*-consistent -- anyone care to check that under -lieee?:

    import math
    math.sqrt(-1)

NaN or ValueError? 2.0 should raise ValueError regardless of what 1.5.2 did here.)

just-another-day-of-universal-python-harmony-ly y'rs  - tim
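The default Tim argues for here -- overflow and domain errors raise, underflow silently flushes to zero -- is in fact the policy CPython's math module settled on, so it can be observed directly today (sketch assumes any reasonably modern CPython):

```python
import math

# Overflow: raises rather than silently returning inf.
try:
    math.exp(800)
except OverflowError:
    print("exp(800) raised OverflowError")

# Underflow: silently returns 0.0; there is no UnderflowError.
print(math.exp(-800))        # 0.0

# Invalid operation (domain error): ValueError, not OverflowError.
try:
    math.sqrt(-1)
except ValueError:
    print("sqrt(-1) raised ValueError")
```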
From: Huaiyu Z. <hua...@ya...> - 2000-10-12 02:17:03
[Paul Dubois]
> a. Nobody runs a serious numeric calculation without setting
> underflow-to-zero, in the hardware. You can't even afford the cost of
> software checks. Unfortunately there is no portable way to do that that I
> know of.

Amen.

> b. Some people use Inf but most people want the code to STOP so they can
> find out where the INFS started. Otherwise, two hours later you have big
> arrays of Infs and no idea how it happened. Likewise sqrt(-1.) needs to
> stop, not put a zero and keep going.

$ /usr/bin/python
Python 1.5.2 (#1, Sep 17 1999, 20:15:36)  [GCC egcs-2.91.66 19990314/Linux (egcs- on linux-i386
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> from math import *
>>> exp(777)
inf
>>> exp(-777)
0.0
>>> sqrt(-1)
Traceback (innermost last):
  File "<stdin>", line 1, in ?
OverflowError: math range error

This was sane behavior. Are we saying that Python 2.0 has invented something better than IEEE 754?

[Guido van Rossum]
> Thanks, Paul! This behavior has always been what I wanted Python to
> do (even though it's not always what Python did, depending on the
> platform) and also what Tim's proposed patch will implement for the
> specific case of math.exp() (and other math functions that may
> underflow or overflow), on most reasonable platforms.

Guido, with due respect to your decisions on Python issues, I simply have to fight this one. It is one thing to accommodate naive users, but it is another to dumb down everyone else.

Case 1. Someone writes a flawed numerical routine. Two hours later he finds his array filled with Inf and NaN.

Case 2. Someone writes a perfect numerical routine. Two hours later he gets an exception, because the error is near zero.

Solution for case 1: use a better algorithm, use better error control, raise exceptions when the error is too large. These are proper solutions. They are easy and efficient to implement. They are needed anyway -- if something's wrong, you want to raise exceptions far earlier than Inf, certainly before you get arrays filled with elements like 1e300.

Solution for case 2: almost impossible. The division between under- and overflow is artificial. What about 1/x or similar functions? The only way to work on such a platform is to abandon vectorized computation.

> There are still lots of places where the platform gives Python no
> choice of creating NaN and Inf, and because there's no
> platform-independent way to test for these, they are hard to avoid in
> some cases; but eventually, Tim will find a way to root them out. And
> for people like Huaiyu, who want to see Inf, there will (eventually)
> be a way to select this as a run-time option; and ditto for whoever
> might want underflow to raise an exception.

I can understand that exceptions are the only available choices if IEEE is not available. But is there a compelling reason that Python should behave "better" than IEEE when it's in fact available? If the reason is to protect naive users, I can think of several responses:

1. For people doing one-off interactive work, returning Inf is in fact more informative.

2. Users doing iterative numerical computations need to be educated about error control. Otherwise they won't get correct results anyway.

3. For really serious work, we could provide good numerical modules so that users don't need to write them themselves. To make this happen fast, the fewer debacles like this one the better.

Case in point: someone asked for regression modules two weeks ago. I was trying to convert my old matlab programs, which only took a few hours. But I wasted a week of (spare) time fighting some mysterious "overflow". It turns out that a Gaussian is near zero when it's far from center, and Python does not like it.

In practice, Inf may be generated more often as a proper value than by mistake. This is not an issue of whether someone "prefers" Inf or exceptions. It is about whether there is a choice to do proper computation. Returning Inf does not prevent someone from raising an exception. Raising exceptions automatically prevents perfect algorithms from working properly.

As Kevin has volunteered to help with IEEE implementation and made a plan, is there a strong reason to drop IEEE for Linux in 2.0? If there is insufficient time to carry out his plan, wouldn't it be prudent to keep things as they were in 1.5.2?

Huaiyu
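Huaiyu's Gaussian example is easy to reproduce. With underflow silently flushed to zero, evaluating the density far from the center is harmless, which is precisely the behavior he is defending (gauss here is a hypothetical helper, not from any library):

```python
import math

def gauss(x, mu=0.0, sigma=1.0):
    # Normal density; far from the center, exp() underflows to 0.0.
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

print(gauss(0.0))    # ~0.3989, the peak of the standard normal
print(gauss(50.0))   # 0.0 -- silent underflow, not an exception
```

Under a regime that raised on underflow, the second call would abort the whole vectorized evaluation even though the mathematically correct answer is simply a value too small to represent.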
From: Guido v. R. <gu...@py...> - 2000-10-12 00:42:47
> I don't have time to follow in detail this thread about changed behavior
> between versions. These observations are based on working with hundreds of
> code authors. I offer them as is.
>
> a. Nobody runs a serious numeric calculation without setting
> underflow-to-zero, in the hardware. You can't even afford the cost of
> software checks. Unfortunately there is no portable way to do that that I
> know of.
>
> b. Some people use Inf but most people want the code to STOP so they can
> find out where the INFS started. Otherwise, two hours later you have big
> arrays of Infs and no idea how it happened. Likewise sqrt(-1.) needs to
> stop, not put a zero and keep going.

Thanks, Paul! This behavior has always been what I wanted Python to do (even though it's not always what Python did, depending on the platform) and also what Tim's proposed patch will implement for the specific case of math.exp() (and other math functions that may underflow or overflow), on most reasonable platforms.

There are still lots of places where the platform gives Python no choice of creating NaN and Inf, and because there's no platform-independent way to test for these, they are hard to avoid in some cases; but eventually, Tim will find a way to root them out. And for people like Huaiyu, who want to see Inf, there will (eventually) be a way to select this as a run-time option; and ditto for whoever might want underflow to raise an exception.

--Guido van Rossum (home page: http://www.python.org/~guido/)
From: Paul F. D. <pau...@ho...> - 2000-10-12 00:07:48
I don't have time to follow in detail this thread about changed behavior between versions. These observations are based on working with hundreds of code authors. I offer them as is.

a. Nobody runs a serious numeric calculation without setting underflow-to-zero, in the hardware. You can't even afford the cost of software checks. Unfortunately there is no portable way to do that that I know of.

b. Some people use Inf but most people want the code to STOP so they can find out where the INFs started. Otherwise, two hours later you have big arrays of Infs and no idea how it happened. Likewise sqrt(-1.) needs to stop, not put a zero and keep going.
From: Huaiyu Z. <hua...@ya...> - 2000-10-11 23:27:11
On the issue of whether Python should ignore over/underflow on IEEE-enabled platforms:

[Tim Peters]
> That would stop the exception on exp() underflow, which is what you're
> concerned about. It would also stop exceptions on exp() overflow, and on
> underflow and overflow for all other math functions too. I doubt Guido will
> ever let Python ignore overflow by default, #ifdef'ed or not. A semantic
> change that jarring certainly won't happen for 2.0 (which is just a week
> away).

It can be argued that on IEEE enabled systems the proper thing to do for overflow is simply return Inf. Raising exception is WRONG. See below.

[Guido van Rossum]
> Incidentally, math.exp(800) returns inf in 1.5, and raises
> OverflowError in 2.0. So at least it's consistent.

That is not consistent at all. Suppose I'm plotting the curve f(x) where x includes some singular points of f. In the first case the plot works, with some portion of the curve clipped. In the second case it bombs.

[Tim Peters]
> Nothing like that will happen without a PEP first. I would like to see
> *serious* 754 support myself, but that depends too on many platform experts
> contributing real work (if everyone ran WinTel, I could do it myself
> <wink>).

[Guido van Rossum]
> Bingo!
>
> 1.5.2 links with -lieee while 2.0 doesn't. Removing -lieee from the
> 1.5.2 link line makes it raise OverflowError too. Adding it to the
> 2.0 link line makes it return 0.0 for exp(-1000) and inf for
> exp(1000).

If using ieee is as simple as setting such a flag, there is no reason at all not to use it. Here are some more examples:

Suppose you have done hours of computation on a problem. Just as you are about to get the result, you get an exception. Why? Because the residual error is too close to zero.

Suppose you want to plot the curve of the Gaussian distribution. Oops, it fails, because beyond a certain region the value is near zero.

With these kinds of problems, vectorized numerical calculation becomes nearly impossible. How do you work in such an environment? You have to wrap every calculation in a try/except structure, and whenever there is an exception, you have to revert to elementwise operations. In practice this simply means Python would not be suitable for numerical work at all.

What about the other way round? No problem. It is easy to write functions like isNaN, isInf, etc. With these one can raise exceptions in any place one wants. It is even possible to raise exceptions if a matrix is singular to a certain precision, etc.

The key point to observe here is that most numerical work involves more than one element. Some of them may be out of machine bounds but the whole thing could still be quite meaningful. Vice versa, it is also quite possible that all elements are within bounds while the whole thing is meaningless. The language should never take over or subvert decisions based on numerical analysis.

[Tim Peters]
> Ignoring ERANGE entirely is not at all the same behavior as 1.5.2, and
> current code certainly relies on detecting overflows in math functions.

As Guido observed, ERANGE is not generated with ieee, even for overflow. So it is the same behavior as 1.5.2. Besides, no correct numerical code should depend on exceptions like this unless the machine is incapable of handling Inf and NaN.

Even in the cases where you do want to detect overflow, it is still wrong to use exceptions. Here's an example: x*log(x) approaches 0 as x approaches 0. If x==0 then log(x)==-Inf but 0*-Inf==NaN, not what one would want. But an exception is the wrong tool to handle this, because if x is an array, some of its elements may be zero but others may not. The right way to do it is something like:

    def entropy(probability):
        p = max(probability, 1e-323)
        return p*log(p)

[Tim Peters]
> In no case can you expect to see overflow ignored in 2.0.

You are proposing a dramatic change from the behavior of 1.5.2. This looks to me like it needs a PEP and a big debate. It would break a LOT of numerical computations.

[Thomas Wouters]
> I remember the patch that did this, on SF. It was titled "don't link with
> -lieee if it isn't necessary" or something. Not sure what it would break,
> but mayhaps declaring -lieee necessary on glibc systems is the right fix?
>
> (For the non-autoconf readers among us: the first snippet writes a test
> program to see if the function '__fpu_control' exists when linking with
> -lieee in addition to $LIBS, and if so, adds -lieee to $LIBS. The second
> snippet writes a test program to see if the function '__fpu_control'
> exists with the current collection of $LIBS. If it doesn't, it tries it
> again with -lieee,
>
> Personally, I think the patch should just be reversed... The comment above
> the check certainly could be read as 'Linux requires -lieee for correct
> f.p. operations', and perhaps that's how it was meant.

The patch as described seems to be based on flawed thinking. The numbers Inf and NaN are always necessary. The -lieee could only be unnecessary if the behavior is the same as IEEE. Obviously it isn't. So I second Thomas's suggestion.

[Tim Peters]
> If no progress is made on determining the true cause over the next few days,
> I'll hack mathmodule.c to ignore ERANGE in the specific case the result
> returned is a zero (which would "fix" your exp underflow problem without
> stopping overflow detection). Since this will break code on any platform
> where errno was set to ERANGE on underflow in 1.5.2, I'll need to have a
> long discussion w/ Guido first. I *believe* that much is actually sellable
> for 2.0, because it moves Python in a direction I know he likes regardless
> of whether he ever becomes a 754 True Believer.

[Guido van Rossum]
> No, the configure patch is right. Tim will check in a change that
> treats ERANGE with a return value of 0.0 as underflow (returning 0.0,
> not raising OverflowError).

What is the reason to do this? It looks like intentionally subverting ieee even when it is available. I thought Tim meant that only logistical problems prevent using ieee.

If you do choose this route, please please please ignore ERANGE entirely, whether the return value is zero or not. The only possible way that ERANGE could be useful at all is if it could be set independently for each element of an array, and if it behaved as a warning instead of an exception, i.e. the calculation would continue if it is not caught. Well, then, Inf and NaN serve this purpose perfectly. It is very reasonable to set errno in glibc for this; it is completely unreasonable to raise an exception in Python, because exceptions cannot be set for individual elements and they cannot be ignored.

Huaiyu

--
Huaiyu Zhu                   hz...@us...
Matrix for Python Project    http://MatPy.sourceforge.net
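The isNaN/isInf predicates Huaiyu calls "easy to write" really are easy, at least under the assumption of IEEE-754 doubles (which held on every platform in this thread), and his entropy clamp works as advertised:

```python
import math

def isNaN(x):
    return x != x                     # NaN is the only value unequal to itself

def isInf(x):
    return x == 2.0 * x and x != 0.0  # only +/-inf survives doubling unchanged

def entropy(p):
    # Huaiyu's clamp: x*log(x) -> 0 as x -> 0, so pinning p at a tiny
    # subnormal keeps log() finite without changing the answer meaningfully.
    p = max(p, 1e-323)
    return p * math.log(p)

print(isInf(1e308 * 10))                    # True: float multiply overflows to inf
print(isNaN(float('inf') - float('inf')))   # True: inf - inf is NaN
print(entropy(1.0))                         # 0.0
```

Note that float *arithmetic* in CPython quietly produces inf on overflow (unlike the math module's functions), which is what makes the 1e308 * 10 probe work.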
From: L. B. <bu...@ic...> - 2000-10-11 18:01:48
It was an optional module (fpectl) that got added into Python 1.5. Looks like it has been carried forward pretty much unchanged into the 2.0 release candidate. To use the facility, you need to build Python with --with-fpectl, then identify "dangerous" (likely to generate SIGFPE) operations, and surround them with a pair of macros, PyFPE_START_PROTECT and PyFPE_END_PROTECT. This has the effect of turning any SIGFPE into a Python exception. Start with Include/pyfpe.h, and look for example usage in Objects/floatobject.c. Grep the Python source for FPE_ to find the several places where these hooks are located.

[Paul Dubois wrote]
> About controlling floating-point behavior with Numpy: I think that somewhere
> buried in the sources Lee Busby had us all set if we would just go through
> the source and stick in some macro in the right places (this was maybe 3
> years ago, hence the accurate and detailed memory dump) but it was on the
> todo list so long it outlived the death of the todo list.
>
> Anyway, my recollection is if we ever did it then we would have control when
> things like overflow happen. But we haven't. My last experience of this sort
> of thing was that it was completely a hardware-dependent thing, and you had
> to find out from each manufacturer what their routine was and how to call it
> or what compiler flag to use.
>
> Well, sorry to be so imprecise but I thought it worth mentioning that some
> steps had been taken even if I don't remember what they were.
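A hedged usage sketch of the module Lee describes: fpectl exists only in interpreters configured with --with-fpectl (and was removed entirely in Python 3.7), so any use of it should guard the import:

```python
# fpectl is only present in builds configured --with-fpectl, and was
# removed in Python 3.7, so treat it as strictly optional.
try:
    import fpectl
    fpectl.turnon_sigfpe()   # route SIGFPE through the PyFPE_* macro hooks
    HAVE_FPECTL = True
except ImportError:
    HAVE_FPECTL = False

print("fpectl available:", HAVE_FPECTL)
```

When the module is present and enabled, operations wrapped in PyFPE_START_PROTECT/PyFPE_END_PROTECT raise FloatingPointError instead of killing the process with SIGFPE.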
From: Paul F. D. <pau...@ho...> - 2000-10-11 12:56:36
|
About controlling floating-point behavior with Numpy: I think that somewhere buried in the sources Lee Busby had us all set if we would just go through the source and stick in some macro in the right places (this was maybe 3 years ago, hence the accurate and detailed memory dump) but it was on the todo list so long it outlived the death of the todo list. Anyway, my recollection is if we ever did it then we would have control when things like overflow happen. But we haven't. My last experience of this sort of thing was that it was completely a hardware-dependent thing, and you had to find out from each manufacturer what their routine was and how to call it or what compiler flag to use. Well, sorry to be so imprecise but I thought it worth mentioning that some steps had been taken even if I don't remember what they were. |
From: Fred Y. <fr...@on...> - 2000-10-10 19:16:14
|
I couldn't seem to download a prebuilt distribution of NumPy from SourceForge earlier today, so I connected via CVS and grabbed the latest code. I was eventually able to build and install it OK, but I had to copy Lib/version.py to Lib/numeric_version.py since the latter didn't exist but other code refers to it. Just a "heads up"...

--
Fred Yankowski fr...@On... tel: +1.630.879.1312
Principal Consultant www.OntoSys.com fax: +1.630.879.1370
OntoSys, Inc 38W242 Deerpath Rd, Batavia, IL 60510, USA |
From: Pearu P. <pe...@io...> - 2000-10-10 17:47:03
|
Hi!

I have started a project PySymbolic - "Doing Symbolics in Python". PySymbolic is a collection of tools for doing elementary symbolic calculations in Python. The project is in alpha stage. Current features include basic simplifications:
*** removing redundant parentheses: (a) -> a
*** applying factorization: -(a) -> -a, +a -> a, ~(a) -> ~a
*** collecting terms: n*a+b+m*a -> k*a+b (where k=n+m)
*** collecting powers: a**c*b/a**d -> a**(c-d)*b
*** cancellations: 0*a -> 0, a+0 -> a, a**0 -> 1, 1*a -> a, a**1 -> a
*** rational arithmetic: 3/4+2/6-1 -> 1/12
*** opening parentheses: (a+b)*c -> a*c+b*c
*** expanding integer powers: (a+b)**n -> a**n+n*a**(n-1)*b+...
*** sorting terms in alphabetic order: a+c+b -> a+b+c
*** (more to come)

Project's homepage is http://cens.ioc.ee/projects/pysymbolic/
You can download the source from http://cens.ioc.ee/projects/pysymbolic/PySymbolic.tgz

Regards,
Pearu |
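The rational-arithmetic example above (3/4+2/6-1 -> 1/12) can be checked against Python's exact rationals, using today's standard-library fractions module (independent of PySymbolic):

```python
from fractions import Fraction

# 3/4 + 2/6 - 1 = 9/12 + 4/12 - 12/12, which reduces exactly to 1/12:
result = Fraction(3, 4) + Fraction(2, 6) - 1
assert result == Fraction(1, 12)
```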
From: Paul F. D. <pau...@ho...> - 2000-10-09 19:53:25
|
CALL FOR PAPERS, POSTERS, AND PARTICIPATION

Ninth International Python Conference
March 5-8, 2001
Long Beach, California
Web site: http://python9.org

The 9th International Python Conference (Python 9) is a forum for Python users, software developers, and researchers to present current work, discuss future plans for the language and commercial applications, and learn about interesting uses of Python. The conference will consist of a day of tutorials, two days of refereed paper and application tracks including a Zope track and a multi-technology Python Applications Track, a developers' day, a small exhibition, demonstrations, and posters.

Paper submission deadline: Monday, November 6, 2000
Notification of acceptance: Monday, December 11, 2000
Final papers due: Monday, January 15, 2001

Authors are invited to submit papers for the Refereed Paper Track that:
* Present new and useful applications and tools that utilize Python
* Describe the use of Python in large, mission-critical or unusual applications
* Address practical programming problems, and provide lessons based on experience for Python programmers

Also sought are poster presentations of interesting work in progress. Full information is available on the website, http://python9.org |
From: Emmanuel V. <emm...@li...> - 2000-10-03 05:46:36
|
Oops, you're right... On most (all?) systems, memcpy() is a true function, and is *not* inlined. Jim was coding in the C++ way: trusting the optimizer!

Thank you,
Emmanuel |
From: Pete S. <pe...@sh...> - 2000-10-02 19:21:46
|
i've got a quick optimization for the arrayobject.c source. it speeds my usage of numpy up by about 100%. i've tested with other numpy apps and noticed a minimum of about 20% speedup. anyways, in "do_sliced_copy", change out the following block:

    if (src_nd == 0 && dest_nd == 0) {
        for (j = 0; j < copies; j++) {
            memcpy(dest, src, elsize);
            dest += elsize;
        }
        return 0;
    }

with this slightly larger one:

    if (src_nd == 0 && dest_nd == 0) {
        switch (elsize) {
        case sizeof(char):
            memset(dest, *src, copies);
            break;
        case sizeof(short):
            for (j = copies; j; --j, dest += sizeof(short))
                *(short *)dest = *(short *)src;
            break;
        case sizeof(int):
            for (j = copies; j; --j, dest += sizeof(int))
                *(int *)dest = *(int *)src;
            break;
        case sizeof(double):
            for (j = copies; j; --j, dest += sizeof(double))
                *(double *)dest = *(double *)src;
            break;
        default:
            for (j = copies; j; --j, dest += elsize)
                memcpy(dest, src, elsize);
        }
        return 0;
    }

anyways, you can see it's no brilliant algorithm change, but for me, getting a free 2X speedup is a big help. i'm hoping something like this can get merged into the next releases? after walking through the numpy code, i was surprised how almost every function falls back to do_sliced_copy (guess that's why it's at the top of the source?). that made it a quick target for making optimization changes. |
From: Paul F. D. <pau...@ho...> - 2000-09-29 14:22:54
|
File released. Thank you! In future you can just email me directly.
-- PFD

> -----Original Message-----
> From: num...@li...
> [mailto:num...@li...]On Behalf Of Pete Shinners
> Sent: Wednesday, September 27, 2000 8:27 AM
> To: Numpy Discussion
> Subject: [Numpy-discussion] precompiled binary for 17
>
> i've compiled my own numeric, and i thought i'd offer it
> back for other win users. i haven't seen a precompiled
> binary package for win32 yet, so maybe this can get
> added to sourceforge? if not, here's a link that'll
> work for a couple weeks
>
> http://www.shinners.org/pete/NumPy17_20.zip
>
> this is numeric-17.0, compiled for 2.0beta
>
> btw, much thanks and congrats to the numpy team.
> getting this compiled and installed was WORLDS
> better than working with the 16.x releases. thanks!
> (and congrats!)
>
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> http://lists.sourceforge.net/mailman/listinfo/numpy-discussion |
From: Paul F. D. <pau...@ho...> - 2000-09-29 02:12:34
|
> I was surprised to not find a "PyArray_Contiguous" function, or
> something like it.

The following macro is defined in arrayobject.h:

    #define PyArray_ISCONTIGUOUS(m) ((m)->flags & CONTIGUOUS) |
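From the Python side, the same flag is exposed in today's NumPy through the array's flags attribute, and ascontiguousarray hands back the input itself when no copy is needed; a quick sketch:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
assert a.flags['C_CONTIGUOUS']     # freshly built arrays are contiguous

b = a[:, ::2]                      # a strided view is not
assert not b.flags['C_CONTIGUOUS']

c = np.ascontiguousarray(b)        # forces a contiguous copy
assert c.flags['C_CONTIGUOUS']

# when the input is already contiguous, the same object comes back:
assert np.ascontiguousarray(a) is a
```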
From: Chris B. <cb...@jp...> - 2000-09-28 22:24:16
|
Hi all,

I'm writing an extension module that uses PyArrayObjects, and I want to be able to tell if an array passed in is contiguous. I know that I can use PyArray_ContiguousFromObject, and it will just return a reference to the same array if it is contiguous, but I want to know whether or not a new array has been created. I realize that most of the time it doesn't matter, but in this case I am changing the array in place, and need to pass it to another function that is expecting a standard C array, so I need to make sure the user has passed in a contiguous array.

I was surprised to not find a "PyArray_Contiguous" function, or something like it. I see that there is a field in PyArrayObject (int flags) that has a bit indicating whether an array is contiguous, but being a newbie to C as well, I'm not sure how to get at it. I'd love a handy utility function, but if one doesn't exist, can someone send me the code I need to check that bit?

thanks, -Chris

--
Christopher Barker, Ph.D. cb...@jp... http://www.jps.net/cbarker
Water Resources Engineering
Coastal and Fluvial Hydrodynamics |
From: Pete S. <pe...@sh...> - 2000-09-27 15:26:59
|
i've compiled my own numeric, and i thought i'd offer it back for other win users. i haven't seen a precompiled binary package for win32 yet, so maybe this can get added to sourceforge? if not, here's a link that'll work for a couple weeks http://www.shinners.org/pete/NumPy17_20.zip this is numeric-17.0, compiled for 2.0beta btw, much thanks and congrats to the numpy team. getting this compiled and installed was WORLDS better than working with the 16.x releases. thanks! (and congrats!) |
From: Paul F. D. <pau...@ho...> - 2000-09-22 18:11:27
|
I have released the current CVS image as 17.0. The tag is r17_0. |
From: Greg B. <gb...@cf...> - 2000-09-21 15:28:08
|
> Can anyone comment on what's happening with arrays of booleans in Numpy?
> I don't want to duplicate someone else's effort, but don't mind standing
> on someone else's shoulders. My application calls for operations on both
> non-sparse and sparse matrices.

"Rich comparisons" are really only syntactic sugar. For non-sparse matrices the ufuncs greater(a,b) and less(a,b) will do the job. Equivalents for sparse matrices probably exist.

-Greg Ball |
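The ufunc spellings Greg mentions survive in modern NumPy, and since rich comparisons did eventually land (Python 2.1), the operator form now returns the same element-wise boolean array; a sketch:

```python
import numpy as np

a = np.array([1, 5, 3])
b = np.array([2, 4, 3])

# element-wise comparison via the ufuncs:
assert np.greater(a, b).tolist() == [False, True, False]
assert np.less(a, b).tolist() == [True, False, False]

# the rich-comparison operators are now equivalent sugar:
assert (a > b).tolist() == [False, True, False]
```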
From: Charles G W. <cg...@fn...> - 2000-09-20 00:22:31
|
tr...@ui... writes:

> Hello -- I'm a Numpy Newbie. I have a question for the Numpy gurus about
> comparisons of multiarray objects and arrays of booleans in particular.
>
> Back when I was checking out Numpy for an application, I read a note in the
> Ascher/Dubois/Hinsen/Hugunin/Oliphant manual about comparisons of multiarray
> objects. It said and still says:
>
> "Currently, comparisons of multiarray objects results in exceptions,
> since reasonable results (arrays of booleans) are not doable without
> non-trivial changes to the Python core. These changes are planned for
> Python 1.6, at which point array object comparisons will be updated."
> ( http://numpy.sourceforge.net/numdoc/HTML/numdoc.html )
>
> Can anyone comment on what's happening with arrays of booleans in
> Numpy?

I believe that the passage you quote is referring to David Ascher's "Rich Comparisons" proposal. This is not in Python 1.6 and is not in Python 2.0 and nobody is currently working on it. The section in the manual should be revised. |
From: Pete S. <pe...@vi...> - 2000-09-19 22:58:44
|
I have image data stored in a 3D array. the code is pretty simple, but what i don't like about it is needing to transpose the array twice. here's the code...

    def map_rgb_array(surf, a):
        "returns an array of 2d mapped pixels from a 3d array"
        # image info needed to convert map rgb data
        loss = surf.get_losses()[:3]
        shift = surf.get_shifts()[:3]
        prep = a >> loss << shift
        return transpose(bitwise_or.reduce(transpose(prep)))

once i get to the 'prep' line, my 3d image has all the values for R,G,B mapped to the appropriate bitspace. all i need to do is bitwise_or the values together "finalcolor = R|G|B". this is where my lack of guru-ness with numpy comes in. the only way i could figure to do this was transpose the array and then apply the bitwise_or. only problem is then i need to de-transpose the array to get it back to where i started. is there some form or mix of bitwise_or i can use to make it operate on the 3rd axis instead of the 1st?

thanks all, hopefully this can be improved. legibility comes second to speed as a priority :]

... hmm, tinkering time passes ...

i did find this, "prep[:,:,0] | prep[:,:,1] | prep[:,:,2]". is something like that going to be best? "bitwise_or.reduce((prep[:,:,0], prep[:,:,1], prep[:,:,2]))" |
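For the record, reduce on a ufunc takes an optional axis argument, so neither the double transpose nor the manual channel slicing is needed; a sketch with a hypothetical 2x2 RGB image in modern NumPy:

```python
import numpy as np

# hypothetical 2x2 image, one channel per slot on the last axis
prep = np.array([[[0x01, 0x20, 0x300],
                  [0x02, 0x40, 0x500]],
                 [[0x03, 0x60, 0x700],
                  [0x04, 0x80, 0x900]]], dtype=np.uint32)

# reduce along the last (channel) axis: finalcolor = R | G | B per pixel
packed = np.bitwise_or.reduce(prep, axis=-1)

assert packed.shape == (2, 2)
assert packed[0, 0] == 0x01 | 0x20 | 0x300
```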
From: <tr...@ui...> - 2000-09-19 21:24:29
|
Hello -- I'm a Numpy Newbie. I have a question for the Numpy gurus about comparisons of multiarray objects and arrays of booleans in particular.

Back when I was checking out Numpy for an application, I read a note in the Ascher/Dubois/Hinsen/Hugunin/Oliphant manual about comparisons of multiarray objects. It said and still says:

"Currently, comparisons of multiarray objects results in exceptions, since reasonable results (arrays of booleans) are not doable without non-trivial changes to the Python core. These changes are planned for Python 1.6, at which point array object comparisons will be updated." ( http://numpy.sourceforge.net/numdoc/HTML/numdoc.html )

Can anyone comment on what's happening with arrays of booleans in Numpy? I don't want to duplicate someone else's effort, but don't mind standing on someone else's shoulders. My application calls for operations on both non-sparse and sparse matrices.

douglas |
From: Paul F. D. <pau...@ho...> - 2000-09-19 18:39:47
|
Check it out at http://numpy.sourceforge.net/doc |