From: David G. <Dav...@no...> - 2006-10-12 06:27:56
|
Travis Oliphant wrote:
> What could be simpler? ;-)

Having sqrt(-1) return 1j (without having to remember that in order to get this, you have to write sqrt(-1+0j) instead).

DG |
From: David C. <da...@ar...> - 2006-10-12 06:35:34
|
David Goldsmith wrote:
> Travis Oliphant wrote:
>> What could be simpler? ;-)
> Having sqrt(-1) return 1j (without having to remember that in order to
> get this, you have to write sqrt(-1+0j) instead).

But this can sometimes lead to confusing errors that are hard to track down when you don't want to deal with complex numbers. That's one of the things I hated in matlab, actually: if you don't want to handle complex numbers, you have to check regularly for them. So I don't think this is simpler.

David |
From: Travis O. <oli...@ie...> - 2006-10-12 06:39:02
|
David Goldsmith wrote:
> Travis Oliphant wrote:
>> What could be simpler? ;-)
> Having sqrt(-1) return 1j (without having to remember that in order to
> get this, you have to write sqrt(-1+0j) instead).

That's exactly what scipy.sqrt(-1) does. That was my point.

-Travis |
From: David G. <Dav...@no...> - 2006-10-12 06:10:57
|
Travis Oliphant wrote:
> pe...@ce... wrote:
>> Could sqrt(-1) be made to return 1j again?
> Not in NumPy. But, in scipy it could.

Ohmigod!!! You are definitely going to scare away many, many potential users - if I wasn't obliged to use open source at work, you'd be scaring me away. I was thinking about translating all my personal fractal-generating Matlab code into Python, but I certainly won't be doing that now! |
From: Travis O. <oli...@ie...> - 2006-10-12 06:25:00
|
David Goldsmith wrote:
> Travis Oliphant wrote:
>> pe...@ce... wrote:
>>> Could sqrt(-1) be made to return 1j again?
>> Not in NumPy. But, in scipy it could.
> Ohmigod!!! You are definitely going to scare away many, many potential
> users - if I wasn't obliged to use open source at work, you'd be scaring
> me away.

Why in the world does it scare you away? This makes no sense to me. If you don't like the scipy version, don't use it. NumPy and SciPy are not the same thing. The problem we have is that the scipy version (0.3.2) already had this feature (and Numeric didn't). What is so new here that is so scary?

-Travis |
From: Fernando P. <fpe...@gm...> - 2006-10-12 06:49:42
|
On 10/12/06, Travis Oliphant <oli...@ie...> wrote:
> Why in the world does it scare you away. This makes no sense to me.
> If you don't like the scipy version don't use it. NumPy and SciPy are
> not the same thing.

I'd like to pitch in (again) on this issue, but I'll try to make sure that it's clear that I'm NOT arguing about sqrt() in particular, one way or another.

It's perfectly clear that numpy != scipy to all of us. And yet, I think it is equally clear that the two are /very/ tightly related. Scipy builds on top of numpy and it directly exposes a LOT of the numpy API as scipy functions:

In [21]: import numpy as n, scipy as s

In [22]: common_names = set(dir(n)) & set(dir(s))

In [23]: [getattr(n,x) is getattr(s,x) for x in common_names].count(True)
Out[23]: 450

In [24]: len(common_names)
Out[24]: 462

That's 450 objects from numpy which are directly exposed in Scipy, while only 12 names are in both top-level namespaces and yet are different objects. Put another way, scipy is a direct wrap of 97% of the numpy top-level namespace. While /we/ know they are distinct entities, to the casual user a 97% match looks pretty close to being the same, especially when the non-identical things are all non-numerical:

In [27]: [x for x in common_names if getattr(n,x) is not getattr(s,x)]
Out[27]:
['pkgload', 'version', '__config__', '__file__', '__all__', '__doc__',
 'show_config', '__version__', '__path__', '__name__', 'info', 'test']

In [32]: n.__version__, s.__version__
Out[32]: ('1.0.dev3306', '0.5.2.dev2252')

Basically, at least for these versions, the top-level API of scipy is a strict superset of the numpy one for all practical purposes. I think it's fair to say that if we start sprinkling special cases where certain objects happen to have the same name but produce different results for the same inputs, confusion will arise.

Please note that I see a valid reason for scipy.foo != numpy.foo when the scipy version uses code with extra features, is faster, has additional options, etc. But as I said in a previous message, I think that /for the same input/, we should really try to satisfy that

  numpy.foo(x) == scipy.foo(x)   (which is NOT the same as 'numpy.foo is scipy.foo')

within reason. Obviously the scipy version may succeed where the numpy one fails due to better algorithms, or be faster, etc. I'm talking about a general principle here.

I doubt I'll be able to state my point with any more clarity, so I'll stop now. But I really believe that this particular aspect of consistency between numpy and scipy is a /very/ important one for its adoption in wider communities.

Best,

f |
From: Scott S. <sin...@uk...> - 2006-10-12 07:12:34
|
Fernando Perez wrote:
> Please note that I see a valid reason for scipy.foo != numpy.foo when
> the scipy version uses code with extra features, is faster, has
> additional options, etc. But as I said in a previous message, I think
> that /for the same input/, we should really try to satisfy that
>
> numpy.foo(x) == scipy.foo(x) (which is NOT the same as 'numpy.foo is scipy.foo')
>
> within reason.

As far as I can tell this is exactly what happens. Consider the issue under discussion...

----------------------------------
>>> import numpy as np
>>> np.sqrt(-1)
-1.#IND
>>> np.sqrt(-1+0j)
1j
>>> a = complex(-1)
>>> np.sqrt(a)
1j
>>> import scipy as sp
>>> sp.sqrt(-1)
-1.#IND
>>> np.sqrt(-1+0j)
1j
>>> sp.sqrt(a)
1j
>>> np.__version__
'1.0rc1'
>>> sp.__version__
'0.5.1'
>>>
----------------------------------

I'm sure that this hasn't changed in the development versions. Surely the point is that when your algorithm can potentially produce a complex result, the logical thing to do is to use a complex data type. In this case Numpy and Scipy behave in a way which is intuitive. If complex results are surprising and unexpected then the algorithm is probably in error or poorly understood ;-)

Cheers, Scott |
From: Travis O. <oli...@ie...> - 2006-10-12 07:31:01
|
> I'd like to pitch in (again) on this issue, but I'll try to make sure
> that it's clear that I'm NOT arguing about sqrt() in particular, one
> way or another.

Fernando,

I don't disagree with you in principle. I don't think anybody does. I think we should try to keep the interfaces and expectations of scipy and numpy the same. Unfortunately, we have competing issues in this particular case (in the case of the functions in numpy.lib.scimath). Nobody has suggested an alternative to the current situation in SVN that is satisfying to enough users. Here is the situation.

1) NumPy ufuncs never up-cast to complex numbers without the user explicitly requesting it, so sqrt(-1) creates a floating-point error condition which is either caught or ignored according to the user's desires. To get complex results from sqrt you have to put in complex numbers to begin with. That's inherent in the way ufuncs work. This is long-standing behavior that has good reasons for its existence. I don't see this changing. That's why I suggested moving the discussion over to scipy (we have the fancy functions in NumPy, they are just not in the top-level name-space).

Now, it would be possible to give ufuncs a dtype keyword argument that allowed you to specify which underlying loop was to be used for the calculation. That way you wouldn't have to convert inputs to complex numbers before calling the ufunc, but could let the ufunc do it in chunks during the loop. That is certainly a reasonable enhancement:

  sqrt(a, dtype=complex)

This no doubt has a "library-ish" feeling, but that is what NumPy is. If such a change is desirable, I don't think it would be much work to implement it.

2) In SciPy 0.3.2 the top-level math functions were overloaded with these fancy argument-checking versions, so that scipy.sqrt(-1) would return 1j. This was done precisely to attract users like David G. who don't mind data-type conversions on the fly, but prefer automatic conversion (funny being called non-sensical when I was the one who wrote those original scipy functions --- I guess I'm schizophrenic). We changed this in SciPy 0.5.1 by accident without any discussion. It was simply a by-product of moving scipy_base (including those special-handling functions) into NumPy and forgetting to import those functions again into top-level SciPy. It was an oversight that caused backwards-compatibility issues. So, I simply changed it back in SVN to what SciPy used to be. If we want to change SciPy, then fine, but let's move that discussion over to scipy-dev.

In short, I appreciate all the comments and the differences of opinion they point out, but they are ultimately non-productive. We can't just change top-level sqrt to be the fancy function willy-nilly. Paul says I'm nice (he's not talked to my children recently), but I'm not that much of a push-over. There are very good reasons that NumPy has the behavior it does.

In addition, the fancy functions are already there in numpy, in numpy.lib.scimath. So, use them from there if you like them. Create your own little mynumpy module that does

  from numpy import *
  from numpy.lib.scimath import *

and have a ball. Python is flexible enough that the sky is not going to fall if the library doesn't do things exactly the way you would do it. We can still cooperate in areas that we agree on.

Again: put this to rest on NumPy and move the discussion over to scipy-dev.

-Travis |
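A minimal mynumpy.py along these lines (the module name and layout are purely illustrative; the point is that the scimath names, imported last, shadow the plain ufuncs):

# mynumpy.py -- numpy's namespace, with the complex-promoting scimath versions on top
from numpy import *              # plain ufuncs: sqrt(-1) stays real and yields nan
from numpy.lib.scimath import *  # re-binds sqrt, log, power, ... to the argument-checking versions

Used as:

import mynumpy as np
np.sqrt(-1)    # 1j rather than nan, because scimath.sqrt was imported last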
From: Fernando P. <fpe...@gm...> - 2006-10-12 07:46:34
|
On 10/12/06, Travis Oliphant <oli...@ie...> wrote:
>> I'd like to pitch in (again) on this issue, but I'll try to make sure
>> that it's clear that I'm NOT arguing about sqrt() in particular, one
>> way or another.
>
> Fernando,
>
> I don't disagree with you in principle. I don't think anybody does. I
> think we should try to keep the interfaces and expectations of scipy and
> numpy the same.

OK, I'm glad to hear that. For a moment during the discussion I misunderstood your intent and thought you did disagree with this, which worried me. Sorry for the confusion and any noise caused.

I realize the sqrt() topic is a tricky one which I've deliberately sidestepped, since I wasn't too interested in that particular case but rather in the general principle. I'll leave that discussion to continue on scipy-dev for those who are interested in it.

Regards,

f |
From: Travis O. <oli...@ie...> - 2006-10-12 07:50:06
|
Travis Oliphant wrote:
> Now, it would be possible to give ufuncs a dtype keyword argument that
> allowed you to specify which underlying loop was to be used for the
> calculation. That way you wouldn't have to convert inputs to complex
> numbers before calling the ufunc, but could let the ufunc do it in
> chunks during the loop. That is certainly a reasonable enhancement:
>
>   sqrt(a, dtype=complex)
>
> This no doubt has a "library-ish" feeling, but that is what NumPy is.
> If such a change is desirable, I don't think it would be much work to
> implement it.

This could be implemented, but only with a version-number increase in the C-API (we would have to change the C signature of the ufunc tp_call). This would mean that the next release of NumPy would be binary incompatible with packages built against previous NumPy releases.

I've really been trying to avoid doing that. So, unless there are strong requests for this feature that outweigh the negatives of re-building dependent packages, this feature will have to wait.

OTOH: I suppose it could be implemented in a different way (using indexing or a method call):

  sqrt[complex](a)

--- I remember Tim suggesting some use for indexing on ufuncs earlier, though.

-Travis |
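For concreteness, one way the indexing spelling could look from pure Python, sketched as a thin wrapper around an existing ufunc rather than the C-level change discussed above (the class and its behavior are illustrative only, not part of numpy):

import numpy as np

class DtypeIndexable:
    """Wrap a ufunc so that uf[dtype](x) casts the input before calling it."""
    def __init__(self, ufunc):
        self.ufunc = ufunc

    def __call__(self, *args, **kwargs):
        return self.ufunc(*args, **kwargs)

    def __getitem__(self, dtype):
        # sqrt[complex] returns a callable that casts its input first,
        # standing in for "use the complex loop of the ufunc".
        def typed(x, *args, **kwargs):
            return self.ufunc(np.asarray(x, dtype=dtype), *args, **kwargs)
        return typed

sqrt = DtypeIndexable(np.sqrt)
print(sqrt(4.0))             # 2.0 -- ordinary call unchanged
print(sqrt[complex](-1.0))   # 1j  -- input routed through the complex loop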
From: Tim H. <tim...@ie...> - 2006-10-12 14:27:21
|
Travis Oliphant wrote:
> Travis Oliphant wrote:
>> Now, it would be possible to give ufuncs a dtype keyword argument that
>> allowed you to specify which underlying loop was to be used for the
>> calculation. That way you wouldn't have to convert inputs to complex
>> numbers before calling the ufunc, but could let the ufunc do it in
>> chunks during the loop. That is certainly a reasonable enhancement:
>>
>>   sqrt(a, dtype=complex)
>>
>> This no doubt has a "library-ish" feeling, but that is what NumPy is.
>> If such a change is desirable, I don't think it would be much work to
>> implement it.
>
> This could be implemented, but only with a version-number increase in
> the C-API (we would have to change the C signature of the ufunc tp_call).
>
> This would mean that the next release of NumPy would be binary
> incompatible with packages built against previous NumPy releases.
>
> I've really been trying to avoid doing that. So, unless there are
> strong requests for this feature that outweigh the negatives of
> re-building dependent packages, this feature will have to wait.
>
> OTOH: I suppose it could be implemented in a different way (using
> indexing or a method call):
>
>   sqrt[complex](a)
>
> --- I remember Tim suggesting some use for indexing on ufuncs earlier, though.

It wouldn't surprise me if I did -- it sounds like the kind of thing I'd propose -- but I certainly can't remember what I was proposing.

-tim |
From: David G. <Dav...@no...> - 2006-10-12 07:54:40
|
Travis Oliphant wrote:
> David Goldsmith wrote:
>> I don't use scipy (and don't want to because of the overhead) but it
>> sounds like I should, because if I'm taking the square root of a variable
>> whose value at run time happens to be real but less than zero, I *want*
>> the language I'm using to return an imaginary; in other words, it's not
>> the scipy behavior which "scares" me, it's the numpy (which I do/have
>> been using) behavior.
>
> O.K. Well the functions you want are in numpy.lib.scimath. I should
> have directed you there. You actually don't need scipy installed at
> all. Just import sqrt from numpy.lib.scimath. I'm sorry I
> misunderstood the issue.
>
> -Travis

Got it. And if I understand correctly, the import order you specify in the little mynumpy example you included in your latest response to Fernando will result in any "overlap" between numpy and numpy.lib.scimath resolving to the latter's version of things rather than the former's, yes?

DG |
From: Travis O. <oli...@ee...> - 2006-10-12 19:16:13
|
David Goldsmith wrote:
> Got it. And if I understand correctly, the import order you specify in
> the little mynumpy example you included in your latest response to
> Fernando will result in any "overlap" between numpy and
> numpy.lib.scimath resolving to the latter's version of things rather
> than the former's, yes?

Right. The last import will be used for any common names (variables get re-bound to the new functions...)

-Travis |
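A quick interactive check of that rebinding (the nan result may be accompanied by a warning, depending on the current error state):

>>> from numpy import sqrt
>>> sqrt(-1.0)                           # plain ufunc: stays real
nan
>>> from numpy.lib.scimath import sqrt   # the name 'sqrt' is re-bound
>>> sqrt(-1.0)                           # scimath version promotes to complex
1j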
From: Travis O. <oli...@ee...> - 2006-10-11 22:20:01
|
Fernando Perez wrote:
> On 10/11/06, Travis Oliphant <oli...@ee...> wrote:
>> pe...@ce... wrote:
>>> Could sqrt(-1) be made to return 1j again?
>> Not in NumPy. But, in scipy it could.
>
> Without taking sides on which way to go, I'd like to -1 the idea of a
> difference in behavior between numpy and scipy.
>
> IMHO, scipy should be within reason a strict superset of numpy.

This was not the relationship of scipy to Numeric.

For me, it's the fact that scipy *used* to have the behavior that scipy.sqrt(-1) returns 1j, and now doesn't, that is the kicker. On the other hand, requiring all calls to numpy.sqrt to go through an "argument-checking" wrapper is a bad idea, as it will slow down other uses. So, I committed a change to scipy to bring it back into compatibility with 0.3.2.

> Gratuitous differences in behavior like this one are going to drive
> us all mad.
>
> There are people who import scipy for everything, others distinguish
> between numpy and scipy, others use numpy alone and at some point in
> their life's code they do
>
> import numpy as N -> import scipy as N
>
> because they start needing stuff not in plain numpy. Having different
> APIs and behaviors appear there is, I think, a Seriously Bad Idea
> (TM).

I think the SBI is mixing numpy and scipy gratuitously (which I admit I have done in the past). I'm trying to repent....

-Travis |
From: Fernando P. <fpe...@gm...> - 2006-10-11 22:59:29
|
On 10/11/06, Travis Oliphant <oli...@ee...> wrote:
> Fernando Perez wrote:
>> IMHO, scipy should be within reason a strict superset of numpy.
>
> This was not the relationship of scipy to Numeric.
>
> For me, it's the fact that scipy *used* to have the behavior that
>
>   scipy.sqrt(-1) returns 1j
>
> and now doesn't that is the kicker.

That's fine, my only point was that we should really strive for consistency between the two. I think most users should be able to expect that

  numpy.foo(x) == scipy.foo(x)

for all cases where foo exists in both. The scipy.foo() call might be faster, or take extra arguments for flexibility, and the above might only be true within floating point accuracy (since a different algorithm may be used), but hopefully functions with the same name do the same thing in both.

I really think breaking this will send quite a few potential users running for the hills, and this is what I meant by 'superset'. Perhaps I wasn't clear enough.

Cheers, f |
From: Fernando P. <fpe...@gm...> - 2006-10-11 23:02:17
|
On 10/11/06, Travis Oliphant <oli...@ee...> wrote:
> Fernando Perez wrote:
>> There are people who import scipy for everything, others distinguish
>> between numpy and scipy, others use numpy alone and at some point in
>> their life's code they do
>>
>> import numpy as N -> import scipy as N
>>
>> because they start needing stuff not in plain numpy. Having different
>> APIs and behaviors appear there is, I think, a Seriously Bad Idea
>> (TM).
>
> I think the SBI is mixing numpy and scipy gratuitously (which I admit I
> have done in the past). I'm trying to repent....

Well, the problem is that it may not be so easy not to do so, esp. for new users. The fact that scipy absorbs and exposes many numpy functions makes this a particularly easy trap for anyone to fall into. The fact that even seasoned users do it should be an indicator that the 'right thing to do' is anything but obvious, IMHO.

Once the dust settles on numpy 1.0, I think that the issues of how scipy plays with it, API consistency, coding best practices, etc., will need serious attention. But let's cross one bridge at a time :)

Cheers, f |
From: <pe...@ce...> - 2006-10-11 23:13:29
|
On Wed, 11 Oct 2006, Travis Oliphant wrote:

> On the other hand requiring all calls to numpy.sqrt to go through an
> "argument-checking" wrapper is a bad idea as it will slow down other uses.

Interestingly, in the worst cases numpy.sqrt is approximately ~3 times slower than scipy.sqrt on negative input, but ~2 times faster on positive input:

In [47]: pos_input = numpy.arange(1,100,0.001)

In [48]: %timeit -n 1000 b=numpy.sqrt(pos_input)
1000 loops, best of 3: 4.68 ms per loop

In [49]: %timeit -n 1000 b=scipy.sqrt(pos_input)
1000 loops, best of 3: 10 ms per loop

In [50]: neg_input = -pos_input

In [52]: %timeit -n 1000 b=numpy.sqrt(neg_input)
1000 loops, best of 3: 99.3 ms per loop

In [53]: %timeit -n 1000 b=scipy.sqrt(neg_input)
1000 loops, best of 3: 29.2 ms per loop

nan's are making things really slow,

Pearu |
From: Travis O. <oli...@ee...> - 2006-10-11 23:24:35
|
pe...@ce... wrote:
> On Wed, 11 Oct 2006, Travis Oliphant wrote:
>
>> On the other hand requiring all calls to numpy.sqrt to go through an
>> "argument-checking" wrapper is a bad idea as it will slow down other uses.
>
> Interestingly, in the worst cases numpy.sqrt is approximately ~3 times slower
> than scipy.sqrt on negative input, but ~2 times faster on positive input:
>
> In [47]: pos_input = numpy.arange(1,100,0.001)
>
> In [48]: %timeit -n 1000 b=numpy.sqrt(pos_input)
> 1000 loops, best of 3: 4.68 ms per loop
>
> In [49]: %timeit -n 1000 b=scipy.sqrt(pos_input)
> 1000 loops, best of 3: 10 ms per loop

This is the one that concerns me. Slowing down everybody who knows they have positive values, just for the people who don't, seems problematic.

> In [50]: neg_input = -pos_input
>
> In [52]: %timeit -n 1000 b=numpy.sqrt(neg_input)
> 1000 loops, best of 3: 99.3 ms per loop
>
> In [53]: %timeit -n 1000 b=scipy.sqrt(neg_input)
> 1000 loops, best of 3: 29.2 ms per loop
>
> nan's are making things really slow,

Yeah, they do. This actually makes the case for masked arrays, rather than using NaNs.

-Travis |
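For reference, a small illustration of the masked-array alternative mentioned here, where out-of-domain entries are masked instead of becoming NaNs (shown with the numpy.ma spelling; at the time of this thread the module lived at numpy.core.ma):

import numpy as np
import numpy.ma as ma

a = np.array([4.0, -1.0, 9.0])
r = ma.sqrt(a)        # the negative entry is masked, not turned into nan
print(r)              # [2.0 -- 3.0]
print(r.filled(0.0))  # [2. 0. 3.] -- fill value substituted where masked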
From: <pe...@ce...> - 2006-10-11 23:40:04
|
On Wed, 11 Oct 2006, Travis Oliphant wrote:

>> Interestingly, in the worst cases numpy.sqrt is approximately ~3 times slower
>> than scipy.sqrt on negative input, but ~2 times faster on positive input:
>>
>> In [47]: pos_input = numpy.arange(1,100,0.001)
>>
>> In [48]: %timeit -n 1000 b=numpy.sqrt(pos_input)
>> 1000 loops, best of 3: 4.68 ms per loop
>>
>> In [49]: %timeit -n 1000 b=scipy.sqrt(pos_input)
>> 1000 loops, best of 3: 10 ms per loop
>
> This is the one that concerns me. Slowing down everybody who knows they
> have positive values, just for the people who don't, seems problematic.

I think the code in scipy.sqrt can be optimized from

def _fix_real_lt_zero(x):
    x = asarray(x)
    if any(isreal(x) & (x < 0)):
        x = _tocomplex(x)
    return x

def sqrt(x):
    x = _fix_real_lt_zero(x)
    return nx.sqrt(x)

to (untested)

def _fix_real_lt_zero(x):
    x = asarray(x)
    if not isinstance(x, (nt.csingle, nt.cdouble)) and any(x < 0):
        x = _tocomplex(x)
    return x

def sqrt(x):
    x = _fix_real_lt_zero(x)
    return nx.sqrt(x)

or

def sqrt(x):
    old = nx.seterr(invalid='raises')
    try:
        r = nx.sqrt(x)
    except FloatingPointError:
        x = _tocomplex(x)
        r = nx.sqrt(x)
    nx.seterr(**old)
    return r

I haven't timed these cases yet..

Pearu |
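A self-contained, runnable variant of the seterr-based fallback sketched above (note that released numpy spells the mode 'raise', not 'raises', and since the error state is global it is safest to restore it in a finally block):

import numpy as np

def sqrt_with_complex_fallback(x):
    """Real sqrt, retried with complex promotion if any input is negative."""
    old = np.seterr(invalid='raise')   # make sqrt of negative input raise
    try:
        try:
            return np.sqrt(x)
        except FloatingPointError:
            return np.sqrt(np.asarray(x, dtype=complex))
    finally:
        np.seterr(**old)               # always restore the previous error state

sqrt_with_complex_fallback([4.0, 9.0])    # -> array([2., 3.])
sqrt_with_complex_fallback([4.0, -1.0])   # -> array([2.+0.j, 0.+1.j])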
From: Pierre GM <pie...@en...> - 2006-10-12 00:11:00
|
>> nan's are making things really slow,
>
> Yeah, they do. This actually makes the case for masked arrays, rather
> than using NaNs.

Travis,

Talking about masked arrays, I'm about done rewriting numpy.core.ma, mainly transforming MaskedArray into a subclass of ndarray (it should be OK by the end of the week), and allowing for easy subclassing of MaskedArrays (which is far from being the case right now).

What would be the best procedure to submit it? Ticket on SVN? Wiki on scipy.org?

Thanks again for your time!

Pierre |
From: David G. <Dav...@no...> - 2006-10-12 06:32:12
|
Travis Oliphant wrote:
> This is the one that concerns me. Slowing down everybody who knows they
> have positive values, just for the people who don't, seems problematic.

Then have a "sqrtp" function for those users who are fortunate enough to know ahead of time that they'll only be taking square roots of nonnegatives.

DG |
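A sketch of what such a split could look like at the user level, with the hypothetical sqrtp being simply the unchecked ufunc and sqrt the argument-checking wrapper (both names here are illustrative):

import numpy as np
from numpy.lib import scimath

sqrtp = np.sqrt        # fast path: caller guarantees nonnegative (or already complex) input
sqrt = scimath.sqrt    # general path: promotes negative real input to complex

sqrtp(np.array([1.0, 4.0]))     # array([1., 2.]) -- no argument-checking overhead
sqrt(np.array([-1.0, 4.0]))     # array([0.+1.j, 2.+0.j])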
From: A. M. A. <per...@gm...> - 2006-10-12 00:24:04
|
On 11/10/06, pe...@ce... <pe...@ce...> wrote:
> """
> In SciPy 0.3.x the ufuncs were overloaded by more "intelligent" versions.
> A very attractive feature was that sqrt(-1) would yield 1j as in Matlab.
> Then you can program formulas directly (e.g., roots of a 2nd order
> polynomial) and the right answer is always achieved. In the Matlab-Python
> battle in mathematics education, this feature is important.
>
> Now in SciPy 0.5.x sqrt(-1) yields nan. A lot of code we have, especially
> for introductory numerics and physics courses, is now broken.
> This has already made my colleagues at the University skeptical of
> Python, as "this lack of backward compatibility would never happen in Matlab".
>
> Another problem related to Numeric and numpy is that in these courses we
> use ScientificPython in several places, which uses Numeric and will
> continue to do so. You then easily get a mix of numpy and Numeric
> in scripts, which may cause problems and at least extra overhead.
> Just converting to numpy in your own scripts isn't enough if you call
> up libraries using and returning Numeric.
> """
>
> I wonder, what are the reasons that numpy.sqrt(-1) returns nan?
> Could sqrt(-1) be made to return 1j again? If not, shouldn't
> numpy.sqrt(-1) raise a ValueError instead of silently returning nan?

What is the desired behaviour of sqrt?

Should sqrt always return a complex array, regardless of the type of its input? This will be extremely surprising to many users, whose memory usage suddenly doubles and for whom many functions no longer work the way they're accustomed to.

Should it return a complex array only when any entry in its input is negative? This will be even *more* surprising when a negative (perhaps even -0) value appears in their matrix (for example, does a+min(a) yield -0s in the minimal values?) and suddenly it's complex.

A ValueError is also surprising, and it forces the user to sanitize her array before taking the square root, instead of whenever convenient.

If you want MATLAB behaviour, use only complex arrays. If the problem is backward incompatibility, there's a reason 1.0 hasn't been released yet...

A. M. Archibald |
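For contrast, the "use only complex arrays" route suggested here, at the cost of doubled memory for the imaginary component:

>>> import numpy as np
>>> a = np.array([4.0, -1.0], dtype=complex)
>>> np.sqrt(a)
array([ 2.+0.j,  0.+1.j])
>>> a.itemsize        # 16 bytes per element versus 8 for float64
16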
From: Stefan v. d. W. <st...@su...> - 2006-10-12 00:41:23
|
On Wed, Oct 11, 2006 at 08:24:01PM -0400, A. M. Archibald wrote:
> What is the desired behaviour of sqrt?
[...]
> Should it return a complex array only when any entry in its input is
> negative? This will be even *more* surprising when a negative (perhaps
> even -0) value appears in their matrix (for example, does a+min(a)
> yield -0s in the minimal values?) and suddenly it's complex.

Luckily sqrt(-0.) gives -0.0 and not nan ;)

Regards
Stéfan |
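Checking that corner case (IEEE 754 defines sqrt(-0.0) as -0.0, so no complex promotion or nan is involved):

>>> import numpy as np
>>> np.sqrt(-0.0)
-0.0
>>> np.signbit(np.sqrt(-0.0))
True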
From: Paul D. <pfd...@gm...> - 2006-10-12 01:36:00
|
This is a meta-statement about this argument. We already had it. Repeatedly. Whether you choose it one way or the other, for Numeric the community chose it the way it did for a reason. It is a good reason. It isn't stupid. There were good reasons for the other way. Those reasons weren't stupid. It was a 'choice amongst equals'. Being compatible with some other package is all very nice but it is simply a different choice and the choice was already made 10 years ago. If scipy chose to do this differently then you now have an intractable problem; somebody is going to get screwed. So, next time somebody tells you that some different choice amongst equals should be made for this and that good reason, just say no. This is why having a project leader who is mean like me is better than having a nice guy like Travis. (:-> |
From: David G. <Dav...@no...> - 2006-10-12 06:16:11
|
Sven Schreiber wrote:
> Travis Oliphant schrieb:
>>> If not, shouldn't numpy.sqrt(-1) raise a ValueError instead of
>>> silently returning nan?
>>
>> This is user adjustable. You change the error mode to raise on
>> 'invalid' instead of pass silently, which is now the default.
>>
>> -Travis
>
> Could you please explain how this adjustment is done, or point to the
> relevant documentation.
> Thank you,
> Sven

I'm glad you asked this, Sven, 'cause I was thinking that if making this "user adjustment" is this advanced (I too have no idea what you're talking about, Travis), then this would be another significant strike against numpy (but I was holding my tongue, since I'd just let fly in my previous email).

DG |
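The "user adjustment" Travis refers to is numpy's floating-point error state, controlled by numpy.seterr; a minimal example of switching 'invalid' from the silent default to raising:

import numpy as np

old = np.seterr(invalid='raise')   # invalid operations now raise instead of returning nan
try:
    np.sqrt(-1.0)                  # raises FloatingPointError
except FloatingPointError as e:
    print(e)                       # e.g. "invalid value encountered in sqrt"
np.seterr(**old)                   # restore the previous settings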