From: Charles R H. <cha...@gm...> - 2006-10-12 04:50:05
On 10/11/06, Tim Hochberg <tim...@ie...> wrote:
>
> Greg Willden wrote:
> > On 10/11/06, *Travis Oliphant* <oli...@ee...
> > <mailto:oli...@ee...>> wrote:
> >
> >     Stefan van der Walt wrote:
> >     >Further, if I understand correctly, changing sqrt and power to give
> >     >the right answer by default will slow things down somewhat. But is it
> >     >worth sacrificing intuitive usage for speed?
> >     >
> >     For NumPy, yes.
> >
> >     This is one reason that NumPy by itself is not a MATLAB replacement.
> >
> > This is not about being a Matlab replacement.
> > This is about correctness.
> > Numpy purports to handle complex numbers.
> > Mathematically, sqrt(-1) is a complex number.
>
> That's vastly oversimplified. If you are working with the reals, then
> sqrt(-1) is undefined (AKA nan). If you are working in the complex
> plane, then sqrt(-1) is indeed *a* complex number;

And if you are working over the rationals, sqrt(-1) and sqrt(-2) lead to
different field extensions ;) Of course, numpy doesn't *have* rationals,
so I'm just being cute.

<snip>

> Personally I think that the default error mode should be tightened up.
> Then people would only see these sort of things if they really care
> about them. Using Python 2.5 and the errstate class I posted earlier:
>
>     # This is what I like for the default error state
>     numpy.seterr(invalid='raise', divide='raise', over='raise',
>                  under='ignore')

I like these choices too. Overflow, division by zero, and sqrt(-x) are
usually errors, indicating bad data or programming bugs. Underflowed
floats, OTOH, are just really, really small numbers and can be treated as
zero. An exception might be if the result is used in division and no error
is raised, resulting in a loss of accuracy.

If complex results are *expected*, then this should be made explicit by
using complex numbers. Numpy allows fine-grained control of data types and
array ordering; it is a bit closer to the metal than Matlab. This extra
control allows greater efficiency, both in speed and in storage, at the
cost of a bit more care on the programmer's side.

Chuck
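For readers following along, here is a minimal sketch (a hypothetical
modern-numpy session, not part of the original thread) of what the error
state endorsed above does in practice:

    import numpy as np

    # The defaults Tim proposed and Chuck endorses above.
    np.seterr(invalid='raise', divide='raise', over='raise', under='ignore')

    a = np.array([1.0, -1.0, 4.0])
    try:
        np.sqrt(a)                    # the -1.0 entry is an invalid operation
    except FloatingPointError as err:
        print('raised:', err)

    print(np.array(1e-300) * 1e-300)  # underflow is ignored and quietly gives 0.0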
From: Tim H. <tim...@ie...> - 2006-10-12 04:08:38
Greg Willden wrote:
> On 10/11/06, *Travis Oliphant* <oli...@ee...
> <mailto:oli...@ee...>> wrote:
>
>     Stefan van der Walt wrote:
>     >Further, if I understand correctly, changing sqrt and power to give
>     >the right answer by default will slow things down somewhat. But is it
>     >worth sacrificing intuitive usage for speed?
>     >
>     For NumPy, yes.
>
>     This is one reason that NumPy by itself is not a MATLAB replacement.
>
> This is not about being a Matlab replacement.
> This is about correctness.
> Numpy purports to handle complex numbers.
> Mathematically, sqrt(-1) is a complex number.

That's vastly oversimplified. If you are working with the reals, then
sqrt(-1) is undefined (AKA nan). If you are working in the complex plane,
then sqrt(-1) is indeed *a* complex number; of course you don't know
*which* complex number it is unless you also specify the branch.

> Therefore Numpy *must* return a complex number.

No, I don't think that it must. I've found it a very useful tool for the
past decade plus without it returning complex numbers from sqrt.

> Speed should not take precedence over correctness.

The current behavior is not incorrect.

> If Numpy doesn't return a complex number then it shouldn't pretend to
> support complex numbers.

Please relax.

Personally I think that the default error mode should be tightened up.
Then people would only see these sort of things if they really care about
them. Using Python 2.5 and the errstate class I posted earlier:

    # This is what I like for the default error state
    numpy.seterr(invalid='raise', divide='raise', over='raise',
                 under='ignore')

    a = -numpy.arange(10)

    with errstate(invalid='ignore'):
        print numpy.sqrt(a)  # This happily returns a bunch of NANs, and one zero.

    print numpy.sqrt(a.astype(complex))  # This returns a bunch of complex values.

    print numpy.sqrt(a)  # This raises a floating point error. No silent NANs returned.

This same error state makes the vagaries of dividing by zero less
surprising as well.

-tim
From: Travis O. <oli...@ie...> - 2006-10-12 04:01:45
Greg Willden wrote:
> On 10/11/06, *Travis Oliphant* <oli...@ee...
> <mailto:oli...@ee...>> wrote:
>
>     Stefan van der Walt wrote:
>     >Further, if I understand correctly, changing sqrt and power to give
>     >the right answer by default will slow things down somewhat. But is it
>     >worth sacrificing intuitive usage for speed?
>     >
>     For NumPy, yes.
>
>     This is one reason that NumPy by itself is not a MATLAB replacement.
>
> This is not about being a Matlab replacement.
> This is about correctness.

I disagree. NumPy does the "correct" thing when you realize that sqrt is a
function that returns the same type as its input. The field over which the
operation takes place is defined by the input data-type and not the input
"values". Either way can be considered correct mathematically. As Paul
said, it was a design decision not to go searching through the array to
determine whether or not there are negative numbers in it. Of course you
can do that if you want, and that's what scipy.sqrt does.

> Numpy purports to handle complex numbers.
> Mathematically, sqrt(-1) is a complex number.

Or maybe it's undefined if you are in the field of real numbers. It all
depends.

> Therefore Numpy *must* return a complex number.

Only if the input is complex. That is a reasonable alternative to your
specification.

> If Numpy doesn't return a complex number then it shouldn't pretend to
> support complex numbers.

Of course it supports complex numbers; it just doesn't support automatic
conversion to complex numbers. It supports complex numbers the same way
Python supports them (i.e. you have to use cmath to get sqrt(-1) == 1j).

People can look at this many ways without calling the other way of looking
at it unreasonable. I don't see a pressing need to change this in NumPy,
and in fact see many reasons to leave it the way it is. This discussion
should move to the scipy list, because that is the only place where a
change could occur.

-Travis
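The same-type-in, same-type-out rule Travis describes is easy to see in a
quick (hypothetical) session; the output type follows the input dtype, not
the input values:

    import numpy as np

    print(np.sqrt(-1.0))     # nan (plus an invalid-value warning): real in, real out
    print(np.sqrt(-1 + 0j))  # 1j: complex in, complex out, no value inspection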
From: Bill B. <wb...@gm...> - 2006-10-12 03:48:50
On 10/12/06, Greg Willden <gre...@gm...> wrote:
> On 10/11/06, Travis Oliphant <oli...@ee...> wrote:
> > Stefan van der Walt wrote:
> > >Further, if I understand correctly, changing sqrt and power to give
> > >the right answer by default will slow things down somewhat. But is it
> > >worth sacrificing intuitive usage for speed?
> > >
> > For NumPy, yes.
> >
> > This is one reason that NumPy by itself is not a MATLAB replacement.
>
> This is not about being a Matlab replacement.
> This is about correctness.
> Numpy purports to handle complex numbers.
> Mathematically, sqrt(-1) is a complex number.
> Therefore Numpy *must* return a complex number.
> Speed should not take precedence over correctness.

Unless your goal is speed. Then speed should take precedence over
correctness. And unless you're a fan of quaternions, in which case *which*
square root of -1 should it return?

It's interesting to note that although Python has had complex numbers
pretty much from the beginning, math.sqrt(-1) returns an error. If you
want to work with complex square roots you need to use cmath.sqrt().
Basically, you have to tell Python that complex numbers are something you
care about by using the module designed for complex math. This scimath
module is a similar deal.

But perhaps the name could be a little more obvious or short? Right now it
seems it only deals with complex numbers, so maybe having "complex" or
"cmath" in the name would make it clearer. Hmm, there is a numpy.math; why
not a numpy.cmath?

> If Numpy doesn't return a complex number then it shouldn't pretend to
> support complex numbers.

That's certainly being overdramatic. Lots of folks are doing nifty stuff
with complex FFTs every day using numpy/scipy. And I'm sure they will
continue to, no matter what numpy.sqrt(-1) returns.

--bb
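The Python precedent Bill mentions is easy to demonstrate with the
standard library alone (nothing here is assumed beyond stock Python):

    import math
    import cmath

    try:
        math.sqrt(-1)         # the real-valued module raises
    except ValueError as err:
        print('math.sqrt:', err)

    print(cmath.sqrt(-1))     # 1j: importing cmath opts in to complex results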
From: Greg W. <gre...@gm...> - 2006-10-12 03:17:15
On 10/11/06, Travis Oliphant <oli...@ee...> wrote:
>
> Stefan van der Walt wrote:
> >Further, if I understand correctly, changing sqrt and power to give
> >the right answer by default will slow things down somewhat. But is it
> >worth sacrificing intuitive usage for speed?
> >
> For NumPy, yes.
>
> This is one reason that NumPy by itself is not a MATLAB replacement.

This is not about being a Matlab replacement.
This is about correctness.
Numpy purports to handle complex numbers.
Mathematically, sqrt(-1) is a complex number.
Therefore Numpy *must* return a complex number.
Speed should not take precedence over correctness.

If Numpy doesn't return a complex number then it shouldn't pretend to
support complex numbers.

Greg
--
Linux. Because rebooting is for adding hardware.
From: Travis O. <oli...@ie...> - 2006-10-12 02:43:40
David Novakovic wrote:
> Hi,
>
> I'm moving some old perl PDL code to python. I've come across a line
> which changes values in a diagonal line across a matrix.
>
> matrix.diagonal() returns a list of values, but making changes to these
> does not reflect in the original (naturally).
>
> I'm just wondering if there is a way that I can increment all the values
> along a diagonal?

You can refer to a diagonal using a flattened index with an element skip
equal to the number of columns + 1. Thus,

    a.flat[::a.shape[1]+1] += 1

will increment the elements of a along the main diagonal.

-Travis
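A small sketch of the trick in action; the tall-matrix cap at the end is
my own addition (an assumption about the m > n case, not something stated
in Travis's post):

    import numpy as np

    a = np.zeros((4, 4), dtype=int)
    a.flat[::a.shape[1] + 1] += 1    # stride of ncols+1 walks (0,0), (1,1), ...
    print(a)                         # ones down the main diagonal

    # For a tall matrix the stride wraps past the last column, so cap the
    # slice at min(m, n) steps (hypothetical extension of the trick above):
    b = np.zeros((5, 3), dtype=int)
    step = b.shape[1] + 1
    b.flat[:min(b.shape) * step:step] += 1
    print(b)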
From: Tim H. <tim...@ie...> - 2006-10-12 01:41:03
Travis Oliphant wrote:
> Tim Hochberg wrote:
>
>> With python 2.5 out now, perhaps it's time to come up with a with
>> statement context manager. Something like:
>>
>>     from __future__ import with_statement
>>     import numpy
>>
>>     class errstate(object):
>>         def __init__(self, **kwargs):
>>             self.kwargs = kwargs
>>         def __enter__(self):
>>             self.oldstate = numpy.seterr(**self.kwargs)
>>         def __exit__(self, *exc_info):
>>             numpy.seterr(**self.oldstate)
>>
>>     a = numpy.arange(10)
>>     a/a  # ignores divide by zero
>>     with errstate(divide='raise'):
>>         a/a  # raise exception on divide by zero
>>     # Would ignore divide by zero again if we got here.
>>
>> -tim
>
> This looks great. I think most people aren't aware of the with
> statement and what it can do (I'm only aware because of your posts, for
> example).
>
> So, what needs to be added to your example in order to just add it to
> numpy?

As far as I know, just testing and documentation -- however, testing was
so minimal that I may find some other stuff. I'll try to clean it up
tomorrow so that I'm a little more confident that it works correctly, and
I'll send another note out then.

-tim
From: Paul D. <pfd...@gm...> - 2006-10-12 01:36:00
This is a meta-statement about this argument. We already had it.
Repeatedly. Whether you choose it one way or the other, for Numeric the
community chose it the way it did for a reason. It is a good reason. It
isn't stupid. There were good reasons for the other way. Those reasons
weren't stupid. It was a 'choice amongst equals'.

Being compatible with some other package is all very nice, but it is
simply a different choice, and the choice was already made 10 years ago.
If scipy chose to do this differently then you now have an intractable
problem; somebody is going to get screwed.

So, next time somebody tells you that some different choice amongst equals
should be made for this and that good reason, just say no. This is why
having a project leader who is mean like me is better than having a nice
guy like Travis. (:->
From: Stefan v. d. W. <st...@su...> - 2006-10-12 00:41:23
On Wed, Oct 11, 2006 at 08:24:01PM -0400, A. M. Archibald wrote:
> What is the desired behaviour of sqrt?

[...]

> Should it return a complex array only when any entry in its input is
> negative? This will be even *more* surprising when a negative (perhaps
> even -0) value appears in their matrix (for example, does a+min(a)
> yield -0s in the minimal values?) and suddenly it's complex.

Luckily sqrt(-0.) gives -0.0 and not nan ;)

Regards
Stéfan
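Stéfan's aside checks out under IEEE 754, which defines the square root of
negative zero as negative zero; a one-line (hypothetical) confirmation:

    import numpy as np

    r = np.sqrt(np.array(-0.0))
    print(r, np.signbit(r))   # -0.0 True: no nan, and the sign bit survives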
From: David N. <da...@di...> - 2006-10-12 00:36:16
David Novakovic wrote:
> Thanks for the help, I've learnt a lot and also figured out something
> that does what I want; I'll paste an interactive session below:
>
>     x = zeros((4,7))
>     x
>     array([[0, 0, 0, 0, 0, 0, 0],
>            [0, 0, 0, 0, 0, 0, 0],
>            [0, 0, 0, 0, 0, 0, 0],
>            [0, 0, 0, 0, 0, 0, 0]])
>     index = arange(min(x.shape[0], x.shape[1]))
>     index2 = copy.deepcopy(index)  # deep copy may be overkill
>     for a,b in enumerate(index):
>     ...     index2[a] += a

Turns out this is not good at all. I guess I'm still open to suggestions
then :(

Dave

>     ...
>     if len(x[:,0]) > len(x[0]):
>     ...     x[index2,index] += 1
>     ... else:
>     ...     x[index,index2] += 1
>     ...
>     x
>     array([[1, 0, 0, 0, 0, 0, 0],
>            [0, 0, 1, 0, 0, 0, 0],
>            [0, 0, 0, 0, 1, 0, 0],
>            [0, 0, 0, 0, 0, 0, 1]])
>
> Thanks for the tips
>
> Dave Novakovic
>
> P.S. subscribed to the list now
>
> Bill Baxter wrote:
>> Forgot to CC you...
>>
>> ---------- Forwarded message ----------
>> From: Bill Baxter <wb...@gm...>
>> Date: Oct 12, 2006 8:58 AM
>> Subject: Re: [Numpy-discussion] incrementing along a diagonal
>> To: Discussion of Numerical Python <num...@li...>
>>
>> On 10/12/06, David Novakovic <da...@di...> wrote:
>>> Johannes Loehnert wrote:
>>> This is very nice, exactly what I want, but it doesn't work for mxn
>>> matrices:
>>>
>>>     >>> x = zeros((5,3))
>>>     >>> x
>>>     array([[0, 0, 0],
>>>            [0, 0, 0],
>>>            [0, 0, 0],
>>>            [0, 0, 0],
>>>            [0, 0, 0]])
>>>     >>> index = arange(min(x.shape[0],x.shape[1]))
>>>     >>> x[index,index] += 1
>>>     >>> x
>>>     array([[1, 0, 0],
>>>            [0, 1, 0],
>>>            [0, 0, 1],
>>>            [0, 0, 0],
>>>            [0, 0, 0]])
>>
>> Exactly what output are you expecting? That is the definition of the
>> 'diagonal' for a non-square matrix. If you're expecting something
>> else then what you want is not the diagonal.
>>
>>> Just for reference, this is the line of perl I'm trying to port:
>>>
>>> like:
>>>
>>>     for index in diag_iter(matrix,*axes):
>>>         matrix[index] += 1
>>
>> That's not going to change the mathematical definition of the diagonal
>> of a non-square matrix.
>>
>>> PS: If anyone would care to link me to the subscription page for the
>>> mailing list so you don't have to CC me all the time :)
>>
>> Check the bottom of this message.
>>
>> --bb
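Since Dave is still looking for suggestions, here is one possible
generalization of the session quoted above, assuming the goal is min(m, n)
evenly spaced hits running corner to corner (the variable names here are
illustrative, not from the thread):

    import numpy as np

    x = np.zeros((4, 7), dtype=int)
    n = min(x.shape)
    rows = np.round(np.linspace(0, x.shape[0] - 1, n)).astype(int)
    cols = np.round(np.linspace(0, x.shape[1] - 1, n)).astype(int)
    x[rows, cols] += 1    # hits columns 0, 2, 4, 6 -- the same pattern as above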
From: A. M. A. <per...@gm...> - 2006-10-12 00:24:04
On 11/10/06, pe...@ce... <pe...@ce...> wrote:
> """
> In SciPy 0.3.x the ufuncs were overloaded by more "intelligent" versions.
> A very attractive feature was that sqrt(-1) would yield 1j as in Matlab.
> Then you can program formulas directly (e.g., roots of a 2nd order
> polynomial) and the right answer is always achieved. In the Matlab-Python
> battle in mathematics education, this feature is important.
>
> Now in SciPy 0.5.x sqrt(-1) yields nan. A lot of code we have, especially
> for introductory numerics and physics courses, is now broken. This has
> already made my colleagues at the University skeptical of Python, as
> "this lack of backward compatibility would never happen in Matlab".
>
> Another problem related to Numeric and numpy is that in these courses we
> use ScientificPython in several places, which applies Numeric and will
> continue to do so. You then easily get a mix of numpy and Numeric in
> scripts, which may cause problems and at least extra overhead. Just
> converting to numpy in your own scripts isn't enough if you call up
> libraries using and returning Numeric.
> """
>
> I wonder, what are the reasons that numpy.sqrt(-1) returns nan?
> Could sqrt(-1) be made to return 1j again? If not, shouldn't
> numpy.sqrt(-1) raise a ValueError instead of silently returning nan?

What is the desired behaviour of sqrt?

Should sqrt always return a complex array, regardless of the type of its
input? This will be extremely surprising to many users, whose memory usage
suddenly doubles and for whom many functions no longer work the way
they're accustomed to.

Should it return a complex array only when any entry in its input is
negative? This will be even *more* surprising when a negative (perhaps
even -0) value appears in their matrix (for example, does a+min(a) yield
-0s in the minimal values?) and suddenly it's complex.

A ValueError is also surprising, and it forces the user to sanitize her
array before taking the square root, instead of whenever convenient.

If you want MATLAB behaviour, use only complex arrays. If the problem is
backward incompatibility, there's a reason 1.0 hasn't been released yet...

A. M. Archibald
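The memory doubling Archibald mentions is concrete and easy to verify (a
minimal sketch, not part of the original thread):

    import numpy as np

    a = np.ones(1000000)       # float64: 8 bytes per element
    c = a.astype(complex)      # complex128: 16 bytes per element
    print(a.nbytes, c.nbytes)  # 8000000 16000000 -- promotion doubles storage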
From: David N. <da...@di...> - 2006-10-12 00:21:14
Thanks for the help, I've learnt a lot and also figured out something that
does what I want; I'll paste an interactive session below:

    x = zeros((4,7))
    x
    array([[0, 0, 0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0, 0, 0]])
    index = arange(min(x.shape[0], x.shape[1]))
    index2 = copy.deepcopy(index)  # deep copy may be overkill
    for a,b in enumerate(index):
    ...     index2[a] += a
    ...
    if len(x[:,0]) > len(x[0]):
    ...     x[index2,index] += 1
    ... else:
    ...     x[index,index2] += 1
    ...
    x
    array([[1, 0, 0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0, 0, 0],
           [0, 0, 0, 0, 1, 0, 0],
           [0, 0, 0, 0, 0, 0, 1]])

Thanks for the tips

Dave Novakovic

P.S. subscribed to the list now

Bill Baxter wrote:
> Forgot to CC you...
>
> ---------- Forwarded message ----------
> From: Bill Baxter <wb...@gm...>
> Date: Oct 12, 2006 8:58 AM
> Subject: Re: [Numpy-discussion] incrementing along a diagonal
> To: Discussion of Numerical Python <num...@li...>
>
> On 10/12/06, David Novakovic <da...@di...> wrote:
>> Johannes Loehnert wrote:
>> This is very nice, exactly what I want, but it doesn't work for mxn
>> matrices:
>>
>>     >>> x = zeros((5,3))
>>     >>> x
>>     array([[0, 0, 0],
>>            [0, 0, 0],
>>            [0, 0, 0],
>>            [0, 0, 0],
>>            [0, 0, 0]])
>>     >>> index = arange(min(x.shape[0],x.shape[1]))
>>     >>> x[index,index] += 1
>>     >>> x
>>     array([[1, 0, 0],
>>            [0, 1, 0],
>>            [0, 0, 1],
>>            [0, 0, 0],
>>            [0, 0, 0]])
>
> Exactly what output are you expecting? That is the definition of the
> 'diagonal' for a non-square matrix. If you're expecting something else
> then what you want is not the diagonal.
>
>> Just for reference, this is the line of perl I'm trying to port:
>>
>> like:
>>
>>     for index in diag_iter(matrix,*axes):
>>         matrix[index] += 1
>
> That's not going to change the mathematical definition of the diagonal
> of a non-square matrix.
>
>> PS: If anyone would care to link me to the subscription page for the
>> mailing list so you don't have to CC me all the time :)
>
> Check the bottom of this message.
>
>> _______________________________________________
>> Numpy-discussion mailing list
>> Num...@li...
>> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
>
> --bb
From: Stefan v. d. W. <st...@su...> - 2006-10-12 00:17:45
On Wed, Oct 11, 2006 at 05:21:44PM -0600, Travis Oliphant wrote:
> Stefan van der Walt wrote:
>
> >I agree with Fernando on this one.
> >
> >Further, if I understand correctly, changing sqrt and power to give
> >the right answer by default will slow things down somewhat. But is it
> >worth sacrificing intuitive usage for speed?
> >
> For NumPy, yes.
>
> This is one reason that NumPy by itself is not a MATLAB replacement.

Intuitive usage is hopefully not a MATLAB-only feature.

> >N.power(2,-2) == 0
> >
> >and
> >
> >N.sqrt(-1) == nan
> >
> >just doesn't feel right.
>
> Only because your expectations are that NumPy *be* a MATLAB
> replacement. The problem is that it sacrifices too much for that to be
> the case. And we all realize that NumPy needs more stuff added to it
> to be like IDL/MATLAB, such as SciPy, Matplotlib, IPython, etc.

I have no such expectations -- I haven't used MATLAB in over 5 years. All
I'm saying is that, since the value of the square root of -1 is not nan
and 2^(-2) is not 0, it doesn't surprise me that this behaviour confuses
people.

> The "intuitive" functions (which must do argument checking) are (in
> numpy.lib.scimath) but exported as
>
>     scipy.power (actually I need to check that one...)
>     scipy.sqrt
>
> What could be simpler? ;-)

I'm sure this is going to come back and haunt us. We have two libraries,
one depends on and exposes the API of the other, yet it also overrides
some functions with its own behaviour, while keeping the same names.

I'll shut up now :)

Cheers
Stéfan
From: Pierre GM <pie...@en...> - 2006-10-12 00:11:00
> > nan's are making things really slow,
>
> Yeah, they do. This actually makes the case for masked arrays, rather
> than using NaNs.

Travis,

Talking about masked arrays: I'm about done rewriting numpy.core.ma,
mainly transforming MaskedArray into a subclass of ndarray (it should be
OK by the end of the week) and allowing for easy subclassing of
MaskedArray (which is far from being the case right now).

What would be the best procedure to submit it? Ticket on SVN? Wiki on
scipy.org?

Thanks again for your time!
Pierre
From: Bill B. <wb...@gm...> - 2006-10-11 23:58:47
On 10/12/06, David Novakovic <da...@di...> wrote:
> Johannes Loehnert wrote:
> This is very nice, exactly what I want, but it doesn't work for mxn
> matrices:
>
>     >>> x = zeros((5,3))
>     >>> x
>     array([[0, 0, 0],
>            [0, 0, 0],
>            [0, 0, 0],
>            [0, 0, 0],
>            [0, 0, 0]])
>     >>> index = arange(min(x.shape[0],x.shape[1]))
>     >>> x[index,index] += 1
>     >>> x
>     array([[1, 0, 0],
>            [0, 1, 0],
>            [0, 0, 1],
>            [0, 0, 0],
>            [0, 0, 0]])

Exactly what output are you expecting? That is the definition of the
'diagonal' for a non-square matrix. If you're expecting something else
then what you want is not the diagonal.

> Just for reference, this is the line of perl I'm trying to port:
>
> like:
>
>     for index in diag_iter(matrix,*axes):
>         matrix[index] += 1

That's not going to change the mathematical definition of the diagonal of
a non-square matrix.

> PS: If anyone would care to link me to the subscription page for the
> mailing list so you don't have to CC me all the time :)

Check the bottom of this message.

> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion

--bb
From: Travis O. <oli...@ee...> - 2006-10-11 23:47:04
Tim Hochberg wrote:
> With python 2.5 out now, perhaps it's time to come up with a with
> statement context manager. Something like:
>
>     from __future__ import with_statement
>     import numpy
>
>     class errstate(object):
>         def __init__(self, **kwargs):
>             self.kwargs = kwargs
>         def __enter__(self):
>             self.oldstate = numpy.seterr(**self.kwargs)
>         def __exit__(self, *exc_info):
>             numpy.seterr(**self.oldstate)
>
>     a = numpy.arange(10)
>     a/a  # ignores divide by zero
>     with errstate(divide='raise'):
>         a/a  # raise exception on divide by zero
>     # Would ignore divide by zero again if we got here.
>
> -tim

This looks great. I think most people aren't aware of the with statement
and what it can do (I'm only aware because of your posts, for example).

So, what needs to be added to your example in order to just add it to
numpy?

-Travis
From: <pe...@ce...> - 2006-10-11 23:40:04
On Wed, 11 Oct 2006, Travis Oliphant wrote:

> >Interestingly, in worst cases numpy.sqrt is approximately ~3 times slower
> >than scipy.sqrt on negative input but ~2 times faster on positive input:
> >
> >In [47]: pos_input = numpy.arange(1,100,0.001)
> >
> >In [48]: %timeit -n 1000 b=numpy.sqrt(pos_input)
> >1000 loops, best of 3: 4.68 ms per loop
> >
> >In [49]: %timeit -n 1000 b=scipy.sqrt(pos_input)
> >1000 loops, best of 3: 10 ms per loop
>
> This is the one that concerns me. Slowing down everybody who knows they
> have positive values, just for people that don't, seems problematic.

I think the code in scipy.sqrt can be optimized from

    def _fix_real_lt_zero(x):
        x = asarray(x)
        if any(isreal(x) & (x<0)):
            x = _tocomplex(x)
        return x

    def sqrt(x):
        x = _fix_real_lt_zero(x)
        return nx.sqrt(x)

to (untested)

    def _fix_real_lt_zero(x):
        x = asarray(x)
        if not isinstance(x,(nt.csingle,nt.cdouble)) and any(x<0):
            x = _tocomplex(x)
        return x

    def sqrt(x):
        x = _fix_real_lt_zero(x)
        return nx.sqrt(x)

or

    def sqrt(x):
        old = nx.seterr(invalid='raise')
        try:
            r = nx.sqrt(x)
        except FloatingPointError:
            x = _tocomplex(x)
            r = nx.sqrt(x)
        nx.seterr(**old)
        return r

I haven't timed these cases yet.

Pearu
From: David N. <da...@di...> - 2006-10-11 23:34:49
Johannes Loehnert wrote:
>> I'm just wondering if there is a way that I can increment all the
>> values along a diagonal?
>
> Assume you want to change mat.
>
>     # min() only necessary for non-square matrices
>     index = arange(min(mat.shape[0], mat.shape[1]))
>     # add 1 to each diagonal element
>     matrix[index, index] += 1
>     # add some other stuff
>     matrix[index, index] += some_array_shaped_like_index
>
> HTH, Johannes

Thank you very much for the prompt reply. I'm just having a problem with
this method: it appears to only work if the matrix is mxm. For example:

    >>> zeros((5,5))
    array([[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]])
    >>> x = zeros((5,5))
    >>> index = arange(min(x.shape[0],x.shape[1]))
    >>> x[index,index] += 1
    >>> x
    array([[1, 0, 0, 0, 0],
           [0, 1, 0, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 0, 1, 0],
           [0, 0, 0, 0, 1]])

This is very nice, exactly what I want, but it doesn't work for mxn
matrices:

    >>> x = zeros((5,3))
    >>> x
    array([[0, 0, 0],
           [0, 0, 0],
           [0, 0, 0],
           [0, 0, 0],
           [0, 0, 0]])
    >>> index = arange(min(x.shape[0],x.shape[1]))
    >>> x[index,index] += 1
    >>> x
    array([[1, 0, 0],
           [0, 1, 0],
           [0, 0, 1],
           [0, 0, 0],
           [0, 0, 0]])

So the min part is right for mxn matrices, but perhaps there is a way to
use the index differently. I'm very new to numpy, so excuse my noobness :)

Just for reference, this is the line of perl I'm trying to port:

    (my $dummy = $ones->diagonal(0,1))++;  # ones is a matrix created with zeroes()

Yes, I know it is horribly ugly, but in this case diagonal returns a list
of references to the values in the original matrix, so values can be
changed in place. I much prefer python (hence the port), but it seems like
a hard thing to replicate.

Perhaps there could be a function that returns an iterator over the values
in a matrix and returns the indexes, like:

    for index in diag_iter(matrix,*axes):
        matrix[index] += 1

Once again, cheers. I hope we can figure something out :)

Dave Novakovic

PS: If anyone would care to link me to the subscription page for the
mailing list so you don't have to CC me all the time :)
From: Mathew Y. <my...@jp...> - 2006-10-11 23:25:07
I'm running the following:

    python c:\Python24\Scripts\f2py.py --fcompiler=absoft -c foo.pyf foo.f

and it seems that the compiler info isn't being passed down. When
distutils tries to compile I get the error

      File "C:\Python24\Lib\site-packages\numpy\distutils\command\build_ext.py", line 260, in build_extension
        f_objects += self.fcompiler.compile(f_sources,
    AttributeError: 'NoneType' object has no attribute 'compile'

so the fcompiler isn't being set. Any help?

Mathew

Here is the complete stack trace:

    Traceback (most recent call last):
      File "c:\Python24\Scripts\f2py.py", line 26, in ?
        main()
      File "C:\Python24\Lib\site-packages\numpy\f2py\f2py2e.py", line 552, in main
        run_compile()
      File "C:\Python24\Lib\site-packages\numpy\f2py\f2py2e.py", line 539, in run_compile
        setup(ext_modules = [ext])
      File "C:\Python24\Lib\site-packages\numpy\distutils\core.py", line 174, in setup
        return old_setup(**new_attr)
      File "C:\Python24\lib\distutils\core.py", line 149, in setup
        dist.run_commands()
      File "C:\Python24\lib\distutils\dist.py", line 946, in run_commands
        self.run_command(cmd)
      File "C:\Python24\lib\distutils\dist.py", line 966, in run_command
        cmd_obj.run()
      File "C:\Python24\lib\distutils\command\build.py", line 112, in run
        self.run_command(cmd_name)
      File "C:\Python24\lib\distutils\cmd.py", line 333, in run_command
        self.distribution.run_command(command)
      File "C:\Python24\lib\distutils\dist.py", line 966, in run_command
        cmd_obj.run()
      File "C:\Python24\Lib\site-packages\numpy\distutils\command\build_ext.py", line 121, in run
        self.build_extensions()
      File "C:\Python24\lib\distutils\command\build_ext.py", line 405, in build_extensions
        self.build_extension(ext)
      File "C:\Python24\Lib\site-packages\numpy\distutils\command\build_ext.py", line 260, in build_extension
        f_objects += self.fcompiler.compile(f_sources,
    AttributeError: 'NoneType' object has no attribute 'compile'
From: Travis O. <oli...@ee...> - 2006-10-11 23:24:35
pe...@ce... wrote:
> On Wed, 11 Oct 2006, Travis Oliphant wrote:
>
>> On the other hand requiring all calls to numpy.sqrt to go through an
>> "argument-checking" wrapper is a bad idea as it will slow down other
>> uses.
>
> Interestingly, in worst cases numpy.sqrt is approximately ~3 times
> slower than scipy.sqrt on negative input but ~2 times faster on
> positive input:
>
>     In [47]: pos_input = numpy.arange(1,100,0.001)
>
>     In [48]: %timeit -n 1000 b=numpy.sqrt(pos_input)
>     1000 loops, best of 3: 4.68 ms per loop
>
>     In [49]: %timeit -n 1000 b=scipy.sqrt(pos_input)
>     1000 loops, best of 3: 10 ms per loop

This is the one that concerns me. Slowing down everybody who knows they
have positive values, just for people that don't, seems problematic.

>     In [50]: neg_input = -pos_input
>
>     In [52]: %timeit -n 1000 b=numpy.sqrt(neg_input)
>     1000 loops, best of 3: 99.3 ms per loop
>
>     In [53]: %timeit -n 1000 b=scipy.sqrt(neg_input)
>     1000 loops, best of 3: 29.2 ms per loop
>
> nan's are making things really slow,

Yeah, they do. This actually makes the case for masked arrays, rather
than using NaNs.

-Travis
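For context, a minimal sketch of the masked-array alternative Travis
alludes to, written against today's numpy.ma (the thread predates the
rewrite Pierre mentions elsewhere on this page):

    import numpy as np

    a = np.array([4.0, -1.0, 9.0])
    m = np.ma.masked_less(a, 0)   # mask the negative entry instead of producing nan
    print(np.ma.sqrt(m))          # [2.0 -- 3.0]: the bad value stays masked downstream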
From: Travis O. <oli...@ee...> - 2006-10-11 23:21:45
Stefan van der Walt wrote:
> I agree with Fernando on this one.
>
> Further, if I understand correctly, changing sqrt and power to give
> the right answer by default will slow things down somewhat. But is it
> worth sacrificing intuitive usage for speed?

For NumPy, yes.

This is one reason that NumPy by itself is not a MATLAB replacement.

> N.power(2,-2) == 0
>
> and
>
> N.sqrt(-1) == nan
>
> just doesn't feel right.

Only because your expectations are that NumPy *be* a MATLAB replacement.
The problem is that it sacrifices too much for that to be the case. And we
all realize that NumPy needs more stuff added to it to be like IDL/MATLAB,
such as SciPy, Matplotlib, IPython, etc.

> Why not then have
>
> N.power(2,-2) == 0.25
> N.sqrt(-1) == 1j
>
> and write a special function that does fast calculation of
> square-roots for positive values?

We've already done this. The special functions are called

    numpy.power
    numpy.sqrt

(notice that if you do numpy.sqrt(-1+0j) you get the "expected" answer,
emphasizing that numpy does no "argument" checking to determine the
output). The "intuitive" functions (which must do argument checking) are
in numpy.lib.scimath but exported as

    scipy.power (actually I need to check that one...)
    scipy.sqrt

What could be simpler? ;-)

-Travis
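A short (hypothetical) session showing the split Travis describes, using
np.emath, the modern alias for numpy.lib.scimath:

    import numpy as np

    print(np.emath.sqrt(-1))      # 1j: the argument-checking variant scipy re-exports
    print(np.emath.power(2, -2))  # 0.25: negative exponents promote the result to float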
From: <pe...@ce...> - 2006-10-11 23:13:29
On Wed, 11 Oct 2006, Travis Oliphant wrote:

> On the other hand requiring all calls to numpy.sqrt to go through an
> "argument-checking" wrapper is a bad idea as it will slow down other
> uses.

Interestingly, in worst cases numpy.sqrt is approximately ~3 times slower
than scipy.sqrt on negative input but ~2 times faster on positive input:

    In [47]: pos_input = numpy.arange(1,100,0.001)

    In [48]: %timeit -n 1000 b=numpy.sqrt(pos_input)
    1000 loops, best of 3: 4.68 ms per loop

    In [49]: %timeit -n 1000 b=scipy.sqrt(pos_input)
    1000 loops, best of 3: 10 ms per loop

    In [50]: neg_input = -pos_input

    In [52]: %timeit -n 1000 b=numpy.sqrt(neg_input)
    1000 loops, best of 3: 99.3 ms per loop

    In [53]: %timeit -n 1000 b=scipy.sqrt(neg_input)
    1000 loops, best of 3: 29.2 ms per loop

nan's are making things really slow,

Pearu
From: Stefan v. d. W. <st...@su...> - 2006-10-11 23:07:28
On Wed, Oct 11, 2006 at 03:37:34PM -0600, Fernando Perez wrote:
> On 10/11/06, Travis Oliphant <oli...@ee...> wrote:
>
> > pe...@ce... wrote:
> > >Could sqrt(-1) be made to return 1j again?
> >
> > Not in NumPy. But, in scipy it could.
>
> Without taking sides on which way to go, I'd like to -1 the idea of a
> difference in behavior between numpy and scipy.
>
> IMHO, scipy should be within reason a strict superset of numpy.
> Gratuitous differences in behavior like this one are going to drive
> us all mad.
>
> There are people who import scipy for everything, others distinguish
> between numpy and scipy, others use numpy alone and at some point in
> their life's code they do
>
>     import numpy as N -> import scipy as N
>
> because they start needing stuff not in plain numpy. Having different
> APIs and behaviors appear there is, I think, a Seriously Bad Idea
> (TM).

I agree with Fernando on this one.

Further, if I understand correctly, changing sqrt and power to give the
right answer by default will slow things down somewhat. But is it worth
sacrificing intuitive usage for speed?

N.power(2,-2) == 0

and

N.sqrt(-1) == nan

just doesn't feel right.

Why not then have

N.power(2,-2) == 0.25
N.sqrt(-1) == 1j

and write a special function that does fast calculation of square roots
for positive values?

Cheers
Stéfan
From: Fernando P. <fpe...@gm...> - 2006-10-11 23:02:17
On 10/11/06, Travis Oliphant <oli...@ee...> wrote:
> Fernando Perez wrote:
> >There are people who import scipy for everything, others distinguish
> >between numpy and scipy, others use numpy alone and at some point in
> >their life's code they do
> >
> >    import numpy as N -> import scipy as N
> >
> >because they start needing stuff not in plain numpy. Having different
> >APIs and behaviors appear there is, I think, a Seriously Bad Idea
> >(TM).
>
> I think the SBI is mixing numpy and scipy gratuitously (which I admit I
> have done in the past). I'm trying to repent....

Well, the problem is that it may not be so easy not to do so, especially
for new users. The fact that scipy absorbs and exposes many numpy
functions makes this a particularly easy trap for anyone to fall into. The
fact that even seasoned users do it should be an indicator that the 'right
thing to do' is anything but obvious, IMHO.

Once the dust settles on numpy 1.0, I think that the issues of how scipy
plays with it, API consistency, coding best practices, etc., will need
serious attention. But let's cross one bridge at a time :)

Cheers,
f
From: Fernando P. <fpe...@gm...> - 2006-10-11 22:59:29
On 10/11/06, Travis Oliphant <oli...@ee...> wrote:
> Fernando Perez wrote:
> >IMHO, scipy should be within reason a strict superset of numpy.
>
> This was not the relationship of scipy to Numeric.
>
> For me, it's the fact that scipy *used* to have the behavior that
>
>     scipy.sqrt(-1) return 1j
>
> and now doesn't that is the kicker.

That's fine; my only point was that we should really strive for
consistency between the two. I think most users should be able to expect
that

    numpy.foo(x) == scipy.foo(x)

for all cases where foo exists in both. The scipy.foo() call might be
faster, or take extra arguments for flexibility, and the above might only
be true within floating point accuracy (since a different algorithm may be
used), but hopefully functions with the same name do the same thing in
both. I really think breaking this will send quite a few potential users
running for the hills, and this is what I meant by 'superset'. Perhaps I
wasn't clear enough.

Cheers,
f