From: Victoria G. L. <la...@st...> - 2006-09-14 15:18:05
|
Francesc Altet wrote: >El dj 14 de 09 del 2006 a les 02:11 -0700, en/na Andrew Straw va >escriure: > >>>>My main focus is on the fact that you might read '<i4' as >>>>"less" than 4-bytes int, which is very confusing ! >>> >>>I can agree it's confusing at first, but it's the same syntax the struct >>>module uses which is the Python precedent for this. >>> >>I'm happy with seeing the repr() value since I know what it means, but I >>can see Sebastian's point. Perhaps there's a middle ground -- the str() >>representation for simple dtypes could contain both the repr() value and >>an English description. For example, something along the lines of >>"dtype('<i4') (4 byte integer, little endian)". For more complex dtypes, >>the repr() string could be given without any kind of English translation. > >+1 > >I was very used (and happy) to the numarray string representation for >types ('Int32', 'Complex64'...) and looking at how NumPy represents it >now, I'd say that this is a backwards step in readability. Something >like '<i4' would look good for a low-level library, but not for a >high-level one like NumPy, IMO. I agree entirely. The first time I got '<i4' instead of 'Int32', my reaction was "What the hell is that?" It looked disturbingly like line-noise corrupted text to me! (Blast from the past...) +1 from me as well. Vicki Laidler |
From: Francesc A. <fa...@ca...> - 2006-09-14 15:10:34
|
El dj 14 de 09 del 2006 a les 02:11 -0700, en/na Andrew Straw va escriure: > >> My main focus is on the fact that you might read '<i4' as > >> "less" than 4-bytes int, which is very confusing ! > > I can agree it's confusing at first, but it's the same syntax the struct > > module uses which is the Python precedent for this. > I'm happy with seeing the repr() value since I know what it means, but I > can see Sebastian's point. Perhaps there's a middle ground -- the str() > representation for simple dtypes could contain both the repr() value and > an English description. For example, something along the lines of > "dtype('<i4') (4 byte integer, little endian)". For more complex dtypes, > the repr() string could be given without any kind of English translation. +1 I was very used (and happy) to the numarray string representation for types ('Int32', 'Complex64'...) and looking at how NumPy represents it now, I'd say that this is a backwards step in readability. Something like '<i4' would look good for a low-level library, but not for a high-level one like NumPy, IMO. Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
From: Martin W. <mar...@gm...> - 2006-09-14 14:24:46
|
Hi gurus, I'm debugging a C-extension module that uses numpy. My version is 1.0b1. Can I safely ignore the following compiler warning? .../lib/python2.4/site-packages/numpy/core/include/numpy/__multiarray_api.h:903: warning: `_import_array' defined but not used Any help would be appreciated. Regards, Martin Wiechert |
From: Pierre T. <thi...@ph...> - 2006-09-14 13:01:18
|
On 9/13/06, Francesc Altet <fa...@ca...> wrote: > Well, it seems that malloc actually takes more time when asking for more > space. However, this can't be the reason why Pierre is seeing that: > > a = numpy.exp(a) [1] > > is slower than > > numpy.exp(a,out=a) [2] > > as I'd say that this increment in time is negligible compared with > processing times of those big arrays. In fact, here are my times: > > >>> Timer("a = numpy.exp(a)", "import numpy;a = > numpy.random.rand(2048,2048) + 1j * > numpy.random.rand(2048,2048)").repeat(3,1) > [2.5527338981628418, 2.5427830219268799, 2.5074479579925537] > >>> Timer("numpy.exp(a,out=a)", "import numpy;a = > numpy.random.rand(2048,2048) + 1j * > numpy.random.rand(2048,2048)").repeat(3,1) > [2.5298278331756592, 2.5082788467407227, 2.5222280025482178] > > So, both times are comparable. Yeah, sorry about that: I had not checked carefully the timing. It seemed slower to me, but you're right, this is not a problem as long as there is enough free RAM. Ok, I'll go back to my coding and do like I should always do: care about optimization later. Thanks for all the comments and explanations! Pierre -- Pierre Thibault 616 Clark Hall, Cornell University (607) 255-5522 |
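The out= idiom being timed above is easy to demonstrate in isolation. This is a minimal sketch assuming a current NumPy (the array sizes here are illustrative, not the 2048x2048 arrays from the thread):

```python
import numpy as np

a = np.random.rand(256, 256) + 1j * np.random.rand(256, 256)
expected = np.exp(a)   # a = np.exp(a) allocates a fresh result array
np.exp(a, out=a)       # writes the result into a's existing buffer instead
assert np.allclose(a, expected)
```

As the timings in the message show, the two forms cost about the same in CPU time; out= mainly matters when memory, not speed, is the constraint.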
From: Andrew S. <str...@as...> - 2006-09-14 09:11:08
|
Travis Oliphant wrote: > Sebastian Haase wrote: > >> Travis Oliphant wrote: >> >> >> >>> It's not necessarily dead, the problem is complexity of implementation >>> and more clarity about how all dtypes are supposed to be printed, not >>> just this particular example. A patch would be very helpful here. If >>> desired it could be implemented in _internal.py and called from there in >>> arrayobject.c >>> >>> But, to get you thinking... How should the following be printed >>> >>> dtype('c4') >>> >>> dtype('a4,i8,3f4') >>> >>> dtype('3f4') >>> >>> >>> -Travis >>> >>> >> I would argue that if the simple cases were addressed first those would >> cover 90% (if not 99% for most people) of the cases occurring in >> people's daily use. >> For complex types (like 'a4,i8,3f4') I actually think the current text >> is compact and good. >> (Lateron one could think about >> 'c4' --> '4 chars' >> '3f4' --> '3 float32s' >> >> but already I don't know: is there any difference between 'c4' and >> '4c1'? What is the difference between 'c4' and 'a4' !? >> ) >> >> >> My main focus is on the fact that you might read '<i4' as >> "less" than 4-bytes int, which is very confusing ! >> >> > I can agree it's confusing at first, but it's the same syntax the struct > module uses which is the Python precedent for this. > I'm happy with seeing the repr() value since I know what it means, but I can see Sebastian's point. Perhaps there's a middle ground -- the str() representation for simple dtypes could contain both the repr() value and an English description. For example, something along the lines of "dtype('<i4') (4 byte integer, little endian)". For more complex dtypes, the repr() string could be given without any kind of English translation. -Andrew |
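Andrew's proposed str() format can be sketched in a few lines of plain Python. The names here (describe_dtype and its lookup tables) are hypothetical, not NumPy API, and the parsing only covers simple one-character kind codes like '<i4':

```python
# Hypothetical sketch of the proposed str() behavior: map a simple
# dtype string such as '<i4' to an English description.
_KINDS = {'i': 'integer', 'u': 'unsigned integer', 'f': 'float', 'c': 'complex'}
_BYTEORDER = {'<': 'little endian', '>': 'big endian', '=': 'native'}

def describe_dtype(code):
    """Return e.g. "dtype('<i4') (4 byte integer, little endian)"."""
    order, kind, size = code[0], code[1], code[2:]
    return "dtype('%s') (%s byte %s, %s)" % (
        code, size, _KINDS[kind], _BYTEORDER[order])

print(describe_dtype('<i4'))  # dtype('<i4') (4 byte integer, little endian)
```

Compound dtypes such as 'a4,i8,3f4' would fall back to the plain repr() string, exactly as suggested in the message above.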
From: Nils W. <nw...@ia...> - 2006-09-14 06:47:41
|
Travis Oliphant wrote: > I'd like to make the first release-candidate of NumPy 1.0 this weekend. > > Any additions wanting to make the first official release candidate > should be checked in by then. > > -Travis > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > Is it possible to circumvent the error messages if one uses Python2.4 ? ImportError: ctypes is not available. Nils |
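For Nils's Python 2.4 question, one common workaround (a sketch, not something NumPy itself necessarily did) is a guarded import, since ctypes only entered the standard library in Python 2.5:

```python
# Guarded import: degrade gracefully when ctypes is missing (Python 2.4).
try:
    import ctypes
except ImportError:
    ctypes = None

def have_ctypes():
    # Callers check this flag instead of letting the ImportError propagate.
    return ctypes is not None
```

Code paths that need ctypes can then be skipped instead of raising at import time.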
From: Travis O. <oli...@ie...> - 2006-09-14 03:56:02
|
Sebastian Haase wrote: > Travis, > what is the "new string directives as the first element of the item > tuple" !? > These have been there for a while, but I recently added a couple of capabilities. > I always liked the idea of having a "shortest possible" way for creating > (or concatenating) > rows with "r_" > *and* > columns with "c_" > ! > > Why did the "c_" have to be removed !? > It wasn't removed, I thought to deprecate it. Owing to your response and the fact that others seem to use c_ quite a bit, I've kept it as a short hand for r_['1,2,0', ...] This means that arrays will be concatenated along the 1st axis after being up-graded to (at-least) 2-dimensional arrays with 1's placed at the end of the new shape. Thus, c_[[1,2,3],[4,5,6]] produces array([[1, 4], [2, 5], [3, 6]]) This is a bit different if you were using c_ when you should have been using r_. -Travis |
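Travis's example can be checked directly; this snippet assumes a current NumPy, where c_ still stacks 1-d inputs as columns:

```python
import numpy as np

# 1-d lists are upgraded to 2-d and placed side by side as columns
result = np.c_[[1, 2, 3], [4, 5, 6]]
assert result.tolist() == [[1, 4], [2, 5], [3, 6]]

# r_ without a directive string concatenates along the first axis instead
assert np.r_[[1, 2, 3], [4, 5, 6]].tolist() == [1, 2, 3, 4, 5, 6]
```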
From: Travis O. <oli...@ie...> - 2006-09-14 03:43:49
|
Sebastian Haase wrote: > Travis Oliphant wrote: > > >> It's not necessarily dead, the problem is complexity of implementation >> and more clarity about how all dtypes are supposed to be printed, not >> just this particular example. A patch would be very helpful here. If >> desired it could be implemented in _internal.py and called from there in >> arrayobject.c >> >> But, to get you thinking... How should the following be printed >> >> dtype('c4') >> >> dtype('a4,i8,3f4') >> >> dtype('3f4') >> >> >> -Travis >> > > > I would argue that if the simple cases were addressed first those would > cover 90% (if not 99% for most people) of the cases occurring in > people's daily use. > For complex types (like 'a4,i8,3f4') I actually think the current text > is compact and good. > (Lateron one could think about > 'c4' --> '4 chars' > '3f4' --> '3 float32s' > > but already I don't know: is there any difference between 'c4' and > '4c1'? What is the difference between 'c4' and 'a4' !? > ) > > > My main focus is on the fact that you might read '<i4' as > "less" than 4-bytes int, which is very confusing ! > I can agree it's confusing at first, but it's the same syntax the struct module uses which is the Python precedent for this. > As far as a patch is concerned: is _internal.py already being called now > from arrayobject.c for the str() and repr() methods ? And is there so > Yes, you can easily make a call to _internal.py from arrayobject.c (it's how some things are actually implemented). If you just provide a Python function to call for dtype.__str__ then that would suffice. > far any difference in str() and repr() ? > I assume that repr() has to stay exactly the way it is right now - right !? > > Yeah, the repr() probably needs to stay the same -Travis |
From: Travis O. <oli...@ie...> - 2006-09-14 03:40:41
|
Charles R Harris wrote: > > > On 9/13/06, *Travis Oliphant* <oli...@ie... > <mailto:oli...@ie...>> wrote: > > I'd like to make the first release-candidate of NumPy 1.0 this > weekend. > > Any additions wanting to make the first official release candidate > should be checked in by then. > > > There are a few cleanups and added functionality I have in mind but > nothing that would affect the release. Do you plan to keep the 1.0 > release as is with only added fixes and then make a 1.1 release not > too long after that contains additions, or are you thinking that > modifications that don't affect the API should all go into 1.0.x or > some such? The plan is for 1.0.x to contain modifications that don't affect the API (good additions should be O.K.). We want extensions compiled against 1.0.x to work for a long time. The 1.1 release won't be for at least a year and probably longer. 1.0.1 would be a maintenance release of the 1.0 release. -Travis |
From: Sebastian H. <ha...@ms...> - 2006-09-14 03:35:25
|
Travis, what is the "new string directives as the first element of the item tuple" !? I always liked the idea of having a "shortest possible" way for creating (or concatenating) rows with "r_" *and* columns with "c_" ! Why did the "c_" have to be removed !? Thanks, Sebastan NumPy wrote: > #235: r_, c_, hstack, vstack, column_stack should be made more consistent > -------------------------+-------------------------------------------------- > Reporter: baxissimo | Owner: somebody > Type: enhancement | Status: closed > Priority: normal | Milestone: > Component: numpy.lib | Version: devel > Severity: normal | Resolution: fixed > Keywords: | > -------------------------+-------------------------------------------------- > Changes (by oliphant): > > * status: new => closed > * resolution: => fixed > > Comment: > > r_ is the only current quick-creator. You can now get the functionality > of all others using string directives as the first element of the item > tuple. > > Column stack was fixed. > |
From: Sebastian H. <ha...@ms...> - 2006-09-14 03:25:43
|
Travis Oliphant wrote: >> Ticket #188 dtype should have nicer str representation >> Is this one now officially dead ? >> "<i4" is not intuitively readable ! '<i4' as repr() is OK >> but str() should rather return 'int32 (little endian)' >> > It's not necessarily dead, the problem is complexity of implementation > and more clarity about how all dtypes are supposed to be printed, not > just this particular example. A patch would be very helpful here. If > desired it could be implemented in _internal.py and called from there in > arrayobject.c > > But, to get you thinking... How should the following be printed > > dtype('c4') > > dtype('a4,i8,3f4') > > dtype('3f4') > > > -Travis I would argue that if the simple cases were addressed first those would cover 90% (if not 99% for most people) of the cases occurring in people's daily use. For complex types (like 'a4,i8,3f4') I actually think the current text is compact and good. (Lateron one could think about 'c4' --> '4 chars' '3f4' --> '3 float32s' but already I don't know: is there any difference between 'c4' and '4c1'? What is the difference between 'c4' and 'a4' !? ) My main focus is on the fact that you might read '<i4' as "less" than 4-bytes int, which is very confusing ! As far as a patch is concerned: is _internal.py already being called now from arrayobject.c for the str() and repr() methods ? And is there so far any difference in str() and repr() ? I assume that repr() has to stay exactly the way it is right now - right !? Thanks, Sebastian |
From: Charles R H. <cha...@gm...> - 2006-09-14 01:24:37
|
On 9/13/06, Francesc Altet <fa...@ca...> wrote: > > El dt 12 de 09 del 2006 a les 13:28 -0600, en/na Travis Oliphant va > escriure: > > >[BTW, numpy.empty seems twice as slower in my machine. Why? > > > > > > > > >>>>Timer("a=numpy.empty(10000,dtype=numpy.complex128)", "import > > >>>> > > >>>> > > >numpy").repeat(3,10000) > > >[0.37033700942993164, 0.31780219078063965, 0.31607294082641602] > > >] > > > > > > > > Now, you are creating an empty array with 10000 elements in it. > > Ups, my bad. So, here are the correct times for array creation: > > >>> Timer("a=numpy.empty(10,dtype=numpy.complex128)", "import > numpy").repeat(3,10000) > [0.083303928375244141, 0.080381870269775391, 0.077172040939331055] > >>> Timer("a=numpy.empty(100,dtype=numpy.complex128)", "import > numpy").repeat(3,10000) > [0.086454868316650391, 0.084085941314697266, 0.083555936813354492] > >>> Timer("a=numpy.empty(1000,dtype=numpy.complex128)", "import > numpy").repeat(3,10000) > [0.084996223449707031, 0.082299947738647461, 0.081347942352294922] > >>> Timer("a=numpy.empty(10000,dtype=numpy.complex128)", "import > numpy").repeat(3,10000) > [0.31068897247314453, 0.30376386642456055, 0.30176281929016113] > >>> Timer("a=numpy.empty(100000,dtype=numpy.complex128)", "import > numpy").repeat(3,10000) > [0.42552995681762695, 0.36864185333251953, 0.36870002746582031] > >>> Timer("a=numpy.empty(1000000,dtype=numpy.complex128)", "import > numpy").repeat(3,10000) > [0.48045611381530762, 0.41251182556152344, 0.40645909309387207] > > So, it seems that there are a certain time dependency with size > > array of 10 elements --> 7.7 us > array of 100 elements --> 8.4 us > array of 1000 elements --> 8.1 us > array of 10000 elements --> 30.2 us > array of 100000 elements --> 36.9 us > array of 1000000 elements --> 40.6 us The transition looks a bit like a cache effect, although I don't see why the cache should enter in. But all the allocations look pretty fast to me. Chuck |
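The size dependency Francesc measured can be reproduced with timeit. One plausible explanation for the jump between 1000 and 10000 elements is glibc's malloc switching to mmap above its default 128 KiB threshold (10000 complex128 values is about 156 KiB), though the exact threshold is platform-dependent; this sketch just repeats the measurement:

```python
import timeit

# Time numpy.empty for growing sizes; small allocations stay roughly flat,
# then per-call cost jumps once the allocator changes strategy.
times = {}
for n in (10, 1000, 100000):
    stmt = "numpy.empty(%d, dtype=numpy.complex128)" % n
    times[n] = min(timeit.repeat(stmt, "import numpy", repeat=3, number=1000))

for n in sorted(times):
    print("%8d elements: %.1f us/call" % (n, times[n] * 1e6 / 1000))
```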
From: Charles R H. <cha...@gm...> - 2006-09-14 01:18:41
|
On 9/13/06, Travis Oliphant <oli...@ie...> wrote: > > I'd like to make the first release-candidate of NumPy 1.0 this weekend. > > Any additions wanting to make the first official release candidate > should be checked in by then. There are a few cleanups and added functionality I have in mind but nothing that would affect the release. Do you plan to keep the 1.0 release as is with only added fixes and then make a 1.1 release not too long after that contains additions, or are you thinking that modifications that don't affect the API should all go into 1.0.x or some such? Chuck |
From: Charles R H. <cha...@gm...> - 2006-09-14 01:07:42
|
On 9/13/06, Matthew Brett <mat...@gm...> wrote: > > Hi, > > > For example, if you do array([a,b,c]).shape(), the answer is normally > > (3,) unless a b and c happen to all be lists of the same length, at > > which point your array could have a much more complicated shape... but > > as the person who wrote "array([a,b,c])" it's tempting to assume that > > the result has shape (3,), only to discover subtle bugs much later. > > Very much agree with this. > > > If we were writing an array-creation function from scratch, would > > there be any reason to include object-array creation in the same > > function as uniform array creation? It seems like a bad idea to me. > > > > If not, the problem is just compatibility with Numeric. Why not simply > > write a wrapper function in python that does Numeric-style guesswork, > > and put it in the compatibility modules? How much code will actually > > break? > > Can I encourage any more comments? This suggestion seems very > sensible to me, and I guess this is our very last chance to change > this. The current behavior does seem to violate least surprise - at > least to my eye. I've been thinking about how to write a new constructor for objects. Because array has been at the base of numpy for many years I think it is too late to change it now, but perhaps a new and more predictable constructor for objects may eventually displace it. The main problem in constructing arrays of objects is more information needs to be supplied because the user's intention can't be reliably deduced from the current syntax. That said, I have no idea how widespread the use of object arrays is and so don't know how much it really matters. I don't use them much myself. Chuck |
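The ambiguity under discussion is easy to demonstrate. This snippet assumes a modern NumPy, where ragged inputs additionally require an explicit dtype=object rather than being guessed:

```python
import numpy as np

a, b, c = [1, 2], [3, 4], [5, 6]
# Equal-length lists silently nest into a 2-d array...
assert np.array([a, b, c]).shape == (3, 2)

# ...while ragged lists yield a 1-d array holding list objects.
ragged = np.array([[1, 2], [3], [4, 5, 6]], dtype=object)
assert ragged.shape == (3,)
```

The subtle bug Matthew quotes is exactly the first case: code written expecting shape (3,) quietly gets (3, 2) when the elements happen to have equal lengths.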
From: Matthew B. <mat...@gm...> - 2006-09-14 00:13:54
|
Hi, > For example, if you do array([a,b,c]).shape(), the answer is normally > (3,) unless a b and c happen to all be lists of the same length, at > which point your array could have a much more complicated shape... but > as the person who wrote "array([a,b,c])" it's tempting to assume that > the result has shape (3,), only to discover subtle bugs much later. Very much agree with this. > If we were writing an array-creation function from scratch, would > there be any reason to include object-array creation in the same > function as uniform array creation? It seems like a bad idea to me. > > If not, the problem is just compatibility with Numeric. Why not simply > write a wrapper function in python that does Numeric-style guesswork, > and put it in the compatibility modules? How much code will actually > break? Can I encourage any more comments? This suggestion seems very sensible to me, and I guess this is our very last chance to change this. The current behavior does seem to violate least surprise - at least to my eye. Best, Matthew |
From: Matthew B. <mat...@gm...> - 2006-09-14 00:09:53
|
Hi, I was surprised by this - but maybe I shouldn't have been: In [7]:iscomplex('a') Out[7]:True In [8]:iscomplex(u'a') Out[8]:True Best, Matthew |
From: Travis O. <oli...@ie...> - 2006-09-14 00:08:22
|
Sebastian Haase wrote: > Hi! > I would like to hear about three tickets I submitted some time ago: > > Ticket #230 a**2 not executed as a*a if a.dtype = int32 > is this easy to fix ? > > Fixed. Now, all arrays with a**2 are executed as a*a (float arrays are still executed as square(a)) (is this needed?) > Ticket #229 numpy.random.poisson(0) should return 0 > I hope there is agreement that the edge-case of 0 should/could be handled > without raising an exception. I submitted a patch (please test first!) > any comments on this one. > Fixed. This seems reasonable to me. > Ticket #188 dtype should have nicer str representation > Is this one now officially dead ? > "<i4" is not intuitively readable ! '<i4' as repr() is OK > but str() should rather return 'int32 (little endian)' > It's not necessarily dead, the problem is complexity of implementation and more clarity about how all dtypes are supposed to be printed, not just this particular example. A patch would be very helpful here. If desired it could be implemented in _internal.py and called from there in arrayobject.c But, to get you thinking... How should the following be printed dtype('c4') dtype('a4,i8,3f4') dtype('3f4') -Travis |
From: Sebastian H. <ha...@ms...> - 2006-09-13 22:51:12
|
Hi! I would like to hear about three tickets I submitted some time ago: Ticket #230 a**2 not executed as a*a if a.dtype = int32 is this easy to fix ? Ticket #229 numpy.random.poisson(0) should return 0 I hope there is agreement that the edge-case of 0 should/could be handled without raising an exception. I submitted a patch (please test first!) any comments on this one. Ticket #188 dtype should have nicer str representation Is this one now officially dead ? "<i4" is not intuitively readable ! '<i4' as repr() is OK but str() should rather return 'int32 (little endian)' Read also: http://aspn.activestate.com/ASPN/Mail/Message/3207949 Thanks, Sebastian Haase On Wednesday 13 September 2006 13:18, Travis Oliphant wrote: > I'd like to make the first release-candidate of NumPy 1.0 this weekend. > > Any additions wanting to make the first official release candidate > should be checked in by then. > > -Travis |
From: Albert S. <fu...@gm...> - 2006-09-13 22:41:41
|
Hello all I just ran NumPy and SciPy through Valgrind, and everything looks clean on the NumPy side. Some other things that could be fixed for RC1: - GCC 4.1.1 warning in ufuncobject.c: numpy/core/src/ufuncobject.c: In function 'PyUFunc_RegisterLoopForType': numpy/core/src/ufuncobject.c:3215: warning: "cmp" may be used uninitialized in this function - Time to kill the dft package? /usr/lib/python2.4/site-packages/numpy/dft/__init__.py:2: UserWarning: The dft subpackage will be removed by 1.0 final -- it is now called fft warnings.warn("The dft subpackage will be removed by 1.0 final -- it is now called fft") - Test failure ====================================================================== ERROR: check_instance_methods (numpy.core.tests.test_defmatrix.test_matrix_return) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/numpy/core/tests/test_defmatrix.py", line 166, in check_instance_methods b = f(*args) ValueError: setitem must have at least one argument Although not strictly NumPy issues, the following crops up when you run the SciPy test suite through Valgrind: Valgrind warnings when running scipy.integrate.tests.test_integrate.test_odeint http://projects.scipy.org/scipy/scipy/ticket/246 Valgrind warnings when running Cephes tests http://projects.scipy.org/scipy/scipy/ticket/247 Memory leak in fitpack http://projects.scipy.org/scipy/scipy/ticket/248 I think I've figured out #248. #246 should be relatively easy to fix. #247 is... interesting. Regards, Albert > -----Original Message----- > From: num...@li... [mailto:numpy-dis...@li...] On Behalf Of Travis Oliphant > Sent: 13 September 2006 22:18 > To: numpy-discussion > Subject: [Numpy-discussion] NumPy 1.0 release-candidate 1.0 this weekend > > I'd like to make the first release-candidate of NumPy 1.0 this weekend. > > Any additions wanting to make the first official release candidate > should be checked in by then. > > -Travis |
From: Travis O. <oli...@ie...> - 2006-09-13 20:18:08
|
I'd like to make the first release-candidate of NumPy 1.0 this weekend. Any additions wanting to make the first official release candidate should be checked in by then. -Travis |
From: Travis O. <oli...@ie...> - 2006-09-13 19:06:44
|
Ryan Gutenkunst wrote: > Hi all, > > I'm migrating an application from Numeric to numpy, and I've run into a > significant application slowdown related to arithmetic on array-scalars. > > Yeah, that is a common thing. There are two factors: 1) array indexing and 2) array scalar math. I don't think array scalar math has to be slower in principle then Python code. I think it's ability to handle interaction with multiple scalars that is operating more slowly right now. It could be sped up. The array indexing code is slower (in fact Numeric's indexing code is slower then just using lists also). > I'm guessing speeding up the scalar-array math would be difficult, if > not impossible. (Maybe I'm wrong?) > I think scalar-array math could be sped up quite a bit. I haven't done much in that area at all. Right now a lot of setup code is handled generically instead of type-specifically like it could be. > I notice that numpy_array.item() will give me the first element as a > normal scalar. Would it be possible for numpy_array.item(N) to return > the Nth element of the array as a normal scalar? > Now this is an interesting idea. It would allow you to by-pass the slow-indexing as well as the array scalar computation should you desire it. I like it and am going to add it unless there are convincing objections. -Travis |
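The item(N) idea Travis liked can be checked directly, since ndarray.item does accept index arguments in current NumPy:

```python
import numpy as np

a = np.array([1.5, 2.5, 3.5])
x = a.item(1)        # a plain Python float, bypassing the array scalar
assert type(x) is float and x == 2.5

# Multidimensional arrays accept either a flat index or one index per axis.
m = np.arange(6).reshape(2, 3)
assert m.item(5) == 5 and m.item(1, 2) == 5
```

The returned value is an ordinary Python scalar, so subsequent arithmetic on it avoids the array-scalar overhead Ryan measured below.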
From: Ryan G. <rn...@co...> - 2006-09-13 18:32:35
|
Hi all, I'm migrating an application from Numeric to numpy, and I've run into a significant application slowdown related to arithmetic on array-scalars. The inner loop of the application is integrating a nonlinear set of differential equations using odeint, with the rhs a dynamically-generated (only once) python function. In that function I copy the entries of the current x array to a bunch of local variables, do a bunch of arithmetic, and assign the results to a dx_dt array. The arithmetic is approximately 3x slower using numpy than Numeric, because numpy returns array-scalars while Numeric returns normal scalars. (Simple example below.) I can wrap all my arrays accesses with float() casts, but that introduces a noticable overhead (~50% for problems of interest). I'm guessing speeding up the scalar-array math would be difficult, if not impossible. (Maybe I'm wrong?) I notice that numpy_array.item() will give me the first element as a normal scalar. Would it be possible for numpy_array.item(N) to return the Nth element of the array as a normal scalar? Thanks a bunch, Ryan The effect can be isolated as (running in python 2.4 on a 32-bit Athlon): In [1]: import Numeric, numpy In [2]: a_old, a_new = Numeric.array([1.0, 2.0]), numpy.array([1.0, 2.0]) In [3]: b_old, b_new = a_old[0], a_new[0] In [4]: %time for ii in xrange(1000000):c = b_old + 1.0 CPU times: user 0.40 s, sys: 0.00 s, total: 0.40 s Wall time: 0.40 In [5]: %time for ii in xrange(1000000):c = b_new + 1.0 CPU times: user 1.20 s, sys: 0.00 s, total: 1.20 s Wall time: 1.22 In [6]: Numeric.__version__, numpy.__version__ Out[6]: ('24.2', '1.0b5') -- Ryan Gutenkunst | Cornell LASSP | "It is not the mountain | we conquer but ourselves." Clark 535 / (607)227-7914 | -- Sir Edmund Hillary AIM: JepettoRNG | http://www.physics.cornell.edu/~rgutenkunst/ |
From: Hanno K. <kl...@ph...> - 2006-09-13 15:09:45
|
Hello, when I try to build numpy-1.0b5 it doesn't find my ATLAS libraries. I had to build hem from scratch (version 3.7.17) and the compile went well. I have installed them under /scratch/python2.4/atlas furthermore, I have blas and lapack installed under /scatch/python2.4/lib I adjusted the environment variables as: BLAS=/scratch/python2.4/lib/libfblas.a LAPACK=/scratch/python2.4/lib/libflapack.a ATLAS=/scratch/python2.4/atlas and my site.cfg looks like [atlas] library_dirs = /scratch/python2.4/atlas/lib atlas_libs = lapack, blas, cblas, atlas include_dirs = /scratch/python2.4/atlas/include python setup.py config then finds the blas and lapack libraries under /scratch/python2.4/lib but does not find the atlas libraries. What am I doing wrong here? Hanno P.S.: The output of python setup.py config reads: atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack,blas,cblas,atlas not found in /scratch/python2.4/atlas libraries lapack_atlas not found in /scratch/python2.4/atlas libraries lapack,blas,cblas,atlas not found in /scratch/python2.4/atlas/include/atlas libraries lapack_atlas not found in /scratch/python2.4/atlas/include/atlas libraries lapack,blas,cblas,atlas not found in /scratch/python2.4/atlas/include libraries lapack_atlas not found in /scratch/python2.4/atlas/include libraries lapack,blas,cblas,atlas not found in /scratch/python2.4/atlas/lib libraries lapack_atlas not found in /scratch/python2.4/atlas/lib libraries lapack,blas,cblas,atlas not found in /scratch/python2.4/lib libraries lapack_atlas not found in /scratch/python2.4/lib libraries lapack,blas,cblas,atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries lapack,blas,cblas,atlas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_threads_info NOT AVAILABLE atlas_info: libraries lapack,blas,cblas,atlas not found in /scratch/python2.4/atlas libraries lapack_atlas not found in /scratch/python2.4/atlas libraries 
lapack,blas,cblas,atlas not found in /scratch/python2.4/atlas/include/atlas libraries lapack_atlas not found in /scratch/python2.4/atlas/include/atlas libraries lapack,blas,cblas,atlas not found in /scratch/python2.4/atlas/include libraries lapack_atlas not found in /scratch/python2.4/atlas/include libraries lapack,blas,cblas,atlas not found in /scratch/python2.4/atlas/lib libraries lapack_atlas not found in /scratch/python2.4/atlas/lib libraries lapack,blas,cblas,atlas not found in /scratch/python2.4/lib libraries lapack_atlas not found in /scratch/python2.4/lib libraries lapack,blas,cblas,atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries lapack,blas,cblas,atlas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_info NOT AVAILABLE /scratch/src/numpy-1.0b5/numpy/distutils/system_info.py:1205: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) lapack_info: FOUND: libraries = ['lapack'] library_dirs = ['/scratch/python2.4/lib'] language = f77 FOUND: libraries = ['lapack', 'fblas'] library_dirs = ['/scratch/python2.4/lib'] define_macros = [('NO_ATLAS_INFO', 1)] language = f77 running config -- Hanno Klemm kl...@ph... |
From: Johannes L. <a.u...@gm...> - 2006-09-13 08:24:04
|
Hi, one word in advance, instead of optimizing it is advisable to seek for a way to refactorize the algorithm using smaller arrays, since this kind of optimization almost certainly reduces readability. If you do it, comment well. ;-) If you have very large arrays and want to do some arithmetics on it, say B = 2*B + C you can use inplace operators to avoid memory overhead: B *= 2 B += C Another trick which works in most situations is to do the outermost loop in python: for i in xrange(len(B)): B[i] = 2*B[i] + C[i] This reduces the temporary array size to 1/len(B) while still being fast (if the other dimensions are large enough). For very large 1d arrays, you could split them into chunks of a certain size. However, you have to be careful that your calculation does not access already-calculated elements of B. Consider the following example: In [2]: B=arange(10) In [3]: B+B[::-1] Out[3]: array([9, 9, 9, 9, 9, 9, 9, 9, 9, 9]) In [4]: B += B[::-1] In [5]: B Out[5]: array([ 9, 9, 9, 9, 9, 14, 15, 16, 17, 18]) Johannes |
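Johannes's chunking suggestion can be made concrete. This helper is a sketch (the names are illustrative) that bounds temporary memory while computing B = 2*B + C in place; as he warns, it is only safe when B does not alias C:

```python
import numpy as np

def inplace_update(B, C, chunk=1024):
    """Compute B = 2*B + C in place, one chunk at a time.

    Only in-place operators are used, so no temporary the size of B
    is ever allocated; B must not alias (e.g. be a reversed view of) C.
    """
    for start in range(0, len(B), chunk):
        sl = slice(start, start + chunk)
        B[sl] *= 2
        B[sl] += C[sl]

B = np.arange(10.0)
C = np.ones(10)
expected = 2 * B + C          # reference result, computed out of place
inplace_update(B, C, chunk=4)
assert np.allclose(B, expected)
```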