From: Sasha <nd...@ma...> - 2006-06-15 03:46:30
On 6/14/06, David M. Cooke <co...@ph...> wrote:
> After working with them for a while, I'm going to go on record and say that I
> prefer the long names from Numeric and numarray (like linear_least_squares,
> inverse_real_fft, etc.), as opposed to the short names now used by default in
> numpy (lstsq, irefft, etc.). I know you can get the long names from
> numpy.dft.old, numpy.linalg.old, etc., but I think the long names are better
> defaults.

I agree in spirit, but note that inverse_real_fft is still short for
inverse_real_fast_fourier_transform. Presumably, fft is a proper noun in many
people's vocabularies, but so may be lstsq, depending on who you ask.

> Abbreviations aren't necessarily unique (quick! what does eig() return by
> default?), and aren't necessarily obvious. A Google search for irfft vs.
> irefft for instance turns up only the numpy code as (English) matches for
> irefft, while irfft is much more common.

Short names have one important advantage in scientific languages: they look
good in expressions. What is easier to understand:

    hyperbolic_tangent(x) = hyperbolic_sinus(x)/hyperbolic_cosinus(x)

or

    tanh(x) = sinh(x)/cosh(x)?

I am playing devil's advocate here a little because personally, I always
recommend the following as a compromise:

    sinh = hyperbolic_sinus
    ...
    tanh(x) = sinh(x)/cosh(x)

But the next question is where to put "sinh = hyperbolic_sinus": right before
the expression using sinh? At the top of the module (import hyperbolic_sinus
as sinh)? In the math library? If you pick the last option, do you need
hyperbolic_sinus to begin with? If you pick any other option, how do you
prevent others from writing sh = hyperbolic_sinus instead of sinh?

> Also, Numeric and numarray compatibility is increased by using the long
> names: those two don't have the short ones.
>
> Fitting names into 6 characters went out of style decades ago. (I think
> MS-BASIC running under CP/M on my Rainbow 100 had a restriction like that!)

Short names are still popular in scientific programming:
<http://www.nsl.com/papers/style.pdf>. I am still +1 for keeping
linear_least_squares and inverse_real_fft, but not just because abbreviations
are bad as such - if an established acronym such as fft exists, we should be
free to use it.
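A minimal sketch of the compromise Sasha describes (the long names here are
illustrative spellings, not numpy's actual API history): keep long,
descriptive names canonical and bind the short aliases once, in the library,
so everyone spells the alias the same way.

    import numpy as np

    # Long canonical names, here just bound to the existing numpy routines
    # for illustration (in modern numpy these live in np.linalg / np.fft):
    linear_least_squares = np.linalg.lstsq
    inverse_real_fft = np.fft.irfft

    # Short aliases defined once, right next to the long names, so nobody
    # invents a competing abbreviation:
    lstsq = linear_least_squares
    irfft = inverse_real_fft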
From: Scott R. <sr...@nr...> - 2006-06-15 03:21:05
I'll add my 2 cents to this and agree with David. Arguments about how short
names are important for interactive work are pretty bogus given the beauty of
modern tab-completion. And I'm not sure what other arguments there are...

Scott

On Wed, Jun 14, 2006 at 11:13:25PM -0400, David M. Cooke wrote:
> After working with them for a while, I'm going to go on record and say that I
> prefer the long names from Numeric and numarray (like linear_least_squares,
> inverse_real_fft, etc.), as opposed to the short names now used by default in
> numpy (lstsq, irefft, etc.). I know you can get the long names from
> numpy.dft.old, numpy.linalg.old, etc., but I think the long names are better
> defaults.
>
> Abbreviations aren't necessarily unique (quick! what does eig() return by
> default?), and aren't necessarily obvious. A Google search for irfft vs.
> irefft for instance turns up only the numpy code as (English) matches for
> irefft, while irfft is much more common.
>
> Also, Numeric and numarray compatibility is increased by using the long
> names: those two don't have the short ones.
>
> Fitting names into 6 characters went out of style decades ago. (I think
> MS-BASIC running under CP/M on my Rainbow 100 had a restriction like that!)
>
> My 2 cents...
>
> --
> |>|\/|<
> David M. Cooke    http://arbutus.physics.mcmaster.ca/dmc/
> co...@ph...

--
Scott M. Ransom          Address: NRAO
Phone: (434) 296-0320             520 Edgemont Rd.
email: sr...@nr...                Charlottesville, VA 22903 USA
GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989
From: David M. C. <co...@ph...> - 2006-06-15 03:13:30
After working with them for a while, I'm going to go on record and say that I
prefer the long names from Numeric and numarray (like linear_least_squares,
inverse_real_fft, etc.), as opposed to the short names now used by default in
numpy (lstsq, irefft, etc.). I know you can get the long names from
numpy.dft.old, numpy.linalg.old, etc., but I think the long names are better
defaults.

Abbreviations aren't necessarily unique (quick! what does eig() return by
default?), and aren't necessarily obvious. A Google search for irfft vs.
irefft for instance turns up only the numpy code as (English) matches for
irefft, while irfft is much more common.

Also, Numeric and numarray compatibility is increased by using the long
names: those two don't have the short ones.

Fitting names into 6 characters went out of style decades ago. (I think
MS-BASIC running under CP/M on my Rainbow 100 had a restriction like that!)

My 2 cents...

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke    http://arbutus.physics.mcmaster.ca/dmc/
|co...@ph...
From: Mathew Y. <my...@jp...> - 2006-06-14 21:06:22
Travis suggested I use svn and this worked! Thanks Travis! I'm now getting 1
test failure. I'd love to dot this 'i'.

======================================================================
FAIL: check_large_types (numpy.core.tests.test_scalarmath.test_power)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/lib/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line 42, in check_large_types
    assert b == 6765201, "error with %r: got %r" % (t,b)
AssertionError: error with <type 'float96scalar'>: got 6765201.00000000000364

----------------------------------------------------------------------
Ran 377 tests in 0.347s

FAILED (failures=1)

Mathew Yeates wrote:
> I consistently core dump when I do the following
> 1) from the console I do
>    >import numpy
>    >numpy.test(level=1,verbosity=2)
>    >numpy.test(level=1,verbosity=2)
>    >numpy.test(level=1,verbosity=2)
>
> the third time (and only the third) I get a core dump in test_types. It
> happens on the line
>    val = vala+valb
> when k=2 atype=uint8scalar l=16 btype=complex192scalar valb=(1.0+0.0j)
>
> Any help in debugging this?
> Mathew
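The failing assert demands exact equality on an extended-precision result, so
rounding residue like 6765201.00000000000364 trips it; a tolerance check is
the usual fix. A sketch (the computation is a stand-in for the actual test
body, using the value from the report):

    import numpy as np

    expected = 6765201
    b = np.longdouble('6765201.00000000000364')   # value from the failure
    # relative-tolerance comparison instead of b == expected:
    assert abs(b - expected) <= 1e-9 * expected, \
        "error with %r: got %r" % (np.longdouble, b)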
From: Sebastian H. <ha...@ms...> - 2006-06-14 20:23:09
Hi,
Thanks for the reply. Just for general enjoyment: I found a solution: it
seems that substituting N.exp(-700) by N.e ** -700 changes the behaviour for
the better...

Thanks,
Sebastian Haase

On Monday 12 June 2006 15:19, Sasha wrote:
> BTW, here is the relevant explanation from mathmodule.c:
>
>    /* ANSI C generally requires libm functions to set ERANGE
>     * on overflow, but also generally *allows* them to set
>     * ERANGE on underflow too. There's no consistency about
>     * the latter across platforms.
>     * Alas, C99 never requires that errno be set.
>     * Here we suppress the underflow errors (libm functions
>     * should return a zero on underflow, and +- HUGE_VAL on
>     * overflow, so testing the result for zero suffices to
>     * distinguish the cases).
>     */
>
> On 6/12/06, Sasha <nd...@ma...> wrote:
> > I don't know about numarray, but the difference between Numeric and the
> > python math module stems from the fact that the math module ignores
> > errno set by the C library and only checks for infinity. Numeric relies
> > on errno exclusively; numpy ignores errors by default:
> >
> > >>> import numpy, math, Numeric
> > >>> numpy.exp(-760)
> > 0.0
> > >>> math.exp(-760)
> > 0.0
> > >>> Numeric.exp(-760)
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> > OverflowError: math range error
> > >>> numpy.exp(760)
> > inf
> > >>> math.exp(760)
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> > OverflowError: math range error
> > >>> Numeric.exp(760)
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> > OverflowError: math range error
> >
> > I would say it's a bug in Numeric, so you are out of luck.
> >
> > Unfortunately, even MA.exp(-760) does not work, but this is easy to fix:
> >
> > >>> exp = MA.masked_unary_operation(Numeric.exp, 0.0,
> > ...     MA.domain_check_interval(-100, 100))
> > >>> exp(-760).filled()
> > 0
> >
> > You would need to replace -100, 100 with the bounds appropriate for your
> > system.
> >
> > On 6/12/06, Sebastian Haase <ha...@ms...> wrote:
> > > Hi,
> > > I'm using Konrad Hinsen's LeastSquares.leastSquaresFit for a convenient
> > > way to do a non-linear minimization. It uses the "old" Numeric module.
> > > But since I upgraded to Numeric 24.2 I get OverflowErrors that I
> > > tracked down to
> > >
> > > >>> Numeric.exp(-760.)
> > > Traceback (most recent call last):
> > >   File "<input>", line 1, in ?
> > > OverflowError: math range error
> > >
> > > From numarray I'm used to getting this:
> > > >>> na.exp(-760)
> > > 0.0
> > >
> > > Mostly I'm confused because my code worked before I upgraded to version
> > > 24.2.
> > >
> > > Thanks for any hints on how I could revive my code...
> > > -Sebastian Haase
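Modern numpy exposes the error handling this thread discusses directly:
underflow and overflow are handled per-call via errstate rather than via the
C library's errno. A short sketch:

    import numpy as np

    print(np.exp(-760.0))   # 0.0 -- underflow silently flushes to zero
    print(np.exp(760.0))    # inf -- overflow yields inf (with a warning)

    # Turn overflow into an exception instead:
    with np.errstate(over='raise'):
        try:
            np.exp(np.float64(760.0))
        except FloatingPointError as e:
            print("overflow trapped:", e)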
From: Mathew Y. <my...@jp...> - 2006-06-14 20:07:14
I consistently core dump when I do the following:
1) from the console I do
   >import numpy
   >numpy.test(level=1,verbosity=2)
   >numpy.test(level=1,verbosity=2)
   >numpy.test(level=1,verbosity=2)

The third time (and only the third) I get a core dump in test_types. It
happens on the line
   val = vala+valb
when k=2 atype=uint8scalar l=16 btype=complex192scalar valb=(1.0+0.0j)

Any help in debugging this?
Mathew
From: Robert K. <rob...@gm...> - 2006-06-14 17:55:06
Eric Emsellem wrote:
> Hi,
>
> I just switched to Suse 10.1 (from Suse 10.0) and for some reason now
> the newly installed modules do not go under
> /usr/lib/python2.4/site-packages/ as usual but under
> /usr/local/lib/python2.4/site-packages/
> (the "local" is the difference).
>
> How can I go back to the normal setting?

You can edit ~/.pydistutils.cfg to add this section:

    [install]
    prefix=/usr

However, Suse probably made the change for a reason. Distribution vendors
like to control /usr and let the user/sysadmin do what he wants in
/usr/local. It is generally a Good Idea to respect that. If the Suse python
group is not incompetent, then they will have already made the modifications
necessary to make sure that /usr/local/lib/python2.4/site-packages is
appropriately on your PYTHONPATH, and other such modifications.

> thanks a lot for any input there.
>
> Eric
> P.S.: I seem to then have a problem with lapack_lite.so (undefined symbol:
> s_cat) and it may be linked

I don't think so. That looks like it might be a function that should be in
libg2c, but I'm not sure.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
  -- Umberto Eco
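For reference, a quick way to see which site-packages prefix distutils will
actually use (Python 2.4-era distutils API, as in this thread):

    from distutils.sysconfig import get_python_lib
    # e.g. /usr/lib/python2.4/site-packages or /usr/local/lib/...
    print(get_python_lib())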
From: Eric E. <ems...@ob...> - 2006-06-14 17:17:49
Hi,

I just switched to Suse 10.1 (from Suse 10.0) and for some reason now the
newly installed modules do not go under /usr/lib/python2.4/site-packages/ as
usual but under /usr/local/lib/python2.4/site-packages/ (the "local" is the
difference).

How can I go back to the normal setting?

thanks a lot for any input there.

Eric
P.S.: I seem to then have a problem with lapack_lite.so (undefined symbol:
s_cat) and it may be linked
From: Sasha <nd...@ma...> - 2006-06-14 16:39:26
On 6/14/06, Martin Wiechert <mar...@gm...> wrote:
> ...
> does anybody know why
>
> maximum.reduce (())
>
> does not return -inf?

Technically, because

>>> maximum.identity is None
True

It is theoretically feasible to change maximum.identity to -inf, but that
would be inconsistent with the default dtype being int. For example

>>> add.identity, type(add.identity)
(0, <type 'int'>)

Another reason is that IEEE special values are not universally supported yet.

I would suggest adding an 'initial' keyword to reduce. If this is done, the
type of 'initial' may also supply the default for the 'dtype' argument of
reduce that was added in numpy. Another suggestion in this area is to change
the identity attribute of ufuncs from a scalar to a dtype:scalar dictionary.

Finally, a bug report:

>>> add.identity = None
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
SystemError: error return without exception set
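Sasha's suggested 'initial' keyword did eventually land: ufunc.reduce gained
it in numpy 1.15. A sketch of the behaviour he asks for:

    import numpy as np

    # empty reduce no longer errors once an initial value is supplied:
    print(np.maximum.reduce(np.array([]), initial=-np.inf))   # -inf

    # nan still propagates through maximum, as in the thread:
    print(np.maximum.reduce(np.array([1.0, np.nan]), initial=-np.inf))  # nan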
From: Martin W. <mar...@gm...> - 2006-06-14 16:00:54
Hi list,

does anybody know why

maximum.reduce (())

does not return -inf? Looks very natural to me, and as a byproduct
maximum.reduce would ignore nans, thereby removing the need for nanmax etc.

The current convention gives

>>> from numpy import *
>>> maximum.reduce ((1,nan))
1.0
>>> maximum.reduce ((nan, 1))
nan
>>> maximum.reduce (())
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: zero-size array to ufunc.reduce without identity

Cheers, Martin
From: Christopher H. <ch...@st...> - 2006-06-14 15:17:59
The daily numpy build and tests I run have failed for revision 2617. Below is
the error message I receive on my RHE 3 box:

======================================================================
FAIL: Check reading the nested fields of a nested array (1st level)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/data/sparty1/dev/site-packages/lib/python/numpy/core/tests/test_numerictypes.py", line 283, in check_nested1_acessors
    dtype='U2'))
  File "/data/sparty1/dev/site-packages/lib/python/numpy/testing/utils.py", line 139, in assert_equal
    return assert_array_equal(actual, desired, err_msg)
  File "/data/sparty1/dev/site-packages/lib/python/numpy/testing/utils.py", line 215, in assert_array_equal
    verbose=verbose, header='Arrays are not equal')
  File "/data/sparty1/dev/site-packages/lib/python/numpy/testing/utils.py", line 207, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not equal (mismatch 100.0%)
 x: array([u'NN', u'OO'], dtype='<U8')
 y: array([u'NN', u'OO'], dtype='<U2')

On my Solaris 8 box this same test causes a bus error:

Check creation of single-dimensional objects ... ok
Check creation of 0-dimensional objects ... ok
Check creation of multi-dimensional objects ... ok
Check creation of single-dimensional objects ... ok
Check reading the top fields of a nested array ... ok
Check reading the nested fields of a nested array (1st level)Bus Error (core dumped)

Chris
From: Martin W. <mar...@gm...> - 2006-06-14 14:22:27
Thanks Pau, that's exactly what I was looking for.

Martin

On Wednesday 14 June 2006 12:01, you wrote:
> On 6/14/06, Martin Wiechert <Mar...@mp...> wrote:
> > Hi Simon,
> >
> > thanks for your reply.
> >
> > A [I, J]
> >
> > seems to only work if the indices are *strides* as in your example. I
> > need fancy indices (like I = (1,3,4), J = (0,3,5)), and for them A [I, J]
> > won't do what I want. As you can see from the example session I posted,
> > it does not address the whole rectangle IxJ but only the elements
> > (I_1, J_1), (I_2, J_2). E.g., if I==J this is the diagonal of the
> > submatrix, not the full submatrix.
>
> you can use A[ ix_(I,J) ] to do what you want.
>
> But if you just want subrectangular regions then A[1:4,1:4] is enough.
> Please note that A[1:4,1:4] is not the same as A[ arange(1,4), arange(1,4) ],
> but is the same as A[ ix_(arange(1,4), arange(1,4)) ].
>
> hope this helps
> pau
From: Tim H. <tim...@co...> - 2006-06-14 13:52:36
Ivan Vilata i Balaguer wrote:
> En/na Tim Hochberg ha escrit::
>
>> Francesc Altet wrote:
>> [...]
>>> Uh, I'm afraid that yes. In PyTables, int64, while being a bit bizarre
>>> for some users (specially in 32-bit platforms), is a type with the same
>>> rights as the others and we would like to give support for it in
>>> numexpr. In fact, Ivan Vilata has already implemented this support in
>>> our local copy of numexpr, so perhaps (I say perhaps because we are in
>>> the middle of a big project now and are a bit scarce of time resources)
>>> we can provide the patch against the latest version of David for your
>>> consideration. With this we can solve the problem with int64 support in
>>> 32-bit platforms (although admittedly, the VM gets a bit more
>>> complicated, I really think that this is worth the effort)
>>
>> In addition to complexity, I worry that we'll overflow the code cache at
>> some point and slow everything down. To be honest I have no idea at what
>> point that is likely to happen, but I know they worry about it with the
>> Python interpreter mainloop. Also, it becomes much, much slower to
>> compile past a certain number of case statements under VC7, not sure
>> why. That's mostly my problem though.
>> [...]
>
> Hi! For your information, the addition of separate, predictably-sized
> int (int32) and long (int64) types to numexpr was roughly as complicated
> as the addition of boolean types, so maybe the increase of complexity
> isn't that important (but I recognise I don't know the effect on the
> final size of the VM).

I didn't expect it to be any worse than booleans (I would imagine it's about
the same). It's just that there's a point at which we are going to slow down
the VM due to sheer size. I don't know where that point is, so I'm cautious.

Booleans seem like they need to be supported directly in the interpreter,
while only one each (the largest one) of ints, floats and complexes do.
Booleans are different since they have different behaviour than integers, so
they need a separate set of opcodes. For floats and complexes, the largest is
also the most commonly used, so this works out well. For ints, on the other
hand, int32 is the most commonly used but int64 is the largest, so the
approach of using the largest is going to result in a speed hit for the most
common integer case. Implementing both, as you've done, solves that, but as I
say, I worry about making the interpreter core too big.

I expect that you've timed things before and after the addition of int64 and
not gotten a noticeable slowdown. That's good, although it doesn't entirely
mean we're out of the woods, since I expect that more opcodes that we just
need to add will show up, and at some point we may run into an opcode crunch.
Or maybe I'm just being paranoid.

> As soon as I have time (and an SVN version of numexpr which passes the
> tests ;) ) I will try to merge back the changes and send a patch to the
> list. Thanks for your patience! :)

I look forward to seeing it. Now if only I can get svn numexpr to stop
segfaulting under windows I'll be able to do something useful...

-tim
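For context, a minimal numexpr usage sketch: expressions are compiled to a
small opcode program run by numexpr's VM, which is why each additional
supported dtype multiplies the opcodes the interpreter loop must handle (the
size concern Tim raises above):

    import numexpr as ne
    import numpy as np

    a = np.arange(10, dtype=np.int32)
    b = np.arange(10, dtype=np.int64)
    # evaluated by numexpr's VM; the int32 operand is upcast to int64
    print(ne.evaluate("a + 2*b"))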
From: Ivan V. i B. <iv...@ca...> - 2006-06-14 10:14:57
En/na Tim Hochberg ha escrit::

> Francesc Altet wrote:
> [...]
>> Uh, I'm afraid that yes. In PyTables, int64, while being a bit bizarre
>> for some users (specially in 32-bit platforms), is a type with the same
>> rights as the others and we would like to give support for it in
>> numexpr. In fact, Ivan Vilata has already implemented this support in
>> our local copy of numexpr, so perhaps (I say perhaps because we are in
>> the middle of a big project now and are a bit scarce of time resources)
>> we can provide the patch against the latest version of David for your
>> consideration. With this we can solve the problem with int64 support in
>> 32-bit platforms (although admittedly, the VM gets a bit more
>> complicated, I really think that this is worth the effort)
>
> In addition to complexity, I worry that we'll overflow the code cache at
> some point and slow everything down. To be honest I have no idea at what
> point that is likely to happen, but I know they worry about it with the
> Python interpreter mainloop. Also, it becomes much, much slower to
> compile past a certain number of case statements under VC7, not sure
> why. That's mostly my problem though.
> [...]

Hi! For your information, the addition of separate, predictably-sized int
(int32) and long (int64) types to numexpr was roughly as complicated as the
addition of boolean types, so maybe the increase of complexity isn't that
important (but I recognise I don't know the effect on the final size of the
VM).

As soon as I have time (and an SVN version of numexpr which passes the tests
;) ) I will try to merge back the changes and send a patch to the list.
Thanks for your patience! :)

::

    Ivan Vilata i Balaguer   >qo<   http://www.carabos.com/
    Cárabos Coop. V.         V  V   Enjoy Data
                             ""
From: Pau G. <pau...@gm...> - 2006-06-14 10:02:08
On 6/14/06, Martin Wiechert <Mar...@mp...> wrote:
> Hi Simon,
>
> thanks for your reply.
>
> A [I, J]
>
> seems to only work if the indices are *strides* as in your example. I need
> fancy indices (like I = (1,3,4), J = (0,3,5)), and for them A [I, J] won't
> do what I want. As you can see from the example session I posted, it does
> not address the whole rectangle IxJ but only the elements (I_1, J_1),
> (I_2, J_2). E.g., if I==J this is the diagonal of the submatrix, not the
> full submatrix.

You can use A[ ix_(I,J) ] to do what you want.

But if you just want subrectangular regions then A[1:4,1:4] is enough.
Please note that A[1:4,1:4] is not the same as A[ arange(1,4), arange(1,4) ],
but is the same as A[ ix_(arange(1,4), arange(1,4)) ].

hope this helps
pau
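A concrete run of Pau's suggestion: ix_ builds open-mesh index arrays so that
A[ix_(I, J)] selects the full I-by-J subrectangle, while A[I, J] pairs the
indices elementwise.

    import numpy as np

    A = np.arange(36).reshape(6, 6)
    I, J = (1, 3, 4), (0, 3, 5)

    print(A[np.ix_(I, J)])   # 3x3 block: rows 1,3,4 crossed with cols 0,3,5
    print(A[I, J])           # 3 elements: A[1,0], A[3,3], A[4,5]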
From: Karol L. <kar...@kn...> - 2006-06-14 09:50:44
On Wednesday 14 June 2006 11:14, Martin Wiechert wrote:
> is there a concise way to address a subrectangle of a 2d array? So far I'm
> using
>
> A [I] [:, J]
>
> which is not pretty and more importantly only works for reading the
> subrectangle. Writing does *not* work. (Cf. session below.)
>
> Any help would be appreciated.
>
> Thanks,
> Martin

You can also use A[m:n,r:s] to reference a subarray. For instance:

>>> a = numpy.zeros((5,5))
>>> b = numpy.ones((3,3))
>>> a[1:4,1:4] = b
>>> a
array([[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]])

Cheers,
Karol

--
written by Karol Langner
Wed Jun 14 11:49:35 CEST 2006
From: Ivan V. i B. <iv...@ca...> - 2006-06-14 09:42:58
En/na Mathew Yeates ha escrit::

> I typically deal with very large arrays that don't fit in memory. How
> does Numpy handle this? In Matlab I can use memory mapping but I would
> prefer caching as is done in The Gimp.

Hi Mathew. If you are in the need of storing large arrays on disk, you may
have a look at PyTables_. It will save you some headaches with the on-disk
representation of your arrays (it uses the self-describing HDF5 format), it
allows you to load specific slices of arrays, and it provides caching of
data. The latest versions also support numpy.

Hope that helps,

.. _PyTables: http://www.pytables.org/

::

    Ivan Vilata i Balaguer   >qo<   http://www.carabos.com/
    Cárabos Coop. V.         V  V   Enjoy Data
                             ""
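Mathew's question also mentions Matlab-style memory mapping; numpy covers
that case directly with numpy.memmap, which exposes a file-backed array
without loading it fully. A minimal sketch (the file path is illustrative):

    import numpy as np

    # 4096x4096 float64 array backed by a file on disk (~128 MiB):
    mm = np.memmap('/tmp/big.dat', dtype=np.float64, mode='w+',
                   shape=(4096, 4096))
    mm[100, :] = 1.0   # only the touched pages become resident in memory
    mm.flush()         # push dirty pages back to the file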
From: Martin W. <Mar...@mp...> - 2006-06-14 09:36:33
Hi Simon,

thanks for your reply.

A [I, J]

seems to only work if the indices are *strides* as in your example. I need
fancy indices (like I = (1,3,4), J = (0,3,5)), and for them A [I, J] won't do
what I want. As you can see from the example session I posted, it does not
address the whole rectangle IxJ but only the elements (I_1, J_1), (I_2, J_2).
E.g., if I==J this is the diagonal of the submatrix, not the full submatrix.

Martin

On Wednesday 14 June 2006 20:25, Simon Burton wrote:
> On Wed, 14 Jun 2006 11:14:17 +0200
> Martin Wiechert <mar...@gm...> wrote:
> > Hi list,
> >
> > is there a concise way to address a subrectangle of a 2d array? So far
> > I'm using
> >
> > A [I] [:, J]
>
> what about A[I,J] ?
>
> Simon.
>
> >>> import numpy
> >>> a=numpy.zeros([4,4])
> >>> a
> array([[0, 0, 0, 0],
>        [0, 0, 0, 0],
>        [0, 0, 0, 0],
>        [0, 0, 0, 0]])
> >>> a[2:3,2:3]=1
> >>> a
> array([[0, 0, 0, 0],
>        [0, 0, 0, 0],
>        [0, 0, 1, 0],
>        [0, 0, 0, 0]])
> >>> a[1:3,1:3]=1
> >>> a
> array([[0, 0, 0, 0],
>        [0, 1, 1, 0],
>        [0, 1, 1, 0],
>        [0, 0, 0, 0]])
From: Karol L. <kar...@kn...> - 2006-06-14 09:31:52
On Wednesday 14 June 2006 11:14, Martin Wiechert wrote:
> Hi list,
>
> is there a concise way to address a subrectangle of a 2d array? So far I'm
> using
>
> A [I] [:, J]
>
> which is not pretty and more importantly only works for reading the
> subrectangle. Writing does *not* work. (Cf. session below.)
>
> Any help would be appreciated.
>
> Thanks,
> Martin

You can achieve this by using the "take" function twice, in this fashion:

>>> a = numpy.zeros((5,5))
>>> for i in range(5):
...     for j in range(5):
...         a[i][j] = i+j
...
>>> a
array([[0, 1, 2, 3, 4],
       [1, 2, 3, 4, 5],
       [2, 3, 4, 5, 6],
       [3, 4, 5, 6, 7],
       [4, 5, 6, 7, 8]])
>>> print a.take.__doc__
a.take(indices, axis=None). Selects the elements in indices from array a
along the given axis.
>>> a.take((1,2,3),axis=0)
array([[1, 2, 3, 4, 5],
       [2, 3, 4, 5, 6],
       [3, 4, 5, 6, 7]])
>>> a.take((1,2,3),axis=0).take((2,3),axis=1)
array([[3, 4],
       [4, 5],
       [5, 6]])

Cheers,
Karol

--
written by Karol Langner
Wed Jun 14 11:27:33 CEST 2006
From: Simon B. <si...@ar...> - 2006-06-14 09:27:43
On Wed, 14 Jun 2006 11:14:17 +0200
Martin Wiechert <mar...@gm...> wrote:

> Hi list,
>
> is there a concise way to address a subrectangle of a 2d array? So far I'm
> using
>
> A [I] [:, J]

what about A[I,J] ?

Simon.

>>> import numpy
>>> a=numpy.zeros([4,4])
>>> a
array([[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]])
>>> a[2:3,2:3]=1
>>> a
array([[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 0]])
>>> a[1:3,1:3]=1
>>> a
array([[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]])

--
Simon Burton, B.Sc.
Licensed PO Box 8066
ANU Canberra 2601
Australia
Ph. 61 02 6249 6940
http://arrowtheory.com
From: Martin W. <mar...@gm...> - 2006-06-14 09:17:12
Hi list,

is there a concise way to address a subrectangle of a 2d array? So far I'm
using

A [I] [:, J]

which is not pretty and more importantly only works for reading the
subrectangle. Writing does *not* work. (Cf. session below.)

Any help would be appreciated.

Thanks,
Martin

In [1]:a = zeros ((4,4))
In [2]:b = ones ((2,2))
In [3]:c = array ((1,2))
In [4]:a [c] [:, c] = b
In [5]:a
Out[5]:
array([[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]])
In [6]:a [:, c] [c] = b
In [7]:a
Out[7]:
array([[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]])
In [8]:a [c, c] = b
In [9]:a
Out[9]:
array([[0, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 0]])
In [10]:a [c] [:, c]
Out[10]:
array([[1, 0],
       [0, 1]])
In [11]:
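The reason writes in Martin's session never land: fancy indexing returns a
copy, so A[I][:, J] = b assigns into a temporary. ix_ gives a single,
writable fancy-index expression:

    import numpy as np

    a = np.zeros((4, 4), dtype=int)
    c = np.array((1, 2))

    a[c][:, c] = 1          # modifies a temporary copy; a is unchanged
    print(a.sum())          # 0

    a[np.ix_(c, c)] = 1     # one indexing expression: writes into a
    print(a)                # 2x2 block of ones at rows/cols 1..2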
From: Robert K. <rob...@gm...> - 2006-06-14 03:22:23
Mathew Yeates wrote:
> I finally got things linked with libg2c but now I get
>
> import linalg -> failed: ld.so.1: python: fatal: relocation error: file
> /u/fuego0/myeates/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so:
> symbol dgeev_: referenced symbol not found
>
> I looked all through my ATLAS source and I see no dgeev anywhere. No file
> of that name and no references to that function. Anybody know what's up
> with this?

ATLAS itself only provides optimized versions of some LAPACK routines. You
need to combine it with the full LAPACK to get full coverage. Please read the
ATLAS FAQ for instructions:

http://math-atlas.sourceforge.net/errata.html#completelp

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
  -- Umberto Eco
From: Mathew Y. <my...@jp...> - 2006-06-14 02:21:46
I finally got things linked with libg2c but now I get

import linalg -> failed: ld.so.1: python: fatal: relocation error: file
/u/fuego0/myeates/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so:
symbol dgeev_: referenced symbol not found

I looked all through my ATLAS source and I see no dgeev anywhere. No file of
that name and no references to that function. Anybody know what's up with
this?

Mathew
From: Travis O. <oli...@ie...> - 2006-06-14 01:41:43
I've updated the description of the array interface (array protocol). The
web-page is

http://numeric.scipy.org/array_interface.html

Basically, the Python-side interface has been compressed to the single
attribute __array_interface__. There is still the __array_struct__ attribute,
which now has a descr member in the structure returned (but the
ARR_HAS_DESCR flag must be set or it must be ignored).

NumPy has been updated so that the old Python-side attributes are now
spelled:

__array_<somename>__  -->  __array_interface__['<somename>']

-Travis
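A quick look at the consolidated protocol Travis describes: every ndarray
exposes a single __array_interface__ dict, with the old per-attribute names
as keys.

    import numpy as np

    a = np.arange(4, dtype=np.int32)
    ai = a.__array_interface__
    print(ai['shape'])    # (4,)
    print(ai['typestr'])  # e.g. '<i4' (little-endian 4-byte int)
    print(ai['version'])  # protocol version number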