From: Pearu Peterson <pearu@ce...> - 2003-05-29 20:47:06

On Thu, 29 May 2003, Cliff Martin wrote:

> This leads me to my second problem. In Matlab there is a function
> called fftshift that lets one shift the fft values for easier
> visualization. Numeric doesn't seem to have a function like that
> explicitly, although I suppose one could use the flip function in the
> MatPy module. Has anyone implemented something like the fftshift
> function, or could you suggest the best way to do this? Thanks for
> your help. Oh, I'm doing this on a 2D array.

scipy.fftpack has fftshift. If you don't care to build scipy yourself, the implementation of fftshift can be viewed via CVSview at http://www.scipy.org; look at the file scipy/Lib/fftpack/helper.py

HTH,
Pearu
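For readers following this thread today, the fftshift behavior Pearu points to can be sketched with modern NumPy (an assumption on my part; the original discussion predates it). The zero-frequency term moves to the array center by rolling each axis forward by half its length:

```python
import numpy as np

def fftshift2d(a):
    """Shift the zero-frequency component of a 2-D spectrum to the center.

    Each axis is rolled forward by half its length, which is exactly what
    an fftshift-style helper does.
    """
    for axis, n in enumerate(a.shape):
        a = np.roll(a, n // 2, axis=axis)
    return a

spectrum = np.arange(16).reshape(4, 4)
# Matches the library routine descended from scipy.fftpack's helper
assert np.array_equal(fftshift2d(spectrum), np.fft.fftshift(spectrum))
```

In practice one would just call np.fft.fftshift directly; the loop above only shows what it does.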
From: Cliff Martin <camartin@sn...> - 2003-05-29 20:16:06

All,

First I'd like to thank Paul Dubois, Todd Miller and Perry Greenfield for the help you gave me on setting defined values based on index values in an array. Your fixes worked and let me get done what was needed.

This leads me to my second problem. In Matlab there is a function called fftshift that lets one shift the fft values for easier visualization. Numeric doesn't seem to have a function like that explicitly, although I suppose one could use the flip function in the MatPy module. Has anyone implemented something like the fftshift function, or could you suggest the best way to do this? Thanks for your help. Oh, I'm doing this on a 2D array.

Cliff Martin
From: <christopheranderson@sh...> - 2003-05-27 20:23:08

Todd, thanks very much for the quick reply.

> Well... the faster part is good. :)

Yes indeed.

> > What might be causing this problem?
>
> It sounds to me like a memory leak (reference counting error) in
> numarray.
>
> Did you run the same code with older versions of numarray, or did you
> start with numarray-0.5?
>
> How long does it take to fail?

This was my first stab at using numarray, so I haven't tried with prior versions. Not that this means anything, but it usually dies around 500 or so iterations through the optimization routine. For comparison, Numeric will cheerfully run for several thousands of iterations, multiple times, without complaint.

> Can you mail me your code (or better, the smallest derivative of it
> which reproduces the problem)?

This might be a little complicated. I'll try stripping things down to isolate the problem.

Chris
From: Todd Miller <jmiller@st...> - 2003-05-27 16:47:57

On Tue, 2003-05-27 at 11:52, christopheranderson@... wrote:
> Hi,
>
> I've started to move some of my Numeric code to numarray 0.5 and have
> run into a problem. This is a relatively large optimization problem
> that requires finite difference estimation of gradients.
>
> The implementation uses either Numeric/LinearAlgebra/RandomArray or
> numarray/LinearAlgebra2/RandomArray2; these are the only extensions
> used. The code itself is fairly straightforward and ports from Numeric
> to numarray with no modifications. Typical run times are on the order
> of hours with Numeric.
>
> When using Numeric, I am able to repeat multiple optimization runs
> (> 10) with no problems. When the same code is run with numarray
> instead, I get this error part of the way through the first run:
>
> Traceback (most recent call last):
>   File "runFD2.py", line 81, in ?
>     W1, W2 = estimFD2(inputs, outputs, Nh, Ne, alpha)
>   File "optimFD2.py", line 171, in estimFD2
>     gradW1, gradW2 = dEdW(inputs, outputs, MSE, alpha, W1, W2, f1, f2)
>   File "optimFD2.py", line 83, in dEdW
>     y_plus = evalFD2(inputs, W1_plus, W2, f1, f2)
>   File "optimFD2.py", line 63, in evalFD2
>     h1 = dot(inputs, W1)
>   File "F:\Python22\Lib\site-packages\numarray\numarray.py", line 940, in dot
>     return innerproduct(a, swapaxes(inputarray(b), 1, 2))
>   File "F:\Python22\Lib\site-packages\numarray\ufunc.py", line 1866, in innerproduct
>     a = a.astype(rtype)
>   File "F:\Python22\Lib\site-packages\numarray\numarray.py", line 478, in astype
>     return self.copy()
>   File "F:\Python22\Lib\site-packages\numarray\numarray.py", line 553, in copy
>     c = ndarray.NDArray.copy(self)
>   File "F:\Python22\Lib\site-packages\numarray\ndarray.py", line 571, in copy
>     arr._data = memory.new_memory(arr._itemsize * arr.nelements())
> MemoryError
>
> This is a little frustrating, because numarray is clearly much faster.
> Checking the results along the way shows that numarray gives the same
> outputs as Numeric, so it doesn't appear to be a porting issue.
> The datasets involved are large (~10000x20). Initial memory usage with
> Numeric and numarray is similar, and is well below the machine's
> limit. The code is running on a Windows 2000 machine using the binary
> release of numarray.

Well... the faster part is good. :)

> What might be causing this problem?

It sounds to me like a memory leak (reference counting error) in numarray.

Did you run the same code with older versions of numarray, or did you start with numarray-0.5?

How long does it take to fail?

Can you mail me your code (or better, the smallest derivative of it which reproduces the problem)?

> Should numarray be less tolerant than Numeric in a situation like this?

No, it's just newer and less mature.

> Thanks.

You're welcome!

Todd

> Chris
>
> -------------------------------------------------------
> This SF.net email is sponsored by: ObjectStore.
> If flattening out C++ or Java code to make your application fit in a
> relational database is painful, don't do it! Check out ObjectStore.
> Now part of Progress Software. http://www.objectstore.net/sourceforge
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion

--
Todd Miller  jmiller@...  STSCI / ESS / SSB
From: <christopheranderson@sh...> - 2003-05-27 15:54:53

Hi,

I've started to move some of my Numeric code to numarray 0.5 and have run into a problem. This is a relatively large optimization problem that requires finite difference estimation of gradients.

The implementation uses either Numeric/LinearAlgebra/RandomArray or numarray/LinearAlgebra2/RandomArray2; these are the only extensions used. The code itself is fairly straightforward and ports from Numeric to numarray with no modifications. Typical run times are on the order of hours with Numeric.

When using Numeric, I am able to repeat multiple optimization runs (> 10) with no problems. When the same code is run with numarray instead, I get this error part of the way through the first run:

Traceback (most recent call last):
  File "runFD2.py", line 81, in ?
    W1, W2 = estimFD2(inputs, outputs, Nh, Ne, alpha)
  File "optimFD2.py", line 171, in estimFD2
    gradW1, gradW2 = dEdW(inputs, outputs, MSE, alpha, W1, W2, f1, f2)
  File "optimFD2.py", line 83, in dEdW
    y_plus = evalFD2(inputs, W1_plus, W2, f1, f2)
  File "optimFD2.py", line 63, in evalFD2
    h1 = dot(inputs, W1)
  File "F:\Python22\Lib\site-packages\numarray\numarray.py", line 940, in dot
    return innerproduct(a, swapaxes(inputarray(b), 1, 2))
  File "F:\Python22\Lib\site-packages\numarray\ufunc.py", line 1866, in innerproduct
    a = a.astype(rtype)
  File "F:\Python22\Lib\site-packages\numarray\numarray.py", line 478, in astype
    return self.copy()
  File "F:\Python22\Lib\site-packages\numarray\numarray.py", line 553, in copy
    c = ndarray.NDArray.copy(self)
  File "F:\Python22\Lib\site-packages\numarray\ndarray.py", line 571, in copy
    arr._data = memory.new_memory(arr._itemsize * arr.nelements())
MemoryError

This is a little frustrating, because numarray is clearly much faster. Checking the results along the way shows that numarray gives the same outputs as Numeric, so it doesn't appear to be a porting issue. The datasets involved are large (~10000x20). Initial memory usage with Numeric and numarray is similar, and is well below the machine's limit. The code is running on a Windows 2000 machine using the binary release of numarray.

What might be causing this problem? Should numarray be less tolerant than Numeric in a situation like this?

Thanks.

Chris
From: Jeffery D. Collins <jcollins@bo...> - 2003-05-22 01:02:12

Kasper Souren wrote:

> Hi,
>
> I was happy to see solutions coming up for Cliff's problems and it
> made me think: maybe I can mention my problem here as well.
>
> From an array of feature vectors I want to calculate its distance
> matrix. Something like [1]. Currently it can take quite a while to
> calculate the stuff for a long array.
>
> Some questions:
>
> 1) Is there a smart speed up possible? Like, a way to avoid the double
> loop? It's no problem if this would lead to less generality (like the
> choice for a distance function). I know there is a little speed up to
> be gained by leaving out the array(f) thing, but that's not what I'm
> looking for.
>
> 2) Is it possible (in Numeric or numarray) to define a class
> DiagonalMatrix that at least saves half of the memory?
>
> 3) If 1) is not possible, what would be the way to go for speeding it
> up by writing it in C? weave because of its availability in scipy, or
> would pyrex be more interesting, or are there even more options..?
>
> bye,
> Kasper
>
> [1] Example program:
>
> import Numeric
>
> def euclidean_dist(a, b):
>     diff = a - b
>     return Numeric.dot(diff, diff)
>
> def calc_dist_matrix(f, distance_function):
>     W = Numeric.array(f)
>     length = W.shape[0]
>     S = Numeric.zeros((length, length)) * 1.0
>
>     for i in range(length):
>         for j in range(i):
>             S[j, i] = S[i, j] = distance_function(W[i], W[j])
>     return S
>
> print calc_dist_matrix(Numeric.sin(Numeric.arange(30)), euclidean_dist)

The following is a bit faster (at the cost of some memory):

def euclidean_dist(a, b):
    diff = a - b
    return diff**2

def calc_dist_matrix2(f, distance_function):
    W = Numeric.array(f)
    i, j = indices((len(W),)*2)
    S = distance_function(take(W, i), take(W, j))
    return S

--
Jeffery Collins (http://www.boulder.net/~jcollins)
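Jeffery's take()-based trick generalizes. A present-day sketch, using modern NumPy broadcasting in place of the thread's Numeric (an assumption, since NumPy did not yet exist), computes the full squared-Euclidean distance matrix with no Python-level loop:

```python
import numpy as np

def squared_dist_matrix(f):
    """Pairwise squared Euclidean distances between the rows of f.

    f has shape (n, d); a 1-D input is treated as n scalar features.
    The result is the symmetric (n, n) distance matrix.
    """
    W = np.asarray(f, dtype=float)
    if W.ndim == 1:
        W = W[:, None]
    # (n, 1, d) - (1, n, d) broadcasts to (n, n, d): all pairwise diffs
    diff = W[:, None, :] - W[None, :, :]
    return (diff ** 2).sum(axis=-1)

S = squared_dist_matrix(np.sin(np.arange(30)))
assert S.shape == (30, 30) and np.allclose(S, S.T)
```

The price is the (n, n, d) intermediate, which answers Kasper's memory question in the other direction: this trades memory for speed, so for very large n a blockwise version is preferable.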
From: Perry Greenfield <perry@st...> - 2003-05-21 21:09:30

> x,y = Numeric.indices((512,512))
> r = x**2 + y**2
> c = r < N
>
> In numarray you can then say:
>
> a[ c ] = 1
>
> In Numeric, I think you say:
>
> a = a.flat
> c = c.flat
> Numeric.put(a, Numeric.nonzero(c), 1)
> a.shape = (512,512)
>
> Todd

Yeah, that's what I would do in Numeric :)

I was thinking that the solution I gave was wasteful of memory if N was generally much smaller than the dimensions of data. Something like this (also untested) would be better in that regard (again for numarray):

y, x = indices((2*N+1, 2*N+1))
yind, xind = nonzero(((x-N)**2+(y-N)**2) < N**2)
data[yind+y0-N, xind+x0-N] = 1

Note that this doesn't check to see if x0 and y0 are at least N away from the boundaries of data, and also note my convention for x and y is different than what Todd used (it depends on how you interpret data in arrays; the convention I use is more common for image data if you think of x corresponding to the most rapidly varying index). The advantage of this approach is that the x and y arrays are small if N is small, and computing xind, yind takes much less time than for very large arrays. Doing this with Numeric would be much messier I believe (but a clever person could prove me wrong).

Perry Greenfield
From: Kasper Souren <Kasper.Souren@ir...> - 2003-05-21 19:55:20

Hi,

I was happy to see solutions coming up for Cliff's problems and it made me think: maybe I can mention my problem here as well.

From an array of feature vectors I want to calculate its distance matrix. Something like [1]. Currently it can take quite a while to calculate the stuff for a long array.

Some questions:

1) Is there a smart speed up possible? Like, a way to avoid the double loop? It's no problem if this would lead to less generality (like the choice for a distance function). I know there is a little speed up to be gained by leaving out the array(f) thing, but that's not what I'm looking for.

2) Is it possible (in Numeric or numarray) to define a class DiagonalMatrix that at least saves half of the memory?

3) If 1) is not possible, what would be the way to go for speeding it up by writing it in C? weave because of its availability in scipy, or would pyrex be more interesting, or are there even more options..?

bye,
Kasper

[1] Example program:

import Numeric

def euclidean_dist(a, b):
    diff = a - b
    return Numeric.dot(diff, diff)

def calc_dist_matrix(f, distance_function):
    W = Numeric.array(f)
    length = W.shape[0]
    S = Numeric.zeros((length, length)) * 1.0

    for i in range(length):
        for j in range(i):
            S[j, i] = S[i, j] = distance_function(W[i], W[j])
    return S

print calc_dist_matrix(Numeric.sin(Numeric.arange(30)), euclidean_dist)
From: Magnus Lie Hetland <magnus@he...> - 2003-05-21 18:52:20

Perry Greenfield <perry@...>:
[snip]
> (in a future version of numarray this will also work and the meaning
> should be clearer:
>
> data[where(((x-x0)**2+(y-y0)**2) < N**2)] = 1

Cool... Do you know which version?

--
Magnus Lie Hetland                "In this house we obey the laws of
http://hetland.org                 thermodynamics!" -- Homer Simpson
From: Todd Miller <jmiller@st...> - 2003-05-21 15:56:36

On Wed, 2003-05-21 at 11:08, Cliff Martin wrote:
> Hi,
>
> I have a problem where I want to set values in a 2D array based on
> conditions I've established. The variables are the NA of the system,
> its wavelength and the correct units on the device I'm modeling. Using
> these variables I define a variable N. I set up a 512 by 512 array
> [x,y]. Then I set up a radius, r = x^2 + y^2, and ask it to give me
> all r's (to the closest integer) <= N (the variable I've defined
> above). This gives all the index values in that radius and then I set
> all those locations to a value of 1. After this I do some FFT's, etc.
> So how do I do this in Numerical Python without having to index
> through i,j steps, which would be incredibly tedious? This works
> fairly well in MatLab but I want to port my program to Python (for
> lots of reasons). If you'd rather I write the MatLab code snippet I
> can do that. Thanks.
>
> Cliff Martin

I think part of your code looks like this:

x, y = Numeric.indices((512,512))
r = x**2 + y**2
c = r < N

In numarray you can then say:

a[ c ] = 1

In Numeric, I think you say:

a = a.flat
c = c.flat
Numeric.put(a, Numeric.nonzero(c), 1)
a.shape = (512,512)

Todd
From: Perry Greenfield <perry@st...> - 2003-05-21 15:41:45

> I have a problem where I want to set values in a 2D array based on
> conditions I've established. The variables are the NA of the system,
> its wavelength and the correct units on the device I'm modeling. Using
> these variables I define a variable N. I set up a 512 by 512 array
> [x,y]. Then I set up a radius, r = x^2 + y^2, and ask it to give me
> all r's (to the closest integer) <= N (the variable I've defined
> above). This gives all the index values in that radius and then I set
> all those locations to a value of 1. After this I do some FFT's, etc.
> So how do I do this in Numerical Python without having to index
> through i,j steps, which would be incredibly tedious? This works
> fairly well in MatLab but I want to port my program to Python (for
> lots of reasons). If you'd rather I write the MatLab code snippet I
> can do that. Thanks.
>
> Cliff Martin

One way should be something like this (I haven't tested this). This only works in numarray. If data is the 2d array where you want to set values within a radius of N (of x0, y0 I presume):

y, x = indices(data.shape)
data[nonzero(((x-x0)**2+(y-y0)**2) < N**2)] = 1

(in a future version of numarray this will also work, and the meaning should be clearer:

data[where(((x-x0)**2+(y-y0)**2) < N**2)] = 1

)

It's a little more involved for Numeric but still possible to do without resorting to loops (I'll leave it to someone else to show that). Does this do what you wanted?

Perry Greenfield
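Perry's recipe carries over almost verbatim to modern NumPy (assumed here, since the thread's numarray is long gone), where a boolean mask indexes the array directly and no nonzero() call is needed:

```python
import numpy as np

# Fill every point within radius N of (y0, x0) with 1, as discussed above.
data = np.zeros((512, 512))
N, y0, x0 = 20, 256, 256

y, x = np.indices(data.shape)
mask = (x - x0) ** 2 + (y - y0) ** 2 < N ** 2
data[mask] = 1

assert data[y0, x0] == 1          # center is inside
assert data[y0 + N - 1, x0] == 1  # just inside the radius
assert data[y0 + N, x0] == 0      # boundary excluded by the strict <
```

Perry's memory-saving refinement, building the mask only on a (2*N+1, 2*N+1) window around the center, applies unchanged when N is much smaller than the array.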
From: Cliff Martin <camartin@sn...> - 2003-05-21 15:08:22

Hi,

I have a problem where I want to set values in a 2D array based on conditions I've established. The variables are the NA of the system, its wavelength and the correct units on the device I'm modeling. Using these variables I define a variable N. I set up a 512 by 512 array [x,y]. Then I set up a radius, r = x^2 + y^2, and ask it to give me all r's (to the closest integer) <= N (the variable I've defined above). This gives all the index values in that radius and then I set all those locations to a value of 1. After this I do some FFT's, etc. So how do I do this in Numerical Python without having to index through i,j steps, which would be incredibly tedious? This works fairly well in MatLab but I want to port my program to Python (for lots of reasons). If you'd rather I write the MatLab code snippet I can do that. Thanks.

Cliff Martin
From: Todd Miller <jmiller@st...> - 2003-05-20 17:07:45

On Tue, 2003-05-20 at 12:19, Sebastian Haase wrote:
> Hi All,
> After I read this thread I thought I would wait a bit before upgrading
> my numarray (from 0.4) --

numarray-0.6 is probably not far off; if you're on the fence, you might want to sit this out. However, there is a new API function which I added in response to your last post: NA_NewAllFromBuffer(), which enables you to create arrays in C from existing buffer objects rather than just C arrays. Also, I modified setup.py to support independent build and install. Both of these new features are under-tested. Let me know how it works out.

> Are the mentioned fixes available somewhere?

All of the problems Peter found have been fixed as of yesterday. The fixes are checked into CVS on SourceForge in the numarray component of the numpy project. Instructions for doing an anonymous numarray cvs checkout are here:

http://sourceforge.net/cvs/?group_id=1369

These basically say to:

% cvs -d:pserver:anonymous@...:/cvsroot/numpy login
% cvs -z3 -d:pserver:anonymous@...:/cvsroot/numpy co numarray

> Or actually: Is the CVS version publicly readable and, if so, would
> you recommend using that?

If you check out today, you'll be OK. I tagged "now" as v0_5_2, which will never be "officially released" but which just passed all selftests under i386-Linux.

Todd

--
Todd Miller  jmiller@...  STSCI / ESS / SSB
From: Sebastian Haase <haase@ms...> - 2003-05-20 16:18:11

Hi All,

After I read this thread I thought I would wait a bit before upgrading my numarray (from 0.4). Are the mentioned fixes available somewhere? Or actually: Is the CVS version publicly readable and, if so, would you recommend using that?

Thanks,
Sebastian Haase

----- Original Message -----
From: "Todd Miller" <jmiller@...>
To: "Peter Verveer" <verveer@...>
Cc: <numpy-discussion@...>
Sent: Tuesday, May 06, 2003 3:02 PM
Subject: Re: [Numpy-discussion] Numarray 0.5

> On Tue, 2003-05-06 at 08:24, Peter Verveer wrote:
> > Hi All,
> >
> > I found the following problems after testing my software with
> > numarray 0.5:
> >
> > 1) Following works fine if both a and b are arrays:
> >
> > >>> a = array([2])
> > >>> b = array([1, 2])
> > >>> print a + b
> > [3 4]
> >
> > However, if b is a python sequence:
> >
> > >>> a = array([2])
> > >>> b = [1, 2]
> > >>> print a + b
> > [3]
> >
> > Apparently broadcasting does not work with python sequences anymore.
> > This used to work fine in version 0.4. Is this a bug?
>
> Yes, unfortunately. Thanks for reporting it!
>
> > 2) It is not possible to compare an array type object to the 'Any'
> > object:
> >
> > >>> Float64 == Any
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> >   File "/usr/local/lib/python2.2/site-packages/numarray/numerictypes.py", line 112, in __cmp__
> >     return (genericTypeRank.index(self.name) -
> > ValueError: list.index(x): x not in list
> >
> > I am not sure if this is a bug, or intended behaviour, but the
> > possibility to compare an array type object to 'Any' would be very
> > useful for me.
>
> 'Any' grew up from the C-API, rather than down from the Python design,
> so it's not very well thought out. Right now, it is a placeholder used
> to mark arrays with undefined types and to indicate "no type
> constraint" in C API calls. In normal contexts, you can't make an
> array of type 'Any'. I think there are two reasonable behaviors for
> comparisons with 'Any', both used in C. The first behavior is literal
> comparison; here comparison to Any would generally return "not equal".
> The second behavior is wildcard matching; here, comparison to Any
> would generally return "equal". Which makes sense to you? How do you
> want to use this?
>
> > 3) The NA_typeNoToTypeObject() function fails if it is called before
> > any arrays are created. It looks to me as if the pNumType array in
> > libnumarraymodule.c is not initialized until an array is created. I
> > don't know if any other functions are affected in the same way.
> > Could this be fixed?
>
> Yes. This is fixed now in CVS. Thanks again!
>
> > Cheers, Peter
> >
> > --
> > Dr. Peter J. Verveer
> > Cell Biology and Cell Biophysics Programme
> > EMBL
> > Meyerhofstrasse 1
> > D-69117 Heidelberg
> > Germany
> > Tel. : +49 6221 387245
> > Fax  : +49 6221 387242
> > Email: Peter.Verveer@...
>
> --
> Todd Miller  jmiller@...
> STSCI / ESS / SSB
From: Todd Miller <jmiller@st...> - 2003-05-16 18:36:43

On Fri, 2003-05-16 at 13:03, Paul Dubois wrote:
> C:\numpy\numarray\Packages\MA2>python
> Python 2.3b1 (#40, Apr 25 2003, 19:06:24) [MSC v.1200 32 bit (Intel)] on win32
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import numarray
> >>> x=numarray.arange(5,3,2)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
>   File "c:\python23\Lib\site-packages\numarray\numarray.py", line 924, in arange
>     r = _fillarray(size, start, stride, type)
>   File "c:\python23\Lib\site-packages\numarray\numarray.py", line 120, in _fillarray
>     outarr = NumArray((size,), outtype)
> libnumarray.error: NA_updateDataPtr: error getting read buffer data ptr
> >>> import Numeric
> >>> Numeric.arange(5,3,2)
> zeros((0,), 'l')
> >>>
>
> Is this change intentional?

No. It's bugs in arange and also the memory object which are creating a negative-length buffer. The buffer API then reports a negative length, which is fortuitously interpreted as an error. Ya gotta smile... It's fixed in CVS. I modified the memory module so that:

>>> import memory
>>> memory.new_memory(-10)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: new_memory: invalid region size: -10.

And I modified arange so that negative sizes are clipped to 0. So now:

>>> import numarray
>>> numarray.arange(5,3,2)
array([])

--
Todd Miller  jmiller@...  STSCI / ESS / SSB
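The clipped-to-zero behavior Todd describes here is what the lineage settled on. A quick check against modern NumPy (assumed, as the successor to this code) shows the same start/stop/step combination quietly yielding an empty array rather than an error:

```python
import numpy as np

# start=5, stop=3, step=2 implies a negative element count,
# which is clipped to zero rather than raising an exception.
empty = np.arange(5, 3, 2)
assert empty.shape == (0,)
```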
From: Todd Miller <jmiller@st...> - 2003-05-15 19:37:14

Based on the comments I have received, the numarray repackaging is shaping up as shown below.

Here are the planned renamings for the numarray modules as we transition to a package:

numarray  -> numarray.numeric
recarray  -> numarray.records
chararray -> numarray.strings
ndarray   -> numarray.generic
*         -> numarray.*

Note that class and function names will remain unchanged, so recarray.RecArray will become numarray.records.RecArray.

Here are the renamings for the current and planned add-on packages consolidated into a single add-ons distribution:

LinearAlgebra2 -> numarray.linear_algebra
FFT2           -> numarray.fft
RandomArray2   -> numarray.random_array
Convolve       -> numarray.convolve
Convolve.Image -> numarray.image
MA2            -> numarray.ma

Future 3rd party packages that require numarray-specific C extensions should locate themselves directly in the numarray package. Doing so makes it easy to figure out what to delete when extensions need to be rebuilt. If you don't want to, that's OK too.

Barring a sudden burst of interest or a superior name choice, I think the top level package name should just remain "numarray".

Stub modules will be included in the first few repackaged releases to make it possible to import the primary modules (numarray, recarray, chararray, and ndarray) as you do with numarray-0.5. This will relax the requirement for synchronous change of numarray and software which uses numarray. The stub modules will be activated either through the creation of a .pth file or via additions to PYTHONPATH to make them visible. This is a backwards compatibility mode only, not recommended usage following the repackaging.

Thanks to everyone who responded. Final comments?

--
Todd Miller  jmiller@...  STSCI / ESS / SSB
From: Todd Miller <jmiller@st...> - 2003-05-15 13:43:11

On Wed, 2003-05-14 at 18:35, verveer@... wrote:
> > As has been mentioned before, we're planning to repackage numarray
> > as a package rather than as a collection of modules. We're
> > soliciting comments now because we only want to do this once.
>
> [some stuff deleted]
>
> > Convolve.Image -> numarray.Image
>
> Should this not become "numarray.Convolve.Image" instead of
> "numarray.Image"?

Currently there's one function in Image: translate. I was thinking that there are probably going to be more Image operations for work at STSCI, not necessarily all built on convolution (e.g. rotate). Since we're renaming stuff, I thought we should probably change this now.

Todd

> Peter

--
Todd Miller  jmiller@...  STSCI / ESS / SSB
From: Perry Greenfield <perry@st...> - 2003-05-15 13:15:52

Andrew P. Lentvorski, Jr. wrote:
> On Wed, 14 May 2003, Perry Greenfield wrote:
>
> > > >>> import numarray
> > > >>> a = numarray.arange(6)
> > > >>> a
> > > array([0, 1, 2, 3, 4, 5])
> > > >>> a[:] = 0.0
> > > >>> a
> > > array([0, 0, 0, 0, 0, 0])
> >
> > > Is there any reason for anything else if this does the job?
> > This is how I would recommend the action be performed.
>
> It depends. How does the slicing code work?
>
> If it iterates across the matrix indices (requiring multiply and add
> operations to index specific memory locations), this will be much
> slower than doing a contiguous memory fill (1 add and 1 multiply per
> index vs. an increment). This is not a small hit; we're talking
> factors of 2x, 4x, 6x.
>
> If the slicing code special cases "array[:] = n" to be a memory fill,
> then probably not.

Well, there are two issues the question relates to:

1) What should the user interface be for filling arrays with a constant value? I'd argue that a slice assignment is both sufficient and idiomatic.

2) Is it efficient? The current implementation handles scalar assignment by broadcasting a one-element array across the array being assigned to (that's the simplest way to code it). That isn't the fastest implementation when the array being assigned to is contiguous. It could be made faster. But frankly, it isn't very high on our list of priorities, and I'm a little skeptical that this performance improvement is an important one. Is your program performance dominated by the time it takes to zero an array? I have trouble thinking of many realistic problems where that would be true (not to say that there aren't any). If someone is willing to write special code to optimize this, that would be fine with us.

Perry
From: Jens Jorgen Mortensen <jensj@fy...> - 2003-05-15 06:29:54

On Thursday 15 May 2003 01:49, John A. Turner wrote:
> ImportError: dlopen:
> /users/turner/Tools/T64U/lib/python2.2/site-packages/Numeric/lapack_lite.so:
> symbol "zgelsd_" unresolved

Perhaps libcxml does not have the zgelsd_ function? Try

nm /usr/lib/libcxml_ev6.a | grep zgelsd

to see if it is there.

Jens Jørgen
From: Andrew P. Lentvorski, Jr. <bsder@al...> - 2003-05-15 01:07:36

On Wed, 14 May 2003, Perry Greenfield wrote:
> > >>> import numarray
> > >>> a = numarray.arange(6)
> > >>> a
> > array([0, 1, 2, 3, 4, 5])
> > >>> a[:] = 0.0
> > >>> a
> > array([0, 0, 0, 0, 0, 0])
>
> > Is there any reason for anything else if this does the job?
> This is how I would recommend the action be performed.

It depends. How does the slicing code work?

If it iterates across the matrix indices (requiring multiply and add operations to index specific memory locations), this will be much slower than doing a contiguous memory fill (1 add and 1 multiply per index vs. an increment). This is not a small hit; we're talking factors of 2x, 4x, 6x.

If the slicing code special cases "array[:] = n" to be a memory fill, then probably not.

-a
From: John A. Turner <turner@la...> - 2003-05-14 23:49:01

Is anyone using the Compaq Extended Math Libraries (CXML) on an Alpha running Tru64 Unix?

Although Numeric-23.0 builds out-of-the-box and runs fine with its own BLAS/LAPACK, I wanted to try making it use the (significantly faster) CXML implementations. I modified this bit of setup.py:

# delete all but the first one in this list if using your own LAPACK/BLAS
# sourcelist = [os.path.join('Src', 'lapack_litemodule.c'),
#               os.path.join('Src', 'blas_lite.c'),
#               os.path.join('Src', 'f2c_lite.c'),
#               os.path.join('Src', 'zlapack_lite.c'),
#               os.path.join('Src', 'dlapack_lite.c')
#               ]
sourcelist = [os.path.join('Src', 'lapack_litemodule.c')]
# set these to use your own BLAS;
library_dirs_list = ['/usr/opt/XMDLIB6510']
libraries_list = ['cxml_ev6']
# if you also set `use_dotblas` (see below), you'll need:
# ['lapack', 'cblas', 'f77blas', 'atlas', 'g2c']

(I had to do some hunting around to find where the libs actually live; when using CXML with Fortran all you do is put -lcxml on the command line and it links with the appropriate lib.)

But this doesn't seem to have done it. After I build and install I see:

qsc4% python cg.py
Traceback (most recent call last):
  File "cg.py", line 98, in ?
    from RandomArray import *
  File "/users/turner/Tools/T64U/lib/python2.2/site-packages/Numeric/RandomArray.py", line 3, in ?
    import LinearAlgebra
  File "/users/turner/Tools/T64U/lib/python2.2/site-packages/Numeric/LinearAlgebra.py", line 8, in ?
    import lapack_lite
ImportError: dlopen: /users/turner/Tools/T64U/lib/python2.2/site-packages/Numeric/lapack_lite.so: symbol "zgelsd_" unresolved

I'm pretty sure I'm missing something important... thanks in advance...

John Turner
Los Alamos Natl. Lab., Adv. Sci. Sim. (CCS-2)
From: <verveer@em...> - 2003-05-14 22:35:14

> As has been mentioned before, we're planning to repackage numarray as
> a package rather than as a collection of modules. We're soliciting
> comments now because we only want to do this once.

[some stuff deleted]

> Convolve.Image -> numarray.Image

Should this not become "numarray.Convolve.Image" instead of "numarray.Image"?

Peter
From: Perry Greenfield <perry@st...> - 2003-05-14 20:38:25

> Currently, I am just using:
>
> >>> import numarray
> >>> a = numarray.arange(6)
> >>> a
> array([0, 1, 2, 3, 4, 5])
> >>> a[:] = 0.0
> >>> a
> array([0, 0, 0, 0, 0, 0])

Is there any reason for anything else if this does the job? This is how I would recommend the action be performed.

Perry Greenfield
From: Todd Miller <jmiller@st...> - 2003-05-14 20:26:36

On Wed, 2003-05-14 at 16:21, Andrew P. Lentvorski, Jr. wrote:
> What is the official way to zero out an array in numarray/Numeric?

I don't think there is one in numarray; which is to say, I'd do what you did below.

> While I can create a new array of all zeros and then assign it to the
> old variable, this is extremely wasteful of memory.
>
> Currently, I am just using:
>
> >>> import numarray
> >>> a = numarray.arange(6)
> >>> a
> array([0, 1, 2, 3, 4, 5])
> >>> a[:] = 0.0
> >>> a
> array([0, 0, 0, 0, 0, 0])
>
> I looked through the manual for a function or array method which would
> accomplish the same thing, but I didn't find an obvious one. Did I
> miss something obvious?

I don't think so.

Todd
From: Andrew P. Lentvorski, Jr. <bsder@al...> - 2003-05-14 20:18:16

What is the official way to zero out an array in numarray/Numeric?

While I can create a new array of all zeros and then assign it to the old variable, this is extremely wasteful of memory.

Currently, I am just using:

>>> import numarray
>>> a = numarray.arange(6)
>>> a
array([0, 1, 2, 3, 4, 5])
>>> a[:] = 0.0
>>> a
array([0, 0, 0, 0, 0, 0])

I looked through the manual for a function or array method which would accomplish the same thing, but I didn't find an obvious one. Did I miss something obvious?

Thanks,
-a
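The slice-assignment idiom this thread converges on is still the standard answer. In modern NumPy (assumed here; the thread's numarray predates it) there is also the explicit in-place method the original poster went looking for:

```python
import numpy as np

a = np.arange(6)
a[:] = 0          # broadcasts the scalar over the existing buffer
assert a.tolist() == [0, 0, 0, 0, 0, 0]

b = np.arange(6)
b.fill(0)         # dedicated in-place fill, no temporary array
assert b.tolist() == [0, 0, 0, 0, 0, 0]
```

Both zero the array in place without allocating a second one, which addresses the memory concern raised above.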