From: Sasha <nd...@ma...> - 2006-06-16 13:48:14

On 6/16/06, Sven Schreiber <sve...@gm...> wrote:
> ....
> Abbreviations will emerge anyway, the question is merely: Will numpy
> provide/recommend them (in addition to having long names maybe), or will
> it have to be done by somebody else, possibly resulting in many
> different sets of abbreviations for the same purpose.

This is a valid point. In my experience ad hoc abbreviations are more popular among scientists who are not used to writing large programs. They use numpy either interactively or write short throw-away scripts that are rarely reused. Programmers who write reusable code almost universally hate ad hoc abbreviations. (There are exceptions: <http://www.kuro5hin.org/story/2002/8/30/175531/763>.) If numpy is going to compete with MATLAB, we should not ignore the non-programmer user base.

I like the idea of providing recommended abbreviations. There is a precedent for doing that: GNU command line utilities provide long/short alternatives for most options. Long options are recommended for use in scripts, while short ones are indispensable at the command line.

I would like to suggest the following guidelines:

1. Numpy should never invent abbreviations, but may reuse abbreviations used in the art.

2. When alternative names are made available, there should be one simple rule for reducing the long name to the short one. For example, use of acronyms may provide one such rule: singular_value_decomposition -> svd. Unfortunately that would mean linear_least_squares -> lls, not ols, and conflict with rule #1 (rename lstsq -> ordinary_least_squares?).

The second guideline may be hard to follow, but it is very important. Without a rule like this, there will be confusion over whether linear_least_squares and lstsq are the same or just "similar".
From: Tim H. <tim...@co...> - 2006-06-16 13:18:08

Sebastian Beca wrote:
> Hi,
> I'm working with NumPy/SciPy on some algorithms and I've run into some
> important speed differences wrt Matlab 7. I've narrowed the main speed
> problem down to the operation of finding the euclidean distance
> between two matrices that share one dimension rank (dist in Matlab):
>
> Python:
> def dtest():
>     A = random( [4,2])
>     B = random( [1000,2])
>
>     d = zeros([4, 1000], dtype='f')
>     for i in range(4):
>         for j in range(1000):
>             d[i, j] = sqrt( sum( (A[i] - B[j])**2 ) )
>     return d
>
> Matlab:
>     A = rand( [4,2])
>     B = rand( [1000,2])
>     d = dist(A, B')
>
> Running both of these 100 times, I've found the python version to run
> between 10-20 times slower. My question is if there is a faster way to
> do this? Perhaps I'm not using the correct functions/structures? Or
> is this as good as it gets?

Here's one faster way.

    from numpy import *
    import timeit

    A = random.random( [4,2])
    B = random.random( [1000,2])

    def d1():
        d = zeros([4, 1000], dtype=float)
        for i in range(4):
            for j in range(1000):
                d[i, j] = sqrt( sum( (A[i] - B[j])**2 ) )
        return d

    def d2():
        d = zeros([4, 1000], dtype=float)
        for i in range(4):
            xy = A[i] - B
            d[i] = hypot(xy[:,0], xy[:,1])
        return d

    if __name__ == "__main__":
        t1 = timeit.Timer('d1()', 'from scratch import d1').timeit(100)
        t2 = timeit.Timer('d2()', 'from scratch import d2').timeit(100)
        print t1, t2, t1 / t2

In this case, d2 is 50x faster than d1 on my box. Making some extremely dubious assumptions about transitivity of measurements, that would imply that d2 is twice as fast as matlab. Oh, and I didn't actually test that the output is correct....

-tim

> Thanks beforehand,
>
> Sebastian Beca
> Department of Computer Science Engineering
> University of Chile
>
> PD: I'm using NumPy 0.9.8, SciPy 0.4.8. I also understand I have
> ATLAS, BLAS and LAPACK all installed, but I haven't confirmed that.
>
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
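[Editor's note] Since Tim says he didn't actually verify that d2 produces correct output, a consistency check is easy to add. This is a sketch assuming only NumPy; the fixed seed and array shapes just mirror the thread's example.

```python
import numpy as np

# Reproducible stand-ins for the random A and B used above.
rng = np.random.RandomState(0)
A = rng.random_sample((4, 2))
B = rng.random_sample((1000, 2))

def d1():
    # Naive double loop, as in Sebastian's original post.
    d = np.zeros((4, 1000))
    for i in range(4):
        for j in range(1000):
            d[i, j] = np.sqrt(np.sum((A[i] - B[j]) ** 2))
    return d

def d2():
    # Row-at-a-time version using hypot, as in Tim's reply.
    d = np.zeros((4, 1000))
    for i in range(4):
        xy = A[i] - B
        d[i] = np.hypot(xy[:, 0], xy[:, 1])
    return d

print(np.allclose(d1(), d2()))
```

allclose returning True here confirms the two routines compute the same distances, so the speedup does not come at the cost of correctness.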
From: Matt H. <hy...@ma...> - 2006-06-16 13:04:47

I was trying to build matplotlib after installing the latest svn version of numpy (r2426), and compilation bailed on missing headers. It seems that the headers from build/src.linux*/numpy/core/ are not properly being installed during setup.py's install phase to $PYTHON_SITE_LIB/site-packages/numpy/core/include/numpy

Have I stumbled upon a bug, or do I need to do something other than "setup.py install"?

The files that do make it in are:
arrayobject.h
arrayscalars.h
ufuncobject.h

The files that do not make it in are:
config.h
__multiarray_api.h
__ufunc_api.h

The compilation problem was that arrayobject.h includes both config.h and __multiarray_api.h, but the files were not in place.

Thanks,
Matt

-- 
Matt Hyclak
Department of Mathematics
Department of Social Work
Ohio University
(740) 593-1263
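[Editor's note] A quick way to inspect which headers an installed numpy actually carries is sketched below. numpy.get_include() is assumed available (it may postdate the exact release discussed here), and the directory layout is whatever the installed version uses.

```python
import os
import numpy

# get_include() points at the directory that C extensions such as
# matplotlib compile against; the generated API headers should sit
# alongside arrayobject.h in its numpy/ subdirectory.
include_dir = numpy.get_include()
headers = sorted(os.listdir(os.path.join(include_dir, 'numpy')))

# Absence of the generated headers reproduces Matt's compile failure.
print('arrayobject.h' in headers)
```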
From: Tim H. <tim...@co...> - 2006-06-16 13:00:05

I don't have anything constructive to add at the moment, so I'll just throw out an unelucidated opinion:

+1 for longish names.
-1 for two sets of names.

-tim
From: Sven S. <sve...@gm...> - 2006-06-16 12:49:20

Alexandre Fayolle schrieb:
> On Fri, Jun 16, 2006 at 10:43:42AM +0200, Sven Schreiber wrote:
>>> Again, there is no defense for abbreviating linear_least_squares
>>> because it is unlikely to appear in an expression and waste valuable
>>> horizontal space.
>>
>> not true imho; btw, I would suggest "ols" (ordinary least squares),
>> which is in every textbook.
>
> Please, keep the zen of python in mind: Explicit is better than
> implicit.

True, but horizontal space *is* valuable (copied from above), and some of the suggested long names were a bit too long for my taste. Abbreviations will emerge anyway, the question is merely: Will numpy provide/recommend them (in addition to having long names maybe), or will it have to be done by somebody else, possibly resulting in many different sets of abbreviations for the same purpose?

Thanks,
Sven
From: Alexandre F. <ale...@lo...> - 2006-06-16 12:11:26

On Fri, Jun 16, 2006 at 10:43:42AM +0200, Sven Schreiber wrote:
>> Again, there is no defense for abbreviating linear_least_squares
>> because it is unlikely to appear in an expression and waste valuable
>> horizontal space.
>
> not true imho; btw, I would suggest "ols" (ordinary least squares),
> which is in every textbook.

Please, keep the zen of python in mind: Explicit is better than implicit.

-- 
Alexandre Fayolle                              LOGILAB, Paris (France)
Formations Python, Zope, Plone, Debian: http://www.logilab.fr/formations
Développement logiciel sur mesure: http://www.logilab.fr/services
Informatique scientifique: http://www.logilab.fr/science
From: Pierre B. de R. <pb...@cm...> - 2006-06-16 09:20:51

Hi,

I have an extension library which I wanted to interface with NumPy ... So I added the import_array() and all the needed stuff so that it now compiles. However, when I load the library I obtain:

ImportError: No module named core.multiarray

I didn't find anything on the net about it; what could be the problem?

Thanks,
Pierre
From: Sebastian B. <seb...@gm...> - 2006-06-16 09:19:09

Please ignore if you receive this.
From: Sebastian B. <seb...@gm...> - 2006-06-16 08:56:24

Hi,
I'm working with NumPy/SciPy on some algorithms and I've run into some important speed differences wrt Matlab 7. I've narrowed the main speed problem down to the operation of finding the euclidean distance between two matrices that share one dimension rank (dist in Matlab):

Python:

    def dtest():
        A = random( [4,2])
        B = random( [1000,2])

        d = zeros([4, 1000], dtype='f')
        for i in range(4):
            for j in range(1000):
                d[i, j] = sqrt( sum( (A[i] - B[j])**2 ) )
        return d

Matlab:

    A = rand( [4,2])
    B = rand( [1000,2])
    d = dist(A, B')

Running both of these 100 times, I've found the python version to run between 10-20 times slower. My question is if there is a faster way to do this? Perhaps I'm not using the correct functions/structures? Or is this as good as it gets?

Thanks beforehand,

Sebastian Beca
Department of Computer Science Engineering
University of Chile

PD: I'm using NumPy 0.9.8, SciPy 0.4.8. I also understand I have ATLAS, BLAS and LAPACK all installed, but I haven't confirmed that.
From: Sven S. <sve...@gm...> - 2006-06-16 08:44:01

Alexander Belopolsky schrieb:
> In my view it is more important that code is easy to read rather than
> easy to write. Interactive users will disagree, but in programming you
> write once and read/edit forever :).

The insight about this disagreement imho suggests a compromise (or call it a dual solution): Have verbose names, but also have good default abbreviations for those who prefer them. It would be unfortunate if numpy users were required to cook up their own abbreviations if they wanted them, because 1. it adds overhead, and 2. it would make other people's code more difficult to read.

> Again, there is no defense for abbreviating linear_least_squares
> because it is unlikely to appear in an expression and waste valuable
> horizontal space.

not true imho; btw, I would suggest "ols" (ordinary least squares), which is in every textbook.

Cheers,
Sven
From: David D. <dav...@lo...> - 2006-06-16 07:53:51

Hi,

On Fri, Jun 16, 2006 at 08:28:18AM +0200, Johannes Loehnert wrote:
> Hi,
>
> def dtest():
>     A = random( [4,2])
>     B = random( [1000,2])
>
>     # drawback: memory usage temporarily doubled
>     # solution see below
>     d = A[:, newaxis, :] - B[newaxis, :, :]

Unless I'm wrong, one can simplify a (very) little bit this line:

    d = A[:, newaxis, :] - B

>     # written as 3 expressions for more clarity
>     d = sqrt((d**2).sum(axis=2))
>     return d

-- 
David Douard                                   LOGILAB, Paris (France)
Formations Python, Zope, Plone, Debian: http://www.logilab.fr/formations
Développement logiciel sur mesure: http://www.logilab.fr/services
Informatique scientifique: http://www.logilab.fr/science
From: Konrad H. <kon...@la...> - 2006-06-16 06:54:05

> We are using Python's distutils, and I'm trying to figure out if
> there's a way in which I can have both distributions installed to one
> package directory, and then the __init__.py file would try to figure
> out which one to import on behalf of the user (i.e. it would try to
> figure out if the user had already imported NumPy, and if so, import
> the NumPy version of the module; otherwise, it will import the Numeric
> version of the module).
>
> This is turning out to be a bigger pain than I expected, so I'm
> turning to this group to see if anybody has a better idea, or should I
> just give up and release these two distributions separately?

I think that what you are aiming at can be done, but I'd rather not do it. Imagine a user who has both Numeric and NumPy installed, plus additional packages that use either one, without the user necessarily being aware of who imports what. For such a user, your package would appear to behave randomly, returning different array types depending on the order of imports of seemingly unrelated modules.

If you think it is useful to have both versions available at the same time, a better selection method would be the use of a suitable environment variable.

Konrad.
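[Editor's note] Konrad's environment-variable suggestion could look roughly like this in the package's __init__.py. The variable name MYPKG_NUMERIX and the default are hypothetical, not anything from the thread.

```python
import os

def pick_backend(env=None):
    # Choose the array backend from an environment variable,
    # defaulting to numpy.  Selection is explicit and deterministic,
    # unlike sniffing sys.modules for whatever was imported first.
    env = os.environ if env is None else env
    choice = env.get('MYPKG_NUMERIX', 'numpy').lower()
    if choice not in ('numpy', 'numeric'):
        raise ValueError('MYPKG_NUMERIX must be "numpy" or "numeric"')
    return choice

print(pick_backend({'MYPKG_NUMERIX': 'Numeric'}))  # explicit override
print(pick_backend({}))                            # default
```

The real __init__.py would then import the matching build of the module based on the returned string.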
From: Johannes L. <a.u...@gm...> - 2006-06-16 06:28:31

Hi,

def dtest():
    A = random( [4,2])
    B = random( [1000,2])

    # drawback: memory usage temporarily doubled
    # solution see below
    d = A[:, newaxis, :] - B[newaxis, :, :]
    # written as 3 expressions for more clarity
    d = sqrt((d**2).sum(axis=2))
    return d

def dtest_lowmem():
    A = random( [4,2])
    B = random( [1000,2])

    d = zeros([4, 1000], dtype='f')  # stores result
    # the loop should not impose much loss in speed
    for i in range(len(A)):
        dtemp = A[i, newaxis, :] - B[:, :]
        dtemp = sqrt((dtemp**2).sum(axis=1))
        d[i] = dtemp
    return d

(both functions untested....)

HTH,
Johannes
From: Michael S. <mic...@gm...> - 2006-06-16 06:26:39

Hi Sebastian,

I am not sure if there is a function already defined in numpy, but something like this may be what you are after:

def distance(a1, a2):
    return sqrt(sum((a1[:,newaxis,:] - a2[newaxis,:,:])**2, axis=2))

The general idea is to avoid loops if you want the code to execute fast. I hope this helps.

Mike

On 6/16/06, Sebastian Beca <seb...@gm...> wrote:
> Hi,
> I'm working with NumPy/SciPy on some algorithms and I've run into some
> important speed differences wrt Matlab 7. I've narrowed the main speed
> problem down to the operation of finding the euclidean distance
> between two matrices that share one dimension rank (dist in Matlab):
>
> Python:
> def dtest():
>     A = random( [4,2])
>     B = random( [1000,2])
>
>     d = zeros([4, 1000], dtype='f')
>     for i in range(4):
>         for j in range(1000):
>             d[i, j] = sqrt( sum( (A[i] - B[j])**2 ) )
>     return d
>
> Matlab:
>     A = rand( [4,2])
>     B = rand( [1000,2])
>     d = dist(A, B')
>
> Running both of these 100 times, I've found the python version to run
> between 10-20 times slower. My question is if there is a faster way to
> do this? Perhaps I'm not using the correct functions/structures? Or
> is this as good as it gets?
>
> Thanks beforehand,
>
> Sebastian Beca
> Department of Computer Science Engineering
> University of Chile
>
> PD: I'm using NumPy 0.9.8, SciPy 0.4.8. I also understand I have
> ATLAS, BLAS and LAPACK all installed, but I haven't confirmed that.
From: Sebastian B. <seb...@gm...> - 2006-06-16 06:09:04

Hi,
I'm working with NumPy/SciPy on some algorithms and I've run into some important speed differences wrt Matlab 7. I've narrowed the main speed problem down to the operation of finding the euclidean distance between two matrices that share one dimension rank (dist in Matlab):

Python:

    def dtest():
        A = random( [4,2])
        B = random( [1000,2])

        d = zeros([4, 1000], dtype='f')
        for i in range(4):
            for j in range(1000):
                d[i, j] = sqrt( sum( (A[i] - B[j])**2 ) )
        return d

Matlab:

    A = rand( [4,2])
    B = rand( [1000,2])
    d = dist(A, B')

Running both of these 100 times, I've found the python version to run between 10-20 times slower. My question is if there is a faster way to do this? Perhaps I'm not using the correct functions/structures? Or is this as good as it gets?

Thanks beforehand,

Sebastian Beca
Department of Computer Science Engineering
University of Chile

PD: I'm using NumPy 0.9.8, SciPy 0.4.8. I also understand I have ATLAS, BLAS and LAPACK all installed, but I haven't confirmed that.
From: David M. C. <co...@ph...> - 2006-06-16 05:54:41

On Wed, Jun 14, 2006 at 11:46:27PM -0400, Sasha wrote:
> On 6/14/06, David M. Cooke <co...@ph...> wrote:
> > After working with them for a while, I'm going to go on record and say
> > that I prefer the long names from Numeric and numarray (like
> > linear_least_squares, inverse_real_fft, etc.), as opposed to the short
> > names now used by default in numpy (lstsq, irefft, etc.). I know you
> > can get the long names from numpy.dft.old, numpy.linalg.old, etc., but
> > I think the long names are better defaults.
>
> I agree in spirit, but note that inverse_real_fft is still short for
> inverse_real_fast_fourier_transform. Presumably, fft is a proper noun
> in many people's vocabularies, but so may be lstsq, depending on who
> you ask.

I say "FFT", but I don't say "lstsq". I can find "FFT" in the index of a book of algorithms, but not "lstsq" (unless it was a specific implementation). Those are my two guiding ideas for what makes a good name here.

> I am playing devil's advocate here a little because personally, I
> always recommend the following as a compromise:
>
>     sinh = hyperbolic_sinus
>     ...
>     tanh(x) = sinh(x)/cosh(x)
>
> But the next question is where to put "sinh = hyperbolic_sinus": right
> before the expression using sinh? at the top of the module (import
> hyperbolic_sinus as sinh)? in the math library? If you pick the last
> option, do you need hyperbolic_sinus to begin with? If you pick any
> other option, how do you prevent others from writing sh =
> hyperbolic_sinus instead of sinh?

Pish. By the same reasoning, we don't need the number 2: we can write it as the successor of the successor of the additive identity :-)

> > Also, Numeric and numarray compatibility is increased by using the
> > long names: those two don't have the short ones.
> >
> > Fitting names into 6 characters went out of style decades ago. (I
> > think MS-BASIC running under CP/M on my Rainbow 100 had a restriction
> > like that!)
>
> Short names are still popular in scientific programming:
> <http://www.nsl.com/papers/style.pdf>.

That's 11 years old. The web was only a few years old at that time! There's been much work done on what makes a good programming style (Steve McConnell's "Code Complete" for instance is a good start).

> I am still +1 for keeping linear_least_squares and inverse_real_fft,
> but not just because abbreviations are bad as such - if an established
> acronym such as fft exists, we should be free to use it.

Ok, in summary, I'm seeing a bunch of "yes, long names please", but only your devil's advocate stance for no (and a +1 for real). I see that Travis fixed the real fft names back to 'irfft' and friends.

So, concrete proposal time:

- go back to the long names in numpy.linalg (linear_least_squares, eigenvalues, etc. -- those defined in numpy.linalg.old)
- of the new names, I could see keeping 'det' and 'svd': those are commonly used, although maybe 'SVD' instead?
- anybody got a better name than Heigenvalues? That H looks weird at the beginning.
- for numpy.dft, use the old names again. I could probably be persuaded that 'rfft' is ok. 'hfft' for the Hermite FFT is right out.
- numpy.random is the other "old package replacement", but it's fine (and better).

-- 
|>|\/|<
David M. Cooke    http://arbutus.physics.mcmaster.ca/dmc/
co...@ph...
From: David M. C. <co...@ph...> - 2006-06-16 05:29:34

On Thu, Jun 15, 2006 at 09:39:58PM -0500, Ted Horst wrote:
> The deprecated function in numpy.lib.utils is throwing a readonly
> attribute exception in the latest svn (2627). This is on Mac OS X
> (10.4.6) using the builtin python (2.3.5) during the import of
> fftpack. I'm guessing it's a 2.3/2.4 difference.
>
> Ted

Who gets the award for "breaks the build most often"? That'd be me! Sorry, I hardly ever test with 2.3. But I fixed it (and found a generator that had snuck in :)

-- 
|>|\/|<
David M. Cooke    http://arbutus.physics.mcmaster.ca/dmc/
co...@ph...
From: Sebastian B. <seb...@gm...> - 2006-06-16 04:32:40

Test post. Something isn't working....
From: Ted H. <ted...@ea...> - 2006-06-16 02:40:05

The deprecated function in numpy.lib.utils is throwing a readonly attribute exception in the latest svn (2627). This is on Mac OS X (10.4.6) using the builtin python (2.3.5) during the import of fftpack. I'm guessing it's a 2.3/2.4 difference.

Ted
From: Andrew S. <str...@as...> - 2006-06-16 02:21:42

Dear Mary,

I suggest using numpy and, at the boundaries, using numpy.asarray(yourinput), which will be a quick way to view the data as a numpy array, regardless of its original type. Otherwise, you could look at the matplotlib distribution to see how it's done to really support multiple array packages simultaneously.

Mary Haley wrote:
> Hi all,
>
> We are getting ready to release some Python software that supports
> both NumPy and Numeric.
>
> As we have it now, if somebody wanted to use our software with NumPy,
> they would have to download the binary distribution that was built
> with NumPy and install that. Otherwise, they have to download the
> binary distribution that was built with Numeric and install that.
>
> We are using Python's distutils, and I'm trying to figure out if
> there's a way in which I can have both distributions installed to one
> package directory, and then the __init__.py file would try to figure
> out which one to import on behalf of the user (i.e. it would try to
> figure out if the user had already imported NumPy, and if so, import
> the NumPy version of the module; otherwise, it will import the Numeric
> version of the module).
>
> This is turning out to be a bigger pain than I expected, so I'm
> turning to this group to see if anybody has a better idea, or should I
> just give up and release these two distributions separately?
>
> Thanks,
>
> --Mary
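[Editor's note] The asarray-at-the-boundary pattern Andrew describes might look like this; column_means is a made-up example function, not part of Mary's package.

```python
import numpy as np

def column_means(data):
    # Accept any array-like input (nested lists, tuples, arrays from
    # another package) and view it as a numpy array; asarray makes no
    # copy when the input is already a numpy array of matching dtype.
    a = np.asarray(data, dtype=float)
    return a.mean(axis=0)

print(column_means([[1, 2], [3, 4]]))   # plain nested lists work
print(column_means(np.ones((3, 2))))    # as do existing arrays
```

Doing the conversion once at the public entry points keeps the internals free to assume numpy semantics everywhere.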
From: Simon B. <si...@ar...> - 2006-06-16 00:42:00

On Thu, 15 Jun 2006 15:56:56 -0700 (PDT), JJ <jos...@ya...> wrote:
> In matlab the command is quite simple:
>
> rank([d(:,i),d(:,j)])

You could use the Cauchy-Schwarz inequality, which becomes an equality iff the rank above is 1:
http://planetmath.org/encyclopedia/CauchySchwarzInequality.html

Simon.

-- 
Simon Burton, B.Sc.
Licensed PO Box 8066
ANU Canberra 2601
Australia
Ph. 61 02 6249 6940
http://arrowtheory.com
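[Editor's note] Simon's idea can be made concrete: |<x, y>| <= ||x||*||y|| holds with equality iff x and y are linearly dependent, i.e. the two-column matrix [x y] has rank 1. A sketch (the tolerance is a hypothetical choice for floating-point comparison):

```python
import numpy as np

def columns_dependent(x, y, tol=1e-10):
    # True iff x and y are (numerically) parallel, which by the
    # Cauchy-Schwarz equality case means rank([x y]) == 1.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    lhs = abs(np.dot(x, y))
    rhs = np.sqrt(np.dot(x, x) * np.dot(y, y))
    return abs(lhs - rhs) <= tol * max(rhs, 1.0)

x = np.array([1.0, 2.0, 3.0])
print(columns_dependent(x, 2 * x))                      # parallel pair
print(columns_dependent(x, np.array([1.0, 0.0, 0.0])))  # independent pair
```

Note this only distinguishes rank 1 from rank 2 for nonzero columns; the all-zero case would need separate handling.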
From: Tim H. <tim...@co...> - 2006-06-16 00:27:54

JJ wrote:
> Hello. I am a matlab user learning the syntax of numpy. I'd like to
> check that I am not missing some easy steps on column selection and
> concatenation. The example task is to determine if two columns selected
> out of an array are of full rank (rank 2). Let's say we have an array d
> that is size (10,10) and we select the ith and jth columns to test
> their rank. In matlab the command is quite simple:
>
> rank([d(:,i),d(:,j)])
>
> In numpy, the best I have thought of so far is:
>
> linalg.lstsq(transpose(vstack((d[:,i],d[:,j]))), \
>     ones((shape(transpose(vstack((d[:,i],d[:,j]))))[0],1),'d'))[2]
>
> I'm thinking there must be a less awkward way. Any ideas?

This isn't really my field, so this could be wrong, but try:

    linalg.lstsq(d[:,[i,j]], ones_like(d[:,[i,j]]))[2]

and see if that works for you.

-tim
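[Editor's note] An alternative to going through lstsq is to compute the rank of the two selected columns directly from their singular values; this is a sketch, and the tolerance rule is a common convention rather than anything from the thread.

```python
import numpy as np

def column_pair_rank(d, i, j):
    # Rank of the (n, 2) matrix formed by columns i and j: count
    # singular values above a scale-aware tolerance.
    m = d[:, [i, j]]
    s = np.linalg.svd(m, compute_uv=False)
    tol = s.max() * max(m.shape) * np.finfo(float).eps
    return int((s > tol).sum())

d = np.zeros((10, 10))
d[:, 0] = np.arange(10.0)
d[:, 1] = 2.0 * d[:, 0]      # linearly dependent on column 0
d[:, 2] = np.ones(10)

print(column_pair_rank(d, 0, 1))  # dependent pair
print(column_pair_rank(d, 0, 2))  # independent pair
```

Fancy indexing with `d[:, [i, j]]` replaces the vstack/transpose gymnastics in JJ's original expression.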
From: JJ <jos...@ya...> - 2006-06-15 22:57:02

Hello. I am a matlab user learning the syntax of numpy. I'd like to check that I am not missing some easy steps on column selection and concatenation. The example task is to determine if two columns selected out of an array are of full rank (rank 2). Let's say we have an array d that is size (10,10) and we select the ith and jth columns to test their rank. In matlab the command is quite simple:

    rank([d(:,i),d(:,j)])

In numpy, the best I have thought of so far is:

    linalg.lstsq(transpose(vstack((d[:,i],d[:,j]))), \
        ones((shape(transpose(vstack((d[:,i],d[:,j]))))[0],1),'d'))[2]

I'm thinking there must be a less awkward way. Any ideas?

JJ

__________________________________________________
Do You Yahoo!?
Tired of spam? Yahoo! Mail has the best spam protection around
http://mail.yahoo.com
From: Mary H. <ha...@uc...> - 2006-06-15 21:38:10

Hi all,

We are getting ready to release some Python software that supports both NumPy and Numeric.

As we have it now, if somebody wanted to use our software with NumPy, they would have to download the binary distribution that was built with NumPy and install that. Otherwise, they have to download the binary distribution that was built with Numeric and install that.

We are using Python's distutils, and I'm trying to figure out if there's a way in which I can have both distributions installed to one package directory, and then the __init__.py file would try to figure out which one to import on behalf of the user (i.e. it would try to figure out if the user had already imported NumPy, and if so, import the NumPy version of the module; otherwise, it will import the Numeric version of the module).

This is turning out to be a bigger pain than I expected, so I'm turning to this group to see if anybody has a better idea, or should I just give up and release these two distributions separately?

Thanks,

--Mary
From: <hu...@ya...> - 2006-06-15 21:06:21
|
Just a guess: you're reading some fits file with pyfits but you didn't declare the variable NUMERIX for numpy (with the beta version of pyfits), or your script is calling another script which is using numarray. I had both problems last week: pyfits with a mix of numarray/numpy, and a script reading some data and returning it like an array.

N.

On Thursday, 15 June 2006 at 06:35, Eric Emsellem wrote:
> Hi,
>
> I have written a number of small modules where I now systematically use
> numpy.
>
> I have in principle used the latest versions of the different
> array/Science modules (scipy, numpy, ..) but still at some point during
> a selection, it crashes on numpy because it seems that the array
> corresponds to "numarray" arrays.
>
> e.g.:
> ##################################
> selection = (rell >= 1.) * (rell < ES0.maxEFFR[indgal])
> ##################################
> ### rell is an array of reals and ES0.maxEFFR[indgal] is a real number.
>
> gives the error:
> ==========
> /usr/local/lib/python2.4/site-packages/numarray/numarraycore.py:376:
> UserWarning: __array__ returned non-NumArray instance
>   _warnings.warn("__array__ returned non-NumArray instance")
> /usr/local/lib/python2.4/site-packages/numarray/ufunc.py in
> _cache_miss2(self, n1, n2, out)
>     919         (in1, in2), inform, scalar = _inputcheck(n1, n2)
>     920
> --> 921         mode, win1, win2, wout, cfunc, ufargs = \
>     922             self._setup(in1, in2, inform, out)
>     923
>
> /usr/local/lib/python2.4/site-packages/numarray/ufunc.py in _setup(self,
> in1, in2, inform, out)
>     965         if out is None: wout = in2.new(outtypes[0])
>     966         if inform == "vv":
> --> 967             intypes = (in1._type, in2._type)
>     968             inarr1, inarr2 = in1._dualbroadcast(in2)
>     969         fform, convtypes, outtypes, cfunc =
>             self._typematch_N(intypes, inform)
>
> AttributeError: 'numpy.ndarray' object has no attribute '_type'
> ==================================================
>
> QUESTION 1: Any hint on where numarray could still be appearing?
>
> QUESTION 2: how would you make a selection using "and" and "or" such as:
> selection = (condition1) "and" (condition2 "or" condition3)
> so that "selection" contains 0 and 1 according to the right hand side.
>
> Thanks,
>
> Eric
>
> P.S.: my config is:
>
> matplotlib version 0.87.3
> verbose.level helpful
> interactive is False
> platform is linux2
> numerix numpy 0.9.9.2624
> font search path ['/usr/local/lib/python2.4/site-packages/matplotlib/mpl-data']
> backend GTKAgg version 2.8.2
> Python 2.4.2 (#1, May 2 2006, 08:13:46)
> IPython 0.7.2 -- An enhanced Interactive Python.
>
> I am using numerix = numpy in matplotlibrc. I am also using NUMERIX =
> numpy when building pyfits.
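[Editor's note] On Eric's QUESTION 2: elementwise logic on arrays is usually written with `&` and `|` rather than Python's `and`/`or`, which don't operate elementwise. A sketch with hypothetical sample data (note the parentheses: `&`/`|` bind tighter than comparisons):

```python
import numpy as np

# Sample values standing in for Eric's rell array.
rell = np.array([0.5, 1.0, 2.5, 4.0])

cond1 = rell >= 1.0
cond2 = rell < 3.0
cond3 = rell > 3.5

# condition1 and (condition2 or condition3), elementwise.
selection = cond1 & (cond2 | cond3)

# astype(int) gives the 0/1 array Eric asks for; the boolean array
# itself can also be used directly for indexing.
print(selection.astype(int))
```

Multiplying 0/1 masks, as in Eric's `(rell >= 1.) * (...)` snippet, computes the same "and", but `&`/`|` state the intent more directly.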