From: <joris@st...> - 2006-06-02 23:03:59

[DC]: My only concern with this is numpy is positioned for a wider audience:
[DC]: everybody who needs arrays, and the extra speed that numpy gives, but
[DC]: doesn't need what scipy gives. So merging the two could lead to
[DC]: confusion on what provides what, and what you need to do which.

I completely agree with this. SciPy and NumPy on one site, yes, but not so
interwoven that it gets confusing or even plain useless for NumPy-only users.

J.

Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm
From: Travis Oliphant <oliphant.travis@ie...> - 2006-06-02 22:28:40

I've been busy with NumPy and it has resulted in some C-API changes. So,
after checking out a new SVN version of NumPy you will need to rebuild
extension modules (it stinks for me too -- SciPy takes a while to build).

The API changes have made it possible to allow user-defined data-types to
optionally participate in the coercion and casting infrastructure.
Previously, casting was limited to built-in data-types. Now, there is a
mechanism for users to define casting to and from their own data-type (and
whether or not it can be done safely, and whether or not a particular kind
of user-defined scalar can be cast -- remember a scalar mixed with an array
has a different set of casting rules).

This should make user-defined data-types much more useful, but the facility
needs to be tested. Does anybody have a data-type they want to add to try
out the new system? The restriction on adding another data-type is that it
must have a fixed element size (a variable-precision float, for example,
would have to use a pointer to the actual structure as the "data-type").

-Travis
From: Christopher Barker <Chris.Barker@no...> - 2006-06-02 22:09:39

Rob Hooft wrote:
> Christopher Barker wrote:
>> Did you time them? And yours only handles integers.
>
> Yes I did, check the attachment of my previous message for a python
> module to time the three,

Sorry about that, I didn't notice it.

> with completely different results from yours (I'm using Numeric).

I ran it and got similar results to mine. Frankly, for the size problems
I'm dealing with, they are all about the same, except under numarray,
where mine is fastest, yours second, and Robert's third -- by a wide
margin! Another reason I'm glad numpy is built on the Numeric code:

Using numarray
My way took:       0.394555 seconds
Robert's way took: 20.590545 seconds
Rob's way took:    4.802346 seconds
Number of X: 201
Number of Y: 241

Using Numeric
My way took:       0.593319 seconds
Robert's way took: 0.523235 seconds
Rob's way took:    0.579756 seconds

Robert's way has a pretty decent edge under numpy:

Using numpy
My way took:       0.686741 seconds
Robert's way took: 0.357887 seconds
Rob's way took:    0.796977 seconds

And I'm using time(), rather than clock() now, though it didn't really
change anything. I suppose I should figure out timeit.py.

Thanks for all your help on this,

-Chris

--
Christopher Barker, Ph.D.
Oceanographer
NOAA/OR&R/HAZMAT          (206) 526-6959 voice
7600 Sand Point Way NE    (206) 526-6329 fax
Seattle, WA 98115         (206) 526-6317 main reception
Chris.Barker@...
From: Rob Hooft <rob@ho...> - 2006-06-02 21:06:34

Christopher Barker wrote:
> Did you time them? And yours only handles integers.

Yes I did, check the attachment of my previous message for a python module
to time the three, with completely different results from yours (I'm using
Numeric). The attachment also contains a float-ified version of my
demonstration.

Rob
--
Rob W.W. Hooft  ||  rob@...  ||  http://www.hooft.net/people/rob/
From: Robert Kern <robert.kern@gm...> - 2006-06-02 20:42:54

Alan G Isaac wrote:
> On Fri, 02 Jun 2006, Robert Kern apparently wrote:
>
>> My point is that there is no need to change rand() and randn() to the
>> "new" interface. The "new" interface is already there: random.random()
>> and random.standard_normal().
>
> Yes of course; that has always been your point.
> In an earlier post, I indicated that this is your usual response.
>
> What your point does not address:
> the question about rand and randn keeps cropping up on this list.
>
> My point is:
> numpy should take a step so that this question goes away,
> rather than maintain the status quo and see it crop up continually.
> (I.e., its recurrence should be understood to signal a problem.)

I'll check in a change to the docstring later today.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
From: Alan G Isaac <aisaac@am...> - 2006-06-02 20:12:35

>> On Fri, 02 Jun 2006, Robert Kern apparently wrote:
>>> Changing the API of rand() and randn() doesn't solve any
>>> problem. Removing them might.

> Alan G Isaac wrote:
>> I think this is too blunt an argument. For example,
>> use of the old interface might issue a deprecation warning.
>> This would make it very likely that all new code use the new
>> interface.

On Fri, 02 Jun 2006, Robert Kern apparently wrote:
> My point is that there is no need to change rand() and randn() to the
> "new" interface. The "new" interface is already there: random.random()
> and random.standard_normal().

Yes of course; that has always been your point.
In an earlier post, I indicated that this is your usual response.

What your point does not address:
the question about rand and randn keeps cropping up on this list.

My point is:
numpy should take a step so that this question goes away,
rather than maintain the status quo and see it crop up continually.
(I.e., its recurrence should be understood to signal a problem.)

Cheers,
Alan

PS I'll shut up about this now.
From: Robert Kern <robert.kern@gm...> - 2006-06-02 19:57:16

Alan G Isaac wrote:
> On Fri, 02 Jun 2006, Robert Kern apparently wrote:
>
>> Changing the API of rand() and randn() doesn't solve any
>> problem. Removing them might.
>
> I think this is too blunt an argument. For example,
> use of the old interface might issue a deprecation warning.
> This would make it very likely that all new code use the new
> interface.

My point is that there is no need to change rand() and randn() to the
"new" interface. The "new" interface is already there: random.random()
and random.standard_normal().

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
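[For readers following the rand/randn thread: the two interfaces Robert
contrasts differ only in how the shape is passed. A minimal sketch, using
functions that exist in numpy.random:]

```python
import numpy as np

# Convenience interface: shape given as separate positional arguments.
a = np.random.rand(3, 4)
c = np.random.randn(3, 4)

# "Consistent" interface: shape given as a single tuple, matching
# functions like zeros() and ones().
b = np.random.random((3, 4))
d = np.random.standard_normal((3, 4))

# Both pairs draw from the same distributions and return the same shape.
assert a.shape == b.shape == c.shape == d.shape == (3, 4)
```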
From: David M. Cooke <cookedm@ph...> - 2006-06-02 19:56:34

On Fri, 2 Jun 2006 19:09:01 +0200, Joris De Ridder <joris@...> wrote:
> Just to be sure, what exactly is affected when one uses the slower
> algorithms when neither BLAS nor LAPACK is installed? For sure it
> will affect almost every function in numpy.linalg, as they use
> LAPACK_lite. And I guess that in numpy.core the dot() function
> uses the lite numpy/core/blasdot/_dotblas.c routine? Any other
> numpy functions that are affected?

Using a better default dgemm for matrix multiplication when an optimized
BLAS isn't available has been on my todo list for a while. I think it can
be sped up by a large amount on a generic machine by using blocking of
the matrices.

Personally, I perceive no difference between my g77-compiled LAPACK and
the gcc-compiled, f2c'd routines in lapack_lite, if an optimized BLAS is
used. And lapack_lite has fewer bugs than the version of LAPACK available
off of netlib.org, as I used the latest patches I could scrounge up
(mostly from Debian).

--
David M. Cooke                  http://arbutus.physics.mcmaster.ca/dmc/
cookedm@...
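[The blocking David mentions can be sketched in pure Python/NumPy. This is
illustrative only -- a real dgemm would block in C to keep tiles in cache,
and the block size here is an arbitrary choice, not a tuned value:]

```python
import numpy as np

def blocked_matmul(a, b, bs=64):
    """Multiply a (m,k) matrix by a (k,n) matrix, accumulating over
    square tiles of side bs. In a compiled implementation, blocking
    keeps each tile resident in cache; here it only shows the loop
    structure."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((m, n), dtype=np.result_type(a, b))
    for i in range(0, m, bs):
        for j in range(0, n, bs):
            for p in range(0, k, bs):
                # Accumulate the contribution of one tile pair.
                c[i:i+bs, j:j+bs] += np.dot(a[i:i+bs, p:p+bs],
                                            b[p:p+bs, j:j+bs])
    return c

a = np.random.rand(100, 80)
b = np.random.rand(80, 90)
assert np.allclose(blocked_matmul(a, b, bs=32), np.dot(a, b))
```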
From: David M. Cooke <cookedm@ph...> - 2006-06-02 19:47:03

On Fri, 2 Jun 2006 10:27:45 +0200, Joris De Ridder <joris@...> wrote:
> [CB]: I was reacting to a post a while back that suggested pointing
> [CB]: people searching for numpy to the main scipy page, which I did
> [CB]: not think was a good idea.
>
> That would be my post :o)
>
> The reasons why I suggested this are
>
> 1) http://www.scipy.org is at the moment the most informative site on numpy
> 2) of all sites http://www.scipy.org looks currently most professional
> 3) a wiki-style site where everyone can contribute is really great
> 4) I like information to be centralized. Having to check pointers,
>    docs and cookbooks on two different sites is inefficient
> 5) Two different sites inevitably implies some duplication of the work
>
> Just as you, I am not (yet) a scipy user, I only have numpy installed
> at the moment. The principal reason is the same as the one you
> mentioned. But for me this is an extra motivation to merge scipy.org
> and numpy.org:
>
> 6) merging scipy.org and numpy.org will hopefully lead to a larger
>    SciPy community, and this in turn hopefully leads to user-friendly
>    installation procedures.

My only concern with this is numpy is positioned for a wider audience:
everybody who needs arrays, and the extra speed that numpy gives, but
doesn't need what scipy gives. So merging the two could lead to confusion
on what provides what, and what you need to do which. For instance, I
don't want potential numpy users to be directed to scipy.org and be
turned off by all the extra stuff it seems to have (that scipy, not
numpy, provides). But I think this can be handled if we approach
scipy.org as serving both purposes, and I think this is the best option,
considering how much crossover there is.

--
David M. Cooke                  http://arbutus.physics.mcmaster.ca/dmc/
cookedm@...
From: Alan G Isaac <aisaac@am...> - 2006-06-02 19:26:53

On Fri, 02 Jun 2006, Robert Kern apparently wrote:
> Changing the API of rand() and randn() doesn't solve any
> problem. Removing them might.

I think this is too blunt an argument. For example, use of the old
interface might issue a deprecation warning. This would make it very
likely that all new code use the new interface.

I would also be fine with demoting these to the Numeric compatibility
module, although I find that the inferior choice (since it means a loss
of convenience).

Unless one of these changes is made, new users will **forever** be asking
this same question. And either way, making the sacrifices needed for
greater consistency seems like a good idea *before* 1.0.

Cheers,
Alan
From: Robert Kern <robert.kern@gm...> - 2006-06-02 18:51:36

Alan G Isaac wrote:
> On Fri, 02 Jun 2006, Sven Schreiber apparently wrote:
>
>> why doesn't rand accept a shape tuple as argument? I find
>> the difference between the argument types of rand and (for
>> example) zeros somewhat confusing. ... Can anybody offer
>> an intuition/explanation?
>
> Backward compatibility, I believe. You are not alone in
> finding this odd and inconsistent. I am hoping for a change
> by 1.0, but I am not very hopeful.
>
> Robert always points out that if you want the consistent
> interface, you can always import functions from the 'random'
> module. I have never been able to understand this as
> a response to the point you are making.
>
> I take it the core argument goes something like this:
> - rand and randn are convenience functions
>   * if you do not find them convenient, don't use them
> - they are in wide use, so it is too late to change them
> - testing the first argument to see whether it is a tuple or
>   an int is so aesthetically objectionable that its ugliness
>   outweighs the benefits users might get from access to
>   a more consistent interface

My argument does not include the last two points.

- They are in wide use because they are convenient and useful.
- Changing rand() and randn() to accept a tuple like random.random() and
  random.standard_normal() does not improve anything. Instead, it adds
  confusion for users who are reading code and seeing the same function
  being called in two different ways.
- Users who want to see numpy *only* expose a single calling scheme for
  top-level functions should instead ask for rand() and randn() to be
  removed from the top numpy namespace.
  * Backwards compatibility might prevent this.

> This is one place where I believe a forward looking (i.e.,
> think about new users) vision would force a small change in
> these *convenience* functions that will have payoffs both in
> ease of use and in eliminating this recurrent question from
> discussion lists.

*Changing* the API of rand() and randn() doesn't solve any problem.
*Removing* them might.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
From: Robert Kern <robert.kern@gm...> - 2006-06-02 18:35:54

Christopher Barker wrote:
> Robert Kern wrote:
>> x = repeat(x, ny)
>> y = concatenate([y] * nx)
>> points = transpose([x, y])
>
> Somehow I never think to use repeat. And why use repeat for x and
> concatenate for y?

I guess you could use repeat() on y[newaxis] and then flatten it.

  y = repeat(y[newaxis], nx).ravel()

> Using numpy
> The Numpy way took: 0.020000 seconds
> My way took: 0.010000 seconds
> Robert's way took: 0.020000 seconds
> Using Numeric
> My way took: 0.010000 seconds
> Robert's way took: 0.020000 seconds
> Using numarray
> My way took: 0.070000 seconds
> Robert's way took: 0.120000 seconds
> Number of X: 4
> Number of Y: 3

Those timings look real funny. I presume you are using a UNIX and
time.clock(). Don't do that. It's a very poor timer on UNIX. Use
time.time() on UNIX and time.clock() on Windows. Even better, please use
timeit.py instead. Tim Peters did a lot of work to make timeit.py do the
right thing.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
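[A minimal timeit usage along the lines Robert suggests. The statement and
array size are arbitrary placeholders; timeit handles timer selection and
repetition itself, sidestepping the clock()/time() portability issue:]

```python
import timeit

# Setup code runs once per timing loop; the statement is what gets timed.
setup = "import numpy as np; x = np.arange(10000.0)"
t = timeit.Timer("np.repeat(x, 3)", setup=setup)

# repeat() returns one total per run; take the minimum, which is the
# least noisy estimate of the statement's cost.
best = min(t.repeat(repeat=3, number=100))
print("best of 3 runs of 100 calls: %.6f s" % best)
```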
From: Jonathan Taylor <jonathan.taylor@st...> - 2006-06-02 18:08:08

I was wondering if there was an easy way to get searchsorted to be
"right-continuous" instead of "left-continuous". By continuity, I am
talking about the continuity of the function "count" below...

>>> import numpy as N
>>> x = N.arange(20)
>>> x.searchsorted(9)
9
>>> def count(u):
...     return x.searchsorted(u)
...
>>> count(9)
9
>>> count(9.01)
10

Thanks,
Jonathan

--
I'm part of the Team in Training: please support our efforts for the
Leukemia and Lymphoma Society!
http://www.active.com/donate/tntsvmb/tntsvmbJTaylor
GO TEAM!!!

Jonathan Taylor              Tel: 650.723.9230
Dept. of Statistics          Fax: 650.725.8977
Sequoia Hall, 137            www-stat.stanford.edu/~jtaylo
390 Serra Mall
Stanford, CA 94305
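[One way to get the right-continuous behaviour Jonathan asks about is
searchsorted's side keyword: side='right' returns the insertion point
after equal elements, i.e. the count of elements <= u:]

```python
import numpy as np

x = np.arange(20)

# side='left' (the default): insertion point before equal elements,
# so count jumps *at* 9 -- left-continuous.
assert x.searchsorted(9) == 9

# side='right': insertion point after equal elements, so count(9)
# already includes the element equal to 9 -- right-continuous.
assert x.searchsorted(9, side='right') == 10
assert x.searchsorted(9.01, side='right') == 10
```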
From: Alan G Isaac <aisaac@am...> - 2006-06-02 17:34:15

On Fri, 02 Jun 2006, Sven Schreiber apparently wrote:
> why doesn't rand accept a shape tuple as argument? I find
> the difference between the argument types of rand and (for
> example) zeros somewhat confusing. ... Can anybody offer
> an intuition/explanation?

Backward compatibility, I believe. You are not alone in finding this odd
and inconsistent. I am hoping for a change by 1.0, but I am not very
hopeful.

Robert always points out that if you want the consistent interface, you
can always import functions from the 'random' module. I have never been
able to understand this as a response to the point you are making.

I take it the core argument goes something like this:
- rand and randn are convenience functions
  * if you do not find them convenient, don't use them
- they are in wide use, so it is too late to change them
- testing the first argument to see whether it is a tuple or an int is
  so aesthetically objectionable that its ugliness outweighs the benefits
  users might get from access to a more consistent interface

This is one place where I believe a forward looking (i.e., think about
new users) vision would force a small change in these *convenience*
functions that will have payoffs both in ease of use and in eliminating
this recurrent question from discussion lists.

Cheers,
Alan Isaac
From: Travis Oliphant <oliphant.travis@ie...> - 2006-06-02 17:31:25

Eric Jonas wrote:
> Is there some way, either within numpy or at build-time, to verify
> you're using BLAS/LAPACK? Is there one we should be using?

Check to see if the id of numpy.dot is the same as
numpy.core.multiarray.dot.

-Travis
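[Travis's check can be written out as a small sketch. Treat the module
path as an assumption: in the numpy of this era multiarray lived at
numpy.core.multiarray, while later releases moved it, so a fallback is
included. The identity test itself is only meaningful on builds where
_dotblas replaces the plain dot:]

```python
import numpy as np

try:
    from numpy.core import multiarray   # classic location (this era)
except ImportError:
    from numpy._core import multiarray  # later layouts (assumption)

# If an optimized BLAS was found at build time, numpy.dot is replaced
# by the _dotblas wrapper, so it is no longer the plain multiarray.dot.
uses_blas = np.dot is not multiarray.dot
print("numpy.dot appears BLAS-accelerated:", uses_blas)
```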
From: Eric Jonas <jonas@mw...> - 2006-06-02 17:28:22

Is there some way, either within numpy or at build-time, to verify you're
using BLAS/LAPACK? Is there one we should be using?

...Eric

On Fri, 2006-06-02 at 11:19 -0600, Travis Oliphant wrote:
> Joris De Ridder wrote:
> > Just to be sure, what exactly is affected when one uses the slower
> > algorithms when neither BLAS nor LAPACK is installed? For sure it
> > will affect almost every function in numpy.linalg, as they use
> > LAPACK_lite. And I guess that in numpy.core the dot() function
> > uses the lite numpy/core/blasdot/_dotblas.c routine? Any other
> > numpy functions that are affected?
>
> convolve could also be affected (the basic internal _dot function gets
> replaced for FLOAT, DOUBLE, CFLOAT, and CDOUBLE). I think that's the
> only function that uses dot internally.
>
> In the future we hope to be optimizing ufuncs as well.
>
> -Travis

_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@...
https://lists.sourceforge.net/lists/listinfo/numpy-discussion
From: Francesc Altet <faltet@ca...> - 2006-06-02 17:19:50

A Divendres 02 Juny 2006 19:07, Travis Oliphant va escriure:
> Robert Kern wrote:
> > Filip Wasilewski wrote:
> >> So the next question is what's the difference between matrixmultiply
> >> and dot in NumPy?
> >
> > matrixmultiply is a deprecated compatibility name. Always use dot.
> > dot will get replaced with the optimized dotblas implementation when
> > an optimized BLAS is available. matrixmultiply will not (probably not
> > intentionally, but I'm happy with the current situation).
>
> It's true that matrixmultiply has been deprecated for some time (at
> least 8 years...) The basic dot function gets overwritten with a
> BLAS-optimized version but the matrixmultiply does not get changed. So
> replace matrixmultiply with dot. It wasn't an intentional thing, but
> perhaps it will finally encourage people to always use dot.

So, why not issue a DeprecationWarning on a matrixmultiply function use?

--
Francesc Altet       http://www.carabos.com/
Cárabos Coop. V.     Enjoy Data
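[Francesc's suggestion could be prototyped with the standard warnings
module. This wrapper is hypothetical, not numpy API -- it just shows how
a deprecated alias can keep working while nudging users toward dot:]

```python
import warnings
import numpy as np

def matrixmultiply(a, b):
    """Hypothetical shim: behave exactly like dot, but warn on use."""
    warnings.warn("matrixmultiply is deprecated; use dot instead",
                  DeprecationWarning, stacklevel=2)
    return np.dot(a, b)

# Demonstrate that the warning fires and the result is unchanged.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = matrixmultiply(np.eye(2), np.ones((2, 2)))

assert np.allclose(result, np.ones((2, 2)))
assert caught and issubclass(caught[0].category, DeprecationWarning)
```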
From: Travis Oliphant <oliphant.travis@ie...> - 2006-06-02 17:19:17

Joris De Ridder wrote:
> Just to be sure, what exactly is affected when one uses the slower
> algorithms when neither BLAS nor LAPACK is installed? For sure it
> will affect almost every function in numpy.linalg, as they use
> LAPACK_lite. And I guess that in numpy.core the dot() function
> uses the lite numpy/core/blasdot/_dotblas.c routine? Any other
> numpy functions that are affected?

convolve could also be affected (the basic internal _dot function gets
replaced for FLOAT, DOUBLE, CFLOAT, and CDOUBLE). I think that's the only
function that uses dot internally.

In the future we hope to be optimizing ufuncs as well.

-Travis
From: Joris De Ridder <joris@st...> - 2006-06-02 17:09:13

Just to be sure, what exactly is affected when one uses the slower
algorithms when neither BLAS nor LAPACK is installed? For sure it will
affect almost every function in numpy.linalg, as they use LAPACK_lite.
And I guess that in numpy.core the dot() function uses the lite
numpy/core/blasdot/_dotblas.c routine? Any other numpy functions that
are affected?

Joris

On Friday 02 June 2006 16:16, George Nurser wrote:
[GN]: Yes, using numpy.dot I get 250ms, numpy.matrixmultiply 11.8s,
[GN]: while a sans-BLAS Numeric.matrixmultiply takes 12s.
[GN]:
[GN]: The first 100 results from numpy.dot and numpy.matrixmultiply are
[GN]: identical...
[GN]:
[GN]: Use dot ;)
[GN]:
[GN]: George.

Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm
From: Travis Oliphant <oliphant.travis@ie...> - 2006-06-02 17:08:46

Joris De Ridder wrote:
> On Friday 02 June 2006 14:58, Eric Jonas wrote:
> [EJ]: Hello! I've been using numeric for a while, and the recent list
> [EJ]: traffic prompted me to finally migrate all my old code. On a
> [EJ]: whim, we were benchmarking numpy vs numeric and have been led to
> [EJ]: the conclusion that numpy is at least 50x slower; a 1000x1000
> [EJ]: matmul takes 16 sec in numpy but 300 ms in numeric.
>
> You mean the other way around?
>
> I also tested numpy vs numarray, and numarray seems to be roughly 3
> times faster than numpy for your particular test case.

Please post your test cases. We are trying to remove any slowness, but
need testers to do it.

-Travis
From: Travis Oliphant <oliphant.travis@ie...> - 2006-06-02 17:07:40

Robert Kern wrote:
> Filip Wasilewski wrote:
>> So the next question is what's the difference between matrixmultiply
>> and dot in NumPy?
>
> matrixmultiply is a deprecated compatibility name. Always use dot. dot
> will get replaced with the optimized dotblas implementation when an
> optimized BLAS is available. matrixmultiply will not (probably not
> intentionally, but I'm happy with the current situation).

It's true that matrixmultiply has been deprecated for some time (at least
8 years...) The basic dot function gets overwritten with a BLAS-optimized
version but the matrixmultiply does not get changed. So replace
matrixmultiply with dot. It wasn't an intentional thing, but perhaps it
will finally encourage people to always use dot.

-Travis
From: Christopher Barker <Chris.Barker@no...> - 2006-06-02 16:57:29

Robert Kern wrote:
>> As I need Numeric and numarray compatibility at this point, it seems the
> Ah. It might help if you said that up front.

Yes, it would, but that would mean accepting that I need to keep backward
compatibility -- I'm still hoping!

> x = arange(minx, maxx+step, step) # oy.
> y = arange(miny, maxy+step, step)
>
> nx = len(x)
> ny = len(y)
>
> x = repeat(x, ny)
> y = concatenate([y] * nx)
> points = transpose([x, y])

Somehow I never think to use repeat. And why use repeat for x and
concatenate for y?

Rob Hooft wrote:
> How about something like:
>
> >>> k = Numeric.repeat(range(0, 5+1), Numeric.ones(6)*7)
> >>> l = Numeric.resize(range(0, 6+1), [42])
> >>> zone = Numeric.concatenate((k[:, Numeric.NewAxis],
> ...                             l[:, Numeric.NewAxis]), axis=1)
>
> This is the same speed as Robert Kern's solution for large arrays, a bit
> slower for small arrays. Both are a little faster than yours.

Did you time them? And yours only handles integers. This is my timing:

For small arrays:

Using numpy
The Numpy way took:  0.020000 seconds
My way took:         0.010000 seconds
Robert's way took:   0.020000 seconds

Using Numeric
My way took:         0.010000 seconds
Robert's way took:   0.020000 seconds

Using numarray
My way took:         0.070000 seconds
Robert's way took:   0.120000 seconds

Number of X: 4
Number of Y: 3

So my way was faster with all three packages for small arrays.

For medium arrays (the size I'm most likely to be using):

Using numpy
The Numpy way took:  0.120000 seconds
My way took:         0.040000 seconds
Robert's way took:   0.030000 seconds

Using Numeric
My way took:         0.040000 seconds
Robert's way took:   0.030000 seconds

Using numarray
My way took:         0.090000 seconds
Robert's way took:   1.070000 seconds

Number of X: 21
Number of Y: 41

Now we're getting close, with mine faster with numarray, but Robert's
faster with Numeric and numpy.

For large arrays (still not very big, but as big as I'm likely to need):

Using numpy
The Numpy way took:  4.200000 seconds
My way took:         0.660000 seconds
Robert's way took:   0.340000 seconds

Using Numeric
My way took:         0.590000 seconds
Robert's way took:   0.500000 seconds

Using numarray
My way took:         0.390000 seconds
Robert's way took:  20.340000 seconds

Number of X: 201
Number of Y: 241

So Robert's way is still faster with Numeric and numpy, but much slower
with numarray. As it's close with numpy and Numeric, but mine is much
faster with numarray, I think I'll stick with mine.

I note that while the numpy way, using mgrid(), is nice and clean to
write, it is slower across the board. Perhaps mgrid() could use some
optimization.

This is exactly why I had suggested that one thing I wanted for numpy was
an as-easy-to-use-as-possible C/C++ API. It would be nice to be able to
write as many as possible of these kinds of utility functions in C as we
could.

In case anyone is interested, I'm using this to draw a grid of dots on
the screen for my wxPython FloatCanvas. Every time the image is changed
or moved or zoomed, I need to recalculate the points before drawing them,
so it's nice to have it fast. I've enclosed my test code.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer
NOAA/OR&R/HAZMAT          (206) 526-6959 voice
7600 Sand Point Way NE    (206) 526-6329 fax
Seattle, WA 98115         (206) 526-6317 main reception
Chris.Barker@...
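[Two of the grid-of-points approaches compared in this thread, side by
side in modern spelling. The mgrid reshaping is one possible translation
of "the Numpy way"; both produce the same (nx*ny, 2) array of points:]

```python
import numpy as np

x = np.arange(0.0, 5.0, 0.5)
y = np.arange(0.0, 3.0, 0.5)
nx, ny = len(x), len(y)

# Robert's way: repeat each x value ny times, tile y nx times, stack.
px = np.repeat(x, ny)
py = np.concatenate([y] * nx)
points = np.transpose([px, py])

# The mgrid way: build index grids, index into x and y, then stack.
ix, iy = np.mgrid[0:nx, 0:ny]
points2 = np.column_stack([x[ix.ravel()], y[iy.ravel()]])

assert points.shape == (nx * ny, 2)
assert np.allclose(points, points2)
```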
From: Robert Kern <robert.kern@gm...> - 2006-06-02 16:20:15

Filip Wasilewski wrote:
> So the next question is what's the difference between matrixmultiply
> and dot in NumPy?

matrixmultiply is a deprecated compatibility name. Always use dot. dot
will get replaced with the optimized dotblas implementation when an
optimized BLAS is available. matrixmultiply will not (probably not
intentionally, but I'm happy with the current situation).

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
From: Robert Kern <robert.kern@gm...> - 2006-06-02 16:17:06

Sven Schreiber wrote:
> Hi all,
> this may be a stupid question, but why doesn't rand accept a shape
> tuple as argument? I find the difference between the argument types of
> rand and (for example) zeros somewhat confusing. (See below for
> illustration.) Can anybody offer an intuition/explanation?

rand() is a convenience function. Its only purpose is to offer this
convenient API. If you want a function that takes tuples, use
numpy.random.random().

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
From: Robert Kern <robert.kern@gm...> - 2006-06-02 16:09:44

r.demaria@... wrote:
> Hi,
>
> maybe it's not what you meant, but presently I'm looking for a sparse
> eigenvalue solver. As far as I've understood, the ARPACK bindings are
> still missing. This library is one of the most used, so I think it
> would be very useful to have it integrated in numpy.

No, that isn't what he meant. He wants to help projects that are
currently using Numeric and numarray convert to numpy.

In any case, ARPACK certainly won't go into numpy. It might go into scipy
if you are willing to contribute wrappers for it.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco