From: Tim H. <tim...@co...> - 2006-06-03 14:31:33
Travis Oliphant wrote:
> Tim Hochberg wrote:
>> Some time ago some people, myself included, were making some noise
>> about having 'array' iterate over iterable objects, producing ndarrays
>> in a manner analogous to the way sequences are treated. I finally got
>> around to looking at it seriously, and I came to the following three
>> conclusions:
>>
>> 1. All I really care about is the 1D case where dtype is specified.
>>    This case should be relatively easy to implement so that it's
>>    fast. Most other cases are not likely to be particularly faster
>>    than converting the iterators to lists at the Python level and
>>    then passing those lists to array.
>> 2. 'array' already has plenty of special cases. I'm reluctant to add
>>    more.
>> 3. Adding this to 'array' would be non-trivial. The more cases we
>>    tried to make fast, the more likely that some of the paths would
>>    be buggy. Regardless of how we did it, though, some cases would be
>>    much slower than others, which would probably be surprising.
>
> Good job. I just added a function called fromiter for this very
> purpose. Right now, it's just a stub that calls list(obj) first and
> then array. Your code would be a perfect fit for it. I think count
> could be optional, though, to handle cases where the count can be
> determined from the object.

I'll look at that when I get back. There are two ways to approach this:
one is to only allow count to be optional in those cases where the
original object supports either __len__ or __length_hint__. The
advantage there is that it's easy and there's no chance of locking up
the interpreter by passing an unbounded generator. The other way is to
figure out the length based on the generator itself. The "natural" way
to do this is to steal stuff from array.array. However, that doesn't
export a C-level interface that I can tell (everything is declared
static), so you'd be going through the interpreter, which would
potentially be slow. I guess another approach would be to hijack
PyArray_Resize and steal the resizing pattern from array.array. I'm not
sure how well that would work, though. I'll look into it...

-tim

> We'll look forward to your check-in.
>
> -Travis
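A minimal sketch of the first approach Tim describes -- making count
optional only when the object advertises a length. fromiter_autocount is
a hypothetical name for illustration, not a numpy function:

    import numpy as np

    def fromiter_autocount(iterable, dtype, count=-1):
        # Try __len__ first, then the __length_hint__ protocol; an
        # unbounded generator supports neither, so it cannot lock up
        # the interpreter here.
        if count < 0:
            try:
                count = len(iterable)
            except TypeError:
                hint = getattr(iterable, '__length_hint__', None)
                if hint is not None:
                    count = hint()
        if count >= 0:
            return np.fromiter(iterable, dtype, count)
        # No usable length: materialize a list (safe, but gives up the
        # speed advantage fromiter is meant to provide).
        return np.array(list(iterable), dtype=dtype)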
From: Jonathan T. <jon...@ut...> - 2006-06-03 12:04:23
My suggestion would be to have both numpy.org and scipy.org be the exact
same page, but make it extremely clear on the front page that there are
two different projects.

Cheers.

Jon.

On 6/2/06, David M. Cooke <co...@ph...> wrote:
> On Fri, 2 Jun 2006 10:27:45 +0200
> Joris De Ridder <jo...@st...> wrote:
>> [CB]: I was reacting to a post a while back that suggested pointing
>> [CB]: people searching for numpy to the main scipy page, which I did
>> [CB]: not think was a good idea.
>>
>> That would be my post :o)
>>
>> The reasons why I suggested this are
>>
>> 1) www.scipy.org is at the moment the most informative site on numpy
>> 2) of all sites, www.scipy.org currently looks the most professional
>> 3) a wiki-style site where everyone can contribute is really great
>> 4) I like information to be centralized. Having to check pointers,
>>    docs, and cookbooks on two different sites is inefficient
>> 5) Two different sites inevitably imply some duplication of the work
>>
>> Just as you, I am not (yet) a scipy user; I only have numpy installed
>> at the moment. The principal reason is the same as the one you
>> mentioned. But for me this is an extra motivation to merge scipy.org
>> and numpy.org:
>>
>> 6) merging scipy.org and numpy.org will hopefully lead to a larger
>>    SciPy community, and this in turn hopefully leads to user-friendly
>>    installation procedures.
>
> My only concern with this is that numpy is positioned for a wider
> audience: everybody who needs arrays, and the extra speed that numpy
> gives, but doesn't need what scipy gives. So merging the two could
> lead to confusion about what provides what, and what you need to do
> which. For instance, I don't want potential numpy users to be directed
> to scipy.org and be turned off by all the extra stuff it seems to have
> (that scipy, not numpy, provides). But I think this can be handled if
> we approach scipy.org as serving both purposes.
>
> I think this is the best option, considering how much crossover there
> is.
>
> --
> David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/  co...@ph...
From: Sven S. <sve...@gm...> - 2006-06-03 09:53:22
Robert Kern schrieb:
> My point is that there is no need to change rand() and randn() to the
> "new" interface. The "new" interface is already there: random.random()
> and random.standard_normal().

Ok, thanks for the responses, and sorry for not searching the archives
about this. I tend to share Alan's point of view, but I also understand
that it may be too late now to change the way rand is called.

-Sven
From: Travis O. <oli...@ie...> - 2006-06-03 07:25:56
Tim Hochberg wrote:
> Some time ago some people, myself included, were making some noise
> about having 'array' iterate over iterable objects, producing ndarrays
> in a manner analogous to the way sequences are treated. I finally got
> around to looking at it seriously, and I came to the following three
> conclusions:
>
> 1. All I really care about is the 1D case where dtype is specified.
>    This case should be relatively easy to implement so that it's
>    fast. Most other cases are not likely to be particularly faster
>    than converting the iterators to lists at the Python level and
>    then passing those lists to array.
> 2. 'array' already has plenty of special cases. I'm reluctant to add
>    more.
> 3. Adding this to 'array' would be non-trivial. The more cases we
>    tried to make fast, the more likely that some of the paths would
>    be buggy. Regardless of how we did it, though, some cases would be
>    much slower than others, which would probably be surprising.

Good job. I just added a function called fromiter for this very purpose.
Right now, it's just a stub that calls list(obj) first and then array.
Your code would be a perfect fit for it. I think count could be
optional, though, to handle cases where the count can be determined from
the object.

We'll look forward to your check-in.

-Travis
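In spirit, the stub Travis describes amounts to this (a sketch, not the
actual SVN code):

    import numpy as np

    def fromiter(obj, dtype, count):
        # Stub behavior: materialize the iterable into a list, then hand
        # it to array. count is accepted but unused here; a real
        # implementation would preallocate count elements and fill them
        # directly in C.
        return np.array(list(obj), dtype=dtype)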
From: Charles R H. <cha...@gm...> - 2006-06-03 03:30:10
Jonathan,

I had a patch for this that applied to numarray way back when. If folks
feel there is a need, I could probably try to get it running on numpy.
Bit of a learning curve (for me), though.

Chuck

On 6/2/06, Jonathan Taylor <jon...@st...> wrote:
> I was wondering if there was an easy way to get searchsorted to be
> "right-continuous" instead of "left-continuous".
>
> By continuity, I am talking about the continuity of the function
> "count" below...
>
> >>> import numpy as N
> >>> x = N.arange(20)
> >>> def count(u):
> ...     return x.searchsorted(u)
> ...
> >>> count(9)
> 9
> >>> count(9.01)
> 10
>
> Thanks,
>
> Jonathan
From: Tim H. <tim...@co...> - 2006-06-03 03:18:20
Some time ago some people, myself included, were making some noise about
having 'array' iterate over iterable objects, producing ndarrays in a
manner analogous to the way sequences are treated. I finally got around
to looking at it seriously, and I came to the following three
conclusions:

1. All I really care about is the 1D case where dtype is specified.
   This case should be relatively easy to implement so that it's fast.
   Most other cases are not likely to be particularly faster than
   converting the iterators to lists at the Python level and then
   passing those lists to array.
2. 'array' already has plenty of special cases. I'm reluctant to add
   more.
3. Adding this to 'array' would be non-trivial. The more cases we tried
   to make fast, the more likely that some of the paths would be buggy.
   Regardless of how we did it, though, some cases would be much slower
   than others, which would probably be surprising.

So, with that in mind, I retreated a little and implemented the simplest
thing that did the stuff I cared about:

    fromiter(iterable, dtype, count) => ndarray of type dtype and length count

This is essentially the same interface as fromstring, except that the
values of dtype and count are always required. Some primitive
benchmarking indicates that 'fromiter(generator, dtype, count)' is about
twice as fast as 'array(list(generator))' for medium to large arrays.
When producing very large arrays, the advantage of fromiter is larger,
presumably because 'list(generator)' causes things to start swapping.

Anyway, I'm about to bail out of town till the middle of next week, so
it'll be a while till I can get it clean enough to submit in some form
or another. Plenty of time for people to think of why it's a terrible
idea ;-)

-tim
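A quick way to reproduce the kind of comparison Tim describes, using
timeit (timings will vary by machine; the array size n and repeat count
are arbitrary choices):

    import timeit

    setup = "import numpy as np; n = 10**6"
    t_fromiter = timeit.timeit(
        "np.fromiter((i * 0.5 for i in range(n)), np.float64, n)",
        setup=setup, number=10)
    t_list = timeit.timeit(
        "np.array([i * 0.5 for i in range(n)])",
        setup=setup, number=10)
    print("fromiter: %.3fs   array(list): %.3fs" % (t_fromiter, t_list))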
From: <jo...@st...> - 2006-06-02 23:03:59
[DC]: My only concern with this is that numpy is positioned for a wider
[DC]: audience: everybody who needs arrays, and the extra speed that
[DC]: numpy gives, but doesn't need what scipy gives. So merging the two
[DC]: could lead to confusion about what provides what, and what you
[DC]: need to do which.

I completely agree with this. SciPy and NumPy on one site, yes, but not
so interwoven that it gets confusing or even plain useless for
NumPy-only users.

J.
From: Travis O. <oli...@ie...> - 2006-06-02 22:28:40
I've been busy with NumPy, and it has resulted in some C-API changes.
So, after checking out a new SVN version of NumPy you will need to
re-build extension modules. (It stinks for me too --- SciPy takes a
while to build.)

The API changes have made it possible for user-defined data-types to
optionally participate in the coercion and casting infrastructure.
Previously, casting was limited to built-in data-types. Now, there is a
mechanism for users to define casting to and from their own data-type
(and whether or not it can be done safely, and whether or not a
particular kind of user-defined scalar can be cast --- remember, a
scalar mixed with an array has a different set of casting rules).

This should make user-defined data-types much more useful, but the
facility needs to be tested. Does anybody have a data-type they want to
add to try out the new system? The restriction on adding another
data-type is that it must have a fixed element size (a
variable-precision float, for example, would have to use a pointer to
the actual structure as the "data-type").

-Travis
From: Christopher B. <Chr...@no...> - 2006-06-02 22:09:39
Rob Hooft wrote:
> Christopher Barker wrote:
> | Did you time them? And yours only handles integers.
>
> Yes I did, check the attachment of my previous message for a python
> module to time the three,

Sorry about that, I didn't notice it.

> with completely different results from yours (I'm using Numeric).

I ran it and got similar results to mine. Frankly, for the size of
problems I'm dealing with, they are all about the same, except under
numarray, where mine is fastest, yours second, and Robert's third -- by
a wide margin! Another reason I'm glad numpy is built on the Numeric
code:

    Using numarray
    My way took:       0.394555 seconds
    Robert's way took: 20.590545 seconds
    Rob's way took:    4.802346 seconds
    Number of X: 201
    Number of Y: 241

    Using Numeric
    My way took:       0.593319 seconds
    Robert's way took: 0.523235 seconds
    Rob's way took:    0.579756 seconds

Robert's way has a pretty decent edge under numpy:

    Using numpy
    My way took:       0.686741 seconds
    Robert's way took: 0.357887 seconds
    Rob's way took:    0.796977 seconds

And I'm using time(), rather than clock(), now, though it didn't really
change anything. I suppose I should figure out timeit.py.

Thanks for all your help on this,

-Chris

--
Christopher Barker, Ph.D.
Oceanographer
NOAA/OR&R/HAZMAT          (206) 526-6959 voice
7600 Sand Point Way NE    (206) 526-6329 fax
Seattle, WA 98115         (206) 526-6317 main reception
Chr...@no...
From: Rob H. <ro...@ho...> - 2006-06-02 21:06:34
Christopher Barker wrote:
| Did you time them? And yours only handles integers.

Yes I did; check the attachment of my previous message for a python
module to time the three, with completely different results from yours
(I'm using Numeric). The attachment also contains a floatified version
of my demonstration.

Rob

--
Rob W.W. Hooft || ro...@ho... || http://www.hooft.net/people/rob/
From: Robert K. <rob...@gm...> - 2006-06-02 20:42:54
Alan G Isaac wrote:
> On Fri, 02 Jun 2006, Robert Kern apparently wrote:
>> My point is that there is no need to change rand() and randn() to the
>> "new" interface. The "new" interface is already there: random.random()
>> and random.standard_normal().
>
> Yes, of course; that has always been your point. In an earlier post,
> I indicated that this is your usual response.
>
> What your point does not address: the question about rand and randn
> keeps cropping up on this list.
>
> My point is: numpy should take a step so that this question goes away,
> rather than maintain the status quo and see it crop up continually.
> (I.e., its recurrence should be understood to signal a problem.)

I'll check in a change to the docstring later today.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
From: Alan G I. <ai...@am...> - 2006-06-02 20:12:35
>> On Fri, 02 Jun 2006, Robert Kern apparently wrote:
>>> Changing the API of rand() and randn() doesn't solve any problem.
>>> Removing them might.
>
> Alan G Isaac wrote:
>> I think this is too blunt an argument. For example, use of the old
>> interface might issue a deprecation warning. This would make it very
>> likely that all new code uses the new interface.

On Fri, 02 Jun 2006, Robert Kern apparently wrote:
> My point is that there is no need to change rand() and randn() to the
> "new" interface. The "new" interface is already there: random.random()
> and random.standard_normal().

Yes, of course; that has always been your point. In an earlier post, I
indicated that this is your usual response.

What your point does not address: the question about rand and randn
keeps cropping up on this list.

My point is: numpy should take a step so that this question goes away,
rather than maintain the status quo and see it crop up continually.
(I.e., its recurrence should be understood to signal a problem.)

Cheers,
Alan

PS I'll shut up about this now.
From: Robert K. <rob...@gm...> - 2006-06-02 19:57:16
Alan G Isaac wrote:
> On Fri, 02 Jun 2006, Robert Kern apparently wrote:
>> Changing the API of rand() and randn() doesn't solve any problem.
>> Removing them might.
>
> I think this is too blunt an argument. For example, use of the old
> interface might issue a deprecation warning. This would make it very
> likely that all new code uses the new interface.

My point is that there is no need to change rand() and randn() to the
"new" interface. The "new" interface is already there: random.random()
and random.standard_normal().

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
From: David M. C. <co...@ph...> - 2006-06-02 19:56:34
On Fri, 2 Jun 2006 19:09:01 +0200
Joris De Ridder <jo...@st...> wrote:
> Just to be sure, what exactly is affected when one uses the slower
> algorithms when neither BLAS nor LAPACK is installed? For sure it will
> affect almost every function in numpy.linalg, as they use LAPACK_lite.
> And I guess that in numpy.core the dot() function uses the lite
> numpy/core/blasdot/_dotblas.c routine? Any other numpy functions that
> are affected?

Using a better default dgemm for matrix multiplication when an optimized
BLAS isn't available has been on my to-do list for a while. I think it
can be sped up by a large amount on a generic machine by using blocking
of the matrices.

Personally, I perceive no difference between my g77-compiled LAPACK and
the gcc-compiled f2c'd routines in lapack_lite, if an optimized BLAS is
used. And lapack_lite has fewer bugs than the version of LAPACK
available off of netlib.org, as I used the latest patches I could
scrounge up (mostly from Debian).

--
David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/  co...@ph...
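To illustrate the blocking idea Cooke mentions: work on small tiles so
each tile pair stays in cache across the inner loop. This is a
pure-Python sketch of the technique, not the C implementation he has in
mind, and the tile size bs=64 is an arbitrary choice:

    import numpy as np

    def blocked_dot(a, b, bs=64):
        # Multiply a (m x k) by b (k x n) tile by tile; slices past the
        # end of an axis are clipped by numpy, so ragged edges are fine.
        m, k = a.shape
        k2, n = b.shape
        assert k == k2
        c = np.zeros((m, n), dtype=np.result_type(a, b))
        for i in range(0, m, bs):
            for j in range(0, n, bs):
                for p in range(0, k, bs):
                    c[i:i+bs, j:j+bs] += np.dot(a[i:i+bs, p:p+bs],
                                                b[p:p+bs, j:j+bs])
        return c

In Python the loop overhead dominates, so this only pays off when done
in C, where blocking reduces cache misses for large matrices.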
From: David M. C. <co...@ph...> - 2006-06-02 19:47:03
On Fri, 2 Jun 2006 10:27:45 +0200
Joris De Ridder <jo...@st...> wrote:
> [CB]: I was reacting to a post a while back that suggested pointing
> [CB]: people searching for numpy to the main scipy page, which I did
> [CB]: not think was a good idea.
>
> That would be my post :o)
>
> The reasons why I suggested this are
>
> 1) www.scipy.org is at the moment the most informative site on numpy
> 2) of all sites, www.scipy.org currently looks the most professional
> 3) a wiki-style site where everyone can contribute is really great
> 4) I like information to be centralized. Having to check pointers,
>    docs, and cookbooks on two different sites is inefficient
> 5) Two different sites inevitably imply some duplication of the work
>
> Just as you, I am not (yet) a scipy user; I only have numpy installed
> at the moment. The principal reason is the same as the one you
> mentioned. But for me this is an extra motivation to merge scipy.org
> and numpy.org:
>
> 6) merging scipy.org and numpy.org will hopefully lead to a larger
>    SciPy community, and this in turn hopefully leads to user-friendly
>    installation procedures.

My only concern with this is that numpy is positioned for a wider
audience: everybody who needs arrays, and the extra speed that numpy
gives, but doesn't need what scipy gives. So merging the two could lead
to confusion about what provides what, and what you need to do which.
For instance, I don't want potential numpy users to be directed to
scipy.org and be turned off by all the extra stuff it seems to have
(that scipy, not numpy, provides). But I think this can be handled if we
approach scipy.org as serving both purposes.

I think this is the best option, considering how much crossover there
is.

--
David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/  co...@ph...
From: Alan G I. <ai...@am...> - 2006-06-02 19:26:53
On Fri, 02 Jun 2006, Robert Kern apparently wrote:
> Changing the API of rand() and randn() doesn't solve any problem.
> Removing them might.

I think this is too blunt an argument. For example, use of the old
interface might issue a deprecation warning. This would make it very
likely that all new code uses the new interface.

I would also be fine with demoting these to the Numeric compatibility
module, although I find that the inferior choice (since it means a loss
of convenience).

Unless one of these changes is made, new users will **forever** be
asking this same question. And either way, making the sacrifices needed
for greater consistency seems like a good idea *before* 1.0.

Cheers,
Alan
From: Robert K. <rob...@gm...> - 2006-06-02 18:51:36
Alan G Isaac wrote:
> On Fri, 02 Jun 2006, Sven Schreiber apparently wrote:
>> why doesn't rand accept a shape tuple as argument? I find the
>> difference between the argument types of rand and (for example) zeros
>> somewhat confusing. ... Can anybody offer an intuition/explanation?
>
> Backward compatibility, I believe. You are not alone in finding this
> odd and inconsistent. I am hoping for a change by 1.0, but I am not
> very hopeful.
>
> Robert always points out that if you want the consistent interface,
> you can always import functions from the 'random' module. I have never
> been able to understand this as a response to the point you are
> making.
>
> I take it the core argument goes something like this:
> - rand and randn are convenience functions
>   * if you do not find them convenient, don't use them
> - they are in wide use, so it is too late to change them
> - testing the first argument to see whether it is a tuple or an int is
>   so aesthetically objectionable that its ugliness outweighs the
>   benefits users might get from access to a more consistent interface

My argument does not include the last two points.

- They are in wide use because they are convenient and useful.
- Changing rand() and randn() to accept a tuple like random.random() and
  random.standard_normal() does not improve anything. Instead, it adds
  confusion for users who are reading code and seeing the same function
  being called in two different ways.
- Users who want to see numpy *only* expose a single calling scheme for
  top-level functions should instead ask for rand() and randn() to be
  removed from the top numpy namespace.
  * Backwards compatibility might prevent this.

> This is one place where I believe a forward-looking (i.e., think about
> new users) vision would force a small change in these *convenience*
> functions that will have payoffs both in ease of use and in
> eliminating this recurrent question from discussion lists.

*Changing* the API of rand() and randn() doesn't solve any problem.
*Removing* them might.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
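The inconsistency under discussion, concretely (all three lines build a
3x4 array; in current NumPy the tuple-taking functions live under
numpy.random):

    import numpy as np

    a = np.random.rand(3, 4)       # convenience form: dims as separate ints
    b = np.random.random((3, 4))   # consistent form: shape as a tuple
    c = np.zeros((3, 4))           # the tuple convention rand() deviates from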
From: Robert K. <rob...@gm...> - 2006-06-02 18:35:54
Christopher Barker wrote:
> Robert Kern wrote:
>> x = repeat(x, ny)
>> y = concatenate([y] * nx)
>> points = transpose([x, y])
>
> Somehow I never think to use repeat. And why use repeat for x and
> concatenate for y?

I guess you could use repeat() on y[newaxis] and then flatten it.

    y = repeat(y[newaxis], nx).ravel()

> Using numpy
> The Numpy way took: 0.020000 seconds
> My way took: 0.010000 seconds
> Robert's way took: 0.020000 seconds
>
> Using Numeric
> My way took: 0.010000 seconds
> Robert's way took: 0.020000 seconds
>
> Using numarray
> My way took: 0.070000 seconds
> Robert's way took: 0.120000 seconds
>
> Number of X: 4
> Number of Y: 3

Those timings look real funny. I presume you are using a UNIX and
time.clock(). Don't do that; it's a very poor timer on UNIX. Use
time.time() on UNIX and time.clock() on Windows. Even better, please use
timeit.py instead. Tim Peters did a lot of work to make timeit.py do the
right thing.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
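The construction being timed, as a runnable sketch: it builds the
Cartesian product of two coordinate vectors as an (nx*ny, 2) array of
grid points (sizes here are arbitrary):

    import numpy as np

    nx, ny = 4, 3
    x = np.arange(nx, dtype=float)
    y = np.arange(ny, dtype=float)

    xx = np.repeat(x, ny)            # x0 x0 x0 x1 x1 x1 ...
    yy = np.concatenate([y] * nx)    # y0 y1 y2 y0 y1 y2 ...
    points = np.transpose([xx, yy])  # shape (nx * ny, 2)

repeat stretches each x value across a full column of y values, while
concatenate tiles the whole y vector once per x value, so the two
together enumerate every (x, y) pair.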
From: Jonathan T. <jon...@st...> - 2006-06-02 18:08:08
I was wondering if there was an easy way to get searchsorted to be
"right-continuous" instead of "left-continuous".

By continuity, I am talking about the continuity of the function "count"
below...

>>> import numpy as N
>>> x = N.arange(20)
>>> def count(u):
...     return x.searchsorted(u)
...
>>> count(9)
9
>>> count(9.01)
10

Thanks,

Jonathan

--
------------------------------------------------------------------------
I'm part of the Team in Training: please support our efforts for the
Leukemia and Lymphoma Society!

http://www.active.com/donate/tntsvmb/tntsvmbJTaylor

GO TEAM !!!
------------------------------------------------------------------------
Jonathan Taylor                  Tel: 650.723.9230
Dept. of Statistics              Fax: 650.725.8977
Sequoia Hall, 137                www-stat.stanford.edu/~jtaylo
390 Serra Mall
Stanford, CA 94305
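For reference, the behavior Jonathan asks for was later exposed directly
through searchsorted's side keyword (side='right' counts elements <= u,
making the count function right-continuous):

    import numpy as np

    x = np.arange(20)
    print(x.searchsorted(9))                   # 9: elements strictly < 9
    print(x.searchsorted(9, side='right'))     # 10: elements <= 9
    print(x.searchsorted(8.99, side='right'))  # 9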
From: Alan G I. <ai...@am...> - 2006-06-02 17:34:15
On Fri, 02 Jun 2006, Sven Schreiber apparently wrote:
> why doesn't rand accept a shape tuple as argument? I find the
> difference between the argument types of rand and (for example) zeros
> somewhat confusing. ... Can anybody offer an intuition/explanation?

Backward compatibility, I believe. You are not alone in finding this odd
and inconsistent. I am hoping for a change by 1.0, but I am not very
hopeful.

Robert always points out that if you want the consistent interface, you
can always import functions from the 'random' module. I have never been
able to understand this as a response to the point you are making.

I take it the core argument goes something like this:
- rand and randn are convenience functions
  * if you do not find them convenient, don't use them
- they are in wide use, so it is too late to change them
- testing the first argument to see whether it is a tuple or an int is
  so aesthetically objectionable that its ugliness outweighs the
  benefits users might get from access to a more consistent interface

This is one place where I believe a forward-looking (i.e., think about
new users) vision would force a small change in these *convenience*
functions that will have payoffs both in ease of use and in eliminating
this recurrent question from discussion lists.

Cheers,
Alan Isaac
From: Travis O. <oli...@ie...> - 2006-06-02 17:31:25
Eric Jonas wrote:
> Is there some way, either within numpy or at build-time, to verify
> you're using BLAS/LAPACK? Is there one we should be using?

Check to see if the id of numpy.dot is the same as the id of
numpy.core.multiarray.dot.

-Travis
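Travis's check as a snippet (this reflects the 2006-era module layout;
dot gets swapped for the _dotblas version at import time when an
optimized BLAS was linked in):

    import numpy
    import numpy.core.multiarray

    # True  -> plain dot: no optimized BLAS was found at build time.
    # False -> dot was replaced by the BLAS-optimized implementation.
    print(id(numpy.dot) == id(numpy.core.multiarray.dot))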
From: Eric J. <jo...@mw...> - 2006-06-02 17:28:22
Is there some way, either within numpy or at build-time, to verify
you're using BLAS/LAPACK? Is there one we should be using?

...Eric

On Fri, 2006-06-02 at 11:19 -0600, Travis Oliphant wrote:
> Joris De Ridder wrote:
>> Just to be sure, what exactly is affected when one uses the slower
>> algorithms when neither BLAS nor LAPACK is installed? For sure it
>> will affect almost every function in numpy.linalg, as they use
>> LAPACK_lite. And I guess that in numpy.core the dot() function uses
>> the lite numpy/core/blasdot/_dotblas.c routine? Any other numpy
>> functions that are affected?
>
> convolve could also be affected (the basic internal _dot function gets
> replaced for FLOAT, DOUBLE, CFLOAT, and CDOUBLE). I think that's the
> only function that uses dot internally.
>
> In the future we hope to be optimizing ufuncs as well.
>
> -Travis
From: Francesc A. <fa...@ca...> - 2006-06-02 17:19:50
A Divendres 02 Juny 2006 19:07, Travis Oliphant va escriure:
> Robert Kern wrote:
>> Filip Wasilewski wrote:
>>> So the next question is: what's the difference between
>>> matrixmultiply and dot in NumPy?
>>
>> matrixmultiply is a deprecated compatibility name. Always use dot.
>> dot will get replaced with the optimized dotblas implementation when
>> an optimized BLAS is available. matrixmultiply will not (probably not
>> intentionally, but I'm happy with the current situation).
>
> It's true that matrixmultiply has been deprecated for some time (at
> least 8 years...). The basic dot function gets overwritten with a
> BLAS-optimized version, but matrixmultiply does not get changed. So
> replace matrixmultiply with dot. It wasn't an intentional thing, but
> perhaps it will finally encourage people to always use dot.

So, why not issue a DeprecationWarning on matrixmultiply use?

--
>0,0<   Francesc Altet     http://www.carabos.com/
V   V   Cárabos Coop. V.   Enjoy Data
 "-"
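What Francesc proposes, sketched as a thin wrapper (not the actual numpy
change, which would live inside the library itself):

    import warnings
    import numpy as np

    def matrixmultiply(a, b):
        # Deprecated alias for dot: warn once per call site, then
        # forward to the real implementation.
        warnings.warn("matrixmultiply is deprecated; use dot instead",
                      DeprecationWarning, stacklevel=2)
        return np.dot(a, b)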
From: Travis O. <oli...@ie...> - 2006-06-02 17:19:17
Joris De Ridder wrote:
> Just to be sure, what exactly is affected when one uses the slower
> algorithms when neither BLAS nor LAPACK is installed? For sure it will
> affect almost every function in numpy.linalg, as they use LAPACK_lite.
> And I guess that in numpy.core the dot() function uses the lite
> numpy/core/blasdot/_dotblas.c routine? Any other numpy functions that
> are affected?

convolve could also be affected (the basic internal _dot function gets
replaced for FLOAT, DOUBLE, CFLOAT, and CDOUBLE). I think that's the
only function that uses dot internally.

In the future we hope to be optimizing ufuncs as well.

-Travis
From: Joris De R. <jo...@st...> - 2006-06-02 17:09:13
Just to be sure, what exactly is affected when one uses the slower
algorithms when neither BLAS nor LAPACK is installed? For sure it will
affect almost every function in numpy.linalg, as they use LAPACK_lite.
And I guess that in numpy.core the dot() function uses the lite
numpy/core/blasdot/_dotblas.c routine? Any other numpy functions that
are affected?

Joris

On Friday 02 June 2006 16:16, George Nurser wrote:
[GN]: Yes, using numpy.dot I get 250ms, numpy.matrixmultiply 11.8s,
[GN]: while a sans-BLAS Numeric.matrixmultiply takes 12s.
[GN]:
[GN]: The first 100 results from numpy.dot and numpy.matrixmultiply are
[GN]: identical...
[GN]:
[GN]: Use dot ;)
[GN]:
[GN]: --George.