From: Karol L. <kar...@kn...> - 2006-10-09 18:17:02
|
Can someone give me a hint as to where in numpy the AMD and UMFPACK libraries are used, if at all? I ask, because they have their respective sections in site.cfg.example in the trunk. Karol -- written by Karol Langner Mon Oct 9 20:13:25 CEST 2006 |
From: Charles R H. <cha...@gm...> - 2006-10-09 17:59:24
|
On 10/9/06, Travis Oliphant <oli...@ie...> wrote: > > Charles R Harris wrote: > > > > > > On 10/9/06, *Tim Hochberg* <tim...@ie... > > <mailto:tim...@ie...>> wrote: > > > > <snip> > > > > Is this not the same things as numpy.multiply.outer(a, b)? (as > > opposed > > to outer(a, b), which appears to pretend that everything is a > > vector -- > > I'm not sure what the point of that is). > > > > > > Hmmm, yes, multiply.outer does do that. I thought that outer was short > > for multiply.outer and that the behaviour had changed. So the question > > is why outer does what it does. > Unfortunately, I don't know the answer to that. > > numpy.outer is the same as Numeric.outerproduct and does the same > thing. I'm not sure of the reason behind it. Looks sorta like a matrix thing. Maybe it should be called outerflat or some such. Chuck |
From: Travis O. <oli...@ie...> - 2006-10-09 17:50:36
|
Charles R Harris wrote: > > > On 10/9/06, *Tim Hochberg* <tim...@ie... > <mailto:tim...@ie...>> wrote: > > <snip> > > Is this not the same things as numpy.multiply.outer(a, b)? (as > opposed > to outer(a, b), which appears to pretend that everything is a > vector -- > I'm not sure what the point of that is). > > > Hmmm, yes, multiply.outer does do that. I thought that outer was short > for multiply.outer and that the behaviour had changed. So the question > is why outer does what it does. Unfortunately, I don't know the answer to that. numpy.outer is the same as Numeric.outerproduct and does the same thing. I'm not sure of the reason behind it. -Travis |
From: David L G. <Dav...@no...> - 2006-10-09 17:09:56
|
Tim Hochberg wrote: > David Goldsmith wrote: > >> Tim Hochberg wrote: >> >> >>> I periodically need to perform various linalg operations on a large >>> number of small matrices. The normal numpy approach of stacking up the >>> data into an extra dimension and doing the operations in parallel >>> doesn't work because the linalg functions are only designed to work on >>> one matrix at a time. At first glance, this restriction seems harmless >>> enough since the underlying LAPACK functions are also designed to work >>> one matrix at a time and it seem that there would be little to be gained >>> by parallelizing the Python level code. However, this turns out to be >>> the case. >>> >>> I have some code, originally written for numarray, but I recently ported >>> it over to numpy, that rewrites the linalg function determinant, inverse >>> and solve_linear_equations[1] to operate of 2 or 3D arrays, treating 3D >>> arrays as stacks of 2D arrays. For operating on large numbers of small >>> arrays (for example 1000 2x2 arrays), this approach is over an order of >>> magnitude faster than the obvious map(inverse, threeDArray) approach. >>> >>> So, some questions: >>> >>> 1. Is there any interest in this code for numpy.linalg? I'd be >>> willing to clean it up and work on the other functions in linalg so that >>> they were all consistent. Since these changes should all be backwards >>> compatible (unless your counting on catching an error from one of the >>> linalg functions), I don't see this as a problem to put in after 1.0, so >>> there's really no rush on this. I'm only bringing it up now since I just >>> ported it over to numpy and it's fresh in my mind. >>> >>> >>> >> Say "no" to someone offering to make a Python module more feature rich? >> Blasphemy! :-) >> >> But I am very curious as to the source/basis of the performance >> improvement - did you figure this out? >> >> > Yes and no. 
> Here's what the beginning of linalg.det looks like:
>
>     a = asarray(a)
>     _assertRank2(a)
>     _assertSquareness(a)
>     t, result_t = _commonType(a)
>     a = _fastCopyAndTranspose(t, a)
>     n = a.shape[0]
>     if isComplexType(t):
>         lapack_routine = lapack_lite.zgetrf
>     else:
>         lapack_routine = lapack_lite.dgetrf
>     # ...
>
> There are quite a few function calls here; since the actual call to dgetrf
> probably takes negligible time for a 2x2 array, the cost of computation
> here is dominated by function call overhead, creating the return arrays
> and whatnot. Which is the biggest culprit, I'm not sure; by vectorizing
> it, much of that overhead is only incurred once rather than hundreds or
> thousands of times, so it all becomes negligible at that point. I didn't
> bother to track down the worst offender.

Ah, that all makes sense (including not bothering to profile the code). DG

> -tim
>
> -------------------------------------------------------------------------
> Take Surveys. Earn Cash. Influence the Future of IT
> Join SourceForge.net's Techsay panel and you'll get the chance to share your
> opinions on IT & business topics through brief surveys -- and earn cash
> http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion

-- HMRD/ORR/NOS/NOAA <http://response.restoration.noaa.gov/emergencyresponse/> |
From: Keith G. <kwg...@gm...> - 2006-10-09 17:04:43
|
On 10/9/06, Eric Emsellem <ems...@ob...> wrote: > thanks for the pointer to pida. I installed it successfully but I didn't manage to > make it really work [snip] I was excited to see that pida can use meld. But when I try to do a diff, pida complains that it cannot find meldembed.py. Has anyone else run into that? I installed pida from binary on debian etch. And meld is installed and working. Hey, wait a minute, isn't this a numpy list? |
From: Karol L. <kar...@kn...> - 2006-10-09 17:03:30
|
On Monday 09 October 2006 10:20, Bill Baxter wrote: > If you don't mind going commercial, then the WingIDE has been working > very well for me. > And it apparently has a VIM mode (and emacs). I'm an emacs guy, > though, so I haven't tried out the VIM support. I also have WingIDE installed and like it a lot! I don't actually use it as my main IDE, but might in the future. Currently I use Kile (the LaTeX front-end from KDE), but it doesn't have any python-specific features. Karol -- written by Karol Langner Mon Oct 9 19:00:27 CEST 2006 |
From: Alan G I. <ai...@am...> - 2006-10-09 16:48:04
|
On Mon, 9 Oct 2006, Pierre GM apparently wrote: > Did anybody mention Eclipse+pydev? It used to be a problem to get Eclipse to talk to gvim on Windows (via OLE). Has that changed? Cheers, Alan Isaac |
From: Charles R H. <cha...@gm...> - 2006-10-09 16:04:32
|
On 10/9/06, Pierre GM <pgm...@ma...> wrote: > > On Monday 09 October 2006 04:20, Bill Baxter wrote: > > If you don't mind going commercial, then the WingIDE has been working > > very well for me. > > Did anybody mention Eclipse+pydev ? I've been a regular user for the last > few > months, and I'm quite happy about it. True, Eclipse can be a bit overkill, > but it's quite performant. Actually, I use pydev for the development and > an > interactive shell like ipython for testing/playing around... There is also a vim emulator plugin for eclipse that costs 15 euro. Here: http://www.satokar.com/viplugin/ Chuck |
From: Pierre GM <pgm...@ma...> - 2006-10-09 15:12:14
|
On Monday 09 October 2006 04:20, Bill Baxter wrote: > If you don't mind going commercial, then the WingIDE has been working > very well for me. Did anybody mention Eclipse+pydev ? I've been a regular user for the last few months, and I'm quite happy about it. True, Eclipse can be a bit overkill, but it's quite performant. Actually, I use pydev for the development and an interactive shell like ipython for testing/playing around... |
From: Perry G. <pe...@st...> - 2006-10-09 15:01:41
|
On Oct 9, 2006, at 10:50 AM, Eric Emsellem wrote: > Hi, > > thanks for the tip: very nice but really too expensive for me... (the > personal version having too many constraints). Thanks again anyway. > Will search for something else... Maybe I'll manage to get pida or spe > to work ? > > cheers > > Eric I believe that they are likely to give you the educational discount if you work in a research institution. |
From: Eric E. <ems...@ob...> - 2006-10-09 14:51:57
|
Hi, thanks for the tip: very nice but really too expensive for me... (the personal version having too many constraints). Thanks again anyway. Will search for something else... Maybe I'll manage to get pida or spe to work ? cheers Eric > From: "Bill Baxter" <wb...@gm...> > Subject: Re: [Numpy-discussion] ide for python/numpy/scipy/mpl, > development ? > To: "Discussion of Numerical Python" > <num...@li...> > Message-ID: > <e86...@ma...> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > If you don't mind going commercial, then the WingIDE has been working > very well for me. > And it apparently has a VIM mode (and emacs). I'm an emacs guy, > though, so I haven't tried out the VIM support. > > Wing is based on GTK so the interface on Windows doesn't look so nice, > or act so much like a native Windows app (F10 pops up menus -- > weird?), but it works well. I tried a bunch of free IDEs first and > gave up out of frustration after a while, too. Wing just worked for > me and worked quite well. Documentation is pretty good too. > www.wingware.com. Free trial so you can see if you like it first. > > --bb -- ==================================================================== Eric Emsellem ems...@ob... Centre de Recherche Astrophysique de Lyon 9 av. Charles-Andre tel: +33 (0)4 78 86 83 84 69561 Saint-Genis Laval Cedex fax: +33 (0)4 78 86 83 86 France http://www-obs.univ-lyon1.fr/eric.emsellem ==================================================================== |
From: Tim H. <tim...@ie...> - 2006-10-09 14:46:05
|
David Goldsmith wrote: > Tim Hochberg wrote: > >> I periodically need to perform various linalg operations on a large >> number of small matrices. The normal numpy approach of stacking up the >> data into an extra dimension and doing the operations in parallel >> doesn't work because the linalg functions are only designed to work on >> one matrix at a time. At first glance, this restriction seems harmless >> enough since the underlying LAPACK functions are also designed to work >> one matrix at a time and it seem that there would be little to be gained >> by parallelizing the Python level code. However, this turns out to be >> the case. >> >> I have some code, originally written for numarray, but I recently ported >> it over to numpy, that rewrites the linalg function determinant, inverse >> and solve_linear_equations[1] to operate of 2 or 3D arrays, treating 3D >> arrays as stacks of 2D arrays. For operating on large numbers of small >> arrays (for example 1000 2x2 arrays), this approach is over an order of >> magnitude faster than the obvious map(inverse, threeDArray) approach. >> >> So, some questions: >> >> 1. Is there any interest in this code for numpy.linalg? I'd be >> willing to clean it up and work on the other functions in linalg so that >> they were all consistent. Since these changes should all be backwards >> compatible (unless your counting on catching an error from one of the >> linalg functions), I don't see this as a problem to put in after 1.0, so >> there's really no rush on this. I'm only bringing it up now since I just >> ported it over to numpy and it's fresh in my mind. >> >> > Say "no" to someone offering to make a Python module more feature rich? > Blasphemy! :-) > > But I am very curious as to the source/basis of the performance > improvement - did you figure this out? > Yes and no. 
Here's what the beginning of linalg.det looks like:

    a = asarray(a)
    _assertRank2(a)
    _assertSquareness(a)
    t, result_t = _commonType(a)
    a = _fastCopyAndTranspose(t, a)
    n = a.shape[0]
    if isComplexType(t):
        lapack_routine = lapack_lite.zgetrf
    else:
        lapack_routine = lapack_lite.dgetrf
    # ...

There are quite a few function calls here; since the actual call to dgetrf probably takes negligible time for a 2x2 array, the cost of computation here is dominated by function call overhead, creating the return arrays, and whatnot. Which is the biggest culprit, I'm not sure; by vectorizing it, much of that overhead is only incurred once rather than hundreds or thousands of times, so it all becomes negligible at that point. I didn't bother to track down the worst offender. -tim |
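[Editor's note] Tim's order-of-magnitude claim is easy to make concrete: for a stack of 2x2 matrices the determinant reduces to the closed form ad - bc, which vectorizes over the whole stack in a single expression. This is a minimal sketch of the idea, not the actual code being discussed:

```python
import numpy as np

def det2x2_stacked(m):
    # Closed-form determinant ad - bc, applied across a (..., 2, 2) stack
    return m[..., 0, 0] * m[..., 1, 1] - m[..., 0, 1] * m[..., 1, 0]

rng = np.random.RandomState(0)
stack = rng.rand(1000, 2, 2)

vectorized = det2x2_stacked(stack)
# The map(det, threeDArray)-style approach pays the per-call overhead 1000 times
looped = np.array([np.linalg.det(m) for m in stack])

assert np.allclose(vectorized, looped)
```

The vectorized version incurs the argument checking and array-creation overhead once for the whole stack instead of once per matrix, which is exactly the effect described above.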
From: Charles R H. <cha...@gm...> - 2006-10-09 14:43:54
|
On 10/9/06, Tim Hochberg <tim...@ie...> wrote: > > <snip> Is this not the same things as numpy.multiply.outer(a, b)? (as opposed > to outer(a, b), which appears to pretend that everything is a vector -- > I'm not sure what the point of that is). Hmmm, yes, multiply.outer does do that. I thought that outer was short for multiply.outer and that the behaviour had changed. So the question is why outer does what it does. Chuck |
From: David G. <Dav...@no...> - 2006-10-09 14:32:52
|
Tim Hochberg wrote: > I periodically need to perform various linalg operations on a large > number of small matrices. The normal numpy approach of stacking up the > data into an extra dimension and doing the operations in parallel > doesn't work because the linalg functions are only designed to work on > one matrix at a time. At first glance, this restriction seems harmless > enough since the underlying LAPACK functions are also designed to work > one matrix at a time and it seem that there would be little to be gained > by parallelizing the Python level code. However, this turns out to be > the case. > > I have some code, originally written for numarray, but I recently ported > it over to numpy, that rewrites the linalg function determinant, inverse > and solve_linear_equations[1] to operate of 2 or 3D arrays, treating 3D > arrays as stacks of 2D arrays. For operating on large numbers of small > arrays (for example 1000 2x2 arrays), this approach is over an order of > magnitude faster than the obvious map(inverse, threeDArray) approach. > > So, some questions: > > 1. Is there any interest in this code for numpy.linalg? I'd be > willing to clean it up and work on the other functions in linalg so that > they were all consistent. Since these changes should all be backwards > compatible (unless your counting on catching an error from one of the > linalg functions), I don't see this as a problem to put in after 1.0, so > there's really no rush on this. I'm only bringing it up now since I just > ported it over to numpy and it's fresh in my mind. > Say "no" to someone offering to make a Python module more feature rich? Blasphemy! :-) But I am very curious as to the source/basis of the performance improvement - did you figure this out? DG > 2. If 1., what to do about norm? I think the other functions in > linalg stack naturally, but norm, since it is meant to work on both > vectors and matrices is problematic. 
Norm already seems a bit > problematic in that, for some values of 'ord', norm can return different > values for a shape [N] and a shape [1,N] array containing the same values: > > >>> linalg.norm(a[:1], 1) > 0.78120069791102054 > >>> linalg.norm(a[0], 1) > 1.4588102446677758 > > My inclination is to introduce stackable 1 and 2D versions of norm, > vnorm and mnorm for lack of better names. Ideally we'd deprecate the use > of norm for ord!=None (Frobenius norm) or at least in cases where the > result depends on the rank [1,N] versus [N]. > > 3. A similar issue arises with dot. One would like to be able to do > stacked matrix products, vector products and matrix-vector products. If > we are making the rest of linalg stackable, linalg would be a sensible > place for a stackable version of dot to live. However, it's not immediately > clear how to do this without introducing a whole pile (4) of stacking > dot functions to handle the various cases, plus broadcasting, cleanly. > This would require some more thought. Thoughts? > > -tim > > > [1] Those are actually the numarray names, which the current code still > uses; the numpy names are 'det', 'inv' and 'solve' > > [2] Treatment of stacking in solve_linear_equations is a bit more > complicated, but I'll ignore that for now. |
From: Tim H. <tim...@ie...> - 2006-10-09 14:10:38
|
I periodically need to perform various linalg operations on a large number of small matrices. The normal numpy approach of stacking up the data into an extra dimension and doing the operations in parallel doesn't work because the linalg functions are only designed to work on one matrix at a time. At first glance, this restriction seems harmless enough, since the underlying LAPACK functions are also designed to work one matrix at a time and it seems that there would be little to be gained by parallelizing the Python level code. However, this turns out not to be the case.

I have some code, originally written for numarray but recently ported over to numpy, that rewrites the linalg functions determinant, inverse and solve_linear_equations[1] to operate on 2 or 3D arrays, treating 3D arrays as stacks of 2D arrays. For operating on large numbers of small arrays (for example 1000 2x2 arrays), this approach is over an order of magnitude faster than the obvious map(inverse, threeDArray) approach.

So, some questions:

1. Is there any interest in this code for numpy.linalg? I'd be willing to clean it up and work on the other functions in linalg so that they were all consistent. Since these changes should all be backwards compatible (unless you're counting on catching an error from one of the linalg functions), I don't see this as a problem to put in after 1.0, so there's really no rush on this. I'm only bringing it up now since I just ported it over to numpy and it's fresh in my mind.

2. If 1., what to do about norm? I think the other functions in linalg stack naturally, but norm, since it is meant to work on both vectors and matrices, is problematic.
Norm already seems a bit problematic in that, for some values of 'ord', norm can return different values for a shape [N] and a shape [1,N] array containing the same values:

>>> linalg.norm(a[:1], 1)
0.78120069791102054
>>> linalg.norm(a[0], 1)
1.4588102446677758

My inclination is to introduce stackable 1 and 2D versions of norm, vnorm and mnorm for lack of better names. Ideally we'd deprecate the use of norm for ord!=None (Frobenius norm), or at least in cases where the result depends on the rank [1,N] versus [N].

3. A similar issue arises with dot. One would like to be able to do stacked matrix products, vector products and matrix-vector products. If we are making the rest of linalg stackable, linalg would be a sensible place for a stackable version of dot to live. However, it's not immediately clear how to do this without introducing a whole pile (4) of stacking dot functions to handle the various cases, plus broadcasting, cleanly. This would require some more thought. Thoughts?

-tim

[1] Those are actually the numarray names, which the current code still uses; the numpy names are 'det', 'inv' and 'solve'

[2] Treatment of stacking in solve_linear_equations is a bit more complicated, but I'll ignore that for now. |
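[Editor's note] The discrepancy Tim shows has a simple source: for a 1-D array, ord=1 is the sum of absolute values, while for a 2-D array it is the maximum absolute column sum, so wrapping the same data in an extra dimension changes the answer. A small check, using made-up data rather than Tim's a:

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])

# 1-D: ord=1 is sum(|x_i|)
assert np.linalg.norm(x, 1) == 8.0

# 2-D: ord=1 is the maximum absolute column sum; each column of the
# [1, N] row has a single entry, so the result is max(|x_i|) instead
assert np.linalg.norm(x[np.newaxis, :], 1) == 4.0
```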
From: Tim H. <tim...@ie...> - 2006-10-09 13:01:31
|
Charles R Harris wrote:
> Hi Nadav,
>
> On 10/8/06, *Nadav Horesh* <na...@vi... <mailto:na...@vi...>> wrote:
>
>     There is a "tensordot" function in numpy1.0rc1
>
> The tensordot is not the same thing as a tensor product. What I want
> is the following:
>
> def tensor(a, b):
>     """Tensor product of a and b
>
>     """
>     a = asarray(a)
>     b = asarray(b)
>     return outer(a, b).reshape(a.shape + b.shape)

Is this not the same thing as numpy.multiply.outer(a, b)? (as opposed to outer(a, b), which appears to pretend that everything is a vector -- I'm not sure what the point of that is). -tim |
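[Editor's note] The shape behaviour being discussed can be verified directly: multiply.outer preserves both operands' shapes, while outer ravels its arguments to 1-D first, and the two agree up to a reshape:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(4).reshape(2, 2)

# multiply.outer produces the tensor product: shape is a.shape + b.shape
t = np.multiply.outer(a, b)
assert t.shape == (2, 3, 2, 2)

# outer flattens both inputs first, "pretending everything is a vector"
o = np.outer(a, b)
assert o.shape == (6, 4)

# Same numbers, different shape
assert np.array_equal(t.reshape(6, 4), o)
```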
From: Bill B. <wb...@gm...> - 2006-10-09 08:20:26
|
If you don't mind going commercial, then the WingIDE has been working very well for me. And it apparently has a VIM mode (and emacs). I'm an emacs guy, though, so I haven't tried out the VIM support. Wing is based on GTK so the interface on Windows doesn't look so nice, or act so much like a native Windows app (F10 pops up menus -- weird?), but it works well. I tried a bunch of free IDEs first and gave up out of frustration after a while, too. Wing just worked for me and worked quite well. Documentation is pretty good too. www.wingware.com. Free trial so you can see if you like it first. --bb On 10/9/06, Eric Emsellem <ems...@ob...> wrote: > Hi, > > thanks for the pointer to pida. I installed it successfully but I didn't manage to make it really work, meaning having my python file opened with gvim and include some debugger (I guess either the installation is not complete or there is something I don't understand when opening a new python file). And the docs is really scarce at the moment so it is quite difficult to pursue along that line. > > Indeed keeping gvim may be too tight a constraint here. > > Someone else suggested spe (http://stani.be/python/spe/blog/), but there I have problems getting binaries for wxPython (Suse10.1) and I don't want to attempt a full compilation of that package... (I installed Suse10.1 from the downloadable version and many many packages are missing there - the commercial DVD being double layer... and it seems that only half of it is available on the web). > > I tried to install eric3, and after checking QScintilla, python-qt, etc, I launch the installation of eric3 with python install.py and I get a seg fault.. > > frustrating. > Eric > > > On 10/6/06, Robert Kern <rob...@gm...> wrote: > > > > > > > Eric Emsellem wrote: > > > >> > > Hi, > >> > > > >> > > I am looking for an IDE to develop python programs and I am not sure > >> > > what to take. 
> >> > > The two critical items for me are 1/ a good debugger (simple and > >> > > efficient) 2/ something simple to manage the files. > >> > > > >> > > I would also very much like to keep some basic things such as (if > >> > > > possible): > > > >> > > - editing with gvim > >> > > > > > > This probably is the most limiting factor. I use pida because it embeds > > > gvim > > > into a PyGTK frame with all of the IDE goodies around it. > > > > > > http://pida.berlios.de/ > > > > > > I believe it can use one of the PyGTK debugger GUIs, but I've never used > > > it. |
From: Eric E. <ems...@ob...> - 2006-10-09 08:12:12
|
Hi, thanks for the pointer to pida. I installed it successfully but I didn't manage to make it really work, meaning having my python file opened with gvim and including some debugger (I guess either the installation is not complete or there is something I don't understand when opening a new python file). And the docs are really scarce at the moment, so it is quite difficult to pursue along that line.

Indeed keeping gvim may be too tight a constraint here.

Someone else suggested spe (http://stani.be/python/spe/blog/), but there I have problems getting binaries for wxPython (Suse10.1) and I don't want to attempt a full compilation of that package... (I installed Suse10.1 from the downloadable version and many many packages are missing there - the commercial DVD being double layer... and it seems that only half of it is available on the web).

I tried to install eric3, and after checking QScintilla, python-qt, etc, I launch the installation of eric3 with python install.py and I get a seg fault..

frustrating.
Eric

On 10/6/06, Robert Kern <rob...@gm...> wrote:
> Eric Emsellem wrote:
> > Hi,
> >
> > I am looking for an IDE to develop python programs and I am not sure
> > what to take. The two critical items for me are 1/ a good debugger (simple and
> > efficient) 2/ something simple to manage the files.
> >
> > I would also very much like to keep some basic things such as (if
> > possible):
> > - editing with gvim
>
> This probably is the most limiting factor. I use pida because it embeds gvim
> into a PyGTK frame with all of the IDE goodies around it.
>
> http://pida.berlios.de/
>
> I believe it can use one of the PyGTK debugger GUIs, but I've never used it. |
From: Travis O. <oli...@ie...> - 2006-10-09 08:01:59
|
Charles R Harris wrote: > Hmmm, > > I notice that there is no longer a tensor product. As it was the only > one of the outer, kron bunch that I really wanted, l miss it. In fact, > I always thought outer should act like the tensor product for the > other binary operators too. Anyway, mind if I put it back? I'm not opposed to the idea, necessarily. But, when and where did this exist? I don't remember it. -Travis |
From: Robert K. <rob...@gm...> - 2006-10-09 05:41:21
|
Daniel Mahler wrote:
> On 10/8/06, Greg Willden <gre...@gm...> wrote:
>> This next one is a little closer for the case when c is not just a bunch of
>> 1's but you still have to know how the highest number in b.
>> a=array([sum(c[b==0]), sum(c[b==1]), ... sum(c[b==N]) ] )
>>
>> So it sort of depends on your ultimate goal.
>> Greg
>> Linux. Because rebooting is for adding hardware.
>
> In my case all a, b, c are large with b and c being orders of
> magnitude larger than a.
> b is known to contain only, but potentially any, a-indexes, repeated
> many times.
> c contains arbitrary floats.
> essentially it is to compute class totals
> as in total[class[i]] += value[i]

In that case, a slight modification to Greg's suggestion will probably be fastest:

import numpy as np

# Set up the problem.
lena = 10
lenc = 10000
a = np.zeros(lena, dtype=float)
b = np.random.randint(lena, size=lenc)
c = np.random.uniform(size=lenc)

idx = np.arange(lena, dtype=int)[:, np.newaxis]
mask = (b == idx)
for i in range(lena):
    a[i] = c[mask[i]].sum()

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
From: Daniel M. <dm...@gm...> - 2006-10-09 05:23:19
|
On 10/8/06, Greg Willden <gre...@gm...> wrote: > On 10/8/06, Daniel Mahler <dm...@gm...> wrote: > > > > >>> a > > array([0, 0]) > > >>> b > > array([0, 1, 0, 1, 0]) > > >>> c > > array([1, 1, 1, 1, 1]) > > > > > Well for this particular example you could do > a=array([len(b)-sum(b), sum(b)]) > Since you are just counting the ones and zeros. > > This next one is a little closer for the case when c is not just a bunch of > 1's but you still have to know how the highest number in b. > a=array([sum(c[b==0]), sum(c[b==1]), ... sum(c[b==N]) ] ) > > So it sort of depends on your ultimate goal. > Greg > Linux. Because rebooting is for adding hardware. In my case all a, b, c are large with b and c being orders of magnitude larger than a. b is known to contain only, but potentially any, a-indexes, repeated many times. c contains arbitrary floats. essentially it is to compute class totals as in total[class[i]] += value[i] Daniel |
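[Editor's note] For the class-totals pattern Daniel describes (total[class[i]] += value[i]), numpy's bincount already does the accumulation in C when given a weights argument, with no Python-level loop:

```python
import numpy as np

b = np.array([0, 1, 0, 1, 0])            # class labels, possibly repeated
c = np.array([2.0, 1.0, 3.0, 4.0, 5.0])  # values to accumulate

# a[k] = sum of c where b == k
a = np.bincount(b, weights=c)
assert np.array_equal(a, [10.0, 5.0])
```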
From: Charles R H. <cha...@gm...> - 2006-10-09 02:19:03
|
Hi Nadav,

On 10/8/06, Nadav Horesh <na...@vi...> wrote:
> There is a "tensordot" function in numpy1.0rc1

The tensordot is not the same thing as a tensor product. What I want is the following:

def tensor(a, b):
    """Tensor product of a and b"""
    a = asarray(a)
    b = asarray(b)
    return outer(a, b).reshape(a.shape + b.shape)

Chuck |
From: A. M. A. <per...@gm...> - 2006-10-08 23:08:31
|
On 08/10/06, Robert Kern <rob...@gm...> wrote: > Bill Baxter wrote: > > Yes, that'd be > > a[b] += c > > No, I'm afraid that fancy indexing does not do the loop that you are thinking it > would (and for reasons that we've discussed previously on this list, *can't* do > that loop). That statement reduces to something like the following: So the question remains, is there a for-loop-free way to do this? (This, specifically, is: for i in range(len(b)): a[b[i]] += c[i], where b may contain repeated indices.) I didn't find one, but came to the conclusion that for loops are not necessarily slower than fancy indexing, so the way to do this one is just to use a for loop. A. M. Archibald |
From: Bill B. <wb...@gm...> - 2006-10-08 23:07:06
|
So what's the answer then? Can it be made faster? --bb

On 10/9/06, Robert Kern <rob...@gm...> wrote:
>
> Bill Baxter wrote:
> > Yes, that'd be
> > a[b] += c
>
> No, I'm afraid that fancy indexing does not do the loop that you are thinking it
> would (and for reasons that we've discussed previously on this list, *can't* do
> that loop). That statement reduces to something like the following:
>
> tmp = a[b]
> tmp = tmp.__iadd__(c)
> a[b] = tmp
>
> In [1]: from numpy import *
>
> In [2]: a = array([0, 0])
>
> In [3]: b = array([0, 1, 0, 1, 0])
>
> In [4]: c = array([1, 1, 1, 1, 1])
>
> In [5]: a[b] += c
>
> In [6]: a
> Out[6]: array([1, 1])
>
> In [7]: a = array([0, 0])
>
> In [8]: tmp = a[b]
>
> In [9]: tmp
> Out[9]: array([0, 0, 0, 0, 0])
>
> In [10]: tmp = tmp.__iadd__(c)
>
> In [11]: tmp
> Out[11]: array([1, 1, 1, 1, 1])
>
> In [12]: a[b] = tmp
>
> In [13]: a
> Out[13]: array([1, 1])
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless enigma
> that is made terrible by our own mad attempt to interpret it as though it had
> an underlying truth." -- Umberto Eco |
From: Robert K. <rob...@gm...> - 2006-10-08 22:59:32
|
Bill Baxter wrote:
> Yes, that'd be
> a[b] += c

No, I'm afraid that fancy indexing does not do the loop that you are thinking it would (and for reasons that we've discussed previously on this list, *can't* do that loop). That statement reduces to something like the following:

tmp = a[b]
tmp = tmp.__iadd__(c)
a[b] = tmp

In [1]: from numpy import *

In [2]: a = array([0, 0])

In [3]: b = array([0, 1, 0, 1, 0])

In [4]: c = array([1, 1, 1, 1, 1])

In [5]: a[b] += c

In [6]: a
Out[6]: array([1, 1])

In [7]: a = array([0, 0])

In [8]: tmp = a[b]

In [9]: tmp
Out[9]: array([0, 0, 0, 0, 0])

In [10]: tmp = tmp.__iadd__(c)

In [11]: tmp
Out[11]: array([1, 1, 1, 1, 1])

In [12]: a[b] = tmp

In [13]: a
Out[13]: array([1, 1])

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
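[Editor's note] For the record, NumPy later grew an operation designed for exactly this situation: ufunc.at (available from numpy 1.8) performs the unbuffered in-place loop, so repeated indices accumulate instead of being written once:

```python
import numpy as np

a = np.array([0, 0])
b = np.array([0, 1, 0, 1, 0])
c = np.array([1, 1, 1, 1, 1])

# Buffered fancy indexing: each target element is written only once
a[b] += c
assert np.array_equal(a, [1, 1])

# np.add.at applies the add once per index, honoring repeats
a = np.array([0, 0])
np.add.at(a, b, c)
assert np.array_equal(a, [3, 2])
```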