From: Xavier Gnata <gnata@ob...> - 2006-11-14 11:31:59

Hi,

AFAICS these new histogram versions have not yet been merged to svn. Are there problems to solve before they can be merged? How could we help?

Xavier

> Hi,
>
> Your histogram functions look fine to me :) As it is a quite usual
> operation on an array, I would suggest putting it in numpy as
> numpy.histogram. IMHO, there is no point in creating a numpy.stats only
> for histograms (or do you have plans to move other stats-related
> functions to numpy.stats?)
>
> Xavier.
>
>> Nicolas, thanks for the bug report; I fooled around with argument
>> passing and should have checked every case.
>>
>> You'll find the histogram function that deals with weights on the numpy
>> trac ticket 189, <http://projects.scipy.org/scipy/numpy/ticket/189>.
>> I'm waiting for some hints as to where the histogram function should
>> reside (numpy.histogram, numpy.stats.histogram, ...) before submitting
>> a patch.
>>
>> Salut,
>> David
>>
>> 2006/10/25, Nicolas Champavert <nicolas.champavert@...>:
>>
>>     Hi,
>>
>>     it would be great if you could add the weight option in the 1D
>>     histogram too.
>>
>>     Nicolas
>>
>>     David Huard wrote:
>>     > Xavier,
>>     > Here is the patch against svn. Please report any bugs. I haven't
>>     > had the time to test it extensively, something that should be
>>     > done before committing the patch to the repo. I'd appreciate your
>>     > feedback.
>>     >
>>     > David
>>     >
>>     > 2006/10/24, David Huard <david.huard@...>:
>>     >
>>     >     Hi Xavier,
>>     >
>>     >     You could tweak histogram2d to do what you want, or you could
>>     >     give me a couple of days and I'll do it and let you know. If
>>     >     you want to help, you could write a test using your
>>     >     particular application and data.
>>     >
>>     >     David
>>     >
>>     >     2006/10/24, Xavier Gnata <gnata@...>:
>>     >
>>     >         Hi,
>>     >
>>     >         I have a set of three large 1D arrays. The first two hold
>>     >         the coordinates of particles and the last one their
>>     >         masses. I would like to be able to plot this data, i.e.
>>     >         to compute a 2D histogram summing the masses in each bin.
>>     >         I cannot find a way to do that without a loop over the
>>     >         indices, resulting in a very slow function.
>>     >
>>     >         I'm looking for an elegant way to do that with a numpy
>>     >         (or scipy?) function.
>>     >
>>     >         For instance, scipy.histogram2d cannot do the job because
>>     >         it only counts the number of samples in each bin. There
>>     >         is no way to deal with weights.
>>     >
>>     >         Xavier.

############################################
Xavier Gnata
CRAL - Observatoire de Lyon
9, avenue Charles André
69561 Saint Genis Laval cedex
Phone: +33 4 78 86 85 28
Fax: +33 4 78 86 83 86
Email: gnata@...
############################################
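The weights feature requested in this thread did eventually land: current numpy.histogram2d accepts a `weights` argument, so the mass-summing histogram Xavier asks for needs no explicit loop. A minimal sketch with made-up particle data:

```python
import numpy as np

# Hypothetical particle data: x/y coordinates and masses.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 1000)
y = rng.uniform(0, 1, 1000)
mass = rng.uniform(0.5, 2.0, 1000)

# Passing the masses as `weights` sums masses per bin instead of
# counting samples.
H, xedges, yedges = np.histogram2d(x, y, bins=10, weights=mass)

# Each bin holds the total mass that fell into it, so the grand
# total equals the sum of all masses.
assert H.shape == (10, 10)
assert np.isclose(H.sum(), mass.sum())
```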
From: Christian Meesters <meesters@un...> - 2006-11-14 11:13:59

Hoi,

thanks to Robert Kern, who helped me out yesterday on the f2py list, I was able to make some progress in accessing Fortran from Python. But only some progress ... If I have the following code, named 'hello.f':

C File hello.f
      subroutine foo (a)
      integer a
      print*, "Hello from Fortran!"
      print*, "a=",a
      end

and compile it with

g77 -shared -fPIC hello.f -o hello.so

and then start python, I get the following:

>>> from numpy import *
>>> from ctypes import c_int, POINTER, byref
>>> hellolib = ctypeslib.load_library('hello', '.')
>>> hello = hellolib.foo_
>>> hello(42)
 Hello from Fortran!
Segmentation fault

Can anybody tell me where my mistake is? (Currently python 2.4.1 (no intention to update soon), the most recent ctypes, and numpy '1.0.dev3341' from svn.)

And a second question: are there simple examples around which show how to pass and retrieve lists, numpy arrays, and dicts to and from Fortran? Despite an intensive web search I couldn't find anything.

TIA
Christian
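The likely cause of the segfault (an assumption based on Fortran calling conventions, not tested against Christian's setup): g77 passes every argument by reference, so the bare Python int 42 is interpreted as an address. Wrapping the value in a ctypes object and passing it with byref() hands the routine a valid pointer instead:

```python
import ctypes
from numpy import ctypeslib

# Fortran expects a pointer to an integer, not the integer itself.
a = ctypes.c_int(42)

try:
    hellolib = ctypeslib.load_library('hello', '.')  # needs hello.so here
    hello = hellolib.foo_                            # g77 appends '_'
    hello(ctypes.byref(a))   # pass by reference, as Fortran expects
except OSError:
    # hello.so not present on this machine; the byref machinery
    # itself still works.
    pass

assert a.value == 42
```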
From: Seweryn Kokot <skokot@po...> - 2006-11-14 10:01:45

Sven Schreiber <svetosch@...> writes:

> Sure, but Seweryn used the same import statement from scipy and never
> explicitly referred to numpy, so there must be some subtle import voodoo
> going on. Or did you not show us everything, Seweryn?

It's all OK now; it was my mistake. The problem was that in ipython I typed "from scipy import linalg", which is wrong, and, being surprised by the output, I opened a python shell and tried different combinations of imports, among others "from scipy import *", and this is the reason for the difference. So now in ipython I get the same output when typing help(linalg.eig).

Sorry for bothering you,
regards,
SK
From: Sven Schreiber <svetosch@gm...> - 2006-11-14 09:51:17

Charles R Harris wrote:

> In [1]: from scipy import linalg
>
> In [2]: help(linalg.eig)
>
> >>> from scipy import linalg
> >>> help(linalg.eig)
>
> Help on function eig in module scipy.linalg.decomp:
>
> I expect scipy.linalg and numpy.linalg are different modules containing
> different functions. That said, the documentation in scipy.linalg looks
> quite a bit more complete.
>
> Chuck

Sure, but Seweryn used the same import statement from scipy and never explicitly referred to numpy, so there must be some subtle import voodoo going on. Or did you not show us everything, Seweryn?

sven
From: Robert Kern <robert.kern@gm...> - 2006-11-14 04:17:20

Tim Hochberg wrote:

> Another little tidbit: this is not as general as where, and could
> probably be considered a little too clever to be clear, but:
>
> b = 1 / (a + (a==0.0))
>
> is faster than using where in this particular case and sidesteps the
> divide by zero issue altogether.

A less clever approach that does much the same thing:

b = 1.0 / where(a==0, 1.0, a)

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
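The two warning-free recipes from this exchange can be checked side by side, using the original poster's array:

```python
import numpy as np

a = np.array([0.0, 1.0, 2.0])

# Tim's trick: (a == 0.0) adds 1 exactly where a is zero, so the
# denominator is never zero and no warning is raised.
b1 = 1.0 / (a + (a == 0.0))

# Robert's variant: substitute 1.0 for the zeros before dividing.
b2 = 1.0 / np.where(a == 0.0, 1.0, a)

# Both map the zero entries to 1.0 and leave the rest as 1/a.
assert np.allclose(b1, [1.0, 1.0, 0.5])
assert np.allclose(b1, b2)
```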
From: Paul Dubois <pfdubois@gm...> - 2006-11-14 04:13:41

Unfortunately, where does not have the behavior of not evaluating the second argument where the first one is true. That would be nice (if the speed were OK), but it isn't possible unless where is built into the language, since where doesn't even get called until the arguments have all been calculated. It was intended for a different use than avoiding zero-divide.

The ma package can calculate 1/a without problem, resulting in masked results where a is 0.0.

I put where into Numeric after it had proved invaluable in Basis, even though it has this limitation; it takes care of doing both merge and compress.

On 13 Nov 2006 20:02:31 -0800, Tim Hochberg <tim.hochberg@...> wrote:
>
> Robert Kern has already pointed you to seterr. If you are using Python
> 2.5, you also have the option of using the with statement, which is more
> convenient if you want to temporarily change the error state. You'll
> need a "from __future__ import with_statement" at the top of your file.
> Then you can temporarily disable errors as shown:
>
> >>> a = zeros([3])
> >>> b = 1/a  # This will warn
> Warning: divide by zero encountered in divide
> >>> with errstate(divide='ignore'):  # But this will not
> ...     c = 1/a
> ...
> >>> d = 1/a  # And this will warn again since the error state is
> restored when we exit the block
> Warning: divide by zero encountered in divide
>
> Another little tidbit: this is not as general as where, and could
> probably be considered a little too clever to be clear, but:
>
> b = 1 / (a + (a==0.0))
>
> is faster than using where in this particular case and sidesteps the
> divide by zero issue altogether.
>
> tim
From: Tim Hochberg <tim.hochberg@ie...> - 2006-11-14 03:10:31

vallis.35530053@... wrote:

> Using numpy 1.0, why does
>
> >>> a = numpy.array([0.0,1.0,2.0],'d')
> >>> numpy.where(a == 0.0,1,1/a)
>
> give the correct result, but with the warning "Warning: divide by zero
> encountered in divide"? I thought that the point of where was that the
> second expression is never used for the elements where the condition
> evaluates true.
>
> If this is the desired behavior, is there a way to suppress the warning?

Robert Kern has already pointed you to seterr. If you are using Python 2.5, you also have the option of using the with statement, which is more convenient if you want to temporarily change the error state. You'll need a "from __future__ import with_statement" at the top of your file. Then you can temporarily disable errors as shown:

>>> a = zeros([3])
>>> b = 1/a  # This will warn
Warning: divide by zero encountered in divide
>>> with errstate(divide='ignore'):  # But this will not
...     c = 1/a
...
>>> d = 1/a  # And this will warn again since the error state is restored when we exit the block
Warning: divide by zero encountered in divide

Another little tidbit: this is not as general as where, and could probably be considered a little too clever to be clear, but:

b = 1 / (a + (a==0.0))

is faster than using where in this particular case and sidesteps the divide by zero issue altogether.

tim
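The same recipe in modern spelling: np.errstate is the context-manager form of seterr, changing the error state inside the block and restoring the previous settings on exit (no `from __future__` import needed on Python 3):

```python
import numpy as np

a = np.zeros(3)

old = np.geterr()
with np.errstate(divide='ignore'):
    b = 1.0 / a              # no divide-by-zero warning here

assert np.all(np.isinf(b))   # 1/0 yields inf
assert np.geterr() == old    # settings restored after the block
```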
From: Charles R Harris <charlesr.harris@gm...> - 2006-11-14 01:54:24

On 11/13/06, Mathew Yeates <myeates@...> wrote:

> Not sure. When I run "top" I see the line
>
> Memory: 6016M real, 2895M free, 4174M swap in use, 2427M swap free
>
> Its the second number that drops like a rock. Plus, it never comes back
> until I quit the program. This is a great way to turn my machine into a
> nice desk ornament!

try free:

$ free
             total       used       free     shared    buffers     cached
Mem:       1034952     995212      39740          0     126616     328124
-/+ buffers/cache:     540472     494480
Swap:       979956        152     979804

The second line under 'used' shows actual program use, i.e. used - buffers - cached from the first line. But if your system is dying I don't know what to say. My knowledge of these things is a bit sketchy.

Chuck
From: Robert Kern <robert.kern@gm...> - 2006-11-14 01:44:09

vallis.35530053@... wrote:

> I thought that the point of where was that the second expression is
> never used for the elements where the condition evaluates true.

It is not used, but the expression still gets evaluated. There's really no way around that.

> If this is the desired behavior, is there a way to suppress the warning?

In [1]: from numpy import *

In [2]: a = zeros(3)

In [3]: 1/a
Warning: divide by zero encountered in divide
Warning: invalid value encountered in double_scalars
Out[3]: array([ inf,  inf,  inf])

In [4]: seterr(divide='ignore', invalid='ignore')
Out[4]: {'divide': 'print', 'invalid': 'print', 'over': 'print', 'under': 'ignore'}

In [5]: 1/a
Out[5]: array([ inf,  inf,  inf])

In [6]: seterr?
Type:           function
Base Class:     <type 'function'>
Namespace:      Interactive
File:           /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy-1.0.1.dev3432-py2.5-macosx-10.4-i386.egg/numpy/core/numeric.py
Definition:     seterr(all=None, divide=None, over=None, under=None, invalid=None)
Docstring:
    Set how floating-point errors are handled.

    Valid values for each type of error are the strings
    "ignore", "warn", "raise", and "call". Returns the old settings.

    If 'all' is specified, values that are not otherwise specified will
    be set to 'all', otherwise they will retain their old values.

    Note that operations on integer scalar types (such as int16) are
    handled like floating point, and are affected by these settings.

    Example:

    >>> seterr(over='raise')
    {'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', 'under': 'ignore'}
    >>> seterr(all='warn', over='raise')
    {'over': 'raise', 'divide': 'ignore', 'invalid': 'ignore', 'under': 'ignore'}
    >>> int16(32000) * int16(3)
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    FloatingPointError: overflow encountered in short_scalars
    >>> seterr(all='ignore')
    {'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', 'under': 'ignore'}

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From: <vallis.35530053@bl...> - 2006-11-14 01:32:31

Using numpy 1.0, why does

>>> a = numpy.array([0.0,1.0,2.0],'d')
>>> numpy.where(a == 0.0,1,1/a)

give the correct result, but with the warning "Warning: divide by zero encountered in divide"? I thought that the point of where was that the second expression is never used for the elements where the condition evaluates true.

If this is the desired behavior, is there a way to suppress the warning?

Thanks!
Michele
From: Mathew Yeates <myeates@jp...> - 2006-11-13 22:23:54

Not sure. When I run "top" I see the line

Memory: 6016M real, 2895M free, 4174M swap in use, 2427M swap free

It's the second number that drops like a rock. Plus, it never comes back until I quit the program. This is a great way to turn my machine into a nice desk ornament!

Mathew

Charles R Harris wrote:
> On 11/13/06, Mathew Yeates <myeates@...> wrote:
>
>     I have a memory mapped array. When I try and assign data, my mem
>     usage goes through the roof.
>
> Is it cache memory or process memory? I think a memory mapped file will
> keep pages cached in memory until the space is needed so as to avoid
> unneeded io. At least that is what happens in linux.
>
> Chuck
From: Charles R Harris <charlesr.harris@gm...> - 2006-11-13 22:03:43

On 11/13/06, Mathew Yeates <myeates@...> wrote:

> I have a memory mapped array. When I try and assign data, my mem usage
> goes through the roof.

Is it cache memory or process memory? I think a memory mapped file will keep pages cached in memory until the space is needed so as to avoid unneeded io. At least that is what happens in linux.

Chuck
From: Stefan van der Walt <stefan@su...> - 2006-11-13 21:50:38

On Mon, Nov 13, 2006 at 02:29:11PM -0700, Tim Hochberg wrote:
> Erin Sheldon wrote:
> > On 11/13/06, Tim Hochberg <tim.hochberg@...> wrote:
> >> Here's one more approach that's marginally faster than the map based
> >> solution and also won't chew up extra memory since it's based on
> >> fromiter:
> >>
> >> numpy.fromiter(itertools.imap(tuple, results), dtype=mydescriptor,
> >> count=len(results))
> >
> > Yes, this is what I need. BTW, there is no doc string for this.
>
> Yeah, I noticed that too. I swear I wrote one at one point; I'm not sure
> what happened to it. Sigh.

A typo slipped into add_newdocs.py. Fixed in SVN.

Cheers
Stéfan
From: Tim Hochberg <tim.hochberg@ie...> - 2006-11-13 21:29:36

Erin Sheldon wrote:
> On 11/13/06, Tim Hochberg <tim.hochberg@...> wrote:
>
>> Here's one more approach that's marginally faster than the map based
>> solution and also won't chew up extra memory since it's based on
>> fromiter:
>>
>> numpy.fromiter(itertools.imap(tuple, results), dtype=mydescriptor,
>> count=len(results))
>
> Yes, this is what I need. BTW, there is no doc string for this.

Yeah, I noticed that too. I swear I wrote one at one point; I'm not sure what happened to it. Sigh.

> I just added an example to the Numpy Example List.

Great.

tim
From: Mathew Yeates <myeates@jp...> - 2006-11-13 21:23:37

I have a memory mapped array. When I try and assign data, my mem usage goes through the roof.

Example:

outdat[filenum,:] = outarr

where outdat is memory mapped. Anybody know how to avoid this?

Mathew
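A small, hypothetical reconstruction of Mathew's situation: `outdat` is a numpy memmap and rows are assigned one at a time. Writes go through the OS page cache, which is why resident memory appears to grow (the behavior Chuck describes in his reply); flush() asks the OS to write the dirty pages back to the file.

```python
import os
import tempfile
import numpy as np

# A stand-in file for the memory-mapped output array.
path = os.path.join(tempfile.mkdtemp(), 'outdat.dat')
outdat = np.memmap(path, dtype='f8', mode='w+', shape=(4, 1000))

outarr = np.arange(1000, dtype='f8')
for filenum in range(4):
    outdat[filenum, :] = outarr   # assignment writes into the mapping
outdat.flush()                    # push dirty pages out to disk

assert os.path.getsize(path) == 4 * 1000 * 8
assert outdat[2, 10] == 10.0
```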
From: Erin Sheldon <erin.sheldon@gm...> - 2006-11-13 19:43:28

On 11/13/06, Tim Hochberg <tim.hochberg@...> wrote:
> Here's one more approach that's marginally faster than the map based
> solution and also won't chew up extra memory since it's based on
> fromiter:
>
> numpy.fromiter(itertools.imap(tuple, results), dtype=mydescriptor,
> count=len(results))

Yes, this is what I need. BTW, there is no doc string for this. I just added an example to the Numpy Example List.

Thanks,
Erin
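Tim's recipe uses itertools.imap, which is Python 2; on Python 3 the built-in map is already lazy, so the same approach reads as follows (the descriptor and row data are the thread's running example):

```python
import numpy as np

mydescriptor = {'names': ('gender', 'age', 'weight'),
                'formats': ('S1', 'f4', 'f4')}
results = [['M', 64.0, 75.0], ['F', 25.0, 60.0]]

# fromiter builds the structured array row by row without first
# materializing a list of tuples; `count` lets it preallocate.
a = np.fromiter(map(tuple, results), dtype=mydescriptor,
                count=len(results))

assert a.shape == (2,)
assert a['age'][1] == 25.0
```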
From: Charles R Harris <charlesr.harris@gm...> - 2006-11-13 19:13:25

On 11/13/06, Seweryn Kokot <skokot@...> wrote:
>
> Hello,
>
> Why do ipython and the python interactive shell give different
> information?
>
> In [1]: from scipy import linalg
>
> In [2]: help(linalg.eig)
>
> Help on function eig in module numpy.linalg.linalg:
> [...]
>
> while
>
> >>> from scipy import linalg
> >>> help(linalg.eig)
>
> Help on function eig in module scipy.linalg.decomp:
> [...]
>
> Any idea?

I expect scipy.linalg and numpy.linalg are different modules containing different functions. That said, the documentation in scipy.linalg looks quite a bit more complete.

Chuck
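The difference Chuck describes can be checked programmatically. A sketch (assuming scipy is installed): the two eig functions are distinct objects from distinct modules, and scipy's takes extra arguments (b for generalized problems, left/right for eigenvector selection) that numpy's lacks, which is why help() prints such different text.

```python
import inspect
import numpy.linalg
import scipy.linalg

# Distinct modules, each with its own eig.
assert numpy.linalg is not scipy.linalg

np_params = inspect.signature(numpy.linalg.eig).parameters
sp_params = inspect.signature(scipy.linalg.eig).parameters
# scipy's eig supports the generalized problem via `b`; numpy's doesn't.
assert 'b' in sp_params and 'b' not in np_params
```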
From: Tim Hochberg <tim.hochberg@ie...> - 2006-11-13 17:46:24

Tim Hochberg wrote:
> [SNIP]
>
> Just for completeness, I benchmarked the fromiter and map(tuple,
> results) solutions as well. Map is fastest, followed by fromiter, list
> comprehension and then fromrecords. The differences are pretty minor
> however, so I'd stick with whatever seems clearest.
>
> tim

Here's one more approach that's marginally faster than the map based solution and also won't chew up extra memory since it's based on fromiter:

numpy.fromiter(itertools.imap(tuple, results), dtype=mydescriptor, count=len(results))

[SNIP]

tim
From: Seweryn Kokot <skokot@po...> - 2006-11-13 17:00:26

Hello,

Why do ipython and the python interactive shell give different information?

---------- ipython
Python 2.4.4 (#2, Oct 20 2006, 00:23:25)
Type "copyright", "credits" or "license" for more information.

IPython 0.7.2 -- An enhanced Interactive Python.
?       -> Introduction to IPython's features.
%magic  -> Information about IPython's 'magic' % functions.
help    -> Python's own help system.
object? -> Details about 'object'. ?object also works, ?? prints more.

In [1]: from scipy import linalg

In [2]: help(linalg.eig)

Help on function eig in module numpy.linalg.linalg:

eig(a)
    eig(a) returns u,v where u is the eigenvalues and
    v is a matrix of eigenvectors with vector v[:,i] corresponds to
    eigenvalue u[i]. Satisfies the equation dot(a, v[:,i]) = u[i]*v[:,i]
---------- ipython

while

---------- python
Python 2.4.4 (#2, Oct 20 2006, 00:23:25)
[GCC 4.1.2 20061015 (prerelease) (Debian 4.1.1-16.1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from scipy import linalg
>>> help(linalg.eig)

Help on function eig in module scipy.linalg.decomp:

eig(a, b=None, left=False, right=True, overwrite_a=False, overwrite_b=False)
    Solve ordinary and generalized eigenvalue problem
    of a square matrix.

    Inputs:

      a            -- An N x N matrix.
      b            -- An N x N matrix [default is identity(N)].
      left         -- Return left eigenvectors [disabled].
      right        -- Return right eigenvectors [enabled].
      overwrite_a, overwrite_b -- save space by overwriting the a and/or
                      b matrices (both False by default)

    Outputs:

      w       -- eigenvalues [left==right==False].
      w,vr    -- w and right eigenvectors [left==False,right=True].
      w,vl    -- w and left eigenvectors [left==True,right==False].
      w,vl,vr -- [left==right==True].

    Definitions:
    ...
---------- python

Any idea?

regards
SK
From: Erin Sheldon <erin.sheldon@gm...> - 2006-11-13 15:10:47

On 11/13/06, Francesc Altet <faltet@...> wrote:
> In any case, you can also use rec.fromrecords to build recarrays from
> lists of lists. This breaks the aforementioned rule, but Travis allowed
> this because rec.* had to mimic numarray behaviour as much as possible.
> Here is an example of use:
>
> In [46]: mydescriptor = {'names': ('gender','age','weight'),
>                          'formats': ('S1','f4','f4')}
> In [47]: results = [['M',64.0,75.0],['F',25.0,60.0]]
> In [48]: a = numpy.rec.fromrecords(results, dtype=mydescriptor)
> In [49]: b = numpy.array([tuple(row) for row in results], dtype=mydescriptor)
> In [50]: a==b
> Out[50]: recarray([True, True], dtype=bool)
>
> OTOH, it is said in the docs that fromrecords is discouraged because it
> is somewhat slow, but apparently it has similar performance to using
> list comprehensions:
>
> In [51]: Timer("numpy.rec.fromrecords(results, dtype=mydescriptor)",
>     "import numpy; results = [['M',64.0,75.0]]*10000; mydescriptor =
>     {'names': ('gender','age','weight'), 'formats':('S1','f4',
>     'f4')}").repeat(3,10)
> Out[51]: [0.44204592704772949, 0.43584394454956055, 0.50145101547241211]
>
> In [52]: Timer("numpy.array([tuple(row) for row in results],
>     dtype=mydescriptor)", "import numpy; results = [['M',64.0,75.0]]*10000;
>     mydescriptor = {'names': ('gender','age','weight'),
>     'formats':('S1','f4', 'f4')}").repeat(3,10)
> Out[52]: [0.49885106086730957, 0.4325258731842041, 0.43297886848449707]

I checked the code. For lists of lists it just creates the recarray and runs a loop copying in the data row by row. The fact that they are of similar speed is actually good news, because the list comprehension was making an extra copy of the data in memory. For large memory usage, which is my case, this 50% overhead would have been an issue.

Erin
From: Tim Hochberg <tim.hochberg@ie...> - 2006-11-13 15:03:51

Francesc Altet wrote:
> On Mon, 13 Nov 2006 at 02:07 -0500, Erin Sheldon wrote:
>> [...]
>> Isn't it the same with a list of tuples? But you can send that directly
>> to the array constructor. I don't see the fundamental difference,
>> except that the code might be simpler to write.
>
> I think that the correct explanation is that Travis has chosen a tuple
> as the way to refer to an inhomogeneous list of values (a record) and a
> list as the way to refer to a homogeneous list of values.

Just for the record, this is the officially blessed usage of tuples and lists for all of Python (by Guido himself). On the other hand, it's honored more in the breach than in reality, owing to other factors such as mutability/immutability or the mistaken belief that using tuples everywhere will make code noticeably faster or more memory frugal.

> I'm not completely sure why he did this, but I guess the reason was to
> be able to distinguish the records in scenarios where nested records do
> appear.

I suspect that this could be made a little more forgiving without losing rigor, as long as none of the fields are objects, in which case nearly all bets are off. Then again, the rule that tuples designate records is a lot simpler than something like "tuples designate records, but you can use lists too, unless of course you have an object field in your array, in which case you really need to use tuples, except sometimes lists will work anyway, depending on where the object field is". So maybe it's best just to keep it strict.

> In any case, you can also use rec.fromrecords to build recarrays from
> lists of lists. This breaks the aforementioned rule, but Travis allowed
> this because rec.* had to mimic numarray behaviour as much as possible.
> Here is an example of use:
> [SNIP]

Just for completeness, I benchmarked the fromiter and map(tuple, results) solutions as well. Map is fastest, followed by fromiter, list comprehension and then fromrecords. The differences are pretty minor however, so I'd stick with whatever seems clearest.

tim

print Timer("numpy.rec.fromrecords(results, dtype=mydescriptor)",
    """import numpy; results = [['M',64.0,75.0]]*100000;
    mydescriptor = {'names': ('gender','age','weight'),
    'formats':('S1','f4','f4')}""").repeat(3,10)

print Timer("numpy.array([tuple(row) for row in results], dtype=mydescriptor)",
    """import numpy; results = [['M',64.0,75.0]]*100000;
    mydescriptor = {'names': ('gender','age','weight'),
    'formats':('S1','f4','f4')}""").repeat(3,10)

print Timer("numpy.fromiter((tuple(x) for x in results), dtype=mydescriptor, count=len(results))",
    """import numpy; results = [['M',64.0,75.0]]*100000;
    mydescriptor = {'names': ('gender','age','weight'),
    'formats':('S1','f4','f4')}""").repeat(3,10)

print Timer("numpy.array(map(tuple, results), dtype=mydescriptor)",
    """import numpy; results = [['M',64.0,75.0]]*100000;
    mydescriptor = {'names': ('gender','age','weight'),
    'formats':('S1','f4','f4')}""").repeat(3,10)

===>
[1.3928521641717035, 1.3892659541925021, 1.3949996438094785]
[1.344854164425926, 1.3157404083479882, 1.3207066819944986]
[1.2768430065832401, 1.2742884919731416, 1.2736657871321633]
[1.2081393026208644, 1.2025276955590734, 1.205871416618594]
From: Sven Schreiber <svetosch@gm...> - 2006-11-13 10:03:38

Pierre GM wrote:
> On Sunday 12 November 2006 17:08, A. M. Archibald wrote:
>> On 12/11/06, Keith Goodman <kwgoodman@...> wrote:
>>> Is anybody interested in making x.max() and nanmax() behave the same
>>> for matrices, except for the NaN part? That is, make
>>> numpy.matlib.nanmax return a matrix instead of an array.
>
> Or, you could use masked arrays... In the new implementation, you can
> add a mask to a subclassed array (such as matrix) to get a regular
> masked array. If you fill this masked array, you get an array of the
> same subclass.

That is very interesting, but I agree with Keith and would actually call this a bug. (If still present in 1.0, that is; haven't checked, I think Keith used some rc.) One proclaimed goal of numpy for the 1.0 release has been to be as matrix-friendly as possible, for which I am very grateful. Still, the use of masked arrays looks more attractive every time they're mentioned...

sven
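A minimal sketch of the masked-array route Pierre describes: mask the NaNs and then reduce; the masked maximum simply ignores the masked entries.

```python
import numpy as np
import numpy.ma as ma

a = np.array([[1.0, np.nan],
              [3.0, 2.0]])
m = ma.masked_invalid(a)   # mask the NaN entries

assert m.max() == 3.0      # global max, NaN ignored
# Row-wise maxima, ignoring the masked NaN in the first row.
assert ma.filled(m.max(axis=1), np.nan).tolist() == [1.0, 3.0]
```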
From: Francesc Altet <faltet@ca...>  2006-11-13 08:19:24

On Mon, 13 Nov 2006 at 02:07 -0500, Erin Sheldon wrote:
> On 11/13/06, Charles R Harris <charlesr.harris@...> wrote:
>> On 11/12/06, Erin Sheldon <erin.sheldon@...> wrote:
>>> Hi all --
>>>
>>> Thanks to everyone for the suggestions.
>>> I think map(tuple, list) is probably the most compact,
>>> but the list comprehension also works well.
>>>
>>> Because map() is probably going to disappear someday, I'll
>>> stick with the list comprehension.
>>>     array([tuple(row) for row in result], dtype=dtype)
>>>
>>> That said, is there some compelling reason that the array
>>> function doesn't support this operation?
>>
>> My understanding is that the array needs to be allocated up front.
>> Since the list comprehension is iterative, it is impossible to know
>> how big the result is going to be.
>
> Isn't it the same with a list of tuples? But you can send that directly
> to the array constructor. I don't see the fundamental difference, except
> that the code might be simpler to write.

I think the correct explanation is that Travis has chosen a tuple as the
way to refer to an inhomogeneous list of values (a record) and a list as
the way to refer to a homogeneous list of values. I'm not completely sure
why he did this, but I guess the reason was to be able to distinguish the
records in scenarios where nested records do appear.

In any case, you can also use rec.fromrecords to build recarrays from
lists of lists. This breaks the aforementioned rule, but Travis allowed
this because rec.* had to mimic numarray behaviour as much as possible.
Here is an example of use:

In [46]: mydescriptor = {'names': ('gender','age','weight'),
   ....:                 'formats': ('S1','f4','f4')}
In [47]: results = [['M',64.0,75.0],['F',25.0,60.0]]
In [48]: a = numpy.rec.fromrecords(results, dtype=mydescriptor)
In [49]: b = numpy.array([tuple(row) for row in results], dtype=mydescriptor)
In [50]: a == b
Out[50]: recarray([True, True], dtype=bool)

OTOH, it is said in the docs that fromrecords is discouraged because it is
somewhat slow, but apparently it has similar performance to using list
comprehensions:

In [51]: Timer("numpy.rec.fromrecords(results, dtype=mydescriptor)",
   ....:       "import numpy; results = [['M',64.0,75.0]]*10000; "
   ....:       "mydescriptor = {'names': ('gender','age','weight'), "
   ....:       "'formats': ('S1','f4','f4')}").repeat(3,10)
Out[51]: [0.44204592704772949, 0.43584394454956055, 0.50145101547241211]

In [52]: Timer("numpy.array([tuple(row) for row in results], dtype=mydescriptor)",
   ....:       "import numpy; results = [['M',64.0,75.0]]*10000; "
   ....:       "mydescriptor = {'names': ('gender','age','weight'), "
   ....:       "'formats': ('S1','f4','f4')}").repeat(3,10)
Out[52]: [0.49885106086730957, 0.4325258731842041, 0.43297886848449707]

HTH,

--
Francesc Altet        |  Be careful about using the following code --
Carabos Coop. V.      |  I've only proven that it works,
http://www.carabos.com  |  I haven't tested it.  -- Donald Knuth
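[Editor's note: the tuple-vs-list rule can be demonstrated directly. A sketch against a current NumPy; the exact exception type and message for the rejected case vary across versions, so both `TypeError` and `ValueError` are caught.]

```python
import numpy as np

dt = np.dtype({'names': ('gender', 'age', 'weight'),
               'formats': ('S1', 'f4', 'f4')})

# tuples designate records, so a list of tuples is accepted
ok = np.array([('M', 64.0, 75.0), ('F', 25.0, 60.0)], dtype=dt)

# lists are treated as homogeneous sequences, so the same data as
# lists of lists is rejected by the plain array constructor...
try:
    np.array([['M', 64.0, 75.0]], dtype=dt)
    list_rows_accepted = True
except (TypeError, ValueError):
    list_rows_accepted = False

# ...while rec.fromrecords accepts lists of lists, mimicking numarray
rec = np.rec.fromrecords([['M', 64.0, 75.0]], dtype=dt)
```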
From: Erin Sheldon <erin.sheldon@gm...>  2006-11-13 07:07:32

On 11/13/06, Charles R Harris <charlesr.harris@...> wrote:
> On 11/12/06, Erin Sheldon <erin.sheldon@...> wrote:
>> Hi all --
>>
>> Thanks to everyone for the suggestions.
>> I think map(tuple, list) is probably the most compact,
>> but the list comprehension also works well.
>>
>> Because map() is probably going to disappear someday, I'll
>> stick with the list comprehension.
>>     array([tuple(row) for row in result], dtype=dtype)
>>
>> That said, is there some compelling reason that the array
>> function doesn't support this operation?
>
> My understanding is that the array needs to be allocated up front. Since
> the list comprehension is iterative, it is impossible to know how big
> the result is going to be.

Isn't it the same with a list of tuples? But you can send that directly to
the array constructor. I don't see the fundamental difference, except that
the code might be simpler to write.

> BTW, it might be possible to use fromfile('name', dtype=dtype) to do
> what you want if the data is stored by rows in a file.

I'm reading from a database.

Erin
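[Editor's note: for data arriving row by row from an iterative source such as a database cursor, `fromiter` with an explicit `count` lets NumPy allocate the result up front without building an intermediate list. A sketch; the `cursor` generator below is a hypothetical stand-in for a real DB cursor.]

```python
import numpy as np

dt = np.dtype([('gender', 'S1'), ('age', 'f4'), ('weight', 'f4')])

def cursor():
    """Hypothetical stand-in for a database cursor yielding rows lazily."""
    yield ['M', 64.0, 75.0]
    yield ['F', 25.0, 60.0]

# count=2 lets numpy allocate the output array once, up front,
# so no intermediate list of tuples is ever built
a = np.fromiter((tuple(row) for row in cursor()), dtype=dt, count=2)
```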