| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2000 | 8 | 49 | 48 | 28 | 37 | 28 | 16 | 16 | 44 | 61 | 31 | 24 |
| 2001 | 56 | 54 | 41 | 71 | 48 | 32 | 53 | 91 | 56 | 33 | 81 | 54 |
| 2002 | 72 | 37 | 126 | 62 | 34 | 124 | 36 | 34 | 60 | 37 | 23 | 104 |
| 2003 | 110 | 73 | 42 | 8 | 76 | 14 | 52 | 26 | 108 | 82 | 89 | 94 |
| 2004 | 117 | 86 | 75 | 55 | 75 | 160 | 152 | 86 | 75 | 134 | 62 | 60 |
| 2005 | 187 | 318 | 296 | 205 | 84 | 63 | 122 | 59 | 66 | 148 | 120 | 70 |
| 2006 | 460 | 683 | 589 | 559 | 445 | 712 | 815 | 663 | 559 | 930 | 373 | |
From: Robert K. <rob...@gm...> - 2006-11-08 23:27:06
|
koara wrote: > koara wrote: >> Hello, >> >> a piece of my code started giving strange results with certain data; i >> managed to track down the cause to a slice array assignment. In the >> .... > > Also if i first build a sequence of columns and then use > numpy.transpose(numpy.vstack(sequence)) the result is ok. But the > arrays can be quite big so in this case i fear space overhead -- or am > i wrong? You might want to use column_stack() instead, for clarity if nothing else. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
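As a sketch of Robert's suggestion (not part of the original exchange; the column contents below are illustrative, the thread's real arrays have shape (22973,)), the two ways of assembling columns give the same result:

```python
import numpy as np

cols = [np.arange(5), np.arange(5) * 2, np.arange(5) * 3]

a = np.transpose(np.vstack(cols))  # koara's original approach: stack as rows, then transpose
b = np.column_stack(cols)          # Robert's suggestion: stack directly as columns

assert a.shape == b.shape == (5, 3)
assert (a == b).all()
```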
From: Albert S. <fu...@gm...> - 2006-11-08 23:18:57
|
Argh, On Thu, 09 Nov 2006, Albert Strasheim wrote: > %_unpackaged_files_terminate_build 1 Cut and paste error. Make that %_unpackaged_files_terminate_build 0 Cheers, Albert |
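For reference, a small hypothetical helper (not from the thread) that appends the corrected setting to ~/.rpmmacros; the macro name and value are the ones given in Albert's messages:

```python
import os

macro = "%_unpackaged_files_terminate_build 0\n"
rcfile = os.path.expanduser("~/.rpmmacros")

# append the setting only if it is not already present
with open(rcfile, "a+") as f:
    f.seek(0)
    if macro not in f.read():
        f.write(macro)
```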
From: Albert S. <fu...@gm...> - 2006-11-08 23:14:55
|
Howdy On Wed, 08 Nov 2006, Vincent Broman wrote: > Building an rpm of numpy-1.0.1.dev3432-1 on fedora core 6 is failing for me. > With either python-2.4.3 or 2.4.4 I try "python setup.py bdist_rpm" > inside the source directory, and everything seems to go well except for > many "File listed twice" messages for all kinds of files, > and then at the end there is the error message: > > error: Installed (but unpackaged) file(s) found > > with a long list of .pyc and .pyo files. > The same seems to happen to me with the source rpm I downloaded > from sourceforge. And I vaguely recall the same happening with > my fedora core 4 + updates, which I had till my recent upgrade to fc6. > > The list or lists of files to install are not in the %files section, > but are generated dynamically somehow by setup.py, but I don't > know enough about distutils to debug that. > > Anyone succeed with fc6 rpms or have any suggestion? I also ran into this problem a few days ago. I'm guessing it's a bug in either distutils or numpy.distutils. Anyway, you can hack it to work by putting %_unpackaged_files_terminate_build 1 in your ~/.rpmmacros. More details here: http://www.rpm.org/hintskinks/unpackaged-files/ With that in place, bdist_rpm works on my FC6 machine. rpm -qpl on the RPM seems to indicate a "sane" RPM. Cheers, Albert |
From: koara <ko...@at...> - 2006-11-08 23:10:00
|
koara wrote: > Hello, > > a piece of my code started giving strange results with certain data; i > managed to track down the cause to a slice array assignment. In the > .... Also if i first build a sequence of columns and then use numpy.transpose(numpy.vstack(sequence)) the result is ok. But the arrays can be quite big so in this case i fear space overhead -- or am i wrong? Cheers. |
From: Vincent B. <vin...@na...> - 2006-11-08 22:55:51
|
Building an rpm of numpy-1.0.1.dev3432-1 on fedora core 6 is failing for me. With either python-2.4.3 or 2.4.4 I try "python setup.py bdist_rpm" inside the source directory, and everything seems to go well except for many "File listed twice" messages for all kinds of files, and then at the end there is the error message: error: Installed (but unpackaged) file(s) found with a long list of .pyc and .pyo files. The same seems to happen to me with the source rpm I downloaded from sourceforge. And I vaguely recall the same happening with my fedora core 4 + updates, which I had till my recent upgrade to fc6. The list or lists of files to install are not in the %files section, but are generated dynamically somehow by setup.py, but I don't know enough about distutils to debug that. Anyone succeed with fc6 rpms or have any suggestion? Vincent Broman SSC-SD |
From: koara <ko...@at...> - 2006-11-08 22:50:41
|
Hello, a piece of my code started giving strange results with certain data; i managed to track down the cause to a slice array assignment. In the following code snip, 'mat' is a numpy.array with shape=(22973, 1009), 'vec' is a numpy.array with shape=(22973,), both of type int:

    for i in xrange(1009):
        ...
        fr = vec[10001]
        mat[:, i] = vec  # assign whole column
        if mat[10001, i] != fr:
            print "how come?"
        ...

For elements beyond index 10000, nothing is assigned (i.e., numpy.sum(mat[row, :]) is zero for any row > 10000). As soon as i replace the assignment with a cycle that assigns each element explicitly (for j in xrange(22973): mat[j, i] = vec[j]), everything's OK. With some matrices the above seems to work fine, as well as for smaller dimensions, so i am unable to provide a full simple example. Any ideas? I am using Enthought Python (python 2.4.3), which uses numpy version 0.9.9.
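A self-contained check of the column-assignment pattern koara describes, at a smaller width so it runs quickly (this assumes a current NumPy; the report above is against numpy 0.9.9, where the behaviour was apparently buggy):

```python
import numpy as np

mat = np.zeros((22973, 5), dtype=int)
vec = np.arange(22973, dtype=int)

for i in range(mat.shape[1]):
    mat[:, i] = vec  # assign the whole column at once

# every column should equal vec, including rows beyond index 10000
assert (mat[10001:, :] == vec[10001:, np.newaxis]).all()
```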
From: Sven S. <sve...@gm...> - 2006-11-08 22:00:35
|
Johannes Loehnert wrote: > Hi, > > in addition to the previous answers, I'd like to say that it is strongly > preferable to use dot(A,dot(B,C)) or dot(dot(A,B),C) instead of A*B*C. > > The reason is that with dot(), you can control which operation is performed > first, which can *massively* influence the time needed, depending on the > matrices involved. A*B*C will always be evaluated left-to-right (if I > remember correctly). Well, what about A*(B*C)? |
From: Pierre GM <pgm...@gm...> - 2006-11-08 21:08:35
|
> A good candidate for a "should be masked" marker is NaN. It is supposed
> to mean, more or less, "no sensible value".

Which might turn out to be the best indeed. Michael's application would then look like

    >>> import numpy as N
    >>> import maskedarray as MA
    >>> maskit = N.nan
    >>> test = N.array([1, 2, maskit])
    >>> test_ma1 = MA.array(test, mask=N.isnan(test))

> Switching to a MaskedArray might have been a better idea, but the NaNs
> were a rare occurrence.

Once again, that's a situation when one would use masked arrays.

> If you've got floating point, you can again fill in NaNs, but you have a
> good point about wanting to extract the original values that were masked
> out. Depending on what one is doing, one might want one or the other.

In any case, I think we should stick to the numpy.core.ma default behavior for backwards compatibility. If you really want to distinguish between several kinds of masks (one for missing data, one for data to discard temporarily), that could be done by defining a special subclass. But is it really needed? A smart use of filled and masked_values should do the trick in most cases.
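A sketch of the NaN-as-mask idiom using numpy.ma (the thread discusses the then-separate maskedarray package, but the call is essentially the same; data values are made up):

```python
import numpy as np
import numpy.ma as ma

x = np.array([1.0, 2.0, np.nan, 4.0])
xm = ma.array(x, mask=np.isnan(x))  # mask wherever the data is NaN

print(xm.mean())       # 2.333..., the masked NaN is ignored
print(xm.filled(0.0))  # [1. 2. 0. 4.] -- export with masked slots filled
```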
From: A. M. A. <per...@gm...> - 2006-11-08 18:42:09
|
On 08/11/06, Pierre GM <pgm...@gm...> wrote:

> I like your idea, but not its implementation. If MA.masked_singleton is
> defined as an object, as you suggest, then the dtype of the ndarray it is
> passed to becomes 'object', as you pointed out, and that is not something
> one would naturally expect, as basic numerical functions don't work well
> with the 'object' dtype (just try N.sqrt(N.array([1],dtype=N.object)) to
> see what I mean).
> Even if we can construct a mask rather easily at the creation of the
> masked array, following your 'a==masked' suggestion, we still need to get
> the dtype of the non-masked section, and that doesn't seem trivial...

A good candidate for a "should be masked" marker is NaN. It is supposed to mean, more or less, "no sensible value". Unfortunately, integer types do not have such a special value. It's also conceivable that some user might want to keep NaNs in their array separate from the mask. Finally, on some hardware, operations with NaN are very slow (so leaving them in the array, even masked, might not be a good idea). The reason I suggest this is that in the last major application I had for numpy, one stage of the problem would occasionally result in NaNs for certain values, but the best thing I could do was leave them in place to represent "no data". Switching to a MaskedArray might have been a better idea, but the NaNs were a rare occurrence.

> About the conversion to ndarray:
> By default, the result should have the same dtype as the _data section.
> For this reason, I disagree with your idea of "(returning) an object
> ndarray with the missing value containing the masked singleton". If you
> really want an object ndarray, you can use the filled method or the
> filled function, with your own definition of the filling value (such as
> your MaskedScalar).

If you've got floating point, you can again fill in NaNs, but you have a good point about wanting to extract the original values that were masked out. Depending on what one is doing, one might want one or the other.

A. M. Archibald
From: Francesc A. <fa...@ca...> - 2006-11-08 18:00:53
|
On Wednesday, 08 November 2006 at 18:36, amit soni wrote:

> how can I calculate arctan of a number in python?
> thanks
> Amit

Have you tried to read the numpy docs? Some googling? In general, you can get a lot of insight by querying Google with:

    "your words" site:www.scipy.org

In particular, try:

    arctan site:www.scipy.org

Cheers,

--
>0,0<   Francesc Altet     http://www.carabos.com/
V   V   Cárabos Coop. V.   Enjoy Data
 "-"
From: Nadav H. <na...@vi...> - 2006-11-08 17:49:05
|
There is an arctan function in numpy, and in math (atan, atan2).

Nadav.

_____

From: num...@li... [mailto:num...@li...] On Behalf Of amit soni
Sent: Wednesday, November 08, 2006 19:36
To: num...@li...
Subject: [Numpy-discussion] Calculating tan inverse

how can I calculate arctan of a number in python?
thanks
Amit
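A quick illustration of the functions Nadav names (values chosen just for the sketch):

```python
import math
import numpy as np

print(np.arctan(1.0))         # 0.7853981... (pi/4); also works elementwise on arrays
print(math.atan(1.0))         # same value for a plain Python float
print(math.atan2(1.0, -1.0))  # 2.3561944... (3*pi/4); quadrant-aware two-argument form
```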
From: amit s. <ami...@ya...> - 2006-11-08 17:36:35
|
how can I calculate arctan of a number in python? thanks Amit |
From: Fernando P. <fpe...@gm...> - 2006-11-08 16:57:45
|
On 11/8/06, Stefan van der Walt <st...@su...> wrote: > This looks very interesting. It works for me on simple scripts, but > whenever I include the lines > > from numpy.testing import set_local_path > set_local_path('../../..') > > in the input, pycachegrind aborts with > > File "/home/stefan//lib/python2.4/site-packages/numpy/testing/numpytest.py", line 68, in set_local_path > if f.f_locals['__name__']=='__main__': > KeyError: '__name__' > > I guess this is because the script is run in a separate namespace. > I've managed to work around the problem by changing the definition of > 'run' to: Good catch, thanks, I've fixed the public version with your changes. Best, f |
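The change Stefan refers to is quoted in full in his message further down the page; a Python 3 rendering of the same idea (injecting __name__ and __file__ into the execution namespace) might look like this:

```python
import sys

def run(code):
    # give the profiled script the namespace entries it expects
    namespace = {'__name__': '__main__', '__file__': sys.argv[0]}
    exec(code, namespace)
```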
From: Andrew S. <str...@as...> - 2006-11-08 16:51:52
|
David Cournapeau wrote: > Andrew Straw wrote: > >> David Cournapeau wrote: >> >> >>> - To send data from the calling process to matlab, you first have to >>> create a mxArray, which is the basic matlab handler of a matlab array, >>> and populating it. Using mxArray is very ackward : you cannot create >>> mxArray from existing data, you have to copy data to them, etc... >>> >>> >> My understanding, never having done it, but from reading the docs, is >> that you can create a "hybrid array" where you manage the memory. Thus, >> you can create an mxArray from existing data. However, the docs >> basically say that this is too hard for most mortals (and they may well >> be right -- too painful for me, anyway)! >> >> > Ok, I have looked at it. It is not hard, it is just totally brain > damaged: there is no way to destroy a mxArray without destroying the > data it is holding, even after a call with mxSetPr. So the data > referenced by the pointer given to mxSetPr is always destroyed by > mxDestroyArray; I don't see any way to use this to avoid copy... They > could at least have given a function which frees the data buffer and one > which destroys the other stuff; as it is, it is totally useless, unless > you don't mind memory leaks. > It does sound brain damaged, I agree. But here's a suggestion: can you keep a pool of unused mxArrays rather than calling mxDestroyArray? I guess without the payload, they're just a few bytes and shouldn't take up that much space. |
From: Johannes L. <a.u...@gm...> - 2006-11-08 16:16:31
|
Hi, in addition to the previous answers, I'd like to say that it is strongly preferable to use dot(A,dot(B,C)) or dot(dot(A,B),C) instead of A*B*C. The reason is that with dot(), you can control which operation is performed first, which can *massively* influence the time needed, depending on the matrices involved. A*B*C will always be evaluated left-to-right (if I remember correctly). Johannes |
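A small sketch of why the grouping matters (the shapes are made up for illustration): both parenthesisations give the same product, but the size of the intermediate differs enormously.

```python
import numpy as np

A = np.ones((1000, 2))
B = np.ones((2, 1000))
C = np.ones((1000, 3))

r1 = np.dot(np.dot(A, B), C)  # builds a 1000x1000 intermediate
r2 = np.dot(A, np.dot(B, C))  # builds only a 2x3 intermediate

assert np.allclose(r1, r2)
```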
From: Stefan v. d. W. <st...@su...> - 2006-11-08 15:22:37
|
On Wed, Nov 08, 2006 at 01:32:44AM -0700, Fernando Perez wrote:

> Hi all,
>
> in the past, Arnd Baecker has made a number of very useful posts on this
> matter, and provided some nice utilities to do it. I now needed to
> profile some fairly complex codes prior to a big optimization push, so I
> went over his material and wrote a little tool to make the whole process
> as painless as possible. Here it is, hoping others may find it useful:
>
> http://amath.colorado.edu/faculty/fperez/python/profiling/

This looks very interesting. It works for me on simple scripts, but whenever I include the lines

    from numpy.testing import set_local_path
    set_local_path('../../..')

in the input, pycachegrind aborts with

    File "/home/stefan//lib/python2.4/site-packages/numpy/testing/numpytest.py", line 68, in set_local_path
        if f.f_locals['__name__']=='__main__':
    KeyError: '__name__'

I guess this is because the script is run in a separate namespace. I've managed to work around the problem by changing the definition of 'run' to:

    def run(code):
        loc = locals()
        loc['__name__'] = '__main__'
        loc['__file__'] = sys.argv[0]
        exec code in locals()

Cheers
Stéfan
From: Joris De R. <jo...@st...> - 2006-11-08 15:01:57
|
[im]: Sorry if this is an obvious question, but what is the easiest way to multiply matrices in numpy? Suppose I want to do A=B*C*D. The ' * ' operator apparently does element wise multiplication, as does the 'multiply' ufunc. [im] All I could find was the numeric function 'matrix_multiply, but this only takes two arguments. Have a look at the examples "dot()" and "mat()" in the Numpy Example List. http://www.scipy.org/Numpy_Example_List J. |
From: Francesc A. <fa...@ca...> - 2006-11-08 15:01:51
|
On Wednesday, 08 November 2006 at 15:55, Charles R Harris wrote:

> Try
>
> In [8]: tmp = fromfile('tmp.txt', sep=' ', dtype=int)
>
> In [9]: a = tmp[:4].reshape(2,2)
>
> In [10]: b = tmp[4:].reshape(2,2)
>
> In [11]: a
> Out[11]:
> array([[1, 2],
>        [3, 9]])
>
> In [12]: b
> Out[12]:
> array([[2, 3],
>        [4, 4]])

Yeah. Much, much better indeed.

--
>0,0<   Francesc Altet     http://www.carabos.com/
V   V   Cárabos Coop. V.   Enjoy Data
 "-"
From: Charles R H. <cha...@gm...> - 2006-11-08 14:55:53
|
On 11/8/06, Francesc Altet <fa...@ca...> wrote:

> > On Wednesday, 08 November 2006 at 13:42, amit soni wrote:
> > Hi,
> > i have a file with following format:
> > 1 2
> > 3 9
> > 2 3
> > 4 4
> > I want to read it and then store the values into two matrices, s.t.
> > A=[1 2;3 9]
> > B=[2 3;4 4]
> >
> > Can anyone tell me how to do this in python?
> > thanks
> > Amit

Try

    In [8]: tmp = fromfile('tmp.txt', sep=' ', dtype=int)

    In [9]: a = tmp[:4].reshape(2,2)

    In [10]: b = tmp[4:].reshape(2,2)

    In [11]: a
    Out[11]:
    array([[1, 2],
           [3, 9]])

    In [12]: b
    Out[12]:
    array([[2, 3],
           [4, 4]])

Chuck
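A runnable version of Chuck's recipe (the file name and the step that writes the sample data are just for the sketch):

```python
import numpy as np

# write the example data from the original question to a temporary file
with open("tmp.txt", "w") as f:
    f.write("1 2\n3 9\n2 3\n4 4\n")

tmp = np.fromfile("tmp.txt", sep=" ", dtype=int)
A = tmp[:4].reshape(2, 2)  # [[1, 2], [3, 9]]
B = tmp[4:].reshape(2, 2)  # [[2, 3], [4, 4]]
```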
From: Sven S. <sve...@gm...> - 2006-11-08 14:52:25
|
izak marais wrote: > Hi > > Sorry if this is an obvious question, but what is the easiest way to > multiply matrices in numpy? Suppose I want to do A=B*C*D. The ' * ' > operator apparently does element wise multiplication, as does the > 'multiply' ufunc. All I could find was the numeric function > 'matrix_multiply, but this only takes two arguments. > > Thanks in advance! > Izak There are (at least) two ways: You can use 'dot', possibly nested, or you can convert your arrays into the matrix subclass, for which '*' is matrix multiplication. I.e. mat(B)*mat(C)*mat(D) does what you want. If you "only" deal with algebra-style matrices (2d-arrays), consider using the matrix subclass as much as you can. E.g. use the functions in numpy.matlib to build your inputs. -sven |
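A short sketch of the two routes Sven describes (the matrices are made up; np.asmatrix is used here in place of the mat() shorthand in the message, and note that the matrix class is discouraged in current NumPy in favour of plain arrays):

```python
import numpy as np

B = np.array([[1.0, 2.0], [3.0, 4.0]])
C = np.eye(2)
D = np.array([[0.0, 1.0], [1.0, 0.0]])

A1 = np.dot(B, np.dot(C, D))                            # plain 2-d arrays with dot()
A2 = np.asmatrix(B) * np.asmatrix(C) * np.asmatrix(D)   # matrix subclass: '*' is matrix multiply

assert np.allclose(A1, A2)
```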
From: Charles R H. <cha...@gm...> - 2006-11-08 14:49:52
|
On 11/8/06, Keith Goodman <kwg...@gm...> wrote: > > On 11/8/06, izak marais <iza...@ya...> wrote: > > > Sorry if this is an obvious question, but what is the easiest way to > > multiply matrices in numpy? Suppose I want to do A=B*C*D. The ' * ' > operator > > apparently does element wise multiplication, as does the 'multiply' > ufunc. > > All I could find was the numeric function 'matrix_multiply, but this > only > > takes two arguments. Same with the operator *, it takes two arguments but is in infix order, i.e., left side and right side. If B and C and D are matrices, then '*' is matrix multiplication. And if they are arrays: A = dot(B,dot(C,D)) Python has a dearth of recognized operators, which makes this necessary once '*' is used for elementwise multiplication; it's a long-standing complaint. You can use matrices in numpy, in which case '*' is used for matrix multiplication like in matlab, but I think it would be better to get used to using arrays, as they are the numpy core. Chuck |
From: Albert S. <as...@di...> - 2006-11-08 14:47:52
|
> Izak, you should first convert your arrays to matrices using
> ``numpy.matrix``.

or numpy.asmatrix()

> --Rob
From: Roberto De A. <ro...@de...> - 2006-11-08 14:45:35
|
On 11/8/06, Keith Goodman <kwg...@gm...> wrote: > > Sorry if this is an obvious question, but what is the easiest way to > > multiply matrices in numpy? Suppose I want to do A=B*C*D. > If B and C and D are matrices, then '*' is matrix multiplication. I think the difference between arrays and matrices is not clear to him (he's clearly multiplying arrays). Izak, you should first convert your arrays to matrices using ``numpy.matrix``. --Rob |
From: Francesc A. <fa...@ca...> - 2006-11-08 14:45:26
|
On Wednesday, 08 November 2006 at 13:42, amit soni wrote:

> Hi,
> i have a file with following format:
> 1 2
> 3 9
> 2 3
> 4 4
> I want to read it and then store the values into two matrices, s.t.
> A=[1 2;3 9]
> B=[2 3;4 4]
>
> Can anyone tell me how to do this in python?
> thanks
> Amit

There are many possibilities. One of them could be:

    In [64]: a = []; b = []

    In [65]: for i, line in enumerate(file("/tmp/data.txt")):
       ....:     if i < 2:
       ....:         a.extend([float(n) for n in line.split()])
       ....:     else:
       ....:         b.extend([float(n) for n in line.split()])
       ....:

    In [66]: A = numpy.array(a).reshape(2,2); B = numpy.array(b).reshape(2,2)

    In [67]: A, B
    Out[67]:
    (array([[ 1.,  2.],
            [ 3.,  9.]]),
     array([[ 2.,  3.],
            [ 4.,  4.]]))

HTH,

--
>0,0<   Francesc Altet     http://www.carabos.com/
V   V   Cárabos Coop. V.   Enjoy Data
 "-"
From: David C. <da...@ar...> - 2006-11-08 14:44:27
|
Andrew Straw wrote: > David Cournapeau wrote: > >> - To send data from the calling process to matlab, you first have to >> create a mxArray, which is the basic matlab handler of a matlab array, >> and populating it. Using mxArray is very ackward : you cannot create >> mxArray from existing data, you have to copy data to them, etc... >> > My understanding, never having done it, but from reading the docs, is > that you can create a "hybrid array" where you manage the memory. Thus, > you can create an mxArray from existing data. However, the docs > basically say that this is too hard for most mortals (and they may well > be right -- too painful for me, anyway)! > Ok, I have looked at it. It is not hard, it is just totally brain damaged: there is no way to destroy a mxArray without destroying the data it is holding, even after a call with mxSetPr. So the data referenced by the pointer given to mxSetPr is always destroyed by mxDestroyArray; I don't see any way to use this to avoid copy... They could at least have given a function which frees the data buffer and one which destroys the other stuff; as it is, it is totally useless, unless you don't mind memory leaks. David |