From: Alan G I. <ai...@am...> - 2006-07-03 05:46:28
|
> Alan G Isaac wrote:
>> I argue that rand and randn should accept a tuple as the
>> first argument. Whether the old behavior is also allowed,
>> I have no opinion. But the numpy-consistent behavior should
>> definitely be allowed. I perhaps wrongly understood Robert
>> to argue that the current behavior of rand and randn is not
>> a wart since i. alternative tuple-accepting functions are
>> available and ii. the surprising behavior is documented.
>> This seems quite wrong to me, and I am fairly confident that
>> such an argument would not be offered except in defence of
>> legacy code.

On Sun, 02 Jul 2006, Robert Kern apparently wrote:
> i. Yes, you're still misunderstanding my arguments.
> ii. I'm bloody sick of rehashing it, so I won't be responding further.

Sorry, I should not have said: "not a wart". I perhaps should have instead said: "an acceptable wart", due to issues of backward compatibility. At least that's what you implied here:
http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3150643

And note that you emphasized the availability of the alternative functions here:
http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3150702

I made the documentation comment based on your action in response to this conversation: adding documentation.

You make a claim, not an argument, when you say:
http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3150643
*Changing* the API of rand() and randn() doesn't solve any problem. *Removing* them might.

Your primary argument against changing the API, as far as I can see, is that allowing *both* the extant behavior and the numpy-consistent behavior will result in confusing code.
http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3150643

Is this a knock-down argument? I think not. But in any case, I did not argue (above) for the combined behaviors: only for the numpy-consistent behavior. (Or for removing rand and randn, an action which I view as inferior but acceptable, and which you seem---at the link above---willing to consider.)

To repeat a point I made before:
http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3150728
numpy should take a step so that this question goes away, rather than maintain the status quo and see it crop up continually. (I.e., its recurrence should be understood to signal a problem.)

Apologies in advance for any misrepresentations,
Alan Isaac |
From: Tim L. <tim...@gm...> - 2006-07-03 05:46:17
|
On 7/3/06, Keith Goodman <kwg...@gm...> wrote:
> I have a list x
>
> >> x
> [[1, None], [2, 3]]
>
> that I generate outside of numpy (with plain python). What is the best
> way to convert x into an array? This doesn't work
>
> >> asarray(x)
> array([[1, None],
>        [2, 3]], dtype=object)  <-- I'm hoping for something like dtype=float64
>
> Is there something better than None to represent missing values so
> that when I convert to numpy arrays (actually matrices) I'll be all
> set? (I could use -99, but that would be even more embarrassing than
> my python skills.)
>
> If there is nothing better than None, what's a fast way to take care
> of the None's if x is fairly large?

You might want to have a look at the masked array module in numpy (numpy.ma). The following example might help to get you started.

>>> import numpy as N
>>> x = [[1, None], [2, 3]]
>>> m = N.ma.array(x, mask=N.equal(x, None))
>>> print m
[[1 --]
 [2 3]]

Cheers,

Tim

> Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job easier
> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
> |
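An alternative to the masked-array approach above, not suggested in the thread: map None to NaN so the result is a plain float64 array (the variable names here are just for illustration):

```python
import numpy as np

x = [[1, None], [2, 3]]

# Replace None with NaN element-wise, then build an ordinary float array
a = np.array([[np.nan if v is None else float(v) for v in row] for row in x])
```

NaN then propagates through arithmetic as a missing value, and `np.isnan(a)` recovers the mask when needed.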
From: Keith G. <kwg...@gm...> - 2006-07-03 05:27:54
|
I have a list x

>> x
[[1, None], [2, 3]]

that I generate outside of numpy (with plain python). What is the best way to convert x into an array? This doesn't work

>> asarray(x)
array([[1, None],
       [2, 3]], dtype=object)  <-- I'm hoping for something like dtype=float64

Is there something better than None to represent missing values so that when I convert to numpy arrays (actually matrices) I'll be all set? (I could use -99, but that would be even more embarrassing than my python skills.)

If there is nothing better than None, what's a fast way to take care of the None's if x is fairly large? |
From: David C. <da...@ar...> - 2006-07-03 04:49:43
|
Albert Strasheim wrote:
> Hey Chuck
>
>> -----Original Message-----
>> From: num...@li... [mailto:numpy-dis...@li...] On Behalf Of Charles R Harris
>> Sent: 01 July 2006 19:57
>> To: Robert Kern
>> Cc: num...@li...
>> Subject: Re: [Numpy-discussion] Time for beta1 of NumPy 1.0
>>
>> All,
>>
>> This is a bit off topic, but a while ago there were some complaints about
>> the usefulness of distutils. I note that KDE has gone over to using cmake
>> after trying scons. I am not familiar with cmake, but perhaps someone here
>> knows more and can comment on its suitability.
>
> CMake definitely warrants investigation, but I think SCons might be a better
> way to go. I think it would make it easier to reuse large parts of the
> existing build code (e.g. conv_template.py could be converted into a SCons
> builder without too much effort). Reusing parts of distutils and setuptools
> would also be easier if the new tool is somehow Python-aware.
>
> I think the main problem with distutils in the NumPy context is that it was
> never designed to build C/Fortran code over so many platforms with so many
> possible build configurations. python setup.py install works pretty well,
> but any kind of non-default configuration can become a bit hairy, despite
> the excellent work on NumPy extensions to distutils.
>
> I'd like to take a stab at doing something with SCons in a few weeks' time.
> Does anybody want to brainstorm on some ideas for what is needed from a
> better build system for NumPy? Maybe a wiki page?

I have a small experience with scons, as a replacement of the auto* tools for small packages of my own (requirements: cross-building, library and header dependency checking, building of libraries, etc.). So I am willing to share my somewhat limited experience with scons (the code I am building with scons uses cblas/clapack, and has libraries + some unit testing, so we would not start from scratch). Also, I have access to x86 and ppc linux + mac os x + windows easily, which makes it easy to test on some common platforms.

David

P.S.: Some comments on scons. I don't know distutils, so I can only compare to autotools; from *my* experience, you should think about scons as a Makefile replacement, and as a build framework to build onto.

The main pros of scons:
- Having a real language for build-rule programming is a real plus. It makes it much easier to extend than autoconf, for example (debugging m4 macros is not something I can enjoy much, and I am sure I am not alone).
- The dependency checking works great.
- Parallel build is explicitly handled.
- scons knows how to build libraries (static and shared) on the platforms it supports.
- It can be included in the project, so scons does not need to be installed if needed (I have never used this feature myself).

The main cons:
- Configuration: there are some tools to test for libraries/headers a la autoconf, but this is far from great in the present situation, mainly because of the next point.
- Option handling from the command line: there is some support, but nothing is automatic. In the long run, this is painful.
- No support for library versioning; I am not sure about rpath support, which is useful for non-standard path installation. I don't know how difficult it would be to implement for all platforms.
- Can be slow for big projects? I have seen quite big projects (e.g. ardour: several hundred .c and .h files) using scons, and it was not really slow, and I don't think it would be a problem for something like numpy, whose size is nothing compared to kde.

To sum it up: as a make replacement, from a developer POV, it works great. As a tool for *distribution*, I am less convinced. For people familiar with autotools, scons is a great automake replacement. Everything else has to be implemented: autoconf, libtool, etc. My understanding is that those two tools (autoconf and libtool) are the ones most needed for numpy, so there is a lot of work to do if we want to use scons. |
From: Robert K. <rob...@gm...> - 2006-07-03 04:38:04
|
Alan G Isaac wrote:
> I argue that rand and randn should accept a tuple as the
> first argument. Whether the old behavior is also allowed,
> I have no opinion. But the numpy-consistent behavior should
> definitely be allowed. I perhaps wrongly understood Robert
> to argue that the current behavior of rand and randn is not
> a wart since i. alternative tuple-accepting functions are
> available and ii. the surprising behavior is documented.
> This seems quite wrong to me, and I am fairly confident that
> such an argument would not be offered except in defence of
> legacy code.

i. Yes, you're still misunderstanding my arguments.
ii. I'm bloody sick of rehashing it, so I won't be responding further.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
From: Alan G I. <ai...@am...> - 2006-07-03 04:20:08
|
On Sun, 2 Jul 2006, Webb Sprague apparently wrote:
> I have spent a huge amount of my time fixing and bending
> my head around off-by-one errors caused by trying to index
> matrices using 0 to n-1.

I come from GAUSS so I am sympathetic, but in the end zero-based indexing is usually great. Anyway, ideally you will rely on vector/matrix operations rather than constantly tracking indices.

fwiw,
Alan |
From: Alan G I. <ai...@am...> - 2006-07-03 04:16:59
|
On Mon, 3 Jul 2006, Bill Baxter apparently wrote:
> Here's another possible now or never change:
> fix rand(), eye(), ones(), zeros(), and empty() to ALL take either a tuple
> argument or plain list.
> I know this has been discussed before, but I really don't
> see why these methods can't be overloaded to accept either
> one.

I think the discussion has been slightly different than this. The "numpy way" for array creation is generally to specify dimensions as tuples. A small number of functions violate this, which is an unhappy inconsistency. Specifically, rand() and randn() violate this. (Perhaps one could also say that eye() violates this; I do not yet have an opinion.)

I argue that rand and randn should accept a tuple as the first argument. Whether the old behavior is also allowed, I have no opinion. But the numpy-consistent behavior should definitely be allowed. I perhaps wrongly understood Robert to argue that the current behavior of rand and randn is not a wart since i. alternative tuple-accepting functions are available and ii. the surprising behavior is documented. This seems quite wrong to me, and I am fairly confident that such an argument would not be offered except in defence of legacy code.

In fact, I would argue that if rand and randn are not "fixed" to accept a tuple, then they should be moved into a compatibility module and not be considered part of numpy.

Cheers,
Alan Isaac |
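The inconsistency under discussion can be seen side by side: `rand` takes the dimensions as separate integer arguments, while the tuple-accepting alternative takes a single shape tuple (a small sketch, not from the thread):

```python
import numpy as np

a = np.random.rand(2, 3)             # convenience form: dimensions as separate args
b = np.random.random_sample((2, 3))  # numpy-consistent form: shape as one tuple
```

Both produce a 2x3 array of uniform samples in [0, 1); only the calling convention differs.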
From: Alan G I. <ai...@am...> - 2006-07-03 03:56:59
|
On Mon, 3 Jul 2006, Bill Baxter apparently wrote:
> What's the best way to combine say several 2-d arrays
> together into a grid?

>>> help(N.bmat)
Help on function bmat in module numpy.core.defmatrix:

bmat(obj, ldict=None, gdict=None)
    Build a matrix object from string, nested sequence, or array.

    Ex:  F = bmat('A, B; C, D')
         F = bmat([[A,B],[C,D]])
         F = bmat(r_[c_[A,B],c_[C,D]])

    all produce the same Matrix Object    [ A  B ]
                                          [ C  D ]

    if A, B, C, and D are appropriately shaped 2-d arrays.

hth,
Alan Isaac |
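A runnable version of the block construction described in the help text above (the block values are chosen only for illustration):

```python
import numpy as np

A = np.eye(2)
B = 2 * np.eye(2)
C = 3 * np.eye(2)
D = 4 * np.eye(2)

# Assemble the 2x2 grid of blocks into one 4x4 matrix object
M = np.bmat([[A, B], [C, D]])
```

Each block lands in the corresponding quadrant of the result, exactly like Matlab's `[A, B; C, D]`.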
From: Tim H. <tim...@co...> - 2006-07-03 03:22:43
|
Bill Baxter wrote:
> Here's another possible now or never change:
> fix rand(), eye(), ones(), zeros(), and empty() to ALL take either a
> tuple argument or plain list.

Since a tuple seems to work fine as an argument I imagine you mean something else.

> I know this has been discussed before, but I really don't see why
> these methods can't be overloaded to accept either one.
> Here are some wrappers I cooked up that basically do the trick, with a
> few exceptions which could easily be ironed out with a little input
> from someone who knows python better than me. I've got versions of
> rand(), eye() and ones() below. empty() and zeros() should basically
> be the same as ones().
>
> Actually I guess it's not a "now or never change", because this should
> be completely backwards compatible. But still I find myself very
> frequently typing things like ones(2,2), and I'm getting tired of it
> not working.

Ah! You mean accepts either multiple args or a sequence. I don't like this kind of "helpful" gimmickry. I suspect that the errors that it masks in some inscrutable way more than make up for the minor time that it saves one on occasion. So mark me down as -1.

-tim

> import types
> import numpy
>
> def xrand(*varg):
>     """xrand(d0, d1, ..., dn) -> random values
>
>     Return an array of the given dimensions which is initialized to
>     random numbers from a uniform distribution in the range [0,1).
>
>     xrand(d0, d1, ..., dn) -> random values
>     or
>     xrand((d0, d1, ..., dn)) -> random values
>     """
>     if len(varg) == 0 or type(varg[0]) != types.TupleType:
>         return rand(*varg)
>     else:
>         if len(varg) != 1:
>             raise TypeError('Argument should either be a tuple or an argument list')
>         else:
>             return rand(*varg[0])
>
> def xeye(N, *varg, **kwarg):
>     """xeye(N, M=None, k=0, dtype=float64)
>
>     eye returns a N-by-M 2-d array where the k-th diagonal is all ones,
>     and everything else is zeros.
>     """
>     if hasattr(N, '__getitem__') and hasattr(N, '__len__'):
>         if len(N) == 0 or len(N) > 2:
>             raise TypeError('First tuple to eye should be length 1 or 2')
>         if len(N) == 1:
>             return numpy.eye(N[0], None, *varg, **kwarg)
>         else:
>             return numpy.eye(N[0], N[1], *varg, **kwarg)
>     return numpy.eye(N, *varg, **kwarg)
>
> def xones(shape, *varg, **kwarg):
>     """xones(shape, dtype=<type 'int32scalar'>, order='C')
>
>     xones(shape, dtype=int_) returns an array of the given
>     dimensions which is initialized to all ones.
>     """
>     if hasattr(shape, '__getitem__'):
>         return numpy.ones(shape, *varg, **kwarg)
>
>     i = 0
>     for x in varg:
>         if type(x) == types.IntType:
>             i += 1
>         else:
>             break
>
>     tshape = (shape,)
>     if i > 0:
>         tshape += tuple(varg[0:i])
>     args = varg[i:]
>     return numpy.ones(tshape, *args, **kwarg)
>
> def test():
>     xrand()
>     xrand(2)
>     xrand(2,2)
>     xrand((2,2))
>
>     xeye(1)
>     xeye(2,2)
>     xeye(2,2)
>     xeye(2,2,0,numpy.float64)
>     xeye(2,2,k=0,dtype=numpy.float64)
>     xeye((2,2),0,numpy.float64)
>     xeye((2,2),k=0,dtype=numpy.float64)
>
>     xones(1)
>     xones([1])
>     xones([1])
>     xones(2,2)
>     xones([2,2])
>     xones((2,2))
>     xones(numpy.array([2,2]))
>
>     xones(1,numpy.float64)
>     xones([1],numpy.float64)
>     xones([1],numpy.float64)
>     xones(2,2,numpy.float64)
>     xones([2,2],numpy.float64)
>     xones((2,2),numpy.float64)
>     xones(numpy.array([2,2]),numpy.float64)
>
>     xones(1,dtype=numpy.float64)
>     xones([1],dtype=numpy.float64)
>     xones([1],dtype=numpy.float64)
>     xones(2,2,dtype=numpy.float64)
>     xones([2,2],dtype=numpy.float64)
>     xones((2,2),dtype=numpy.float64)
>     xones(numpy.array([2,2]),dtype=numpy.float64) |
From: Bill B. <wb...@gm...> - 2006-07-03 02:18:06
|
What's the best way to combine say several 2-d arrays together into a grid? Here's the best I can see:

>>> a = eye(2,2)
>>> b = 2*a
>>> c = 3*a
>>> d = 4*a
>>> r_[c_[a,b],c_[c,d]]
array([[1, 0, 2, 0],
       [0, 1, 0, 2],
       [3, 0, 4, 0],
       [0, 3, 0, 4]])

In matlab you'd get the same effect by saying: [ a, b; c, d ]

Compared to that, r_[c_[a,b],c_[c,d]] looks quite a mess. Would be nice if there were some operator like c_ that took a special argument that introduced a new row. Like maybe:

    c_[a,b, newrow, c,d]

or maybe you could abuse the syntax and make something like this work:

    c_[a,b : c,d]

or perhaps an empty argument could work?

    c_[a,b ,, c,d]

Or empty tuple:

    c_[a,b, (), c,d]

Hmm... I see there's already something in the code for handling 'special directives':

>>> c_[a,b,'r',c,d]
matrix([[1, 0, 2, 0, 3, 0, 4, 0],
        [0, 1, 0, 2, 0, 3, 0, 4]])

'r' seems to turn the results into a matrix. Maybe that could be used to recognize a newline or something:

>>> c_[a,b,'\n',c,d]
Traceback (most recent call last):
  File "<input>", line 1, in ?
  File "C:\Python24\Lib\site-packages\numpy\lib\index_tricks.py", line 239, in __getitem__
    raise ValueError, "Unknown special directive."
ValueError: Unknown special directive.

--Bill |
From: Bill B. <wb...@gm...> - 2006-07-03 01:59:02
|
[base64-encoded attachment: a copy of the xrand/xeye/xones wrapper script quoted in full in Tim Hochberg's reply above] |
From: Webb S. <web...@gm...> - 2006-07-02 23:36:16
|
Hi Numpeans,

I have been working on a web-based scientific application for about a year, most of which had been written in either Matlab or SPLUS/R. My task has been to make it "driveable" through an online interface (if anyone cares about mortality forecasting, drop me an email and we can chat about it offline). I chose Python/Numpy for the language because Python and Numpy are both so full featured and easy to work with (except for one little thing...), and neither Matlab nor R could gracefully deal with CGI programming (misguided propaganda notwithstanding). However....

I have spent a huge amount of my time fixing and bending my head around off-by-one errors caused by trying to index matrices using 0 to n-1. The problem is two-fold (threefold if you count my limited IQ...): one, all the formulas in the literature use 1 to n indexing except for some small exceptions. Second and more important, it is far more natural to program if the indices are aligned with the counts of the elements (I think there is a way to express that idea in modern algebra but I can't recall it). This lets you say "how many are there? Three--ok, grab the third one and do whatever to it" etc. Or "how many? zero--ok don't do anything". With zero-based indexing, you are always translating between counts and indices, but such translation is never a problem in one-based indexing.

Given the long history of python and its ancestry in C (for which zero-based indexing made lots of sense since it dovetailed with thinking in memory offsets in systems programming), there is probably nothing to be done now. I guess I just want to vent, but also to ask if anyone has found any way to deal with this issue in their own scientific programming. Or maybe I am the only one with this problem, and if I were a real programmer I would translate into zero indexing without even noticing....

Anyway, thanks for listening... |
From: Pierre GM <pgm...@ma...> - 2006-07-02 22:48:35
|
Pepe,

> In [56]: array(prob)
> ---------------------------------------------------------------------------
> exceptions.TypeError      Traceback (most recent call last)
>
> /Users/elventear/Projects/workspace/libSVM Scripts/src/<ipython console>
>
> TypeError: a float is required

Make sure that all the elements of your `prob` list have the same length. If not, then numpy can't create an array from your list and raises the exception you see. |
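A quick way to apply this advice before calling array() is to check that the nested rows are rectangular (the data here is made up for illustration — a deliberately ragged pair of rows):

```python
import numpy as np

prob = [np.array([0.1, 0.9]), np.array([0.2, 0.7, 0.1])]  # ragged on purpose

# All rows must have the same length for array(prob) to build a 2-d float array
lengths = {len(row) for row in prob}
is_rectangular = (len(lengths) == 1)
```

If `is_rectangular` is False, the offending rows can be located by printing `[len(row) for row in prob]`.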
From: Pepe B. <elv...@gm...> - 2006-07-02 21:32:28
|
Hi,

I have some data I've generated and stored in a PyTables database. Then I retrieve the data in a script and do some processing. I have two different datasets that have been generated in the same way, and one works perfectly as intended while the other gives me an error. I'll explain in the following lines what happens.

The information in question is stored in a list that contains 10 one dimensional arrays. The data is stored in a variable called prob. If I execute array(prob) on the data coming from one of the sources it is converted into a 10xn numpy array. If I do the same thing on the other I get the following:

In [56]: array(prob)
---------------------------------------------------------------------------
exceptions.TypeError      Traceback (most recent call last)

/Users/elventear/Projects/workspace/libSVM Scripts/src/<ipython console>

TypeError: a float is required

I have no idea why it is complaining about this. I've compared the data coming from the two sources and they seem the same. In both cases, they are like this:

In [57]: type(prob)
Out[57]: <type 'list'>

In [58]: type(prob[0])
Out[58]: <type 'numpy.ndarray'>

In [59]: type(prob[0][0])
Out[59]: <type 'float64scalar'>

I have no idea where else to look to solve this problem. As I said the data is generated identically starting from two different data sources and then processed identically with the same script. From one data source it just works while the other complains.

Thanks,

Pepe |
From: Jonathan T. <jon...@st...> - 2006-07-02 20:44:45
|
Ooops -- I posted this same question. Sorry. One more bit of info:

ipdb> maxf
array(1.7976931348623157e+308)
ipdb> minf
-1.7976931348623157e+308
ipdb> type(maxf)
<type 'numpy.ndarray'>
ipdb> type(minf)
<type 'float64scalar'>
ipdb> y[are_inf] = float(maxf)
ipdb> y[are_inf]
array([], dtype=float64)

--Jonathan

Tim Leslie wrote:
> Hi All,
>
> The following script:
>
> import numpy as N
> print N.__version__
> a = N.array([1,2,3,4], dtype=N.float64)
> a.dtype = a.dtype.newbyteorder('big')
> print N.nan_to_num(a)
>
> gives the following exception:
>
> 0.9.9.2707
> Traceback (most recent call last):
>   File "memmap_nan.py", line 5, in ?
>     print N.nan_to_num(a)
>   File "/usr/lib/python2.4/site-packages/numpy/lib/type_check.py", line 127, in nan_to_num
>     y[are_inf] = maxf
> SystemError: error return without exception set
>
> Could someone shed some light on this? I'm at a bit of a loss as where
> to go. I had a poke around inside type_check.py but didn't get very
> far. I confirmed that are_inf in type_check.py is [False, False,
> False, False], as expected. I changed the value of maxf to different
> values (0, maxf+1, maxf-1, maxf+0.00000001, minf, 37), and with any of
> these different values the SystemError goes away. Beyond that, I'm not
> sure how to chase this one down.
>
> Cheers,
>
> Tim Leslie

--
Jonathan Taylor                       Tel: 650.723.9230
Dept. of Statistics                   Fax: 650.725.8977
Sequoia Hall, 137                     www-stat.stanford.edu/~jtaylo
390 Serra Mall
Stanford, CA 94305 |
From: Jonathan T. <jon...@st...> - 2006-07-02 20:39:08
|
Thanks. The byteswap works for me now, but I have another problem...

>>> import numpy as N
>>> d = N.dtype(N.float64)
>>> d.newbyteorder('big')
dtype('>f8')
>>> D = d.newbyteorder('big')
>>>
>>> x = N.zeros((10,10), D)
>>> N.nan_to_num(x)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/home/jtaylo/python/lib/python2.4/site-packages/numpy/lib/type_check.py", line 127, in nan_to_num
    y[are_inf] = maxf
SystemError: error return without exception set

Here is what maxf, minf are:

SystemError: error return without exception set
/home/jtaylo/python/lib/python2.4/site-packages/numpy/lib/type_check.py in nan_to_num()
    126     y[are_nan] = 0
--> 127     y[are_inf] = maxf
    128     y[are_neg_inf] = minf

ipdb> maxf
array(1.7976931348623157e+308)
ipdb> minf
-1.7976931348623157e+308

-- Jonathan

--
Jonathan Taylor                       Tel: 650.723.9230
Dept. of Statistics                   Fax: 650.725.8977
Sequoia Hall, 137                     www-stat.stanford.edu/~jtaylo
390 Serra Mall
Stanford, CA 94305 |
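The setup that triggered the bug can be reproduced as follows; in current NumPy releases nan_to_num handles byte-swapped arrays without incident (this is a sketch of the reported setup, not a claim about the 2006 behavior):

```python
import numpy as np

# Build a big-endian float64 dtype, as in the report above
d = np.dtype(np.float64).newbyteorder('>')
x = np.zeros((10, 10), dtype=d)

# The call that raised SystemError in numpy 0.9.9
y = np.nan_to_num(x)
```

Note that `newbyteorder` returns a new dtype rather than modifying in place, which is why the session above rebinds the result to `D`.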
From: Bruce S. <bso...@gm...> - 2006-07-02 20:23:52
|
Hi,

Linux Weekly News (http://lwn.net) had a very interesting article on KDE's switch on June 19, 2006 by Alexander Neundorf:
http://lwn.net/Articles/187923/
The full article is at:
http://lwn.net/Articles/188693/
This should be freely available to all.

Also, the current US Linux Magazine (June or July 2006) has a small feature on cmake as well.

Regards
Bruce

On 7/1/06, Albert Strasheim <fu...@gm...> wrote:
> Hey Chuck
>
> > -----Original Message-----
> > From: num...@li... [mailto:numpy-dis...@li...] On Behalf Of Charles R Harris
> > Sent: 01 July 2006 19:57
> > To: Robert Kern
> > Cc: num...@li...
> > Subject: Re: [Numpy-discussion] Time for beta1 of NumPy 1.0
> >
> > All,
> >
> > This is a bit off topic, but a while ago there were some complaints about
> > the usefulness of distutils. I note that KDE has gone over to using cmake
> > after trying scons. I am not familiar with cmake, but perhaps someone here
> > knows more and can comment on its suitability.
>
> CMake definitely warrants investigation, but I think SCons might be a better
> way to go. I think it would make it easier to reuse large parts of the
> existing build code (e.g. conv_template.py could be converted into a SCons
> builder without too much effort). Reusing parts of distutils and setuptools
> would also be easier if the new tool is somehow Python-aware.
>
> I think the main problem with distutils in the NumPy context is that it was
> never designed to build C/Fortran code over so many platforms with so many
> possible build configurations. python setup.py install works pretty well,
> but any kind of non-default configuration can become a bit hairy, despite
> the excellent work on NumPy extensions to distutils.
>
> I'd like to take a stab at doing something with SCons in a few weeks' time.
> Does anybody want to brainstorm on some ideas for what is needed from a
> better build system for NumPy? Maybe a wiki page?
>
> Regards,
>
> Albert |
From: Sebastian H. <ha...@ms...> - 2006-07-02 17:20:56
|
Hi,

It seems that there is a current discussion on python-dev:
http://www.gossamer-threads.com/lists/engine?do=post_view_flat;post=497487;page=1;mh=-1;list=python;sb=post_latest_reply;so=ASC
and the warning should go away by 2.5b2. So: don't change any code quite yet...

Cheers,
-Sebastian Haase

Tim Hochberg wrote:
> Russell E. Owen wrote:
>> I just installed python 2.5b1 on my Mac (10.4 ppc) and can't seem to get
>> Numeric 24.2 installed. It seems to build fine (no obvious error
>> messages), but when I try to import it I get:
>>
>> Python 2.5b1 (r25b1:47038M, Jun 20 2006, 16:17:55)
>> [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin
>> Type "help", "copyright", "credits" or "license" for more information.
>> >>> import Numeric
>> __main__:1: ImportWarning: Not importing directory
>> '/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/Numeric': missing __init__.py
>>
>> Any ideas? Is it somehow incompatible with python 2.5b1?
>>
> ImportWarning is a new 'feature' of 2.5. It warns if there are
> directories on sys.path that are *not* packages. I'll refer you to the
> py-dev archives if you want to figure out the motivation for that. So, if
> everything seems to work, there's a good chance that nothing's wrong,
> but that you're just seeing a complaint due to this new behaviour. If you
> check recent messages on python-dev, someone just posted a recipe for
> suppressing this warning.
>
> -tim
>
>> For what it's worth, numarray builds and installs fine. I've not tried
>> numpy or any other packages yet.
>>
>> -- Russell
|
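The suppression recipe Tim mentions presumably uses the stdlib warnings machinery; a minimal sketch along those lines (the exact recipe posted on python-dev may differ):

```python
import warnings

# Python 2.5's ImportWarning fires for non-package directories on sys.path
# (the "missing __init__.py" message above). Silencing that category
# globally makes the import proceed quietly:
warnings.filterwarnings('ignore', category=ImportWarning)

# After this, imports that merely trip over a stray directory on sys.path
# no longer print the "Not importing directory ..." warning.
```

This only hides the symptom; as Sebastian notes, the warning was expected to go away by 2.5b2 anyway.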
From: Sasha <nd...@ma...> - 2006-07-02 16:18:35
|
On 7/2/06, Norbert Nemec <Nor...@gm...> wrote:
> ...
> Does anybody know about the internals of the python "set"? How is
> .keys() implemented? I somehow have really doubts about the efficiency
> of this method.
>

Set implementation (Objects/setobject.c) is a copy and paste job from
dictobject with values removed. As a result it is heavily optimized for
the case of string valued keys - a case that is almost irrelevant for
numpy. I think something like the following (untested, 1d only) will
probably be much faster and sorted:

def unique(x):
    s = sort(x)
    r = empty_like(s)
    r[:-1] = s[1:]
    r[-1] = s[0]
    return s[r != s]
|
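The sort-then-compare-adjacent idea in Sasha's sketch can be illustrated without numpy; a pure-Python version of the same algorithm (note that the numpy sketch above, by rotating rather than padding, returns an empty result when all elements are equal — an edge case a real implementation would need to handle):

```python
def unique_sorted(seq):
    """Sorted unique elements via adjacent comparison.

    Pure-Python illustration of the sort-based approach; a numpy
    version does the same with vectorized comparisons on a sorted copy.
    """
    s = sorted(seq)
    if not s:
        return []
    # keep the first element, then every element that differs
    # from its predecessor
    return [s[0]] + [b for a, b in zip(s, s[1:]) if a != b]

print(unique_sorted([3, 1, 2, 1, 3]))  # -> [1, 2, 3]
```

Sorting costs O(n log n), but avoids the hashing overhead the thread is worried about and yields the sorted output David asked for.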
From: Albert S. <fu...@gm...> - 2006-07-02 14:24:42
|
Hello all

Travis Oliphant wrote:
> I've been playing a bit with ctypes and realized that with a little
> help, it could be made much easier to interface with NumPy arrays.
> Thus, I added a ctypes attribute to the NumPy array. If ctypes is
> installed, this attribute returns a "conversion" object; otherwise an
> AttributeError is raised.
>
> The ctypes-conversion object has attributes which return c_types aware
> objects so that the information can be passed directly to c-code (as an
> integer, the number of dimensions can already be passed using c-types).
>
> The information available and its corresponding c_type is
>
> data - c_void_p
> shape, strides - c_int * nd or c_long * nd or c_longlong * nd
>   depending on platform

I did a few tests and this seems to work nicely:

In [133]: printf = ctypes.cdll.msvcrt.printf

In [134]: printf.argtypes = [ctypes.c_char_p, ctypes.c_void_p]

In [135]: x = N.array([1,2,3])

In [136]: printf('%p\n', x.ctypes.data)
01CC8AC0
Out[136]: 9

In [137]: hex(x.__array_interface__['data'][0])
Out[137]: '0x1cc8ac0'

It would be nice if we could get the _as_parameter_ magic to work as well.
See this thread:

http://aspn.activestate.com/ASPN/Mail/Message/ctypes-users/3122558

If I understood Thomas correctly, in the presence of argtypes and an
instance, say x, with _as_parameter_, the following is done to convert the
instance to something that the function accepts as its nth argument:

func.argtypes[n].from_param(x._as_parameter_)

However, if I try passing x directly to printf, I get this:

In [147]: printf('%p\n', x)
...
ArgumentError: argument 2: exceptions.TypeError: wrong type

However, this much works:

In [148]: ctypes.c_void_p.from_param(x._as_parameter_)
Out[148]: <cparam 'P' (01cc8ac0)>

So I don't understand why the conversion isn't happening automatically.

Another quirk I noticed is that non-void pointers' from_param can't seem to
be used with ints. For example:

In [167]: ctypes.POINTER(ctypes.c_double).from_param(x._as_parameter_)
...
TypeError: expected LP_c_double instance instead of int

But this works:

In [168]: ctypes.POINTER(ctypes.c_double).from_address(x._as_parameter_)
Out[168]: <ctypes.LP_c_double object at 0x01DCE800>

I don't think this is too much of an issue though -- you could wrap all your
functions to take c_void_ps. If you happen to pass an int32 NumPy array to a
function expecting a double*, you might run into problems though. Maybe
there should be a way to get a pointer to the NumPy array data as a
POINTER(c_double) if it is known that the array's dtype is float64. Ditto
for c_int/int32 and the others.

Regards,

Albert
|
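Getting a typed pointer from a raw data address — the piece Albert asks for at the end — can be done with ctypes.cast. A stdlib-only sketch, using the array module to stand in for a contiguous float64 NumPy buffer (the names and setup here are illustrative, not the eventual NumPy API):

```python
import ctypes
from array import array

# A contiguous buffer of C doubles, standing in for a float64 array's data.
buf = array('d', [1.0, 2.0, 3.0])
addr, count = buf.buffer_info()  # (raw data address, element count)

# Cast the integer address to a typed double* instead of a void*.
p = ctypes.cast(ctypes.c_void_p(addr), ctypes.POINTER(ctypes.c_double))
print(p[0], p[2])  # -> 1.0 3.0
```

Unlike from_address — which would read a pointer value stored *at* that address — cast treats the address itself as the pointer, which is the semantics wanted when handing array data to a function expecting double*.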
From: Tim L. <tim...@gm...> - 2006-07-02 10:54:01
|
Hi All,

The following script:

import numpy as N
print N.__version__
a = N.array([1,2,3,4], dtype=N.float64)
a.dtype = a.dtype.newbyteorder('big')
print N.nan_to_num(a)

gives the following exception:

0.9.9.2707
Traceback (most recent call last):
  File "memmap_nan.py", line 5, in ?
    print N.nan_to_num(a)
  File "/usr/lib/python2.4/site-packages/numpy/lib/type_check.py", line 127, in nan_to_num
    y[are_inf] = maxf
SystemError: error return without exception set

Could someone shed some light on this? I'm at a bit of a loss as where to go.
I had a poke around inside type_check.py but didn't get very far. I confirmed
that are_inf in type_check.py is [False, False, False, False], as expected. I
changed the value of maxf to different values (0, maxf+1, maxf-1,
maxf+0.00000001, minf, 37), and with any of these different values the
SystemError goes away. Beyond that, I'm not sure how to chase this one down.

Cheers,

Tim Leslie
|
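One thing worth noting about the script: assigning a.dtype = a.dtype.newbyteorder('big') only relabels the byte order, it does not swap any bytes, so nan_to_num is then operating on reinterpreted values. The relabel-versus-convert distinction (separate from the SystemError bug itself) can be shown with the stdlib struct module:

```python
import struct

# Four doubles packed little-endian, like native float64 data on x86.
native = struct.pack('<4d', 1.0, 2.0, 3.0, 4.0)

# Reinterpreting the *same bytes* as big-endian is what reassigning
# dtype.newbyteorder('big') does: no conversion, just a different label.
relabelled = struct.unpack('>4d', native)

# relabelled[0] is a tiny denormal, not 1.0.
```

A real conversion would swap the bytes as well as the label (in NumPy terms, something like byteswapping the data and then viewing it with the opposite-endian dtype).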
From: Robert K. <rob...@gm...> - 2006-07-02 10:37:50
|
Norbert Nemec wrote:
> I agree.
>
> Currently the order of the output of unique is undefined. Defining it in
> such a way that it produces a sorted array will not break any compatibility.
>
> My idea would be something like
>
> def unique(arr,sort=True):
>     if hasattr(arr,'flatten'):
>         tmp = arr.flatten()
>         tmp.sort()
>         idx = concatenate([True],tmp[1:]!=tmp[:-1])
>         return tmp[idx]
>     else: # for compatibility:
>         set = {}
>         for item in inseq:
>             set[item] = None
>         if sort:
>             return asarray(sorted(set.keys()))
>         else:
>             return asarray(set.keys())
>
> Does anybody know about the internals of the python "set"? How is
> .keys() implemented? I somehow have really doubts about the efficiency
> of this method.

Well, that's a dictionary, not a set, but they both use the same algorithm.
They are both hash tables. If you need more specific details about how the
hash tables are implemented, the source (Objects/dictobject.c) is the best
place for them.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
|
From: Norbert N. <Nor...@gm...> - 2006-07-02 09:47:20
|
I agree.

Currently the order of the output of unique is undefined. Defining it in
such a way that it produces a sorted array will not break any compatibility.

My idea would be something like

def unique(arr, sort=True):
    if hasattr(arr, 'flatten'):
        tmp = arr.flatten()
        tmp.sort()
        idx = concatenate(([True], tmp[1:] != tmp[:-1]))
        return tmp[idx]
    else:  # for compatibility:
        seen = {}
        for item in arr:
            seen[item] = None
        if sort:
            return asarray(sorted(seen.keys()))
        else:
            return asarray(seen.keys())

Does anybody know about the internals of the python "set"? How is
.keys() implemented? I somehow really have doubts about the efficiency
of this method.

David Huard wrote:
> Hi,
>
> Numpy's unique(x) returns an array x with repetitions removed.
> However, since it returns asarray(dict.keys()), the resulting array is
> not sorted, worse, the original order may not be conserved. I think
> that unique() should return a sorted array, like its matlab homonym.
>
> Regards,
>
> David Huard
> ------------------------------------------------------------------------
>
> Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job easier
> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
> ------------------------------------------------------------------------
>
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
>
|
From: Robert K. <rob...@gm...> - 2006-07-02 04:43:15
|
Travis Oliphant wrote:
> Is anybody else having trouble connecting to the SciPy SVN server? It
> just started failing with could not connect to server errors in the last
> hour.

It works intermittently for me. The scipy.org server has apparently been
using up a lot of bandwidth over the past few days (I have this information
second-hand so I don't know why exactly). It was affecting our network
connectivity at Enthought. Its upload bandwidth was capped at 1 Mbps
yesterday. I'll try to escalate this issue, but both Monday and Tuesday are
holidays for us.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
|