From: Rob <eu...@ho...> - 2001-09-01 18:57:54
|
I hate to bother you guys, but this is how I learn. I have had some success using the take() and add.reduce() routines all by myself. However this one seems intractable:

    for IEdge in range(0,a.HybrdBoundEdgeNum):
        for JEdge in range(0,a.HybrdBoundEdgeNum):
            AVec[IEdge+a.TotInnerEdgeNum] += a.Cdd[IEdge,JEdge]*XVec[JEdge+a.TotInnerEdgeNum]

    ## below is my attempt at this, but it doesn't work
    ## h=a.TotInnerEdgeNum
    ## for IEdge in range(0,a.HybrdBoundEdgeNum):
    ##     AVec[IEdge+h] = add.reduce(a.Cdd[IEdge,0:a.HybrdBoundEdgeNum] * XVec[h:(h+a.HybrdBoundEdgeNum)])

I think the problem lies in that add.reduce is using the slice 0:a.HybrdBoundEdgeNum for Cdd, but then has to contend with the XVec slice, which is offset by h. Is there any elegant way to deal with this? This routine is part of a sparse matrix multiply routine. If I could find SparsePy, maybe I could eliminate it altogether. Thanks, Rob. -- The Numeric Python EM Project www.members.home.net/europax |
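For reference, the accumulation in the post is an ordinary matrix-vector product over an offset slice, so it can be written without any explicit loop. A minimal sketch in modern NumPy syntax (the names Cdd, XVec, AVec, h come from the post; the sizes are made up for illustration):

```python
import numpy as np

n, h = 4, 3                      # made-up sizes: boundary edges, inner-edge offset
rng = np.random.default_rng(0)
Cdd = rng.random((n, n))
XVec = rng.random(h + n)

# Loop version, as in the post
AVec = np.zeros(h + n)
for i in range(n):
    for j in range(n):
        AVec[i + h] += Cdd[i, j] * XVec[j + h]

# Vectorized: the offset h only selects which slice of XVec participates;
# once sliced, the reduction is a plain matrix-vector product.
AVec2 = np.zeros(h + n)
AVec2[h:] = Cdd @ XVec[h:]

assert np.allclose(AVec, AVec2)
```

The key point is that the two slices need not share index ranges: each operand is sliced independently, and the elementwise product plus reduction only requires that the slice *lengths* match.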
From: Rob <eu...@ho...> - 2001-09-01 05:47:11
|
I'm pretty certain now that this is a numeric precision problem. The discrepancy only shows up for very small numbers. For now I'll ignore it. Rob. Rob wrote: > > I found a slight discrepancy in my Python EM code output when I use the > invert() function. It may be that the C routine that I am comparing it > to actually has the bug. For now I don't know. It would be cool to > have a large two dimensional array that has a known inverse. I think > NIST keeps an array collection, but I don't know if they have the > inverses. Rob. > > -- > The Numeric Python EM Project > > www.members.home.net/europax > > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > https://lists.sourceforge.net/lists/listinfo/numpy-discussion -- The Numeric Python EM Project www.members.home.net/europax |
From: Rob <eu...@ho...> - 2001-09-01 02:42:27
|
I found a slight discrepancy in my Python EM code output when I use the invert() function. It may be that the C routine that I am comparing it to actually has the bug. For now I don't know. It would be cool to have a large two dimensional array that has a known inverse. I think NIST keeps an array collection, but I don't know if they have the inverses. Rob. -- The Numeric Python EM Project www.members.home.net/europax |
From: Jeff W. <js...@cd...> - 2001-08-31 20:51:25
|
I've uploaded a new version of my LinearAlgebra patch that adds a Cholesky decomposition routine (cholesky_decomposition), as per Fredrik's request. -Jeff On Fri, 31 Aug 2001, Fredrik Stenberg wrote: > You read my mind...... > > ;)) > > /fredriks > > > ------- Original message ------- > From: js...@cd... > Date: Fri, 31 Aug 2001 08:32:01 -0700 > > This message was sent from Geocrawler.com by "jeff whitaker" > <js...@cd...> > > > Fredrik: I could easily add this to the > LinearAlgebra patches I've just submitted. Are > you thinking of an interface to the lapack routine > dpotrf (Cholesky factorization of a real symmetric > pos definite matrix), or something else? > > -Jeff > > --------------------------------------- > Hi.. > > Has anyone written a Cholesky factorization for > general > matrices? > > Any package out there? > > The only one i know of is the python > wrappers around lapack (pylapack). > > I could write it myself, but i prefer to be lazy ;)) > > /fredriks > > > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > http://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > Geocrawler.com - The Knowledge Archive > > > > -- Jeffrey S. Whitaker Phone : (303)497-6313 Meteorologist FAX : (303)497-6449 NOAA/OAR/CDC R/CDC1 Email : js...@cd... 325 Broadway Web : www.cdc.noaa.gov/~jsw Boulder, CO, USA 80303-3328 Office : Skaggs Research Cntr 1D-124 |
From: Rodgers, K. <KRo...@ry...> - 2001-08-31 16:24:04
|
Indices in Fortran can be defined at the time an array is declared, so Fortran is not necessarily "one-indexed". Kevin Rodgers Northrop Grumman Ryan Aeronautical -----Original Message----- From: Eric Nodwell [mailto:no...@ph...] Sent: Tuesday, August 28, 2001 10:01 PM To: Num...@li... Subject: Re: [Numpy-discussion] one-offset arrays >> Does anyone have a simple method of using one-offset arrays? >> Specifically, can I define an array "a" so that a[1] refers to the >> first element? >> > Inherit from the UserArray and redefine slicing to your hearts content. >> Without one-offset indexing, it seems to me that Python is minimally >> useful for numerical computations. Many, perhaps the majority, of >> numerical algorithms are one-indexed. Matlab for example is one-based >> for this reason. In fact it seems strange to me that a "high-level" >> language like Python should use zero-offset lists. >> > Wow, that is a pretty condescending-sounding statement, though I'm sure you > didn't mean it that way. Python is indeed being used for quite serious > numerical computations. I have been using Python for quite some time for > Numerical work and find it's zero-based indexing combined with the > leave-last-one-out slicing notation to be much more convenient. Oops, what I really need is a wet-ware (i.e. brain) macro which enforces the correct order of the pair (think,write,send)! The above unconsidered comments arose from the sequence (frustration,write,send,think,regret). ;) Obviously there is a lot of numerical work being done with python and people are very happy with it. But for me I still think it would be "minimally useful" without 1-indexed arrays. Here's why: In my view, the most important reason to prefer 1-based indexing versus 0-based indexing is compatibility. For numerical work, some of the languages which I use or have used are Matlab, Mathematica, Maple and Fortran. These are all 1-indexed. 
(C is by nature 0-indexed because it is so close to machine architecture, but with a little bit of not-entirely-clean pointer manipulation, you can easily make 1-indexed arrays and matrices.) Obviously Python can't be compatible with these languages in a strict sense, but like most people who do some programming work, I've built up a library of my own commonly used routines specific to my work; in general it's a trivial matter to translate numerical routines from one language to another if translation is just a matter of substituting one set of syntactical symbols and function names for another. However it can be damn tricky to convert 1-indexed code to 0-indexed code or vice versa without introducing any errors - believe me! (Yes it's possible to call nearly any language from nearly any other language these days so in theory you don't have to recode, but there are lots of reasons why often recoding is the preferable route.) The other reason for choosing 1-based indexing is to keep the code as near to the standard notation as possible. This is one of the advantages of using a high level language - it approximates the way you think about things instead of the way the computer organizes them. Of course, this can go either way depending on the quantity in question: as a simple example a spatial vector (x,y,z) is conventionally labelled 1,2,3 (1-indexed), but a relativistic four-vector with time included (t,x,y,z) is conventionally labelled 0,1,2,3 (0-indexed). So ideally one would be able to choose the indexing-type case-by-case. I'm sure that computer programmers will argue vehemently that code which mixes both 0-indexed and 1-indexed arrays is hard to understand and maintain, but for experts in a particular field who are accustomed to certain ingrained notations, it is the code which breaks the conventional notation which is most error-prone. 
In my case, I'm dealing at the moment with crystal structures with which are associated certain conventional sets of vectors and tensors - all 1-indexed by convention. I find it a delicate enough task to always get the correct vector or tensor without having to remember that d[2] is actually d3. Defining d1,d2,d3 is not convenient because often the index itself needs to be calculated. I guess if I understood the reason for 0-indexed lists and tuples in Python I would be happier. In normal, everyday usage, sets, collections and lists are 1-indexed (the first item, the second item, the third item, and so on). Python is otherwise such an elegant and natural language. Why the ugly exception of making the user conform to the underlying mechanism of an array being an address plus an offset? All this is really neither here nor there, since this debate, at least as far as Python is concerned, was probably settled 10 years ago and I'm sure nobody wants to hear anything more about it at this point. As you point out, I can define my own array type with inheritance. I will also need my own range command and several other functions which haven't occurred to me yet. I was hoping that there would be a standard module to implement this. By the way, what is leave-last-one-out slicing? Is it a[:-1] or is it a[0,...] or is it something else? Eric _______________________________________________ Numpy-discussion mailing list Num...@li... http://lists.sourceforge.net/lists/listinfo/numpy-discussion |
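For readers who want the behaviour discussed in this thread, a minimal one-offset wrapper can be built by delegating to an underlying array and shifting indices. A rough sketch in modern NumPy (the class name and its narrow scope are invented for illustration; slices and negative indices are deliberately rejected, since those are exactly the cases the thread shows to be ambiguous):

```python
import numpy as np

class OneArray:
    """Minimal 1-indexed view over a NumPy array (illustrative only)."""
    def __init__(self, data):
        self._a = np.asarray(data)

    def __getitem__(self, i):
        if isinstance(i, slice) or i < 1:
            raise IndexError("only positive integer indices are supported")
        return self._a[i - 1]          # shift 1-based index to 0-based storage

    def __setitem__(self, i, v):
        if isinstance(i, slice) or i < 1:
            raise IndexError("only positive integer indices are supported")
        self._a[i - 1] = v

x = OneArray([10, 20, 30])
assert x[1] == 10 and x[3] == 30       # first and last elements, 1-indexed
```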
From: Aureli S. F. <Aur...@ip...> - 2001-08-31 15:00:48
|
Hello, I am writing a C program which uses a class I have written in Python. The class has a method 'operate_image' that receives as parameters a Numeric array, an integer and a float. The class has been repeatedly used with no problems. When I try to call the method I use:

    pres_fi = PyObject_CallMethod(pFI, "operate_image", "(Oif)", parray, 1, 1.0);

But the result (pres_fi) is NULL. I have tested that pFI exists and has the mentioned method, with no troubles. So it seems to me that PyObject_CallMethod is casting down the parray object, which is a PyArrayObject. That is the reason for the malfunction. Questions: 1) Is my interpretation right? 2) What would be the correct way to do such an operation? It is the first time I have written a Python embedding/extension, so I am quite lost. Thanks in advance, Aureli ################################# Aureli Soria Frisch Fraunhofer IPK Dept. Pattern Recognition post: Pascalstr. 8-9, 10587 Berlin, Germany e-mail:au...@ip... fon: +49 30 39006-150 fax: +49 30 3917517 ################################# |
From: J. M. <mi...@cg...> - 2001-08-31 14:11:49
|
> I came across this problem a few months ago, I hacked the indicated > lines in what seemed, at the time, to be the obvious fashion and the > thing worked fine. I'm sorry that I can't be more helpful, but I > haven't kept the file. The hack might just have been to comment out > the offending statements. Yes, I tried just that (also suggested by Travis Oliphant), and it compiled just fine. Thanks Judah Judah Milgram mi...@cg... College Park Press http://www.cgpp.com P.O. Box 143, College Park MD, USA 20741 +001 (301) 422-4626 (422-3047 fax) |
From: Huaiyu Z. <hua...@ya...> - 2001-08-31 14:10:43
|
Following up to an earlier discussion, I've written up some wrapper functions for subarrays. The 'takes' function only works for 2d arrays, but the 'puts' function works for arbitrary arrays. Hopefully something like this can be included in the Numeric module. Huaiyu Zhu

    """
    Wrapper functions for dealing with arbitrary subarrays.

    Example: addeqs(x, [[0,2], [1,3,4]], y) would add (2,3)array y to the
    (2,3)subarray of x comprised of rows 0, 2 and columns 1, 3, 4.

    A more natural notation for addeqs(x, ij, y) would be x[ij] += y, but
    that seems difficult to do with current numpy.  (How to let slice
    handle lists?)
    """
    from Numeric import *

    def puts(x, irows, icols, v):
        """puts(x, i, j, v): Put v in subgrid of 2d array x given by i, j."""
        nrow, ncol = x.shape
        if irows is None: irows = arange(nrow)
        if icols is None: icols = arange(ncol)
        if len(shape(icols)) == 1: icols = icols[:ncol]
        if len(shape(irows)) == 0 or len(shape(icols)) == 0:
            ii = irows*ncol + icols
            v1 = v
        else:
            ii = (irows*ncol)[:, NewAxis] + icols[NewAxis, :]
            v1 = reshape(v, shape(ii))
        put(x, ii, v1)

    def takes(x, I):
        """takes(x, I): Takes a subgrid from array x. I is a list of lists of subindices."""
        for i in xrange(len(I)):
            ii = I[i]
            if ii is not None:
                x = take(x, ii, i)
        return x

    def addeqs(x, ij, y):
        """Simulates x[ij] += y, where ij can be an arbitrary subarray."""
        i, j = ij
        puts(x, i, j, takes(x, ij) + y)

    if __name__ == "__main__":
        a5 = arange(5)
        a2 = arange(2)
        a3 = arange(3)
        d = array([a3, a3+3])
        print d; print
        b = array([a5, a5+5, a5+10, a5+15]); print b; print
        c = b.copy(); puts(c, None, 3, a5+1000); print c; print
        c = b.copy(); puts(c, a2*2, 3, a5+1000); print c; print
        c = b.copy(); puts(c, 2, a2*2, a5+1000); print c; print
        c = b.copy(); puts(c, a2*2+1, a3*2, d+1000); print c; print
        c = b.copy()
        d1 = takes(c, (a2*2+1, a3*2))
        c1 = c
        print d1; print
        puts(c, a2*2+1, a3*2, d+1000); print c; print
        puts(c, a2*2+1, a3*2, d1); print c; print
        addeqs(c, (a2*2+1, a3*2), d1*0+100); print c; print
        print c1; print
        d1 += 20; print c; print    # Alas, this does not change c
        d2 = takes(c, (a2*2+1, None))
        print d2; print
        print shape(c), shape(a2), shape(d2)
        addeqs(c, (a2*2+1, None), -d2)
        print c; print

    """
    The expected results are

    [[0 1 2]
     [3 4 5]]

    [[ 0  1  2  3  4]
     [ 5  6  7  8  9]
     [10 11 12 13 14]
     [15 16 17 18 19]]

    [[   0    1    2 1000    4]
     [   5    6    7 1001    9]
     [  10   11   12 1002   14]
     [  15   16   17 1003   19]]

    [[   0    1    2 1000    4]
     [   5    6    7    8    9]
     [  10   11   12 1001   14]
     [  15   16   17   18   19]]

    [[   0    1    2    3    4]
     [   5    6    7    8    9]
     [1000   11 1001   13   14]
     [  15   16   17   18   19]]

    [[   0    1    2    3    4]
     [1000    6 1001    8 1002]
     [  10   11   12   13   14]
     [1003   16 1004   18 1005]]

    [[ 5  7  9]
     [15 17 19]]

    [[   0    1    2    3    4]
     [1000    6 1001    8 1002]
     [  10   11   12   13   14]
     [1003   16 1004   18 1005]]

    [[ 0  1  2  3  4]
     [ 5  6  7  8  9]
     [10 11 12 13 14]
     [15 16 17 18 19]]

    [[  0   1   2   3   4]
     [105   6 107   8 109]
     [ 10  11  12  13  14]
     [115  16 117  18 119]]

    [[  0   1   2   3   4]
     [105   6 107   8 109]
     [ 10  11  12  13  14]
     [115  16 117  18 119]]

    [[  0   1   2   3   4]
     [105   6 107   8 109]
     [ 10  11  12  13  14]
     [115  16 117  18 119]]

    [[105   6 107   8 109]
     [115  16 117  18 119]]

    (4, 5) (2,) (2, 5)

    [[ 0  1  2  3  4]
     [ 0  0  0  0  0]
     [10 11 12 13 14]
     [ 0  0  0  0  0]]
    """ |
From: Fredrik S. <fre...@kt...> - 2001-08-31 14:06:33
|
Hi.. Has anyone written a Cholesky factorization for general matrices? Any package out there? The only one I know of is the python wrappers around lapack (pylapack). I could write it myself, but I prefer to be lazy ;)) /fredriks |
From: Rob <eu...@ho...> - 2001-08-31 00:28:33
|
My problems with the invert() function have been resolved. I found that somewhere in my program it was trying to invert [0]. So I just added a conditional statement ahead of it. Evidently the C program did the same sort of thing. So now I've shaved another 20% of execution time off of my program! Rob. -- The Numeric Python EM Project www.members.home.net/europax |
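A more general guard than testing for one particular input is to catch the singular case itself; Numeric's LinearAlgebra module raised its own LinAlgError in the same way modern NumPy does. A sketch (the helper name is invented for illustration):

```python
import numpy as np

def safe_inv(a):
    """Return the inverse of `a`, or None if `a` is singular."""
    try:
        return np.linalg.inv(a)
    except np.linalg.LinAlgError:
        return None

assert safe_inv(np.array([[0.0]])) is None          # the [0] case from the post
assert np.allclose(safe_inv(np.eye(2)), np.eye(2))  # a regular matrix inverts fine
```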
From: Rob <eu...@ho...> - 2001-08-31 00:24:20
|
I'd love to have Python and Numpy optimized for the Athlon. I will have to wait for gcc3.0 to get absorbed into FreeBSD. (Mandrake linux has a beta that uses it) It turns out that mpg encoding runs at 10x the rate on my Athlon DDR machine compared to my laptop due to the highly Athlon optimized "GoGo" encoding program. Rob. Chris Barker wrote: > > Jeff Whitaker wrote: > > Paul's initial comment > > was that since it was not self-contained (it required linking external > > blas and lapack libs) he could not consider putting it in. I have > > rectified that now by including new versions of f2c_lite.c, blas_lite.c > > and d(z)lapack_lite.c that support the newer, faster lapack routines. > > I don't know what it would take, but I would LOVE it if NumPy used a > highly optimised BLAS wherever possible. I have no idea whether there is > a good BLAS available for every major platform that has a licence that > allows it to be included with NumPy, but if so, I think it should be > included. I know this is more effor than just including a generic one, > but having optimum performace "out of the box" would be a great thing. > > just my $.02, which is worth about that, as I am in littel position to > help... > > -Chris > > -- > Christopher Barker, > Ph.D. > Chr...@ho... --- --- --- > http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ > ------@@@ ------@@@ ------@@@ > Oil Spill Modeling ------ @ ------ @ ------ @ > Water Resources Engineering ------- --------- -------- > Coastal and Fluvial Hydrodynamics -------------------------------------- > ------------------------------------------------------------------------ > > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > http://lists.sourceforge.net/lists/listinfo/numpy-discussion -- The Numeric Python EM Project www.members.home.net/europax |
From: Jeff W. <js...@cd...> - 2001-08-30 17:08:18
|
Chris: The ATLAS project (http://www.netlib.org/ATLAS) provides a mechanism for creating optimized BLAS/LAPACK libs on almost any (unixish) platform. The tuning process is very complicated. It would not be feasible to include this in the Numpy distribution - however the mechanism does exist to link these libs when compiling numpy. Perhaps a more realistic goal would be to modify the setup.py script to automatically detect ATLAS and use those libs if they are installed. -Jeff On Thu, 30 Aug 2001, Chris Barker wrote: > Jeff Whitaker wrote: > > Paul's initial comment > > was that since it was not self-contained (it required linking external > > blas and lapack libs) he could not consider putting it in. I have > > rectified that now by including new versions of f2c_lite.c, blas_lite.c > > and d(z)lapack_lite.c that support the newer, faster lapack routines. > > I don't know what it would take, but I would LOVE it if NumPy used a > highly optimised BLAS wherever possible. I have no idea whether there is > a good BLAS available for every major platform that has a licence that > allows it to be included with NumPy, but if so, I think it should be > included. I know this is more effor than just including a generic one, > but having optimum performace "out of the box" would be a great thing. > > just my $.02, which is worth about that, as I am in littel position to > help... > > > -Chris > > > > -- Jeffrey S. Whitaker Phone : (303)497-6313 Meteorologist FAX : (303)497-6449 NOAA/OAR/CDC R/CDC1 Email : js...@cd... 325 Broadway Web : www.cdc.noaa.gov/~jsw Boulder, CO, USA 80303-3328 Office : Skaggs Research Cntr 1D-124 |
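The detection idea sketched above can be illustrated with a simple linker probe; this is only a rough sketch (the function name is invented, and real build scripts typically search explicit directory lists for the ATLAS libs rather than relying on the default linker path):

```python
from ctypes.util import find_library

def blas_candidates():
    """Names of optimized BLAS libraries visible on the default linker path."""
    return [name for name in ("atlas", "openblas", "blas") if find_library(name)]

found = blas_candidates()
# May legitimately be empty on systems without any BLAS installed.
assert isinstance(found, list)
```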
From: Chris B. <chr...@ho...> - 2001-08-30 16:51:57
|
Jeff Whitaker wrote: > Paul's initial comment > was that since it was not self-contained (it required linking external > blas and lapack libs) he could not consider putting it in. I have > rectified that now by including new versions of f2c_lite.c, blas_lite.c > and d(z)lapack_lite.c that support the newer, faster lapack routines. I don't know what it would take, but I would LOVE it if NumPy used a highly optimised BLAS wherever possible. I have no idea whether there is a good BLAS available for every major platform that has a licence that allows it to be included with NumPy, but if so, I think it should be included. I know this is more effort than just including a generic one, but having optimum performance "out of the box" would be a great thing. just my $.02, which is worth about that, as I am in little position to help... -Chris -- Christopher Barker, Ph.D. Chr...@ho... --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ |
From: Tim H. <tim...@ie...> - 2001-08-30 15:07:59
|
This is just a followup to my earlier message. The getitem issues I identify there are fixed in Python 2.2, if you use new style classes (inherit from object). So, problems 1-3 can be fixed in Python 2.2. It seems that problem 4 could be fixed fairly easily in the python layer by writing wrapper functions in Numeric.py. For example:

    def cos(a):
        r = umath.cos(a)
        if isinstance(a, UserArray):
            r = a.__class__(r)
        return r

This would be potentially a bit tedious, but straightforward. -tim |
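For context, modern NumPy solved this class-preservation problem in the ufunc machinery itself: ndarray subclasses are propagated through ufuncs automatically (via the `__array_wrap__` hook), so no per-function wrapper is needed. A quick illustration:

```python
import numpy as np

class MyArray(np.ndarray):
    """Trivial ndarray subclass; ufuncs preserve it with no wrappers."""
    pass

a = np.zeros(3).view(MyArray)
r = np.cos(a)
assert isinstance(r, MyArray)   # subclass survives the ufunc
assert np.allclose(r, 1.0)      # cos(0) == 1
```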
From: Tim H. <tim...@ie...> - 2001-08-30 14:44:17
|
Hi Eric, Here are some comments on the problems you identify. > Problem 1 > --------- > > Description > for loops don't work with variables of type arrayone Soon (Python 2.2?) you will be able to use the new iterator protocol to do this correctly. So target the iterator protocol and this will begin to work as people begin to upgrade. > Problem 2 > --------- > > Description > Slicing an arrayone from 0 doesn't generate an error but it should. I was going to suggest removing __getslice__, and only using __getitem__. However, the slices received by __getitem__ *sometimes* get their first value filled in with zero (in the same cases that __getslice__ is called) instead of None. I regard this as a bug/misfeature and I just reported it on sourceforge. > Problem 3 > --------- > > Description > Negative subscripts return a slice offset by -1 from the expected > result. This one _should_ be fixable just by removing __getslice__ and working with __getitem__ only. (getitem is called instead of getslice). However it fails for the same reason that this fix fails for problem 2. Mumble. Note that both fixes work fine if you always include the second colon. One approach would be to do all [] operations in getitem and have getslice raise an error. This would force your users to always use a[-3:-2:] instead of a[-3:-2]. The former can be made to work since it calls getitem with the raw values [slice(-3,-2,None)] as opposed to munging them like the latter does. -tim |
From: Jeff W. <js...@cd...> - 2001-08-30 12:08:29
|
Has anyone tried my patch to LinearAlgebra.py and lapack_litemodule.c? I've found major speedups (by a factor of 3-5) in all operations involving an svd or an eigenanalysis of a symmetric matrix. Paul's initial comment was that since it was not self-contained (it required linking external blas and lapack libs) he could not consider putting it in. I have rectified that now by including new versions of f2c_lite.c, blas_lite.c and d(z)lapack_lite.c that support the newer, faster lapack routines. I would appreciate any reports on how it works (or doesn't) on other platforms - I've tried it on MacOS X and Solaris. Grab it from the 'Patches' section of the sourceforge project page. -Jeff -- Jeffrey S. Whitaker Phone : (303)497-6313 Meteorologist FAX : (303)497-6449 NOAA/OAR/CDC R/CDC1 Email : js...@cd... 325 Broadway Web : www.cdc.noaa.gov/~jsw Boulder, CO, USA 80303-3328 Office : Skaggs Research Cntr 1D-124 |
From: Eric N. <no...@ph...> - 2001-08-30 08:07:52
|
Thanks to everyone who responded to my comments about one-offset arrays in Python. I understand much better now why zero-offset arrays are a good choice for Python. For the reasons already discussed (mostly ease of translating one-offset algorithms), one-offset arrays would nevertheless be useful in certain situations, so I went ahead and wrote a class for a one-offset array. It ended up being a somewhat bigger job than I expected, and since it is rather too much material for a mailing-list submission I created a web site for it. Anyone who is interested in this topic please have a look: http://www.physics.ubc.ca/~mbelab/python/arrayone/index.html I'd love some feedback along the following lines: 1. There are 4 outstanding problems/bugs which I could identify but was unable to fix. Most of these seem to be due to limitations of the UserArray class, but they could also be due to limitations of the programmer. None of these problems makes the class unusable. One of them is the issue which Travis identified about functions taking UserArrays but returning standard arrays. It would be nice if the resulting discussion led to something being done... Any suggestions for fixing the other 3 issues would be most welcome. 2. I would like this module to be as accessible as possible to others, particularly those who are just getting started with Python and may therefore be especially encumbered with one-offset baggage. How can I do this? Submit it to a Python resource such as the Vaults of Parnassus? Get people to link to my web page? Does someone with a more prominent website want to volunteer to host it? Is there any possibility of having it included eventually with Numerical Python or Scientific Python? Probably some people will consider it too trivial to include, but I found it less trivial than I expected once all the cases were covered - and there are still the 4 outstanding issues. (Yes the code sucks! Suggestions for improvement are most welcome!) 
Why make people reinvent the wheel? People coming from MatLab, for example, might be inclined to give up and go back to MatLab. Eric

P.S. In case anyone is interested in the outstanding problems but for some reason doesn't want to or can't visit the web site, here are the summaries:

Problem 1
---------
Description: for loops don't work with variables of type arrayone.

Example:

    X = arrayone((1,2,3))
    for item in X: print item

This just generates an error.

Reason: In Python, a for loop works by starting at x(0) and incrementing until an out-of-bounds error occurs. arrayone types have no 0 element.

Work-around: Cast to type array in for loops. For example:

    X = arrayone((1,2,3))
    for item in array(X): print item

Possible solution: Is it possible to "overload" "for" so that it behaves differently for the arrayone type?

Problem 2
---------
Description: Slicing an arrayone from 0 doesn't generate an error, but it should.

Example:

    X = arrayone((1,2,3))
    X[0:3]

This returns ArrayOne.arrayone([1, 2, 3]) instead of an error.

Reason: X[:3] results in a call to __getslice__ with low=0. This cannot be distinguished from X[0:3]. Therefore, in order to deal correctly with the X[:] case, we have to assume that low=0 means an unspecified low.

Work-around: If you don't trust your code, you have to implement specific checking for this condition before slicing.

Possible solution: If it were possible to get access to the unparsed input (literally '0:3', for example), this could be fixed.

Problem 3
---------
Description: Negative subscripts return a slice offset by -1 from the expected result.

Example:

    X = arrayone((1,2,3,4))
    X[-3:-2]

This returns ArrayOne.arrayone([1, 2]) instead of ArrayOne.arrayone([2, 3]).

Reason: X[-3:-2] in the above example results in a call to __getslice__ with low=1, high=2, which cannot be distinguished from X[1:2].

Work-around: Don't use negative index slicing with arrayone types.

Possible solution: With access to the unparsed input, this could be fixed.

Problem 4
---------
Description: Functions which take arrayone arguments return type array.

Work-around: Have to cast back into arrayone, for example:

    Y = arrayone(somefunc(X))

Possible solution: See http://www.geocrawler.com/lists/3/SourceForge/1329/0/6505926 |
From: Bernard F. <fra...@er...> - 2001-08-30 03:41:37
|
I came across this problem a few months ago, I hacked the indicated lines in what seemed, at the time, to be the obvious fashion and the thing worked fine. I'm sorry that I can't be more helpful, but I haven't kept the file. The hack might just have been to comment out the offending statements. Bernie |
From: Jon S. <js...@wm...> - 2001-08-29 21:16:34
|
On Wed, 29 Aug 2001, Chris Barker wrote:

> A) Are there any tricks to making a function work with multiple types? I
> currently have it working with only double arrays, but it would be great
> for it to be universal (and it could then be added to the main NumPy
> distribution, I suppose). I seem to have to typecast the data to do the
> comparison I want, so I can't figure out how to make it general. For
> example, I have this line:
>
>     if (*(double *)(Array->data + i*elementsize) > *(double *)(Max->data + i*elementsize))
>     {
>         *(double *)(Array->data + i*elementsize) = *(double *)(Max->data + i*elementsize);
>     }
>
> How could I do that without the explicit typecasts? I'm pretty sure I
> could make the rest of the routine general.

Some time ago, I used a union like:

    typedef union {
        short *sdata;
        int *idata;
        long *ldata;
        float *fdata;
        double *ddata;
        /* And so on */
    } MyNumPyUnion;

Then, you can assign (and CAST) the data field of the NumPy array according to the typecode of the array, as in:

    MyNumPyUnion theunion;
    theunion.fdata = (float*) thenumpyarray->data; /* If it is a Float32 */

Finally, you use sdata, idata, ldata, fdata and so on to iterate, dereferencing accordingly, depending on the typecode of the array. It is a nightmare to write this code, but I can not imagine any other approach. Maybe other developers can help you with a better approach. Hope this helps. Jon Saenz. | Tfno: +34 946012470 Depto. Fisica Aplicada II | Fax: +34 944648500 Facultad de Ciencias. \\ Universidad del Pais Vasco \\ Apdo. 644 \\ 48080 - Bilbao \\ SPAIN |
From: Chris B. <chr...@ho...> - 2001-08-29 20:49:19
|
Hi all, I wrote last week looking for hints for writing a new clip function that would be faster than the existing one. I got some suggestions, and have written the function. It works, but I'd love to get a little code review from the more experienced folks here. I have a few specific questions:

A) Are there any tricks to making a function work with multiple types? I currently have it working with only double arrays, but it would be great for it to be universal (and it could then be added to the main NumPy distribution, I suppose). I seem to have to typecast the data to do the comparison I want, so I can't figure out how to make it general. For example, I have this line:

    if (*(double *)(Array->data + i*elementsize) > *(double *)(Max->data + i*elementsize))
    {
        *(double *)(Array->data + i*elementsize) = *(double *)(Max->data + i*elementsize);
    }

How could I do that without the explicit typecasts? I'm pretty sure I could make the rest of the routine general.

B) I was trying to figure out how to loop through all the elements of a non-contiguous array. I got two hints:

"Rob W. W. Hooft" wrote:
> Never done this in C for numpy yet, but I normally program this
> along the following lines:
>
>     n=[2,3,2,3]
>     x=[0,0,0,0]
>     while 1:
>         print x
>         i=len(n)-1
>         while i>=0 and x[i]+1==n[i]:
>             x[i]=0
>             i=i-1
>         if i<0:
>             break
>         x[i]=x[i]+1
>
> It is always going to be slower than the straight loop for contiguous
> arrays.

This is what I did, and it is a LOT slower (about a factor of ten). See the enclosed C code. There has got to be a way to optimize this!

> So if you want to do it properly, you'd have to check whether
> the array is contiguous, and if it is, use the fast way.

Since I had already written code for the contiguous case, it was easy to special-case it.

Pete Shinners wrote:
> it's easiest to use recursion than nested loops. then you can
> support any number of dimensions. that should point you in the
> right direction :]

When I read this, I thought: "well, duh!", but then I couldn't figure out how to do it. Would there be any advantage over the above loops anyway? Is there either a more elegant or faster way to loop through all the elements of these three arrays?

C) Where is the DL_EXPORT macro defined? I haven't been able to find it with grep yet. Does it work on all NumPy-supported platforms?

D) Is there an existing function or macro for checking if two arrays are the same shape? I wrote one, but I imagine it exists.

F) I needed to work with input that could be any Python number, or an Array of the right size. What I did was call PyArray_FromObject on the min and max inputs, so that I would get a PyArray; then I could check if it was a rank-0 array, or the right size. I haven't been able to decide if this is a nifty trick, or a kludge. Any other suggestions?

E) Have I got reference counting right? I tried to Py_DECREF all my temp arrays, but I'm not sure I did it right.

Sorry to enclose the code if you have no interest in it, but it's not really that big. -Chris -- Christopher Barker, Ph.D. Chr...@ho... --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ |
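For comparison, the operation being implemented in this thread, clamping each element between per-element bounds, is a one-liner in array notation; a sketch in modern NumPy (where np.clip accepts array-valued bounds directly, and the values are made up):

```python
import numpy as np

a  = np.array([1.0, 5.0, -2.0, 8.0])
lo = np.array([0.0, 0.0,  0.0, 0.0])
hi = np.array([4.0, 4.0,  4.0, 4.0])

# Elementwise clamp via where(); equivalent to np.clip(a, lo, hi).
clipped = np.where(a > hi, hi, np.where(a < lo, lo, a))

assert np.array_equal(clipped, np.clip(a, lo, hi))
assert np.array_equal(clipped, np.array([1.0, 4.0, 0.0, 4.0]))
```

Numeric's where() offered the same construction; the C routine discussed in the thread was about beating that formulation's temporaries, not about correctness.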
From: Chris B. <chr...@ho...> - 2001-08-29 17:21:05
|
It's mostly been said already, but I can't help but add my $.02 sometimes...

Eric Nodwell wrote:
> >> Without one-offset indexing, it seems to me that Python is minimally
> >> useful for numerical computations. Many, perhaps the majority, of
> >> numerical algorithms are one-indexed. Matlab for example is one-based
> >> for this reason. In fact it seems strange to me that a "high-level"
> >> language like Python should use zero-offset lists.

I was a heavy user of MATLAB for a long time before I discovered NumPy, and I have to say that I like the 0-indexing scheme MUCH better!

> In my view, the most important reason to prefer 1-based indexing
> versus 0-based indexing is compatibility. For numerical work, some of
> the languages which I use or have used are Matlab, Mathematica, Maple
> and Fortran. These are all 1-indexed.

Actually, Fortran is indexed however you decide you want it:

    DIMENSION array(0:9)
    DIMENSION array(1:10)      (equivalent to DIMENSION array(10))
    DIMENSION array(1900:1999)

are all legal. This is a VERY handy feature, and I would say that I used the 0-indexed version most often. The reason is related to C's pointer-arithmetic logic: often the array would represent discrete points on a continuous scale, so I could find the value of X, for instance, by doing:

    Xaxis(i) = MinX + i*DeltaX

With 1-indexing, you have to subtract 1 all the time. I suspect that the higher-level nature of NumPy would make it a lot harder to have arbitrary indexing of this fashion: if all you have to do is access elements, it is easy, but if you have a whole collection of array-oriented operations, as NumPy does, you would probably have to stick with one standard, and I think the 0-indexing standard is the best.

> for experts in a
> particular field who are accustomed to certain ingrained notations, it
> is the code which breaks the conventional notation which is most
> error-prone.

This is why being able to set your own indexing notation is the best option, but a difficult one to implement.
> Python is otherwise such an elegant and
> natural language. Why the ugly exception of making the user conform to
> the underlying mechanism of an array being an address plus an offset?

I gave an example above, and others have too: Python's indexing scheme is elegant and natural for MANY usages. As with many things Python (indentation, anyone?), I found it awkward to make the transition at first, but then found that it, in fact, made things easier in general. For me, this is the very essence of truly usable design: it is designed to make people most productive in the long run, not most comfortable when they first start using it.

> All this is really neither here nor there, since this debate, at least
> as far as Python is concerned, was probably settled 10 years ago and

Well, yes, and having NumPy differ from the rest of Python would NOT be a good idea either.

> I'm sure nobody wants to hear anything more about it at this point.
> As you point out, I can define my own array type with inheritance. I
> will also need my own range command and several other functions which
> haven't occurred to me yet. I was hoping that there would be a standard
> module to implement this.

If it were truly generally useful, there probably would be such a package. I imagine most people have found it easier to make the transition than to write a whole package that would let them avoid it. If you really have a lot of 1-indexed code that you want to translate, it may be worth the effort for you, and I'm sure there are other folks who would find it useful, but remember that it will always be incompatible with the rest of Python, which may make it harder to use than you imagine.

-Chris

-- Christopher Barker, Ph.D. Chr...@ho...
|
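Chris's axis example is easy to state concretely. In this minimal sketch (function names are illustrative, not from NumPy), the 0-based version maps an index straight to a coordinate, while the 1-based version pays the off-by-one correction on every access:

```python
def axis_value(i, min_x, delta_x):
    """0-based: the i-th sample of a uniformly spaced axis."""
    return min_x + i * delta_x

def axis_value_1based(i, min_x, delta_x):
    """1-based: the same axis, with the '- 1' correction Chris mentions."""
    return min_x + (i - 1) * delta_x

first = axis_value(0, 2.0, 0.5)   # the axis starts at min_x
fifth = axis_value(4, 2.0, 0.5)   # 2.0 + 4*0.5
```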
From: Konrad H. <hi...@cn...> - 2001-08-29 14:41:07
|
> > > Is there any way to get invert() to work with complex arrays?
> >
> > If you mean LinearAlgebra.inverse(), it should work with complex arrays,
> > although I haven't checked that for a long time.
>
> I tested a small matrix and got an error, but I will try again, since
> with my luck it was singular :)

No need to take chances:

    >>> from Numeric import *
    >>> from LinearAlgebra import inverse
    >>> a = 1.j*array([[1., 0.], [0., 1.]])
    >>> a
    array([[ 0.+1.j,  0.+0.j],
           [ 0.+0.j,  0.+1.j]])
    >>> inverse(a)
    array([[ 0.-1.j,  0.+0.j],
           [ 0.+0.j,  0.-1.j]])

So it works for at least one complex matrix. :-)

Konrad.
--
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------
|
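Konrad's result can also be confirmed with plain Python complex arithmetic, no Numeric needed: his matrix is diagonal (1j times the identity), and the inverse of a diagonal matrix just inverts each diagonal entry:

```python
a_diag = [1j, 1j]                   # diagonal of Konrad's matrix 1j * I
inv_diag = [1 / d for d in a_diag]  # invert each diagonal entry: 1/1j == -1j
```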
From: Rob <eu...@ho...> - 2001-08-29 14:09:34
|
Konrad Hinsen wrote: > > > Is there any way to get invert() to work with complex arrays? > > If you mean LinearAlgebra.inverse(), it should work with complex arrays, > although I haven't checked that for a long time. > > Konrad. > -- > ------------------------------------------------------------------------------- > Konrad Hinsen | E-Mail: hi...@cn... > Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 > Rue Charles Sadron | Fax: +33-2.38.63.15.17 > 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ > France | Nederlands/Francais > ------------------------------------------------------------------------------- > > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > http://lists.sourceforge.net/lists/listinfo/numpy-discussion I tested a small matrix and got an error, but I will try again, since with my luck it was singular :) Rob. -- The Numeric Python EM Project www.members.home.net/europax |
From: Konrad H. <hi...@cn...> - 2001-08-29 10:06:47
|
> This led me to some thinking about why UserArrays are not more often used.
>
> I think one of the biggest reasons is that most of the functions can
> take UserArrays but returned the basic array type upon completion.
> So, you end up having to continually construct your UserArrays.

Exactly.

> Are there other reasons people have thought of?

Performance, in some cases. Or passing objects to C routines that expect plain arrays. But the function issue is certainly the major one.

> So, here's a suggestion:
>
> Why don't we modify PyArray_Return to return an object of the same class as
> one of the arguments which was passed if the class defines an __array__
> method.

I don't think it can be done at the level of PyArray_Return; it cannot know what the classes of the arguments were. We *could* achieve the same goal by making appropriate modifications to all the C functions in Numeric.

> Which argument to pick and how this would be implemented without
> changing old code is an interesting question.

No idea about the first issue, but compatibility can be assured (well, in most cases) by picking some attribute name (__array__) that is not used in the current UserArray class. We'd then have some other class derived from UserArray which sets that attribute.

Konrad.
|
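The proposal can be modeled in pure Python. In this toy sketch (all class, function, and attribute names are invented for illustration; this is not actual Numeric or UserArray code), a "Numeric function" computes on the plain base type but re-wraps its result whenever an argument's class advertises a marker attribute:

```python
class BaseArray:
    """Stand-in for the plain Numeric array type."""
    def __init__(self, data):
        self.data = list(data)

def _wrap_like(result, *args):
    # Mimic the proposed re-wrapping: if any argument's class defines the
    # marker attribute, return an instance of that class instead.
    for a in args:
        if getattr(type(a), '__wraps_array__', False):
            return type(a)(result.data)
    return result

def add(a, b):
    """A 'Numeric function': computes on plain data, then re-wraps."""
    raw = BaseArray([x + y for x, y in zip(a.data, b.data)])
    return _wrap_like(raw, a, b)

class MyArray(BaseArray):
    """A user subclass that asks to survive through such functions."""
    __wraps_array__ = True
```

With this in place, `add(MyArray([1, 2]), BaseArray([3, 4]))` comes back as a `MyArray`, so the user never has to reconstruct the subclass by hand; old code using only `BaseArray` is untouched, which is the compatibility property Konrad describes.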
From: Konrad H. <hi...@cn...> - 2001-08-29 10:01:41
|
> versus 0-based indexing is compatibility. For numerical work, some of
> the languages which I use or have used are Matlab, Mathematica, Maple
> and Fortran. These are all 1-indexed. (C is by nature 0-indexed
> because it is so close to machine architecture, but with a little bit

I don't think this is a matter of numerics vs. machine orientation; it is just a historical accident. Having used 0-based and 1-based languages extensively, I find both perfectly suitable for numerical work, and in my experience 0-based indexing leads to shorter index expressions in many cases.

> symbols and function names for another. However it can be damn tricky
> to convert 1-indexed code to 0-indexed code or vice versa without
> introducing any errors - believe me! (Yes it's possible to call nearly

Having done so recently, I agree :-(

> The other reason for choosing 1-based indexing is to keep the code as
> near to the standard notation as possible. This is one of the

I wouldn't call 1-based indexing "standard" - it all depends on your background.

> question: as a simple example a spatial vector (x,y,z) is
> conventionally labelled 1,2,3 (1-indexed), but a relativistic
> four-vector with time included (t,x,y,z) is conventionally labelled
> 0,1,2,3 (0-indexed). So ideally one would be able to choose the

So, when you implement classes that represent vectors, you should stick to whatever notation is common in the application domain. I see bare-bones arrays as we have them in NumPy as a low-level building block for high-level classes, i.e. not as a data type one should use directly in non-trivial application code.

It's not just indexing, but also operations. For your four-vectors, you would want a method that calculates the norm in the Einstein metric, for example. If you don't have a special four-vector class but use arrays directly, any length calculation will look messy. That's what OO design is good for.
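A four-vector class of the kind described here can be sketched in a few lines (the class and method names are illustrative only; this is not code from Scientific Python). Once the metric lives in a method, application code never touches index 0 directly:

```python
class FourVector:
    """Toy relativistic four-vector (t, x, y, z), metric signature (+,-,-,-)."""
    def __init__(self, t, x, y, z):
        self.t, self.x, self.y, self.z = t, x, y, z

    def minkowski_norm_sq(self):
        # Einstein metric: t**2 - (x**2 + y**2 + z**2)
        return self.t**2 - (self.x**2 + self.y**2 + self.z**2)

light_like = FourVector(1.0, 1.0, 0.0, 0.0)   # norm 0: on the light cone
```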
In fact, with special four-vector classes and appropriate methods, there will hardly be any explicit indices in the application code at all!

> indexing-type case-by-case. I'm sure that computer programmers will
> argue vehemently that code which mixes both 0-indexed and 1-indexed
> arrays is hard to understand and maintain, but for experts in a

Yes, if they are supposed to be the same data type. If different classes use different indexing schemes, everything remains clear, as long as the interactions between the classes are well defined.

> error-prone. In my case, I'm dealing at the moment with crystal
> structures with which are associated certain conventional sets of
> vectors and tensors - all 1-indexed by convention. I find it a

Have a look at the vector and tensor classes in Scientific Python (http://dirac.cnrs-orleans.fr/programs/scientific.html). Although they are 0-based as well (because I got so used to that!), you might find that the predefined methods cover most of your needs and that explicit indices are hardly needed. But you are also free to modify the __getitem__ and __setitem__ methods to make the classes 1-based.

> I guess if I understood the reason for 0-indexed lists and tuples in
> Python I would be happier. In normal, everyday usage, sets,

Python has a very consistent indexing scheme, which you can best understand in the following way: imagine that the indices are pointers to positions *between* the elements:

    elements:         | a | b | c | d |
    index:            0   1   2   3   4
    negative index:  -4  -3  -2  -1

Then all the slicing and indexing rules make a lot of sense and fulfill practically useful relations. For example, it is true that x[a:b] + x[b:c] equals x[a:c] for any list/array x, which would not be the case with 1-based indices. Adding indices is also more straightforward with base 0: all those "i+j-1" you see so frequently in Fortran code become just "i+j".
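Konrad's slice identity is easy to verify exhaustively for a small list, using nothing beyond the built-in list type:

```python
x = list("abcdef")
# Slice bounds sit *between* elements, so adjacent slices concatenate
# cleanly: x[a:b] + x[b:c] == x[a:c] for every a <= b <= c.
for a in range(len(x) + 1):
    for b in range(a, len(x) + 1):
        for c in range(b, len(x) + 1):
            assert x[a:b] + x[b:c] == x[a:c]
```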
Sure, everyday language works differently, but then you don't write algorithms in everyday language, for a good reason.

> By the way, what is leave-last-one-out slicing? Is it
> a[:-1]

That one, as is clear from my little picture above.

Konrad.
|