From: Paul F. D. <du...@ll...> - 2000-02-08 00:29:37

Travis says that I don't necessarily endorse his goals, but in fact I do, strongly. If I understand right, he intends to make a CVS branch for this experiment, and that is fine with me.

The only goal I didn't quite understand was:

    Addition of attributes so that different users can configure aspects
    of the math behavior, to their heart's content.

In a world of reusable components the situation is complicated. I would not like to support a dot-product routine, for example, if the user could turn off double precision behind my back. My needs for precision are local to my algorithm.

From: Travis O. <Oli...@ma...> - 2000-02-08 00:11:26

I wanted to let users of the community know (so they can help if they want, or offer criticism or comments) that over the next several months I will be experimenting with a branch of the main Numerical source tree, endeavoring to "clean up" the code for Numerical Python.

I have in mind a few (in my opinion minor) alterations to the current code base which necessitate a branch. Guido has made some good suggestions for improving the code base, and both David Ascher and Paul Dubois have expressed concerns over the current state of the source code and given suggestions as to how to improve it. That said, I should emphasize that my work is not authorized, or endorsed, by any of the people mentioned above. It is simply my little experiment.

My intent is not to re-create Numerical Python --- I like most of the current functionality --- but merely to clean up the code, comment it, change the underlying structure just a bit, and add some features I want. One goal I have is to create something that can go into Python 1.7 at some future point, so this incarnation of Numerical Python may not be completely C-source compatible with current Numerical Python (but it will be close). This means C extensions that access the underlying structure of the current arrayobject may need some alterations to use this experimental branch if it ever becomes useful.

I don't know how long this will take me. I'm not promising anything. The purpose of this announcement is just to invite interested parties into the discussion.

These are the (somewhat negotiable) directions I will be pursuing:

1) Still written in C but heavily (in my opinion) commented.
2) Addition of bit-types and unsigned integer types.
3) Facility for memory-mapped dataspace in arrays.
4) Slices become copies, with the addition of methods for the current strict referencing behavior.
5) Handling of slice objects which consist of sequences of indices (so that setting and getting elements of arrays using their index is possible).
6) Rank-0 arrays will not be autoconverted to Python scalars, but will still behave as Python scalars whenever Python allows general scalar-like objects in its operations. Methods will allow user-controlled conversion to Python scalars.
7) Addition of attributes so that different users can configure aspects of the math behavior, to their heart's content.

If there is anyone interested in helping in this "unofficial branch work", let me know and we'll see about setting up someplace to work. Be warned, however, that I like actual code or code templates more than just great ideas (truly great ideas are never turned away, however ;-) ).

If something I do benefits the current NumPy source in a non-invasive, backwards-compatible way, I will try to place it in the current CVS tree, but that won't be a priority, as my time does have limitations, and I'm scratching my own itch at this point.

Best regards,

Travis Oliphant

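A minimal illustration of the slice-semantics change in item 4, assuming the Numeric package of the era; nothing here comes from the experimental branch itself, it only contrasts the existing reference (view) behavior with an explicit copy.

    import Numeric

    a = Numeric.arange(10)

    # Current Numeric behavior: a slice is a reference onto the same
    # data, so writing through the slice changes the original array.
    view = a[2:5]
    view[0] = 99                    # a[2] is now 99 as well

    # The proposed default (item 4) is copy behavior, which today
    # requires an explicit copy, e.g. via the array() constructor:
    b = Numeric.arange(10)
    indep = Numeric.array(b[2:5])   # forces a copy
    indep[0] = 99                   # b is unchanged
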
From: Travis O. <Oli...@ma...> - 2000-02-04 20:51:48

> I'm having a problem with PyArray_Check. If I just call
> PyArray_Check(args) I don't have a problem, but if I try to assign the
> result to anything, etc., it crashes (due to access violation). So, for
> example, the code at the end of this note doesn't work, yet I know an array
> is being passed and I can, for example, calculate its trace correctly if I
> type cast it as a PyArrayObject*.
>
> Also, a more general question: is this the recommended way to input numpy
> arrays when using swig, or do most people find it easier to use more
> elaborate typemaps, or something else?

I have some experience with SWIG, but it is not my favorite method to use Numerical Python with C, since you have so little control over how things get allocated.

Your problem is probably due to the fact that you do not run import_array() in the module header. There is a typemap in SWIG that lets you put commands to run at module initialization. Try this in your *.i file:

    %init %{
    import_array();
    %}

This may help.

Best,

Travis

From: Tom A. <tl...@re...> - 2000-02-03 22:00:45

I'm having a problem with PyArray_Check. If I just call PyArray_Check(args) I don't have a problem, but if I try to assign the result to anything, etc., it crashes (due to access violation). So, for example, the code at the end of this note doesn't work, yet I know an array is being passed and I can, for example, calculate its trace correctly if I type cast it as a PyArrayObject*.

Also, a more general question: is this the recommended way to input numpy arrays when using swig, or do most people find it easier to use more elaborate typemaps, or something else?

Finally, I apologize if this is the wrong forum to post this question. Please let me know.

Thanks,
Tom

Method from C++ class:

    PyObject * Test01::trace(PyObject * args)
    {
        if (!(PyArray_Check(args))) {   // <- crashes here
            PyErr_SetString(PyExc_ValueError, "must use NumPy array");
            return NULL;
        }
        return NULL;
    }

Swig file (where typemaps are the ones included with most recent swig):

    /* TMatrix.i */
    %module Ptest
    %include "typemaps.i"
    %{
    #include "Test01.h"
    %}

    class Test01 {
    public:
        PyObject * trace(PyObject *INPUT);
        Test01();
        virtual ~Test01();
    };

Python code:

    import Ptest
    t = Ptest.Test01()
    import Numeric
    a = Numeric.arange(1.1, 2.7, .1)
    b = Numeric.reshape(a, (4,4))
    x = t.trace(b)

From: Joe V. A. <van...@at...> - 2000-02-02 18:37:35

I would like a single-precision version of 'interp' in the Numeric core. (I want such a routine since I'm operating on huge single-precision arrays that I don't want promoted to double precision.)

I've written such a routine, but Paul Dubois and I are discussing the best way of integrating it into the core. One solution is to simply add a new function 'interpf' to arrayfnsmodule.c. Another solution is to add a typecode=Float option to interp.

Any opinions on how this single-precision version should be handled?

--
Joe VanAndel
National Center for Atmospheric Research
http://www.atd.ucar.edu/~vanandel/
Internet: van...@uc...

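A small sketch of what the second option (a typecode argument) could look like from the caller's side, written as a plain Python wrapper around the existing routine. The name interp32 and the argument order assumed for arrayfns.interp are illustrative guesses, and the wrapper still computes in double precision internally, so it only mimics the proposed interface rather than delivering the memory savings Joe is after.

    import Numeric
    import arrayfns   # the module built from arrayfnsmodule.c

    def interp32(y, x, z):
        # Interpolate in double precision (the current behavior),
        # then cast the result down to single precision.
        result = arrayfns.interp(y, x, z)
        return result.astype(Numeric.Float32)

    # Usage sketch:
    x = Numeric.arange(0.0, 10.0, 1.0)
    y = x * x
    z = Numeric.array([2.5, 7.25], Numeric.Float32)
    print(interp32(y, x, z))
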
From: Harald Hanche-O. <ha...@ma...> - 2000-01-29 02:39:12

>>> print Numeric.trace.__doc__
trace(a,offset=0, axis1=0, axis2=1) returns the sum along diagonals
(defined by the last two dimenions) of the array.

For arrays of rank 2, trace does what you expect, but for arrays of larger rank, it appears to simply sum along each of the two given axes. A simple experiment follows:

>>> B
array([[[       1,       10],
        [     100,     1000]],
       [[   10000,   100000],
        [ 1000000, 10000000]]])
>>> # What I thought trace(B) would be:
>>> B[0,0,0]+B[1,1,0], B[0,0,1]+B[1,1,1]
(1000001, 10000010)
>>> # But that is not what numpy thinks:
>>> Numeric.trace(B)
array([   10001, 10001000])
>>> # Instead, it must be computing it as follows:
>>> B[0,0,0]+B[1,0,0], B[0,1,1]+B[1,1,1]
(10001, 10001000)

That is, trace(B) is the vector C, given by C[i] = sum(B[j,i,i]: j=0,...). A bit more experimentation reveals that trace ignores its fourth argument, consistent with the above result:

>>> Numeric.trace(B,0,0,1)
array([   10001, 10001000])
>>> Numeric.trace(B,0,0,2)
array([   10001, 10001000])

Evidently, trace is going to need a rewrite. It might perhaps also benefit from further optional arguments in groups of three, e.g.,

    trace(A, p, 0, 3, q, 1, 2)[k,l,...] = A[i+p,j+q,j,i,k,l,...]

with summing over repeated indices (i, j) a la Einstein.

- Harald

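For comparison, a minimal sketch of the behavior Harald expected, summing B[i,i,...] over the diagonal of the first two axes with plain loops; the helper name leading_trace is made up for illustration and is not part of Numeric.

    import Numeric

    def leading_trace(a):
        # a is assumed to be a Numeric array.  Sum a[i, i, ...] over the
        # diagonal of the first two axes, leaving the other axes intact.
        n = min(a.shape[0], a.shape[1])
        total = a[0, 0]
        for i in range(1, n):
            total = total + a[i, i]
        return total

    # Harald's example array:
    B = Numeric.array([[[1, 10], [100, 1000]],
                       [[10000, 100000], [1000000, 10000000]]])
    print(leading_trace(B))   # -> [ 1000001 10000010], the result he expected
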
From: Harald Hanche-O. <ha...@ma...> - 2000-01-29 02:38:37

I am having some problems relating to the current function dot (identical to matrixmultiply, though I haven't seen the equivalence in any documentation). Here is what the docs say:

    dot(a,b)
    Will return the dot product of a and b. This is equivalent to matrix
    multiply for 2d arrays (without the transpose). Somebody who does more
    linear algebra really needs to do this function right some day!

Or the builtin doc string:

>>> print Numeric.dot.__doc__
dot(a,b) returns matrix-multiplication between a and b.
The product-sum is over the last dimension of a and the
second-to-last dimension of b.

First, this is misleading. It seems to me to indicate that b must have rank at least 2, which experiments indicate is not necessary. Instead, the rule appears to be to use the only axis of b if b has rank 1, and otherwise to use the second-to-last one.

Frankly, I think this convention is ill motivated, hard to remember, and even harder to justify. As a mathematician, I can see only one reasonable default choice: one should sum over the last index of a, and the first index of b. Using the Einstein summation convention [*], that would mean that

    dot(a,b)[j,k,...,m,n,...] = a[j,k,...,i] * b[i,m,n,...]

[*] that is, summing over repeated indices -- i in this example

This would of course yield the current behaviour in the important cases where the rank of b is 1 or 2.

But we could do better than this: why not leave the choice up to the user? We could allow an optional third parameter which should be a pair of indices, indicating the axes to be summed over. The default value of this parameter would be (-1, 0). Returning to my example above, the user could then easily compute, for example,

    dot(a,b,(1,2))[j,k,...,m,n,...] = a[j,i,k,...] * b[m,n,i,...]

while the current behaviour of dot would correspond to the new behaviour of dot(a,b,(-1,-2)) whenever b has rank at least 2.

Actually, there is probably a lot of code out there that uses the current behaviour of dot. So I would propose leaving dot untouched, and introducing inner instead, with the behaviour I outlined above. We could even allow any number of pairs of axes to be summed over, for example

    inner(a,b,(1,2),(2,0))[k,l,...,m,n,...] = a[k,i,j,l,...] * b[j,m,i,n,...]

With this notation, one can for example write the Hilbert-Schmidt inner product of two real 2x2 matrices (the sum of a[i,j]*b[j,i] over all i and j) as inner(a,b,(0,1),(1,0)).

If my proposal is accepted, the documentation should probably declare dot (and its alias matrixmultiply?) as deprecated and due to disappear in the future, with a pointer to its replacement inner. In the meantime, dot could in fact be replaced by a simple wrapper to inner:

    def dot(a, b):
        if len(b.shape) > 1:
            return inner(a, b, (-1, -2))
        else:
            return inner(a, b)

(with the proper embellishments to allow it to be used with python sequences, of course).

- Harald

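A minimal sketch of the default contraction Harald proposes (the last axis of a against the first axis of b), expressed with existing Numeric primitives rather than new C code. The helper name inner_first_last is made up for illustration, the general axes-pair signature is left out to keep the sketch short, and the sketch relies on innerproduct contracting the last axis of both of its arguments.

    import Numeric

    def inner_first_last(a, b):
        # result[j,k,...,m,n,...] = sum_i a[j,k,...,i] * b[i,m,n,...]
        # Roll b's first axis to the end so that innerproduct (which
        # contracts the last axis of both arguments) can do the work.
        r = len(b.shape)
        b_rolled = Numeric.transpose(b, tuple(range(1, r)) + (0,))
        return Numeric.innerproduct(a, b_rolled)

    # For 2-d arrays this reduces to ordinary matrix multiplication:
    A = Numeric.array([[1., 2.], [3., 4.]])
    B = Numeric.array([[5., 6.], [7., 8.]])
    print(inner_first_last(A, B))   # same as Numeric.dot(A, B)
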
From: fredrik <arc...@db...> - 2000-01-26 22:19:26

This message was sent from Geocrawler.com by "fredrik" <fre...@pi...>. Be sure to reply to that address.

How do I install the latest version of NumPy if I have LAPACK already installed? I'm using distutils.

From: Pearu P. <pe...@io...> - 2000-01-26 19:00:11

On Wed, 26 Jan 2000, Travis Oliphant wrote:

> > Proclamation:
> >     Introduce a column-wise array to Numeric Python where data is
> > stored in column-wise order that can be used specifically for fortran
> > routines.
>
> This is a very interesting proposal that we should consider carefully. I
> seem to recall reading that Jim Hugunin originally had this idea in
> mind when he established the concept of contiguousness, etc.
>
> My current thoughts on this issue are that it is of only syntactic value
> and seems like a lot of extra code has to be written in order to provide
> this "user-friendliness." I don't see why it is so confusing to
> recognize that Fortran just references its arrays "backwards" (or Python
> references them backwards --- whatever your preference). How you index
> into an array is an arbitrary decision. Numerical Python and Fortran just
> have opposite conventions. As long as that is clear, I don't see the
> real trouble. If the Fortran documentation calls for an array of
> dimension (M,N,L) you pass it a contiguous Python array of shape (L,N,M)
> --- pretty simple.
>
> Perhaps someone could enlighten me as to why this is more than just an
> aesthetic problem. Right now, I would prefer that the time spent by
> someone to "fix" this "problem" went to expanding the availability of
> easy-to-use processing routines for Numerical Python,

I think that this expansion would be quicker if the Python/Fortran connection did not introduce this additional question to worry about.

> or improving the
> cross-platform plotting capabilities.

Here I agree with you completely.

I can see the following problems when two different conventions are mixed:

1) If your application Python code is larger than "just an example that demonstrates the correct usage of two different conventions" and it can call other C/API modules that do calculations in C convention, then you need some kind of bookkeeping about where your matrices need to be transposed and where not, and where to insert additional code for doing transposition. I think this can be done at a lower level, and more efficiently, than most ordinary users would do it anyway.

2) Another, but minor, drawback of having two conventions is that if you have a square matrix that is non-symmetric, then its misuse would be easy and (maybe) difficult to discover.

On the other hand, I completely understand why my proposal would not be implemented --- it looks like it needs lots of work, and in the short term the gain would not be visible to most users.

Pearu

From: Travis O. <Oli...@ma...> - 2000-01-26 17:47:38

> Proclamation:
>     Introduce a column-wise array to Numeric Python where data is
> stored in column-wise order that can be used specifically for fortran
> routines.

This is a very interesting proposal that we should consider carefully. I seem to recall reading that Jim Hugunin originally had this idea in mind when he established the concept of contiguousness, etc.

My current thoughts on this issue are that it is of only syntactic value, and it seems like a lot of extra code has to be written in order to provide this "user-friendliness." I don't see why it is so confusing to recognize that Fortran just references its arrays "backwards" (or Python references them backwards --- whatever your preference). How you index into an array is an arbitrary decision. Numerical Python and Fortran just have opposite conventions. As long as that is clear, I don't see the real trouble. If the Fortran documentation calls for an array of dimension (M,N,L), you pass it a contiguous Python array of shape (L,N,M) --- pretty simple.

Perhaps someone could enlighten me as to why this is more than just an aesthetic problem. Right now, I would prefer that the time spent by someone to "fix" this "problem" went to expanding the availability of easy-to-use processing routines for Numerical Python, or improving the cross-platform plotting capabilities.

I agree that it can be most confusing when you are talking about matrix math, since we are so used to thinking of matrix multiplication as A * B = C with a shape analysis of:

    M x N  *  N x L  =  M x L

If the matrix multiplication code is in Fortran, then it expects to get an (M,N) array and an (N,L) array and returns an (M,L) array. But from Python you would pass it arrays with shape (N,M) and (L,N) and get back an (L,M) array, which can be confusing to our "understanding" of shape analysis in matrix multiplication. The Python matrix multiplication rule, if calling a Fortran routine to do the multiplication, is:

    (N,M)  (L,N)  =  (L,M)

I think a Python-only class could solve this problem much more easily than changing the underlying C code. This new Python Fortran-array class would just make the user think that the shapes were (M,N) and (N,L) and the output shape was (M,L).

For future reference, any array-processing codes that somebody writes should take a strides array as an argument, so that it doesn't matter what "order" the array is in.

--Travis

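The shape-reversal rule can be checked concretely with a small sketch that uses only Numeric, with no actual Fortran call; it just shows that the memory offsets coincide, which is the whole content of the (M,N,L) versus (L,N,M) convention.

    import Numeric

    # A Fortran routine documented as taking A(M, N, L) stores A(i, j, k)
    # at memory offset (i-1) + (j-1)*M + (k-1)*M*N (column-major, 1-based).
    # A C-contiguous Numeric array of shape (L, N, M) stores a[k, j, i] at
    # the same offset (row-major, 0-based), so a[k-1, j-1, i-1] is exactly
    # the element the Fortran side calls A(i, j, k).
    M, N, L = 2, 3, 4
    a = Numeric.reshape(Numeric.arange(M * N * L), (L, N, M))

    i, j, k = 2, 3, 4
    fortran_offset = (i - 1) + (j - 1) * M + (k - 1) * M * N
    print(fortran_offset)             # 23
    print(a[k - 1, j - 1, i - 1])     # also 23, since a holds 0..23 in order
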
From: Pearu P. <pe...@io...> - 2000-01-26 17:01:43

Hi!

Problem:
    Using Fortran routines from the Python C/API is "tricky" when
    multi-dimensional arrays are passed in.

Cause:
    Arrays in Fortran are stored in column-wise order, while arrays in C
    are stored in row-wise order.

Standard solutions:
1) Create a new C array; copy the data from the old one in column-wise order; pass the new array to Fortran; copy the changed array back to the old one in row-wise order; deallocate the array.
2) Change the storage order of the array in place with element-wise swapping; pass the array to Fortran; change the storage order back with element-wise swapping.

Why the standard solutions are not good:
1) Additional memory allocation, which is a problem for large arrays; element-wise copying is time consuming (2 times).
2) It is good in that no extra memory is needed, but element-wise swapping (2 times) is approximately equivalent to the element-wise copying (4 times).

Proclamation:
    Introduce a column-wise array to Numeric Python where data is stored
    in column-wise order that can be used specifically for Fortran routines.

Proposal sketch:
1) Introduce a new flag `row_order' to the PyArrayObject structure:
       row_order == 1 -> the data is stored in row-wise order
                         (default, as it is now)
       row_order == 0 -> the data is stored in column-wise order
   Note that now the concept of contiguousness depends on this flag.
2) Introduce new array "constructors" such as PyArray_CW_FromDims, PyArray_CW_FromDimsAndData, PyArray_CW_ContiguousFromObject, PyArray_CW_CopyFromObject, PyArray_CW_FromObject, etc. that all return arrays with row_order=0 and data stored in column-wise order (that is, in the case of contiguous results; otherwise the strides feature is employed).
3) In order for operations between arrays (possibly with different storage order) to work correctly, many internal functions of the NumPy C/API need to be modified.
4) Anything else?

What is the good of this?
1) The fact is that there is a large number of very good scientific tools freely available written in Fortran (Netlib, for instance). And I don't mean only Fortran 77 codes but also Fortran 90/95 codes.
2) Having Numeric Python arrays with data stored in column-wise order, calling Fortran routines from Python becomes really efficient and space-saving.
3) There should be little performance hit if, say, two arrays with different storage order are multiplied (compared to the operations between non-contiguous arrays in the current implementation).
4) I don't see any reason why older C/API modules would break because of this change if it is carried out carefully enough. So, backward compatibility should be there.
5) Anything else?

What is against this?
1) Lots of work, but with current experience it should not be a problem.
2) The size of the code will grow.
3) I suppose that most people using Numerical Python will not care about calling Fortran routines from Python. Possible reasons: too "tricky", or no need. In the first case, the answer is that there are tools such as PyFort and f2py that solve this problem. In the latter case, there is no problem :-)
4) Anything else?

I understand that my proposal is quite radical, but taking into account that we want to use Python for many years to come, its use would be more pleasing if one cause of (constant) confusion were removed during this time.

Best regards,

Pearu

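For concreteness, a small sketch of "standard solution 1" above, done at the Python level with Numeric; the name fortran_routine is a hypothetical stand-in for any wrapped Fortran subroutine, and the point is the copy-in/copy-out traffic that the proposed column-wise array flag is meant to remove.

    import Numeric

    def call_with_fortran_order(fortran_routine, a):
        # Copy the data into column-wise order (copy no. 1): transposing
        # reverses the axes without copying, and array() then forces a
        # contiguous copy of that view.
        a_col = Numeric.array(Numeric.transpose(a))
        # Let the Fortran routine work on (and possibly modify) the copy.
        fortran_routine(a_col)
        # Copy the result back into row-wise order (copy no. 2).
        a[:] = Numeric.transpose(a_col)
        return a

    # Usage sketch with a do-nothing stand-in for the Fortran routine:
    a = Numeric.array([[1., 2., 3.], [4., 5., 6.]])
    call_with_fortran_order(lambda x: None, a)
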
From: Hassan A. <au...@cr...> - 2000-01-22 16:27:52

Hi,

I have just seen with pleasure that numpy is on SourceForge. So welcome. I am maintaining and writing a generic mathematical session manager, in which I have a numpy session. The project, if you haven't noticed it, is at http://gmath.sourceforge.net

Now, this done, I'd like to know if you will be making rpm/tar.gz releases available on your site, always and preferably on the anonymous ftp site for the latest release, like in:

    ftp://numpy.sourceforge.net/latest/srpms ....

If yes, then I'd be happy to make the app go fetch the packages from that site or another, instead of making the srpms myself (which means possible bugs!).

Thank you in advance.

H. Aurag
au...@cr...
au...@us...

From: Paul F. D. <pau...@ho...> - 2000-01-20 23:13:14

This is a test. Ignore.

Paul