From: Daniel Sheltraw <sheltraw@un...>  2005-07-26 23:09:53

Hello all,

Would someone please tell me how negative and positive frequencies are stored in the output of the Numerical Python FFT?

Cheers,
Daniel
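For readers digging through the archive: the layout Daniel is asking about follows the standard FFT packing order, which today's numpy.fft keeps. A minimal sketch (using the modern numpy names, an assumption — the 2005 Numeric module spelled them differently):

```python
import numpy as np

# The n-point FFT is packed with the zero-frequency (DC) term first,
# then the positive frequencies in increasing order, then the negative
# frequencies from most negative back up toward zero.
# np.fft.fftfreq spells out that ordering for n = 8:
freqs = np.fft.fftfreq(8)
print(freqs)
# [ 0.     0.125  0.25   0.375 -0.5   -0.375 -0.25  -0.125]
```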
From: <rbastian@cl...>  2005-07-26 20:16:12

Hello,

In the current form,

    convolve(a,b) == convolve(b,a)[::-1]

but following the mathematics, one should obtain

    convolve(a,b) == convolve(b,a)

Can anyone explain the reason?

Thanks,

René Bastian
http://www.musiquesrb.org
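As a sanity check of the mathematical claim in this post: with modern NumPy (an assumption — the 2005 Numeric convolve René used may have behaved differently), full convolution is commutative with no reversal needed:

```python
import numpy as np

# Convolution is commutative: convolve(a, b) == convolve(b, a).
# (Cross-correlation, np.correlate, is the one that is not.)
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, 1.0])
print(np.convolve(a, b))  # [0.5 2.  3.5 3. ]
print(np.convolve(b, a))  # [0.5 2.  3.5 3. ]
```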
From: Perry Greenfield <perry@st...>  2005-07-26 17:20:48

On Jul 26, 2005, at 12:41 PM, Sebastian Haase wrote:

> Hi,
> This is not supposed to be an evil question; instead I'm hoping for the
> answer: "No, generally we get >=95% the speed of a pure C/Fortran
> implementation" ;-)
> But as I am the strongest Python/numarray advocate in our group I often
> get the answer that Matlab is (of course) also very convenient, but
> generally memory handling and overall execution performance is so bad
> that for the final implementation one would generally have to
> reimplement in C.
> We are a biophysics group at UCSF developing new algorithms for
> deconvolution (often in 3D). Our data sets are regularly bigger than
> several 100MB. When deciding for numarray I was assuming that the
> "Hubble crowd" had a similar situation and all the operations are
> therefore very much optimized for this type of data.
> Is 95% a reasonable number to hope for? I did wrap my own version of
> FFTW (with "plan-caching"), which should give 100% of the C speed. But
> concerns arise from expressions like "a=b+c*a" (think "convenience"!):
> if a, b, c are each 3D data stacks, creation of temporary data arrays
> for 'c*a' AND then also for 'b+...' would be very costly. (I think this
> is at least happening for Numeric -- I don't know about Matlab and
> numarray.)

Is it speed or memory usage you are worried about? Where are you actually seeing unacceptable performance? Offhand, I would say the temporaries are not likely to be serious speed issues (unless you are running out of memory). We did envision at some point (we haven't done it yet) recognizing situations where the temporaries could be reused.

As for 95% speed, it's not what's required for our work (I think what counts as an acceptable speed ratio depends on the problem). In many cases being within 50% is good enough, except for heavily used things, where it would need to be faster. But yes, we do plan on using it (and indeed already use it) for large data sets where speed is important.

We generally don't compare numarray speed to C speed all the time (if we were writing C equivalents every time, that would defeat the purpose of using Python :-). Perhaps you could give a more specific example with some measurements? I don't think I would promise anyone that all one's code could be done in Python. Undoubtedly, there are going to be some cases where coding in C or similar is going to be needed. I'd just argue that Python lets you keep as much as possible in a higher-level language and as little as necessary in a low-level language such as C.

Perry
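Perry's point about reusable temporaries can be made concrete. A sketch of hand-fusing `a = b + c*a` with explicit output arrays, spelled with today's `out=` keyword (an assumption — Numeric/numarray took the output array as a third positional argument):

```python
import numpy as np

def fused_update(a, b, c, scratch):
    """Compute a = b + c*a in place, reusing one scratch buffer."""
    np.multiply(c, a, out=scratch)  # scratch = c*a, no fresh allocation
    np.add(b, scratch, out=a)       # a = b + scratch, written in place
    return a

a = np.ones(4)
b = np.full(4, 2.0)
c = np.full(4, 3.0)
scratch = np.empty_like(a)
fused_update(a, b, c, scratch)
print(a)  # [5. 5. 5. 5.]
# The naive "a = b + c*a" allocates two temporaries of the same size as a,
# which is what hurts when a, b, c are 100MB data stacks.
```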
From: Peter Verveer <verveer@em...>  2005-07-26 17:14:21

On 26 Jul 2005, at 18:41, Sebastian Haase wrote:

> Hi,
> This is not supposed to be an evil question; instead I'm hoping for the
> answer: "No, generally we get >=95% the speed of a pure C/Fortran
> implementation" ;-)

You won't, generally. The question is: since you are certainly not going to gain an order of magnitude by doing it in C, do you really care?

> But as I am the strongest Python/numarray advocate in our group I often
> get the answer that Matlab is (of course) also very convenient, but
> generally memory handling and overall execution performance is so bad
> that for the final implementation one would generally have to
> reimplement in C.

Well, it's true that implementations in C will be faster. And memory handling in Numeric/numarray can be a pain, since the tendency is to create and destroy a lot of arrays if you are not careful.

> We are a biophysics group at UCSF developing new algorithms for
> deconvolution (often in 3D). Our data sets are regularly bigger than
> several 100MB. When deciding for numarray I was assuming that the
> "Hubble crowd" had a similar situation and all the operations are
> therefore very much optimized for this type of data.

Funny you mention that example. I did my PhD in exactly the same field (considering you are from Sedat's lab, I guess you are in exactly the same field as I was/am, i.e. fluorescence microscopy. What are you guys up to these days?) and I developed all my algorithms in C at the time. Now, about 7 years later, I have returned to the field to reimplement and extend some of my old algorithms for use with microscopy data that can consist of multiple sets, each several 100MB at least. Now I use Python with numarray, and I am actually quite happy with it. I am pushing it by using up to 2GB of memory (per process, after splitting the problem up and distributing it on a cluster...), but it works.

I am sure I could squeeze out maybe a factor of two or three in terms of speed and memory usage by rewriting in C, but that is currently not worth my time. So I guess that counts as using numarray as a prototyping environment, but the result is also suitable for production use.

> Is 95% a reasonable number to hope for? I did wrap my own version of
> FFTW (with "plan-caching"), which should give 100% of the C speed.

That should help a lot, as the standard FFTs that come with numarray/Numeric suck big time. I do use them, but have to go through all kinds of tricks to get some decent memory usage in 32-bit floating point. The FFT array module is in fact very badly written for use with large multidimensional data sets.

> But concerns arise from expressions like "a=b+c*a" (think
> "convenience"!): if a, b, c are each 3D data stacks, creation of
> temporary data arrays for 'c*a' AND then also for 'b+...' would be very
> costly. (I think this is at least happening for Numeric -- I don't know
> about Matlab and numarray.)

That is indeed a problem, although I think in your case you may be limited by your FFTs anyway, at least in terms of speed. One thing you should consider is replacing expressions such as 'c = a + b' with add(a, b, c). If you do that cleverly you can avoid quite some memory allocations, and you 'should' get closer to C. That does not solve everything though: 1) Complex expressions still need to be broken up into sequences of operations, which is likely slower than iterating once over your array and doing the whole expression at each point. 2) Unfortunately not all numarray functions support an output array (maybe only the ufuncs?). This can be a big problem, as temporary arrays must then be allocated. (It sure was a problem for me.) You can of course always reimplement the critical parts in C and wrap them (as you did with FFTW).

In fact, I think numarray now provides a relatively easy way to write ufuncs, which would allow you to write a single Python function in C for complex expressions.

> Hoping for comments,

Hope this gives some insights. I guess I have had similar experiences; there are definitely some limits to the use of numarray/Numeric that could be relieved, for instance by having a consistent implementation of output arrays. That would allow writing algorithms where you could very strictly control the allocation and deallocation of arrays, which would be a big help for working with large arrays.

Cheers, Peter

PS. I would not mind hearing a bit about your experiences doing the big deconvolutions in numarray/Numeric, but that may not be a good topic for this list.

> Thanks
> Sebastian Haase
> UCSF, Sedat Lab

--
Dr Peter J Verveer
European Molecular Biology Laboratory
Meyerhofstrasse 1
D-69117 Heidelberg
Germany
Tel. +49 6221 387 8245
Fax. +49 6221 397 8306
From: Sebastian Haase <haase@ms...>  2005-07-26 16:41:47

Hi,

This is not supposed to be an evil question; instead I'm hoping for the answer: "No, generally we get >=95% the speed of a pure C/Fortran implementation" ;-)

But as I am the strongest Python/numarray advocate in our group I often get the answer that Matlab is (of course) also very convenient, but generally memory handling and overall execution performance is so bad that for the final implementation one would generally have to reimplement in C.

We are a biophysics group at UCSF developing new algorithms for deconvolution (often in 3D). Our data sets are regularly bigger than several 100MB. When deciding for numarray I was assuming that the "Hubble crowd" had a similar situation and all the operations are therefore very much optimized for this type of data.

Is 95% a reasonable number to hope for? I did wrap my own version of FFTW (with "plan-caching"), which should give 100% of the C speed. But concerns arise from expressions like "a=b+c*a" (think "convenience"!): if a, b, c are each 3D data stacks, creation of temporary data arrays for 'c*a' AND then also for 'b+...' would be very costly. (I think this is at least happening for Numeric -- I don't know about Matlab and numarray.)

Hoping for comments,

Thanks
Sebastian Haase
UCSF, Sedat Lab
From: Chris Barker <Chris.Barker@no...>  2005-07-26 16:31:14

Soeren Sonnenburg wrote:

> Hmmhh. I see that this again breaks with R/octave/matlab. One should not
> do so if there is no serious reason. It just makes life harder for every
> potential convert from any of these languages.

If you're looking for a Matlab clone, use Octave or Scilab, or.... Speaking as an ex-Matlab user, I much prefer the NumPy approach. The reason is that I very rarely did linear algebra, and generally used Matlab as a general computational environment. I got very tired of having to type that "." before all my operators. I also love array broadcasting; it seems so much cleaner and more efficient.

When I moved from Matlab to NumPy, I missed these things:

- Integrated plotting: many years later, this is still weak, but at least for 2-d matplotlib is wonderful.
- The large library of math functions: SciPy is moving to fill that gap.
- Automatic handling of IEEE special values: numarray now does that pretty well.

That's what I missed. What I gained was a far more powerful and expressive language, AND a more powerful and flexible array type. I don't miss MATLAB at all. In fact, you'll find me on the matplotlib mailing list, often repeating the refrain: matplotlib is NOT MATLAB, nor should it be!

> It now seems very difficult for me to end up with a single
> numeric/matrix package that makes it into core python --

That is probably true.

> which is at the same time very sad.

But I'm not sure we should be sad about it. What we all want is the package best suited to our own needs to be in the standard library. However, I'd rather have the current situation than a package poorly suited to my needs in the standard library. As this thread proves, there is no one kind of array package that would fit even most people's needs.

However, there is some promise for the future:

1) SciPy base may serve to unify Numeric/numarray.

2) Travis has introduced the "array interface":

   http://numeric.scipy.org/array_interface.html

   This would provide an efficient way for the various array and matrix packages to exchange data. It does have a hope of making it into the standard library, though even if it doesn't, if a wide variety of add-on packages use it, then the same thing is accomplished. In fact, if packages that don't implement an array type, but might use one (like PIL, wxPython, etc.) accept this interface, then any package that implements it can be used together with them.

3) What about PEP 225: Elementwise/Objectwise Operators? It's listed under "Deferred, Abandoned, Withdrawn, and Rejected PEPs". Which of those applies here? I had high hopes for it at one time.

By the way, I had not seen cvxopt before; thanks for the link. Perhaps it would serve as a better base for a full-featured linear algebra package than Numeric/numarray. Ideally, it could use the array interface for easy exchange of data with other packages.

Chris

--
Christopher Barker, Ph.D.
Oceanographer
NOAA/OR&R/HAZMAT
(206) 526-6959 voice
(206) 526-6329 fax
(206) 526-6317 main reception
7600 Sand Point Way NE
Seattle, WA 98115
Chris.Barker@...
From: Perry Greenfield <perry@st...>  2005-07-26 13:38:12

On Jul 26, 2005, at 12:35 AM, Soeren Sonnenburg wrote:

>> Since we deal with big data sets we have adopted 2 (no doubt to the
>> consternation of many).
>
> hmmhh, there is no advantage with big datasets for any of the formats.

It is if you have to reorder the data, or use non-optimal iteration through the array. The first is definitely slower.

>>> Do you know whether mixing slices and arrays will be supported at
>>> some point, a[:,[0,1]] ?
>>
>> No plans at the moment. We figured indexing was complicated enough as
>> it was. I think Travis Oliphant did allow for this in Numeric3 (aka
>> scipy.core); see the link below. But it may not mean what you think it
>> should, so you should check that out to see:
>>
>> http://cvs.sourceforge.net/viewcvs.py/numpy/Numeric3/PEP.txt?view=markup
>
> Hmmhh, while we are at it, is there work on a consensus num*? I've seen
> the PEP for inclusion of numarray, now I see Numeric3 and scipy and
> also cvxopt -- are people actually converging towards a single num*
> package?

That's the goal of Numeric3, as far as different underlying array engines go. But if you mean a merging of the array/matrix viewpoints, I think you've answered your own question.

Perry
From: Alan G Isaac <aisaac@am...>  2005-07-26 13:31:56

On Tue, 26 Jul 2005, Soeren Sonnenburg apparently wrote:

> In my eyes 'array broadcasting' is confusing and should rather be in a
> function like meshgrid, and instead a*b should return
> matrixmultiply(a,b) ...

It does, if you work with matrices. Try:

    import scipy
    help(scipy.mat)

Coming from GAUSS, I had the same initial reaction. It did not last long.

hth,
Alan Isaac
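What `scipy.mat` buys Alan here is a 2-D subclass on which `*` means matrix multiplication. A sketch with the equivalent `numpy.matrix` class (today the class is discouraged in favour of `@` on plain arrays, but it illustrates the point):

```python
import numpy as np

# On the matrix subclass, * is the matrix product:
m = np.matrix([[1, 2],
               [3, 4]])
v = np.matrix([[1],
               [1]])
print(m * v)      # [[3]
                  #  [7]]

# The same product on plain ndarrays, where * stays element-wise
# and @ (Python 3.5+) spells matrix multiplication:
ma = np.array([[1, 2], [3, 4]])
va = np.array([[1], [1]])
print(ma @ va)    # [[3]
                  #  [7]]
```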
From: Soeren Sonnenburg <pythonml@nn...>  2005-07-26 05:44:26

On Mon, 2005-07-25 at 17:47 -0400, Perry Greenfield wrote:

> I missed this part and was reminded by Travis's message.
>
> On Jul 23, 2005, at 12:03 PM, Soeren Sonnenburg wrote:
>> - How can one obtain submatrices from full matrices?
>>
>> numarray gives only elements (which is very, very rarely needed and
>> should IMHO rather be some extra function):
>>
>>>>> i=[0,1]
>>>>> j=[0,2]
>>>>> a[i,j]
>> array([1, 6])
>>
>> vs octave:
>>> i=[1,2];
>>> j=[1,3];
>>> a(i,j)
>> ans =
>>    1   3
>>    4   6
>
> Maybe for you it is rarely needed, but not for us. The situation is
> reversed. The latter is rarely used in our case. This is largely a
> reflection of whether your orientation is matrices or multidimensional
> arrays. In our case it is quite handy to select out a list of points in
> an image by giving a list of their respective indices (e.g., stars).

Hmmhh. I see that this again breaks with R/octave/matlab. One should not do so if there is no serious reason. It just makes life harder for every potential convert from any of these languages. A function take() would have served this purpose much better... but this is of course all IMHO, and I can see your point: you design it according to your (or your community's) requirements, and different communities have different needs...

I am realizing that this must have been why cvxopt switched away from numarray/Numeric. There slicing/indexing and '*' work as I would have expected:

In [71]: a=matrix([1,2,3,4,5,6,7,8,9],size=(3,3))
In [72]: a
Out[72]: <3x3 matrix, tc='d'>
In [73]: b=matrix([1,2,3,4,5,6,7,8,9],size=(3,3))
In [74]: a*b
Out[74]: <3x3 matrix, tc='d'>
In [75]: print a
   1.0000e+00   4.0000e+00   7.0000e+00
   2.0000e+00   5.0000e+00   8.0000e+00
   3.0000e+00   6.0000e+00   9.0000e+00
In [76]: print b
   1.0000e+00   4.0000e+00   7.0000e+00
   2.0000e+00   5.0000e+00   8.0000e+00
   3.0000e+00   6.0000e+00   9.0000e+00
In [77]: print a*b
   3.0000e+01   6.6000e+01   1.0200e+02
   3.6000e+01   8.1000e+01   1.2600e+02
   4.2000e+01   9.6000e+01   1.5000e+02
In [78]: print a[:,0]
   1.0000e+00
   2.0000e+00
   3.0000e+00
In [79]: print a[0,1]
4.0
In [80]: print a[0,:]
   1.0000e+00   4.0000e+00   7.0000e+00

> True, I do see that others may need the other view, but then the
> question is which should get the convenience. Since Numeric/numarray
> are primarily oriented towards multidimensional arrays (e.g.,
> operations are element-by-element rather than matrix) it seemed to make
> sense to go this way for consistency, but I understand that there is
> another side to this.

It now seems very difficult for me to end up with a single numeric/matrix package that makes it into core python -- which is at the same time very sad.

Soeren
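The cvxopt session above can be reproduced with today's plain ndarrays, where `@` gives the matrix product Soeren wants while `*` stays element-wise (a modern sketch, not the 2005 API):

```python
import numpy as np

# Column-major fill of 1..9 reproduces the 3x3 matrix printed above:
# [[1. 4. 7.]
#  [2. 5. 8.]
#  [3. 6. 9.]]
a = np.arange(1.0, 10.0).reshape(3, 3, order='F')
b = a.copy()
print(a @ b)       # matrix product; first row is [ 30.  66. 102.]
print(a[:, 0:1])   # slicing with a range keeps the (3, 1) column shape
print(a[0, 1])     # 4.0, a scalar, just as in the cvxopt session
```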
From: Robert Kern <rkern@uc...>  2005-07-26 05:44:21

Soeren Sonnenburg wrote:

> On Mon, 2005-07-25 at 08:59 -0700, Chris Barker wrote:

[snip]

>> such thing as a column vs. a row vector. A vector is a one-dimensional
>> array: it has no orientation.
>
> This makes life more difficult if one wants to convert from
> octave/matlab -> numarray, and automated systems close to impossible.

<shrug> That wasn't a design goal.

> If vectors had the same properties/functions as matrices one would not
> have such problems, i.e. v^{transpose} * u == dot(v,u) and v*u ->
> error

That would be a big problem for those of us who don't use Numeric just for linear algebra. These are general arrays, not matrices and vectors.

[snip]

> In my eyes 'array broadcasting' is confusing and should rather be in a
> function like meshgrid, and instead a*b should return
> matrixmultiply(a,b) ...

Spend some time with it. It will probably grow on you. Numeric is not Matlab or Octave. It never will be, thank G-d.

[snip]

>> I hope that helps:
>
> Indeed it does -- Thanks!! Unfortunately I am not at all happy now that
> '*' != matrixmultiply (but outerproduct) for vectors/matrices...

Again, Numeric is a package for arrays, not just linear algebra. Please spend some more time with Python and Numeric before deciding that they must be changed to match your preconceptions.

> I realize that with lists it is ok to grow them via slicing.
>
> x=[]
> x[0]=1
> IndexError: list assignment index out of range
> x[0:0]=[1]
> x
> [1]
>
> that seems not to work with numarray ... or ?
>
> y=array()
> y[0]=1
> TypeError: object does not support item assignment
> y[0:0]=array([1])
> TypeError: object does not support item assignment

Python lists are designed to grow dynamically. Their memory is over-allocated so that growing them is on average pretty cheap. Numeric arrays are not, nor will they be.

--
Robert Kern
rkern@...

"In the fields of hell where the grass grows high
 Are the graves of dreams allowed to die."
  -- Richard Harter
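Robert's point in practice: since arrays don't grow in place, the idiomatic replacement for list-style growth is to build a new array (shown here with modern NumPy names):

```python
import numpy as np

# Arrays have a fixed size; "growing" one always builds a new array.
y = np.array([], dtype=int)
y = np.concatenate([y, [1]])       # copies into a fresh 1-element array
y = np.concatenate([y, [2, 3]])    # copies again
print(y)  # [1 2 3]

# Repeated concatenation is quadratic overall, so when many appends are
# needed, collect the pieces in a Python list and convert once at the end:
parts = [[i, i + 1] for i in range(3)]
flat = np.array(parts).ravel()
print(flat)  # [0 1 1 2 2 3]
```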
From: Soeren Sonnenburg <pythonml@nn...>  2005-07-26 05:22:26

On Mon, 2005-07-25 at 08:59 -0700, Chris Barker wrote:

> Soeren Sonnenburg wrote:
>> - why do vectors have no 'orientation', i.e. there are only row but no
>> column vectors (or why do you treat matrices/vectors differently, i.e.
>> have vectors at all as a separate type)
>
> I think the key to understanding this is that NumPy uses a
> fundamentally different data type than MATLAB and its derivatives.
> MATLAB was originally just what it is called: a "Matrix" laboratory.
> The basic data type of Matlab is a 2d matrix. Even a scalar is really a
> 1x1 matrix. Matlab has a few tricks that can make these look like row
> and column vectors, etc., but they are really always matrices.

Ok, I am realizing that R also distinguishes between vectors and matrices.

> On the other hand, NumPy arrays are N-dimensional, where N is
> theoretically unlimited. In practice, I think the max N is defined and
> compiled in, but you could change it and recompile if you wanted. In
> any case, many of us frequently use 3d and higher arrays, and they can
> be very useful. When thought of this way, you can see why there is no

Well, at least this is the same for octave/matlab.

> such thing as a column vs. a row vector. A vector is a one-dimensional
> array: it has no orientation.

This makes life more difficult if one wants to convert from octave/matlab -> numarray, and automated systems close to impossible. If vectors had the same properties/functions as matrices one would not have such problems, i.e. v^{transpose} * u == dot(v,u) and v*u -> error.

> However, NumPy does support Nx1 and 1xN 2d arrays, which can be very
> handy:
>
> >>> import numarray as N
> >>> a = N.arange(5)
> >>> a.shape = (1,-1)
> >>> a
> array([[0, 1, 2, 3, 4]])
> >>> b = N.arange(5)
> >>> b.shape = (-1,1)
> >>> b
> array([[0],
>        [1],
>        [2],
>        [3],
>        [4]])
>
> So a is a row vector and b is a column vector. If you multiply them,
> you get "array broadcasting":
>
> >>> a * b
> array([[ 0,  0,  0,  0,  0],
>        [ 0,  1,  2,  3,  4],
>        [ 0,  2,  4,  6,  8],
>        [ 0,  3,  6,  9, 12],
>        [ 0,  4,  8, 12, 16]])
>
> This eliminates a LOT of extra duplicate arrays that you have to make
> in Matlab with meshgrid.

In my eyes 'array broadcasting' is confusing and should rather be in a function like meshgrid, and instead a*b should return matrixmultiply(a,b) ...

> When you index into an array, you reduce its rank (number of
> dimensions) by 1:
>
> >>> a = N.arange(27)
> >>> a.shape = (3,3,3)
> >>> a
> array([[[ 0,  1,  2],
>         [ 3,  4,  5],
>         [ 6,  7,  8]],
>
>        [[ 9, 10, 11],
>         [12, 13, 14],
>         [15, 16, 17]],
>
>        [[18, 19, 20],
>         [21, 22, 23],
>         [24, 25, 26]]])
> >>> a.shape
> (3, 3, 3)
> >>> a[1].shape
> (3, 3)
> >>> a[1][1].shape
> (3,)
>
> When you slice, you keep the rank the same:
>
> >>> a[1:2].shape
> (1, 3, 3)
>
> This creates a way to make row and column "vectors" from your 2d array
> (matrix):
>
> >>> a = N.arange(25)
> >>> a.shape = (5,5)
> >>> a
> array([[ 0,  1,  2,  3,  4],
>        [ 5,  6,  7,  8,  9],
>        [10, 11, 12, 13, 14],
>        [15, 16, 17, 18, 19],
>        [20, 21, 22, 23, 24]])
>
> To make a "row vector" (really a 1xN matrix):
> >>> a[0:1,:]
> array([[0, 1, 2, 3, 4]])
>
> To make a "column vector" (really an Nx1 matrix):
> >>> a[:,0:1]
> array([[ 0],
>        [ 5],
>        [10],
>        [15],
>        [20]])
>
> I hope that helps:

Indeed it does -- Thanks!! Unfortunately I am not at all happy now that '*' != matrixmultiply (but outerproduct) for vectors/matrices...

I realize that with lists it is ok to grow them via slicing:

x=[]
x[0]=1
IndexError: list assignment index out of range
x[0:0]=[1]
x
[1]

That seems not to work with numarray ... or?

y=array()
y[0]=1
TypeError: object does not support item assignment
y[0:0]=array([1])
TypeError: object does not support item assignment

Soeren.
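The two products Soeren wants from vectors are both available on 1-D arrays; a sketch in modern NumPy terms:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
u = np.array([4.0, 5.0, 6.0])

# "v transpose times u" is the inner product, spelled dot():
inner = np.dot(v, u)              # 4 + 10 + 18 = 32.0

# Broadcasting a column view against a row view gives the outer product,
# the "v*u" case that broadcasting handles without meshgrid:
outer = v[:, None] * u[None, :]   # shape (3, 3)
print(inner)          # 32.0
print(outer.shape)    # (3, 3)
```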
From: Soeren Sonnenburg <pythonml@nn...>  2005-07-26 04:35:37

On Mon, 2005-07-25 at 10:10 -0400, Perry Greenfield wrote:

> On Jul 24, 2005, at 2:41 PM, Soeren Sonnenburg wrote:
>>
>> Well, but why did you change to the C version then? Well, maybe if it
>> is about optimizing stuff seriously one could work with the transpose
>> anyway...
>
> To get a really solid answer to "why" you would have to ask those who
> wrote the original Numeric (Jim Hugunin and Co.). My guess is that it
> was just as much to preserve the following relation:
>
>   arr[i,j] = arr[i][j]
>
> (instead of arr[i,j] = arr[j][i])

Ok, that sounds reasonable.

> But I could be wrong.
>
> Note that this is a confusing issue to many, and often as not there are
> unstated assumptions that are repeatedly made by many that are *not*
> shared by everyone. To illustrate, there are at least two different
> approaches to handling this mismatch, and it seems to me that many are
> oblivious to the fact that there are two approaches.
>
> Approach 1: Keep the index convention the same. As a user of Numeric or
> numarray, you wish M[i,j] to mean the same as it does in
> Fortran/IDL/matlab... The consequence is that the data are ordered
> differently between Numeric/numarray and these other languages. This is
> seen as a data compatibility problem.
>
> Approach 2: Keep the data invariant and change the indexing convention.
> What was M[i,j] in Fortran is now M[j,i] in Numeric/numarray. This is
> not a data compatibility problem, but is now a brain compatibility
> problem ;)

Well, it might not be *that* bad in the end... only it is a tough decision to break with everything that is there (to put it to an extreme: "the world is wrong, but we did it right") and be compatible with C-like array access... If one does so, one needs to have very serious reasons to do so... that is why I was asking.

> Since we deal with big data sets we have adopted 2 (no doubt to the
> consternation of many).

Hmmhh, there is no advantage with big datasets for any of the formats.

>> Do you know whether mixing slices and arrays will be supported at some
>> point, a[:,[0,1]] ?
>
> No plans at the moment. We figured indexing was complicated enough as
> it was. I think Travis Oliphant did allow for this in Numeric3 (aka
> scipy.core); see the link below. But it may not mean what you think it
> should, so you should check that out to see:
>
> http://cvs.sourceforge.net/viewcvs.py/numpy/Numeric3/PEP.txt?view=markup

Hmmhh, while we are at it, is there work on a consensus num*? I've seen the PEP for inclusion of numarray, now I see Numeric3 and scipy and also cvxopt -- are people actually converging towards a single num* package?

Soeren.
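A follow-up the archive deserves: mixing a slice with an index list, the `a[:,[0,1]]` in Soeren's question, did land in NumPy. A quick sketch:

```python
import numpy as np

a = np.arange(9).reshape(3, 3)
print(a)
# [[0 1 2]
#  [3 4 5]
#  [6 7 8]]

# A slice in one axis combined with an integer list in the other
# selects whole columns, the octave-style submatrix Soeren asked for:
print(a[:, [0, 2]])
# [[0 2]
#  [3 5]
#  [6 8]]
```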