From: <szport@ru...> - 2000-11-30 18:09:14

> > I had another look at the definition of "ones" and of another routine
> > I frequently use: arange. It appears that even without rewriting them
> > in C, some speedup can be achieved:
> >
> > - in ones(), the + 1 should be done "in place", saving about 15%, more
> > if you run out of processor cache:
>
> I'd also try assignment in place:
>
> def ones(shape, typecode='l', savespace=0):
>     a = zeros(shape, typecode, savespace)
>     a[len(shape)*[slice(0, None)]] = 1
>     return a
>
> Konrad.

Is the following definition faster or not?

def ones(shape, typecode='l', savespace=0):
    a = zeros((product(shape),), typecode, savespace)
    a[:] = 1
    a.shape = shape
    return a

Zaur
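The three candidate implementations in this thread (add-one, in-place slice assignment, and flat assignment plus reshape) can be compared side by side. A minimal sketch using modern NumPy as a stand-in for the 2000-era Numeric API quoted above (the function names `ones_add`, `ones_assign`, and `ones_flat` are illustrative labels, not names from the thread; Numeric's `typecode`/`savespace` arguments are replaced by `dtype`):

```python
import numpy as np

def ones_add(shape):
    # original Numeric approach: zeros + 1 (allocates a temporary array)
    return np.zeros(shape, dtype=int) + 1

def ones_assign(shape):
    # Konrad's idea: fill the zeroed buffer in place, no temporary
    a = np.zeros(shape, dtype=int)
    a[...] = 1
    return a

def ones_flat(shape):
    # Zaur's idea: assign through a flat array, then reshape
    a = np.zeros(int(np.prod(shape)), dtype=int)
    a[:] = 1
    return a.reshape(shape)

# all three agree on the result; only the temporaries differ
for f in (ones_add, ones_assign, ones_flat):
    assert (f((3, 4)) == 1).all()
```

Timing the three with `timeit` on a modern machine reproduces the thread's qualitative finding: the in-place fill avoids the temporary that `zeros + 1` creates.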
From: <rob@ho...> - 2000-11-30 15:34:01

KH> I had another look at the definition of "ones" and of another routine
KH> I frequently use: arange. It appears that even without rewriting them
KH> in C, some speedup can be achieved:
KH>
KH> - in ones(), the + 1 should be done "in place", saving about 15%, more
KH> if you run out of processor cache:

KH> I'd also try assignment in place:

KH> def ones(shape, typecode='l', savespace=0):
KH>     a = zeros(shape, typecode, savespace)
KH>     a[len(shape)*[slice(0, None)]] = 1
KH>     return a

This is even faster, but it is better to write "a[...] = 1", because your
manual calculation of "..." gives a large overhead for small arrays.

On another machine this time:

Numeric.ones       10 -> 0.254ms
Numeric.ones      100 -> 0.268ms
Numeric.ones     1000 -> 0.340ms
Numeric.ones    10000 -> 1.960ms
Numeric.ones   100000 -> 29.300ms
Numeric.zeros      10 -> 0.055ms
Numeric.zeros     100 -> 0.059ms
Numeric.zeros    1000 -> 0.068ms
Numeric.zeros   10000 -> 0.430ms
Numeric.zeros  100000 -> 9.800ms
Add inplace        10 -> 0.246ms
Add inplace       100 -> 0.255ms
Add inplace      1000 -> 0.312ms
Add inplace     10000 -> 1.270ms
Add inplace    100000 -> 18.100ms
Assign inplace     10 -> 0.192ms
Assign inplace    100 -> 0.201ms
Assign inplace   1000 -> 0.242ms
Assign inplace  10000 -> 1.010ms
Assign inplace 100000 -> 16.300ms
Reshape 1          10 -> 0.842ms
Reshape 1         100 -> 1.175ms
Reshape 1        1000 -> 4.100ms
Reshape 1       10000 -> 35.100ms
Reshape 1      100000 -> 368.600ms

Rob

--
===== rob@...  http://www.hooft.net/people/rob/ =====
===== R&D, Nonius BV, Delft  http://www.nonius.nl/ =====
===== PGPid 0xFA19277D ================== Use Linux! =====
From: Konrad Hinsen <hinsen@cn...> - 2000-11-30 14:26:54

> I had another look at the definition of "ones" and of another routine
> I frequently use: arange. It appears that even without rewriting them
> in C, some speedup can be achieved:
>
> - in ones(), the + 1 should be done "in place", saving about 15%, more
> if you run out of processor cache:

I'd also try assignment in place:

def ones(shape, typecode='l', savespace=0):
    a = zeros(shape, typecode, savespace)
    a[len(shape)*[slice(0, None)]] = 1
    return a

Konrad.
--
Konrad Hinsen                            | E-Mail: hinsen@...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
From: <rob@ho...> - 2000-11-29 21:56:45

I had another look at the definition of "ones" and of another routine I
frequently use: arange. It appears that even without rewriting them in C,
some speedup can be achieved:

- in ones(), the + 1 should be done "in place", saving about 15%, more if
you run out of processor cache:

amigo[167]~%3% /usr/local/bin/python test_ones.py
Numeric.ones       10 -> 0.098ms
Numeric.ones      100 -> 0.103ms
Numeric.ones     1000 -> 0.147ms
Numeric.ones    10000 -> 0.830ms
Numeric.ones   100000 -> 11.900ms
Numeric.zeros      10 -> 0.021ms
Numeric.zeros     100 -> 0.022ms
Numeric.zeros    1000 -> 0.026ms
Numeric.zeros   10000 -> 0.290ms
Numeric.zeros  100000 -> 4.000ms
Add inplace        10 -> 0.091ms
Add inplace       100 -> 0.094ms
Add inplace      1000 -> 0.127ms
Add inplace     10000 -> 0.690ms
Add inplace    100000 -> 8.100ms
Reshape 1          10 -> 0.320ms
Reshape 1         100 -> 0.436ms
Reshape 1        1000 -> 1.553ms
Reshape 1       10000 -> 12.910ms
Reshape 1      100000 -> 141.200ms

Also notice that zeros() is 4-5 times faster than ones(), so it may pay to
reimplement ones in C as well (it is used in indices() and arange()). The
"reshape 1" alternative is much slower.

- in arange, an additional 10% can be saved by adding brackets around
(start+(stop-stop)) (in addition to the gain by the faster "ones"):

amigo[168]~%3% /usr/local/bin/python test_arange.py
Numeric.arange     10 -> 0.390ms
Numeric.arange    100 -> 0.410ms
Numeric.arange   1000 -> 0.670ms
Numeric.arange  10000 -> 4.100ms
Numeric.arange 100000 -> 59.000ms
Optimized          10 -> 0.340ms
Optimized         100 -> 0.360ms
Optimized        1000 -> 0.580ms
Optimized       10000 -> 3.500ms
Optimized      100000 -> 48.000ms

Regards,

Rob Hooft
From: Daehyok Shin <sdhyok@em...> - 2000-11-29 21:36:28

Initialization of huge arrays is a frequent operation in scientific
programming. It must be as efficient as possible. So, I was surprised to
see the inner code of ones() in Numpy. It may use calloc() rather than
malloc() at the C level, then a for(..) loop for the addition. Why not use
malloc() and for(...) simultaneously at the C level with a command such as:

a = array(1, shape=(10000,10000))

Daehyok Shin

----- Original Message -----
From: "Rob W. W. Hooft" <rob@...>
To: "Daehyok Shin" <sdhyok@...>
Cc: <numpy-discussion@...>
Sent: Wednesday, November 29, 2000 1:00 PM
Subject: Re: [Numpy-discussion] Initialization of array?

> >>>>> "DS" == Daehyok Shin <sdhyok@...> writes:
>
> DS> When I initialize an array, I use a = ones(shape)*initial_val
>
> DS> But, I am wondering if Numpy has a more efficient way. For example,
> DS> a = array(initial_value, shape)
>
> Looking at the definition of "ones":
>
> def ones(shape, typecode='l', savespace=0):
>     """ones(shape, typecode=Int, savespace=0) returns an array of the
>     given dimensions which is initialized to all ones.
>     """
>     return zeros(shape, typecode, savespace)+array(1, typecode)
>
> It looks like you could try a=zeros(shape)+initial_val instead.
>
> Hm.. I might do some experimenting.
>
> Rob
From: Daehyok Shin <sdhyok@em...> - 2000-11-29 21:28:05

Comparing the performance, resize() seems to be more efficient than ones().

Daehyok Shin

----- Original Message -----
From: "Chris Barker" <cbarker@...>
Cc: <numpy-discussion@...>
Sent: Wednesday, November 29, 2000 11:13 AM
Subject: Re: [Numpy-discussion] Initialization of array?

> Daehyok Shin wrote:
> > When I initialize an array, I use
> > a = ones(shape)*initial_val
> >
> > But, I am wondering if Numpy has a more efficient way.
> > For example,
> > a = array(initial_value, shape)
>
> I don't know if it's any more efficient (what you have is pretty fast
> already), but another option is to use resize:
>
> >>> shape = (3,4)
> >>> initial_val = 5.0
> >>> resize(initial_val, shape)
> array([[ 5., 5., 5., 5.],
>        [ 5., 5., 5., 5.],
>        [ 5., 5., 5., 5.]])
>
> Chris
>
> --
> Christopher Barker, Ph.D.
> cbarker@...
> http://www.jps.net/cbarker
> Water Resources Engineering
> Coastal and Fluvial Hydrodynamics
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@...
> http://lists.sourceforge.net/mailman/listinfo/numpy-discussion
From: <rob@ho...> - 2000-11-29 21:01:14

>>>>> "DS" == Daehyok Shin <sdhyok@...> writes:

DS> When I initialize an array, I use a = ones(shape)*initial_val

DS> But, I am wondering if Numpy has a more efficient way. For example,
DS> a = array(initial_value, shape)

Looking at the definition of "ones":

def ones(shape, typecode='l', savespace=0):
    """ones(shape, typecode=Int, savespace=0) returns an array of the given
    dimensions which is initialized to all ones.
    """
    return zeros(shape, typecode, savespace)+array(1, typecode)

It looks like you could try a=zeros(shape)+initial_val instead.

Hm.. I might do some experimenting.

Rob

--
===== rob@...  http://www.hooft.net/people/rob/ =====
===== R&D, Nonius BV, Delft  http://www.nonius.nl/ =====
===== PGPid 0xFA19277D ================== Use Linux! =====
From: Chris Barker <cbarker@jp...> - 2000-11-29 19:06:18

Daehyok Shin wrote:
> When I initialize an array, I use
> a = ones(shape)*initial_val
>
> But, I am wondering if Numpy has a more efficient way.
> For example,
> a = array(initial_value, shape)

I don't know if it's any more efficient (what you have is pretty fast
already), but another option is to use resize:

>>> shape = (3,4)
>>> initial_val = 5.0
>>> resize(initial_val, shape)
array([[ 5., 5., 5., 5.],
       [ 5., 5., 5., 5.],
       [ 5., 5., 5., 5.]])

Chris

--
Christopher Barker, Ph.D.
cbarker@...
http://www.jps.net/cbarker
Water Resources Engineering
Coastal and Fluvial Hydrodynamics
From: Daehyok Shin <sdhyok@em...> - 2000-11-29 17:27:12

When I initialize an array, I use

a = ones(shape)*initial_val

But, I am wondering if Numpy has a more efficient way. For example,

a = array(initial_value, shape)

Peter
From: Paul F. Dubois <pauldubois@ho...> - 2000-11-28 14:55:50

Thank you for pointing out my error; it is of course 17.1.2.

----- Original Message -----
From: numpy-discussion-admin@... [mailto:numpy-discussion-admin@...]
On Behalf Of Berthold Hollmann
Sent: Tuesday, November 28, 2000 3:39 AM
To: dubois@...
Cc: Numpy-Discussion@... sourceforge.net
Subject: Re: [Numpy-discussion] 17.2 source release

"Paul F. Dubois" <pauldubois@...> writes:

> I have released 17.2. This is essentially a catch-up-to-CVS release. If
> you don't upgrade nothing astonishing will be missing from your life,
> unless you are using MA, which has been worked on some since 17.1.
>
> I installed a performance enhancement submitted by Pete Shinners, who
> said: "I've got a quick optimization for the arrayobject.c source. it
> speeds my usage of numpy up by about 100%. i've tested with other numpy
> apps and noticed a minimum of about 20% speed."

Hello,

Is it 17.1.2 or is it missing on Sourceforge? I only see a 17.1.2 which
has a corresponding file date, but no 17.2.

Greetings

Berthold
--
email: hoel@...
tel. : +49 (40) 3 61 49 - 73 74
These opinions might be mine, but never those of my employer.
From: <hoel@ge...> - 2000-11-28 11:44:22

"Paul F. Dubois" <pauldubois@...> writes:

> I have released 17.2. This is essentially a catch-up-to-CVS release. If
> you don't upgrade nothing astonishing will be missing from your life,
> unless you are using MA, which has been worked on some since 17.1.
>
> I installed a performance enhancement submitted by Pete Shinners, who
> said: "I've got a quick optimization for the arrayobject.c source. it
> speeds my usage of numpy up by about 100%. i've tested with other numpy
> apps and noticed a minimum of about 20% speed."

Hello,

Is it 17.1.2 or is it missing on Sourceforge? I only see a 17.1.2 which
has a corresponding file date, but no 17.2.

Greetings

Berthold
--
email: hoel@...
tel. : +49 (40) 3 61 49 - 73 74
These opinions might be mine, but never those of my employer.
From: Paul F. Dubois <pauldubois@ho...> - 2000-11-28 00:49:04

I have released 17.2. This is essentially a catch-up-to-CVS release. If you
don't upgrade nothing astonishing will be missing from your life, unless
you are using MA, which has been worked on some since 17.1.

I installed a performance enhancement submitted by Pete Shinners, who said:
"I've got a quick optimization for the arrayobject.c source. it speeds my
usage of numpy up by about 100%. i've tested with other numpy apps and
noticed a minimum of about 20% speed."
From: Paul F. Dubois <pauldubois@ho...> - 2000-11-27 23:08:38

This optimization will be in the next release. Thanks!

----- Original Message -----
From: numpy-discussion-admin@... [mailto:numpy-discussion-admin@...]
On Behalf Of Pete Shinners
Sent: Monday, October 02, 2000 10:58 AM
To: Numpy Discussion
Subject: [Numpy-discussion] quick optimization

i've got a quick optimization for the arrayobject.c source. it speeds my
usage of numpy up by about 100%. i've tested with other numpy apps and
noticed a minimum of about 20% speed.

anyways, in "do_sliced_copy", change out the following block:

    if (src_nd == 0 && dest_nd == 0) {
        for (j = 0; j < copies; j++) {
            memcpy(dest, src, elsize);
            dest += elsize;
        }
        return 0;
    }

with this slightly larger one:

    if (src_nd == 0 && dest_nd == 0) {
        switch (elsize) {
        case sizeof(char):
            memset(dest, *src, copies);
            break;
        case sizeof(short):
            for (j = copies; j; j--, dest += sizeof(short))
                *(short*)dest = *(short*)src;
            break;
        case sizeof(long):
            for (j = copies; j; j--, dest += sizeof(int))
                *(int*)dest = *(int*)src;
            break;
        case sizeof(double):
            for (j = copies; j; j--, dest += sizeof(double))
                *(double*)dest = *(double*)src;
            break;
        default:
            for (j = copies; j; j--, dest += elsize)
                memcpy(dest, src, elsize);
        }
        return 0;
    }

anyways, you can see it's no brilliant algorithm change, but for me,
getting a free 2X speedup is a big help. i'm hoping something like this
can get merged into the next releases?

after walking through the numpy code, i was surprised how almost every
function falls back to do_sliced_copy (guess that's why it's at the top of
the source?). that made it a quick target for making optimization changes.
From: Nor Pirzkal <npirzkal@es...> - 2000-11-27 14:08:31

Hi,

I am having trouble freezing some python code which calls the Numeric
17.1.1 module. Here is an example of what happens under Linux Red Hat 6.2
and Python 2.0:

$ cat t.py
import Numeric
a = Numeric.ones([10,10])
b = a * 10
print b

$ python ~/Python-2.0/Tools/freeze/freeze.py -o tt t.py
...
$ cd tt
$ ./t
Traceback (most recent call last):
  File "t.py", line 1, in ?
  File "/scisoft/python/lib/python2.0/site-packages/Numeric/Numeric.py",
  line 79, in ?
    import multiarray
ImportError: /scisoft/python/lib/python2.0/site-packages/Numeric/multiarray.so:
undefined symbol: _Py_NoneStruct

The very same code, under Solaris 2.6 and Python 2.0, works just fine.
Code which does not use the Numeric package freezes just fine under Linux,
so I think this points to some problem/incompatibility of Numeric with
freeze.py.

Does anybody have a suggestion, or a workaround?

Nor
From: Daehyok Shin <sdhyok@em...> - 2000-11-23 19:17:09

Would you tell me what's going on recently with making multiarray a
standard type of Python?

Daehyok Shin (Peter)
From: Chris Barker <cbarker@jp...> - 2000-11-22 22:55:46

Hi all,

I'm cross-posting this to the NumPy list and the MacPython list, because
it involves NumPy on the Mac, so I'm not sure which group can be most
helpful. It took me a while to get this far, and I finally got everything
to compile, but now I have a crashing problem. It seems to be a problem
with PyArray_Type not being seen as a PyObject. I am pretty new to C, but
I have consulted with a number of folks I work with who know a lot more
than I do, and this seems to be a pretty esoteric C issue. It also seems
to be compiler dependent, because I have all this working fine on Linux
with gcc.

I have chopped my problem down into a very small function that
demonstrates it. The function takes a contiguous NumPy array of Floats
(doubles), multiplies every element by two (in place), and returns None.
Here is the code as it works on Linux:

#include "Python.h"
#include "arrayobject.h"

/* A function that doubles an array of Floats in place */
static PyObject *
minimal_doublearray(PyObject *self, PyObject *args)
{
    PyArrayObject *array;
    int i, num_elements;
    double *data_array;

    if (!PyArg_ParseTuple(args, "O!", &PyArray_Type, &array))
        return NULL;
    if (array->descr->type_num != PyArray_DOUBLE) {
        PyErr_SetString(PyExc_ValueError, "array must be of type Float");
        return NULL;
    }
    data_array = (double *) array->data;
    /* num_elements = PyArray_Size((PyObject *) array); */
    num_elements = PyArray_Size(array);
    printf("The size of the array is: %i elements\n", num_elements);
    for (i = 0; i < num_elements; i++)
        data_array[i] = 2 * data_array[i];
    return Py_None;
}

static PyMethodDef minimalMethods[] = {
    {"doublearray", minimal_doublearray, METH_VARARGS},
    {NULL, NULL} /* Sentinel */
};

void initminimal()
{
    (void) Py_InitModule("minimal", minimalMethods);
}

Note that the call to "PyArray_Size(array)" gives me a

    ./minimal.c:28: warning: passing arg 1 from incompatible pointer type

with gcc on Linux. In CodeWarrior on the Mac, it is a fatal error. With
the typecast (see the previous commented-out line) it gives no warnings,
and compiles on both systems.

Here is a small script to test it:

#!/usr/bin/env python
from Numeric import *
import minimal

print "\nTesting doublearray"
a = arange(10.0)
print a
minimal.doublearray(a)
print a
print "the second version should be doubled"

This works fine on Linux, and gives the appropriate errors if the wrong
type of object is passed in. On the Mac, it crashes. Trying various
things, I found that it crashes on the

    if (!PyArg_ParseTuple(args, "O!", &PyArray_Type, &array))
        return NULL;

line. If I use:

    if (!PyArg_ParseTuple(args, "O", &array))
        return NULL;

it works. Then it crashes on the:

    num_elements = PyArray_Size((PyObject *) array);

line. If I use another way to determine the number of elements, like:

    num_elements = 1;
    for (i = 0; i < array->nd; i++)
        num_elements = num_elements * array->dimensions[i];

then I can get it to work. What is going on? Anyone have any suggestions?

MacPython 1.5.2c
The NumPy that came with MacPython 1.5.2c
CodeWarrior Pro 5

Thanks for any suggestions,

Chris

--
Christopher Barker, Ph.D.
cbarker@...
http://www.jps.net/cbarker
Water Resources Engineering
Coastal and Fluvial Hydrodynamics
From: Anderl <Andreas.Reigber@dl...> - 2000-11-20 17:11:50

Hello,

I'm quite new to numpy, trying to migrate from IDL. I'm not able to find
any 'shift' function in numpy, i.e. a function that shifts the content of
an array by a certain number of elements:

a = [1,2,3,4,5,6]
shift(a,2) = [3,4,5,6,1,2]

In IDL this works even on multidimensional arrays:

b = shift(a,4,7,5)

shifts by 4 in the first, 7 in the second and 5 in the third component.
Is there some similar module existing?

Thanks & best regards,
Andreas

--
Andreas Reigber
Institut fuer Hochfrequenztechnik
DLR - Oberpfaffenhofen
Tel. : ++49-8153-28-2367
Postfach 1116
eMail: Andreas.Reigber@...
D-82230 Wessling
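Numeric itself shipped no shift function, but the 1-D circular shift the poster describes takes only a couple of lines of plain Python. A sketch (the helper name `shift` and its sign convention simply follow the example in the message above):

```python
def shift(a, n):
    """Circularly shift a sequence so element n becomes element 0,
    matching shift([1,2,3,4,5,6], 2) == [3,4,5,6,1,2] from the post."""
    n = n % len(a)           # allow shifts larger than the sequence
    return a[n:] + a[:n]

print(shift([1, 2, 3, 4, 5, 6], 2))   # [3, 4, 5, 6, 1, 2]
```

In today's NumPy the same operation is `numpy.roll`, whose `axis` argument also covers the multidimensional IDL case (note that `numpy.roll` rotates in the opposite direction, so `numpy.roll(a, -2)` matches `shift(a, 2)` here).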
From: Paul F. Dubois <pauldubois@ho...> - 2000-11-18 16:38:19

To make numpy work with fpectl someone needs to add some macro calls to
its source. This has not been done.

Please visit sourceforge.net/projects/numpy and get a new release of
Numpy. Perhaps that will correct your problem.

----- Original Message -----
From: numpy-discussion-admin@... [mailto:numpy-discussion-admin@...]
On Behalf Of Jean-Bernard Addor
Sent: Friday, November 17, 2000 5:57 PM
To: numpy-discussion@...
Subject: [Numpy-discussion] Fatal Python error: Unprotected floating
point exception

Hi Numpy people!

My nice numpy code generates very few Inf numbers, which destroy the
results of my longer simulations. I was dreaming of having the processor
raise an interruption and python catching it, to locate and understand
quickly why and how it is happening and correct the code.

I currently use Python 1.5.2 with Numeric 11 on debian linux 2. I made
some very disappointing tests with the module fpectl. The last result I
got is:

Fatal Python error: Unprotected floating point exception
Abort

Do I have to understand that my Numpy is not compatible with fpectl? Any
idea if a more up to date Numpy would be compatible? I find no info on:
http://sourceforge.net/projects/numpy

Thanks for your help.

Jean-Bernard
From: Jean-Bernard Addor <jbaddor@ph...> - 2000-11-18 01:55:45

Hi Numpy people!

My nice numpy code generates very few Inf numbers, which destroy the
results of my longer simulations. I was dreaming of having the processor
raise an interruption and python catching it, to locate and understand
quickly why and how it is happening and correct the code.

I currently use Python 1.5.2 with Numeric 11 on debian linux 2. I made
some very disappointing tests with the module fpectl. The last result I
got is:

Fatal Python error: Unprotected floating point exception
Abort

Do I have to understand that my Numpy is not compatible with fpectl? Any
idea if a more up to date Numpy would be compatible? I find no info on:
http://sourceforge.net/projects/numpy

Thanks for your help.

Jean-Bernard
From: Pete Shinners <pete@vi...> - 2000-11-08 00:06:46

i have image data in a 2D array that i'd like to change the size of.
currently, i'm just doing a dirty sampled scaling that i came up with.
this works, but can only do a 2x resize:

dst[::2, ::2] = src
dst[1::2, ::2] = src
dst[:, 1::2] = dst[:, ::2]

this isn't too bad, but it's definitely one of my bottlenecks. can the
code to do this be sped up through some ingenious use of numeric?

i'm also trying to figure out a clean, fast way to do bilinear scaling on
my data. it seems like there would be something in numeric to do linear
resampling, but i haven't figured it out. i'd like to be able to resample
stuff like audio data as well as just image data.

thanks for any pointers into this.
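The slice trick above is a nearest-neighbour 2x upscale: every source pixel is written into a 2x2 block of the destination. The same idea in plain Python, for a list-of-lists "image" (a sketch for illustration only; the sliced array version in the message is what you would use for speed, and `upscale2x` is a hypothetical helper name):

```python
def upscale2x(src):
    """Nearest-neighbour 2x upscale: each pixel becomes a 2x2 block,
    the pure-Python equivalent of the three slice assignments above."""
    dst = []
    for row in src:
        doubled = []
        for pixel in row:
            doubled.extend([pixel, pixel])   # duplicate each column
        dst.append(doubled)
        dst.append(list(doubled))            # duplicate each row
    return dst

print(upscale2x([[1, 2],
                 [3, 4]]))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Bilinear scaling additionally averages the four surrounding source pixels with distance weights instead of copying the nearest one.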
From: Janko Hauser <jhauser@if...> - 2000-11-03 21:17:00

Sorry, I forgot to mention that these two operations can be done in place,
but the result can not be stored in place, as the shape is changing. So
you need to live with one copy, if your array a is of type 'i'.

>>> a = array([1,2,3,4])
>>> multiply(a, bits, a)
array([       1,      512,   196608, 67108864])
>>> a
array([       1,      512,   196608, 67108864])

HTH,
__Janko
From: Janko Hauser <jhauser@if...> - 2000-11-03 21:08:36

Chris Barker writes:
> "Paul F. Dubois" wrote:
> > >>> y=array([1,2,3], '1')
> > >>> y
> > array([1, 2, 3],'1')
> > >>> y.astype(Int32)
> > array([1, 2, 3],'i')
>
> Actually, this is exactly NOT what I want to do. In this case, each
> 1-byte integer was converted to a 4-byte integer of the same VALUE. What
> I want is to convert each SET of four bytes into a SINGLE 4-byte
> integer, as in:
>
> >>> a = array([1,2,3,4],'1')
> >>> a = fromstring(a.tostring(), Int32)
> >>> a
> array([67305985],'i')

A brute force way would be to do the transformation yourself :)

>>> bits = array([1, 256, 256*256, 256*256*256])
>>> sum(array([1,2,3,4])*bits)
67305985

So you need to reshape your array into (?,4) and multiply by bits. And
regarding your numpyio question, you can also read characters, which are
then put into an array by itself. It seems you have a very messy file
format (but the data world is never easy).

HTH,
__Janko

--
Institut fuer Meereskunde        phone: 49-431-597 3989
Dept. Theoretical Oceanography   fax  : 49-431-565876
Duesternbrooker Weg 20           email: jhauser@...
24105 Kiel, Germany
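The "brute force" packing above is just a base-256 positional sum: the weight vector [1, 256, 256**2, 256**3] is what makes the result little-endian. A plain-Python sketch of the same transformation (the helper name `pack_le32` is illustrative, not from the thread):

```python
def pack_le32(b):
    """Pack four little-endian bytes into one 32-bit integer --
    the same base-256 sum as the Numeric 'bits' trick above."""
    weights = [1, 256, 256**2, 256**3]
    return sum(byte * w for byte, w in zip(b, weights))

print(pack_le32([1, 2, 3, 4]))   # 67305985, matching the thread
```

The built-in `int.from_bytes(bytes([1, 2, 3, 4]), 'little')` computes the same value, which is a handy cross-check that the weight vector encodes little-endian order.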
From: Chris Barker <cbarker@jp...> - 2000-11-03 19:30:42

Janko Hauser wrote:
> Use the numpyio module from Travis. With this it should be possible to
> read the data directly and do any typecode conversion you want with. It
> has fread and fwrite functions, and it can be used with any NumPy type
> like Int0 in your case. It's part of the signaltools package.
>
> http://oliphant.netpedia.net/signaltools_0.5.2.tar.gz

I've downloaded it, and it looks pretty handy. It does include a byteswap
in place, which I need. What is not clear to me from the minimal docs is
whether I can read a file set up like:

long long char long long char ...

and have it put the longs into one array, and the chars into another.
Also, it wasn't clear whether I could use it to read a file that has
already been opened, starting at the file's current position. I am
working with a file that has a text header, so I can't just suck in the
whole thing; I have to wait until I've parsed out the header.

I could figure out the answer to these questions with some reading of the
source, but it wasn't obvious at first glance, so it would be great if
someone knows the answer off the top of their head. Travis?

By the way, there seem to be a few methods that produce a copy, rather
than doing things in place, where it seems more intuitive to do it in
place. byteswapped() and astype() come to mind. With byteswapped, I
imagine it's rare that you would want to keep a copy around. With astype
it would also be rare to keep a copy around, but since it changes the
size of the array, I imagine it would be a lot harder to code as an
in-place operation. Is there a reason these operations are not available
in place? Or is it just that no one has seen enough of a need to write
the code?

Chris

--
Christopher Barker, Ph.D.
cbarker@...
http://www.jps.net/cbarker
Water Resources Engineering
Coastal and Fluvial Hydrodynamics
From: Chris Barker <cbarker@jp...> - 2000-11-03 19:18:50

"Paul F. Dubois" wrote:
> >>> y=array([1,2,3], '1')
> >>> y
> array([1, 2, 3],'1')
> >>> y.astype(Int32)
> array([1, 2, 3],'i')

Actually, this is exactly NOT what I want to do. In this case, each
1-byte integer was converted to a 4-byte integer of the same VALUE. What
I want is to convert each SET of four bytes into a SINGLE 4-byte integer,
as in:

>>> a = array([1,2,3,4],'1')
>>> a = fromstring(a.tostring(), Int32)
>>> a
array([67305985],'i')

The four one-byte items in a are turned into one four-byte item. What I
want is to be able to do this in place, rather than have tostring()
create a copy. I think fromstring may create a copy as well, leaving a
possible total of three copies around at once. Does anyone know how many
copies will be around at once with this line of code?

Chris

--
Christopher Barker, Ph.D.
cbarker@...
http://www.jps.net/cbarker
Water Resources Engineering
Coastal and Fluvial Hydrodynamics
From: Scott Ransom <ransom@cf...> - 2000-11-03 15:43:44

Bernd Rinn wrote:
>
> Hello,
>
> does anyone know why the performance of FFT.real with fftpacklite.c is
> so unbalanced for n=2**i and different values of i? My example is:

Hi,

The problem is that your routine gives arrays of length 2**n - 1, not
2**n! So for the arrays where the CPU time is huge, you are FFTing a
length that is a prime number (or at least not easily factorable).
FFTPACK has to do a brute force DFT instead of an FFT! (Ah, the beauty
of n*log(n)...)

You can see that this is the case by printing the lengths of the arrays:

from Numeric import array, Float
from FFT import real_fft
from time import time

i = 1
while i < 20:
    n = 2**long(i)
    a = array(range(long(1), n), Float)
    print len(a)
    i = i + 1

What you should try instead is the following:

from Numeric import arange, Float
from FFT import real_fft
from time import time

i = 1
while i < 20:
    n = 2**i
    a = arange(n, typecode=Float)
    anfang = time()
    b = real_fft(a)
    ende = time()
    print "i=", i, " time: ", ende - anfang
    i = i + 1

Which gives the following runtimes on my Pentium 450 under Linux:

i=  1  time: 0.000313997268677
i=  2  time: 0.000239014625549
i=  3  time: 0.000229954719543
i=  4  time: 0.000240087509155
i=  5  time: 0.000240087509155
i=  6  time: 0.000257968902588
i=  7  time: 0.000322103500366
i=  8  time: 0.000348091125488
i=  9  time: 0.000599980354309
i= 10  time: 0.000900983810425
i= 11  time: 0.0018150806427
i= 12  time: 0.00341892242432
i= 13  time: 0.00806891918182
i= 14  time: 0.0169370174408
i= 15  time: 0.038006067276
i= 16  time: 0.0883399248123
i= 17  time: 0.199723005295
i= 18  time: 0.661148071289
i= 19  time: 0.976199030876

Hope this helps,

Scott

--
Scott M. Ransom                  Address: Harvard-Smithsonian CfA
Phone: (617) 495-4142                     60 Garden St. MS 10
email: ransom@...                         Cambridge, MA 02138
GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989
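The off-by-one is easy to see without Numeric at all: range(1, n) has n-1 elements, so the "power of two" benchmark was really timing lengths like 1023 and 131071, several of which are prime. A plain-Python sketch of the diagnosis (the `is_prime` helper is illustrative, not from the thread):

```python
n = 2 ** 17
a = list(range(1, n))      # the shape of the original benchmark's input

# the array is one short of a power of two...
print(len(a))              # 131071

# ...and 131071 (a Mersenne prime) forces FFTPACK's O(n**2) DFT fallback
def is_prime(k):
    return k > 1 and all(k % d for d in range(2, int(k**0.5) + 1))

print(is_prime(len(a)))    # True
```

This is exactly the n*log(n)-versus-n**2 gap Scott points to: FFTPACK only gets the fast path when the transform length factors into small primes.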