From: Konrad H. <hi...@cn...> - 2000-11-30 14:26:54
> I had another look at the definition of "ones" and of another routine
> I frequently use: arange. It appears that even without rewriting them
> in C, some speedup can be achieved:
>
> - in ones(), the + 1 should be done "in place", saving about 15%, more
>   if you run out of processor cache:

I'd also try assignment in place:

```python
def ones(shape, typecode='l', savespace=0):
    a = zeros(shape, typecode, savespace)
    a[len(shape)*[slice(0, None)]] = 1
    return a
```

Konrad.
--
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
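A side-by-side sketch of the three variants discussed in this thread, assuming the Numeric-era API (the function names are ours, for comparison only):

```python
# Sketch: three ways to build an all-ones array with the old Numeric
# module (not modern numpy).  `shape` is a tuple.
from Numeric import zeros, add

def ones_naive(shape, typecode='l'):
    # the original definition: the + allocates a temporary result array
    return zeros(shape, typecode) + 1

def ones_inplace_add(shape, typecode='l'):
    # Rob's variant: the third argument makes add() write into `a`
    a = zeros(shape, typecode)
    add(a, 1, a)
    return a

def ones_inplace_assign(shape, typecode='l'):
    # Konrad's variant: sliced assignment broadcasts the scalar 1
    a = zeros(shape, typecode)
    a[len(shape)*[slice(0, None)]] = 1
    return a
```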
From: <ro...@ho...> - 2000-11-29 21:56:45
I had another look at the definition of "ones" and of another routine I
frequently use: arange. It appears that even without rewriting them in C,
some speedup can be achieved:

- in ones(), the + 1 should be done "in place", saving about 15%, more if
  you run out of processor cache:

```
amigo[167]~%3% /usr/local/bin/python test_ones.py
Numeric.ones     10     ->   0.098ms
Numeric.ones     100    ->   0.103ms
Numeric.ones     1000   ->   0.147ms
Numeric.ones     10000  ->   0.830ms
Numeric.ones     100000 ->  11.900ms
Numeric.zeros    10     ->   0.021ms
Numeric.zeros    100    ->   0.022ms
Numeric.zeros    1000   ->   0.026ms
Numeric.zeros    10000  ->   0.290ms
Numeric.zeros    100000 ->   4.000ms
Add inplace      10     ->   0.091ms
Add inplace      100    ->   0.094ms
Add inplace      1000   ->   0.127ms
Add inplace      10000  ->   0.690ms
Add inplace      100000 ->   8.100ms
Reshape 1        10     ->   0.320ms
Reshape 1        100    ->   0.436ms
Reshape 1        1000   ->   1.553ms
Reshape 1        10000  ->  12.910ms
Reshape 1        100000 -> 141.200ms
```

  Also notice that zeros() is 4-5 times faster than ones(), so it may pay
  to reimplement ones in C as well (it is used in indices() and arange()).
  The "resize 1" alternative is much slower.

- in arange, an additional 10% can be saved by adding brackets around
  (start+(stop-stop)) (in addition to the gain by the faster "ones"):

```
amigo[168]~%3% /usr/local/bin/python test_arange.py
Numeric.arange   10     ->   0.390ms
Numeric.arange   100    ->   0.410ms
Numeric.arange   1000   ->   0.670ms
Numeric.arange   10000  ->   4.100ms
Numeric.arange   100000 ->  59.000ms
Optimized        10     ->   0.340ms
Optimized        100    ->   0.360ms
Optimized        1000   ->   0.580ms
Optimized        10000  ->   3.500ms
Optimized        100000 ->  48.000ms
```

Regards,

Rob Hooft
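The test scripts themselves (test_ones.py, test_arange.py) are not in the archive; here is a rough reconstruction of what test_ones.py may have looked like. The harness structure and the repetition count are guesses:

```python
# Hypothetical reconstruction of test_ones.py; the original script is
# not in the archive.  Averages each operation over `reps` calls.
import time
from Numeric import ones, zeros, add

def timeit(label, func, n, reps=100):
    t0 = time.time()
    for i in range(reps):
        func(n)
    dt = (time.time() - t0) / reps
    print "%-15s %6d -> %8.3fms" % (label, n, dt * 1000.0)

def make_ones(n):
    return ones((n,))

def make_zeros(n):
    return zeros((n,))

def add_inplace(n):
    # build with zeros(), then add 1 in place via the output argument
    a = zeros((n,))
    add(a, 1, a)
    return a

for n in (10, 100, 1000, 10000, 100000):
    timeit("Numeric.ones", make_ones, n)
    timeit("Numeric.zeros", make_zeros, n)
    timeit("Add inplace", add_inplace, n)
```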
From: Daehyok S. <sd...@em...> - 2000-11-29 21:36:28
Initialization of huge arrays is a frequent operation in scientific
programming. It must be as efficient as possible. So, I was surprised to
see the inner code of ones() in Numpy. It seems to use calloc() rather
than malloc() at the C level, followed by a for(..) loop for the
addition. Why not do the malloc() and the for(...) loop in a single pass
at the C level, behind a command such as:

    a = array(1, shape=(10000,10000))

Daehyok Shin

----- Original Message -----
From: "Rob W. W. Hooft" <ro...@ho...>
To: "Daehyok Shin" <sd...@em...>
Cc: <num...@li...>
Sent: Wednesday, November 29, 2000 1:00 PM
Subject: Re: [Numpy-discussion] Initialization of array?

> >>>>> "DS" == Daehyok Shin <sd...@em...> writes:
>
> DS> When I initialize an array, I use a = ones(shape)*initial_val
>
> DS> But, I am wondering if Numpy has more efficient way. For example,
> DS> a = array(initial_value, shape)
>
> Looking at the definition of "ones":
>
>     def ones(shape, typecode='l', savespace=0):
>         """ones(shape, typecode=Int, savespace=0) returns an array of the
>         given dimensions which is initialized to all ones.
>         """
>         return zeros(shape, typecode, savespace)+array(1, typecode)
>
> It looks like you could try a=zeros(shape)+initial_val instead.
>
> Hm.. I might do some experimenting.
>
> Rob
From: Daehyok S. <sd...@em...> - 2000-11-29 21:28:05
Comparing the performance, resize() seems to be more efficient than
ones().

Daehyok Shin

----- Original Message -----
From: "Chris Barker" <cb...@jp...>
Cc: <num...@li...>
Sent: Wednesday, November 29, 2000 11:13 AM
Subject: Re: [Numpy-discussion] Initialization of array?

> Daehyok Shin wrote:
> > When I initialize an array, I use
> > a = ones(shape)*initial_val
> >
> > But, I am wondering if Numpy has more efficient way.
> > For example,
> > a = array(initial_value, shape)
>
> I don't know if it's any more efficient (what you have is pretty fast
> already), but another option is to use resize:
>
> >>> shape = (3,4)
> >>> initial_val = 5.0
> >>> resize(initial_val,shape)
> array([[ 5., 5., 5., 5.],
>        [ 5., 5., 5., 5.],
>        [ 5., 5., 5., 5.]])
>
> -Chris
From: <ro...@ho...> - 2000-11-29 21:01:14
>>>>> "DS" == Daehyok Shin <sd...@em...> writes:

 DS> When I initialize an array, I use a = ones(shape)*initial_val

 DS> But, I am wondering if Numpy has more efficient way. For example,
 DS> a = array(initial_value, shape)

Looking at the definition of "ones":

```python
def ones(shape, typecode='l', savespace=0):
    """ones(shape, typecode=Int, savespace=0) returns an array of the given
    dimensions which is initialized to all ones.
    """
    return zeros(shape, typecode, savespace)+array(1, typecode)
```

It looks like you could try a=zeros(shape)+initial_val instead.

Hm.. I might do some experimenting.

Rob
--
===== ro...@ho...  http://www.hooft.net/people/rob/ =====
===== R&D, Nonius BV, Delft  http://www.nonius.nl/ =====
===== PGPid 0xFA19277D ===== Use Linux! =====
From: Chris B. <cb...@jp...> - 2000-11-29 19:06:18
Daehyok Shin wrote:
> When I initialize an array, I use
> a = ones(shape)*initial_val
>
> But, I am wondering if Numpy has more efficient way.
> For example,
> a = array(initial_value, shape)

I don't know if it's any more efficient (what you have is pretty fast
already), but another option is to use resize:

```
>>> shape = (3,4)
>>> initial_val = 5.0
>>> resize(initial_val,shape)
array([[ 5., 5., 5., 5.],
       [ 5., 5., 5., 5.],
       [ 5., 5., 5., 5.]])
```

-Chris
--
Christopher Barker, Ph.D.
cb...@jp...
http://www.jps.net/cbarker
Water Resources Engineering
Coastal and Fluvial Hydrodynamics
From: Daehyok S. <sd...@em...> - 2000-11-29 17:27:12
When I initialize an array, I use

    a = ones(shape)*initial_val

But, I am wondering if Numpy has a more efficient way. For example,

    a = array(initial_value, shape)

Peter
From: Paul F. D. <pau...@ho...> - 2000-11-28 14:55:50
Thank you for pointing out my error; it is of course 17.1.2.

-----Original Message-----
From: num...@li... [mailto:num...@li...] On Behalf Of Berthold Hollmann
Sent: Tuesday, November 28, 2000 3:39 AM
To: du...@us...
Cc: Numpy-Discussion@Lists. Sourceforge. Net
Subject: Re: [Numpy-discussion] 17.2 source release

"Paul F. Dubois" <pau...@ho...> writes:

> I have released 17.2. This is essentially a catch-up-to-CVS release. If
> you don't upgrade nothing astonishing will be missing from your life,
> unless you are using MA, which has been worked on some since 17.1.
>
> I installed a performance enhancement submitted by Pete Shinners, who said:
> "I've got a quick optimization for the arrayobject.c source.
> it speeds my usage of numpy up by about 100%. i've tested with
> other numpy apps and noticed a minimum of about 20% speed."

Hello,

Is it 17.1.2 or is it missing on Sourceforge? I only see a 17.1.2, which
has a corresponding file date, but no 17.2.

Greetings

Berthold
--
email: ho...@Ge...
tel. : +49 (40) 3 61 49 - 73 74
These opinions might be mine, but never those of my employer.
From: <ho...@ge...> - 2000-11-28 11:44:22
"Paul F. Dubois" <pau...@ho...> writes:

> I have released 17.2. This is essentially a catch-up-to-CVS release. If
> you don't upgrade nothing astonishing will be missing from your life,
> unless you are using MA, which has been worked on some since 17.1.
>
> I installed a performance enhancement submitted by Pete Shinners, who said:
> "I've got a quick optimization for the arrayobject.c source.
> it speeds my usage of numpy up by about 100%. i've tested with
> other numpy apps and noticed a minimum of about 20% speed."

Hello,

Is it 17.1.2 or is it missing on Sourceforge? I only see a 17.1.2, which
has a corresponding file date, but no 17.2.

Greetings

Berthold
--
email: ho...@Ge...
tel. : +49 (40) 3 61 49 - 73 74
These opinions might be mine, but never those of my employer.
From: Paul F. D. <pau...@ho...> - 2000-11-28 00:49:04
I have released 17.2. This is essentially a catch-up-to-CVS release. If
you don't upgrade nothing astonishing will be missing from your life,
unless you are using MA, which has been worked on some since 17.1.

I installed a performance enhancement submitted by Pete Shinners, who
said:

> "I've got a quick optimization for the arrayobject.c source.
> it speeds my usage of numpy up by about 100%. i've tested with
> other numpy apps and noticed a minimum of about 20% speed."
From: Paul F. D. <pau...@ho...> - 2000-11-27 23:08:38
This optimization will be in the next release. Thanks!

-----Original Message-----
From: num...@li... [mailto:num...@li...] On Behalf Of Pete Shinners
Sent: Monday, October 02, 2000 10:58 AM
To: Numpy Discussion
Subject: [Numpy-discussion] quick optimization

i've got a quick optimization for the arrayobject.c source. it speeds my
usage of numpy up by about 100%. i've tested with other numpy apps and
noticed a minimum of about 20% speedup.

anyways, in "do_sliced_copy", change out the following block:

```c
if (src_nd == 0 && dest_nd == 0) {
    for (j = 0; j < copies; j++) {
        memcpy(dest, src, elsize);
        dest += elsize;
    }
    return 0;
}
```

with this slightly larger one:

```c
if (src_nd == 0 && dest_nd == 0) {
    switch (elsize) {
    case sizeof(char):
        memset(dest, *src, copies);
        break;
    case sizeof(short):
        for (j = copies; j; --j, dest += sizeof(short))
            *(short*)dest = *(short*)src;
        break;
    case sizeof(long):
        for (j = copies; j; --j, dest += sizeof(int))
            *(int*)dest = *(int*)src;
        break;
    case sizeof(double):
        for (j = copies; j; --j, dest += sizeof(double))
            *(double*)dest = *(double*)src;
        break;
    default:
        for (j = copies; j; --j, dest += elsize)
            memcpy(dest, src, elsize);
    }
    return 0;
}
```

anyways, you can see it's no brilliant algorithm change, but for me,
getting a free 2X speedup is a big help. i'm hoping something like this
can get merged into the next releases?

after walking through the numpy code, i was surprised how almost every
function falls back to do_sliced_copy (guess that's why it's at the top
of the source?). that made it a quick target for making optimization
changes.
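For context, the src_nd == 0 branch that this patch rewrites is the path behind scalar broadcasts. A sketch of the kind of Numeric code that exercises it (the shapes and values here are arbitrary examples):

```python
# Sketch: operations that hit the scalar-into-array copy path that
# Pete's patch speeds up (assuming Numeric-era behaviour).
from Numeric import zeros, Float

a = zeros((512, 512), Float)
a[:, :] = 1.0        # broadcast one scalar into every element

b = zeros((100000,))
b[::2] = 7           # broadcast a scalar into a strided slice
```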
From: Nor P. <npi...@es...> - 2000-11-27 14:08:31
Hi,

I am having trouble freezing some Python code which calls the Numeric
17.1.1 module. Here is an example of what happens under Linux Red Hat 6.2
and Python 2.0:

```
$ cat t.py
import Numeric
a = Numeric.ones([10,10])
b = a * 10
print b
$ python ~/Python-2.0/Tools/freeze/freeze.py -o tt t.py
...
$ cd tt
$ ./t
Traceback (most recent call last):
  File "t.py", line 1, in ?
  File "/scisoft/python/lib/python2.0/site-packages/Numeric/Numeric.py", line 79, in ?
    import multiarray
ImportError: /scisoft/python/lib/python2.0/site-packages/Numeric/multiarray.so: undefined symbol: _Py_NoneStruct
```

The very same code works just fine under Solaris 2.6 and Python 2.0. Code
which does not use the Numeric package freezes just fine under Linux, so
I think this points to some problem/incompatibility of Numeric with
freeze.py.

Does anybody have a suggestion, or a work-around?

Nor
From: Daehyok S. <sd...@em...> - 2000-11-23 19:17:09
Would you tell me what's going on recently in making multiarray a
standard type of Python?

Daehyok Shin (Peter)
From: Chris B. <cb...@jp...> - 2000-11-22 22:55:46
Hi all,

I'm cross-posting this to the NumPy list and the MacPython list, because
it involves NumPy on the Mac, so I'm not sure which group can be most
helpful.

It took me a while to get this far, and I finally got everything to
compile, but now I have a crashing problem. It seems to be a problem with
PyArray_Type not being seen as a PyObject. I am pretty new to C, but I
have consulted with a number of folks I work with who know a lot more
than I do, and this seems to be a pretty esoteric C issue. It also seems
to be compiler dependent, because I have all this working fine on Linux
with gcc.

I have chopped my problem down into a very small function that
demonstrates the problem. The function takes a contiguous NumPy array of
Floats (doubles), multiplies every element by two (in place), and returns
None. Here is the code as it works on Linux:

```c
#include "Python.h"
#include "arrayobject.h"

/* A function that doubles an array of Floats in place */
static PyObject *
minimal_doublearray(PyObject *self, PyObject *args)
{
    PyArrayObject *array;
    int i, num_elements;
    double *data_array;

    if (!PyArg_ParseTuple(args, "O!", &PyArray_Type, &array))
        return NULL;
    if (array->descr->type_num != PyArray_DOUBLE) {
        PyErr_SetString(PyExc_ValueError, "array must be of type Float");
        return NULL;
    }
    data_array = (double *) array->data;
    /* num_elements = PyArray_Size((PyObject *) array); */
    num_elements = PyArray_Size(array);
    printf("The size of the array is: %i elements\n", num_elements);
    for (i = 0; i < num_elements; i++)
        data_array[i] = 2 * data_array[i];
    return Py_None;
}

static PyMethodDef minimalMethods[] = {
    {"doublearray", minimal_doublearray, METH_VARARGS},
    {NULL, NULL} /* Sentinel */
};

void initminimal()
{
    (void) Py_InitModule("minimal", minimalMethods);
}
```

Note that the call to "PyArray_Size(array)" gives me a
"./minimal.c:28: warning: passing arg 1 from incompatible pointer type"
with gcc on Linux. In CodeWarrior on the Mac, it is a fatal error. With
the typecast (see the commented-out line above) it gives no warnings, and
compiles on both systems.

Here is a small script to test it:

```python
#!/usr/bin/env python
from Numeric import *
import minimal

print "\nTesting doublearray"
a = arange(10.0)
print a
minimal.doublearray(a)
print a
print "the second version should be doubled"
```

This works fine on Linux, and gives the appropriate errors if the wrong
type of object is passed in. On the Mac, it crashes. Trying various
things, I found that it crashes on the

    if (!PyArg_ParseTuple(args, "O!", &PyArray_Type, &array)) return NULL;

line. If I use:

    if (!PyArg_ParseTuple(args, "O", &array)) return NULL;

it works. Then it crashes on the:

    num_elements = PyArray_Size((PyObject *) array);

line. If I use another way to determine the number of elements, like:

    num_elements = 1;
    for (i = 0; i < array->nd; i++)
        num_elements = num_elements * array->dimensions[i];

then I can get it to work.

What is going on? Anyone have any suggestions?

MacPython 1.5.2c
The NumPy that came with MacPython 1.5.2c
CodeWarrior Pro 5

Thanks for any suggestions,

-Chris
--
Christopher Barker, Ph.D.
cb...@jp...
http://www.jps.net/cbarker
Water Resources Engineering
Coastal and Fluvial Hydrodynamics
From: Anderl <And...@dl...> - 2000-11-20 17:11:50
Hello,

I'm quite new to numpy, trying to migrate from IDL. I'm not able to find
any 'shift' function in numpy, i.e. a function that shifts the content of
an array by a certain number of elements:

    a = [1,2,3,4,5,6]
    shift(a,2) = [3,4,5,6,1,2]

In IDL this works even on multidimensional arrays:

    b = shift(a,4,7,5)

shifts by 4 in the first, 7 in the second and 5 in the third component.
Is there some similar module existing?

Thanks & best regards,

Andreas
-------------------------------------------------------------------
Andreas Reigber
Institut fuer Hochfrequenztechnik
DLR - Oberpfaffenhofen          Tel. : ++49-8153-282367
Postfach 1116                   eMail: And...@dl...
D-82230 Wessling
-------------------------------------------------------------------
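There was no shift() in Numeric at the time. A minimal sketch of one built on concatenate(), following the sign convention of the example above (the helper and its name are ours, not part of Numeric):

```python
# Hypothetical shift() helper built on Numeric.  shift(a, 2) moves the
# first two elements along `axis` to the end, matching the example in
# the question above.
from Numeric import concatenate, array

def shift(a, n, axis=0):
    n = n % a.shape[axis]              # normalize (handles negative n)
    if n == 0:
        return a[:]                    # nothing to shift
    # build slice tuples that split the array at position n along `axis`
    head = [slice(None)] * len(a.shape)
    tail = [slice(None)] * len(a.shape)
    head[axis] = slice(n, None)
    tail[axis] = slice(0, n)
    return concatenate((a[tuple(head)], a[tuple(tail)]), axis)

print shift(array([1,2,3,4,5,6]), 2)   # -> [3 4 5 6 1 2]
```

For the multidimensional case, applying it once per axis with the per-axis shift amounts gives the IDL-style behaviour.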
From: Paul F. D. <pau...@ho...> - 2000-11-18 16:38:19
To make numpy work with fpectl someone needs to add some macro calls to
its source. This has not been done.

Please visit sourceforge.net/projects/numpy and get a new release of
Numpy. Perhaps that will correct your problem.

-----Original Message-----
From: num...@li... [mailto:num...@li...] On Behalf Of Jean-Bernard Addor
Sent: Friday, November 17, 2000 5:57 PM
To: num...@li...
Subject: [Numpy-discussion] Fatal Python error: Unprotected floating point exception

Hi Numpy people!

My nice numpy code generates very few Inf numbers, which destroy the
results of my longer simulations. I was dreaming of having the processor
raise an interruption and Python catch it, to locate and understand
quickly why and how it is happening and correct the code.

I currently use Python 1.5.2 with Numeric 11 on Debian Linux 2. I made
some very disappointing tests with the module fpectl. The last result I
got is:

    Fatal Python error: Unprotected floating point exception
    Abort

Do I have to understand that my Numpy is not compatible with fpectl? Any
idea if a more up-to-date Numpy would be compatible?

I find no info on: http://sourceforge.net/projects/numpy

Thanks for your help.

Jean-Bernard
From: Jean-Bernard A. <jb...@ph...> - 2000-11-18 01:55:45
Hi Numpy people!

My nice numpy code generates very few Inf numbers, which destroy the
results of my longer simulations. I was dreaming of having the processor
raise an interruption and Python catch it, to locate and understand
quickly why and how it is happening and correct the code.

I currently use Python 1.5.2 with Numeric 11 on Debian Linux 2. I made
some very disappointing tests with the module fpectl. The last result I
got is:

    Fatal Python error: Unprotected floating point exception
    Abort

Do I have to understand that my Numpy is not compatible with fpectl? Any
idea if a more up-to-date Numpy would be compatible?

I find no info on: http://sourceforge.net/projects/numpy

Thanks for your help.

Jean-Bernard
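For reference, a minimal sketch of how fpectl is meant to be used. This assumes a Python interpreter built with fpectl support; as Paul Dubois notes in his reply, Numeric's C source lacked the protection macros at the time, which is exactly why it aborts instead of raising:

```python
# Sketch: enabling SIGFPE trapping with Python's optional fpectl module.
# Works only if the interpreter was configured --with-fpectl, and only
# protects C code wrapped in PyFPE_START_PROTECT/PyFPE_END_PROTECT;
# Numeric's source had no such wrapping, hence the fatal error above.
import fpectl

fpectl.turnon_sigfpe()        # turn SIGFPE into FloatingPointError
try:
    x = 1e300 * 1e300         # overflow inside protected float code
except FloatingPointError:
    print "caught the overflow as a Python exception"
fpectl.turnoff_sigfpe()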
From: Pete S. <pe...@vi...> - 2000-11-08 00:06:46
i have image data in a 2D array that i'd like to change the size of.
currently, i'm just doing a dirty sampled scaling that i came up with.
this works, but can only do a 2x resize.

    dst[::2,::2] = src
    dst[1::2,::2] = src
    dst[:,1::2] = dst[:,::2]

this isn't too bad, but it's definitely one of my bottlenecks. can the
code to do this be sped up through some ingenious use of numeric?

i'm also trying to figure out a clean, fast way to do bilinear scaling on
my data. it seems like there would be something in numeric to do linear
resampling, but i haven't figured it out. i'd like to be able to resample
stuff like audio data as well as just image data.

thanks for any pointers into this.
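One Numeric idiom for integer-factor upscaling is repeat() along each axis. A sketch, assuming the Numeric-era repeat(), which takes a sequence with one repeat count per element along the chosen axis:

```python
# Sketch: nearest-neighbour 2x upscale with Numeric's repeat().
# repeat() expects one repeat count per element along the given axis.
from Numeric import repeat, array

def scale2x(src):
    tmp = repeat(src, src.shape[0] * [2], 0)   # double the rows
    return repeat(tmp, src.shape[1] * [2], 1)  # then double the columns

src = array([[1, 2],
             [3, 4]])
print scale2x(src)
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```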
From: Janko H. <jh...@if...> - 2000-11-03 21:17:00
Sorry, I forgot to mention that these two operations can be done in
place, but the result can not be stored in place, as the shape is
changing. So you need to live with one copy, if your array a is of type
'i'.

```
>>> a=array([1,2,3,4])
>>> multiply(a,bits,a)
array([       1,      512,   196608, 67108864])
>>> a
array([       1,      512,   196608, 67108864])
```

HTH,

__Janko
From: Janko H. <jh...@if...> - 2000-11-03 21:08:36
Chris Barker writes:
> "Paul F. Dubois" wrote:
> > >>> y=array([1,2,3], '1')
> > >>> y
> > array([1, 2, 3],'1')
> > >>> y.astype(Int32)
> > array([1, 2, 3],'i')
>
> Actually, this is exactly NOT what I want to do. In this case, each 1
> byte integer was converted to a 4 byte integer, of the same VALUE. What
> I want is to convert each SET of four bytes into a SINGLE 4 byte
> integer, as in:
>
> >>> a = array([1,2,3,4],'1')
> >>> a = fromstring(a.tostring(),Int32)
> >>> a
> array([67305985],'i')

A brute force way would be to do the transformation yourself :-)

```
>>> bits=array([1,256,256*256,256*256*256])
>>> sum(array([1,2,3,4])*bits)
67305985
```

So you need to reshape your array into (?,4) and multiply by bits.

And regarding your numpyio question, you can also read characters, which
are then put into an array by itself. It seems you have a very messy file
format (but the data world is never easy).

HTH,

__Janko
--
Institut fuer Meereskunde        phone: 49-431-597 3989
Dept. Theoretical Oceanography   fax  : 49-431-565876
Duesternbrooker Weg 20           email: jh...@if...
24105 Kiel, Germany
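Putting this recipe together, a sketch of the whole transformation (the helper name is ours; little-endian byte order is assumed, and since typecode '1' is signed, byte values above 127 would need extra masking):

```python
# Sketch of Janko's suggestion: combine each group of four bytes into
# one 32-bit integer by weighting and summing (little-endian order).
from Numeric import array, reshape, sum

bits = array([1, 256, 256*256, 256*256*256])

def bytes_to_int32(a):
    # a is a 1-d byte-valued array whose length is a multiple of 4
    quads = reshape(a, (-1, 4))      # one row per 4-byte group
    return sum(quads * bits, 1)      # weighted sum along each row

print bytes_to_int32(array([1, 2, 3, 4]))   # -> [67305985]
```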
From: Chris B. <cb...@jp...> - 2000-11-03 19:30:42
Janko Hauser wrote:
> Use the numpyio module from Travis. With this it should be possible to
> read the data directly and do any typecode conversion you want
> with. It has fread and fwrite functions, and it can be used with any
> NumPy type like Int0 in your case. It's part of the signaltools
> package.
>
> http://oliphant.netpedia.net/signaltools_0.5.2.tar.gz

I've downloaded it, and it looks pretty handy. It does include a
byteswap-in-place, which I need. What is not clear to me from the minimal
docs is whether I can read a file set up like:

    long long char long long char ...

and have it put the longs into one array, and the chars into another.

Also, it wasn't clear whether I could use it to read a file that has
already been opened, starting at the file's current position. I am
working with a file that has a text header, so I can't just suck in the
whole thing until I've parsed out the header.

I can figure out the answer to these questions with some reading of the
source, but it wasn't obvious at first glance, so it would be great if
someone knows the answer off the top of their head. Travis?

By the way, there seem to be a few methods that produce a copy, rather
than doing things in place, where it seems more intuitive to do it in
place. byteswapped() and astype() come to mind. With byteswapped, I
imagine it's rare that you would want to keep a copy around. With astype
it would also be rare to keep a copy around, but since it changes the
size of the array, I imagine it would be a lot harder to code as an
in-place operation. Is there a reason these operations are not available
in place? Or is it just that no one has seen enough of a need to write
the code?

-Chris
--
Christopher Barker, Ph.D.
cb...@jp...
http://www.jps.net/cbarker
Water Resources Engineering
Coastal and Fluvial Hydrodynamics
From: Chris B. <cb...@jp...> - 2000-11-03 19:18:50
"Paul F. Dubois" wrote:
> >>> y=array([1,2,3], '1')
> >>> y
> array([1, 2, 3],'1')
> >>> y.astype(Int32)
> array([1, 2, 3],'i')

Actually, this is exactly NOT what I want to do. In this case, each 1
byte integer was converted to a 4 byte integer, of the same VALUE. What I
want is to convert each SET of four bytes into a SINGLE 4 byte integer,
as in:

```
>>> a = array([1,2,3,4],'1')
>>> a = fromstring(a.tostring(),Int32)
>>> a
array([67305985],'i')
```

The four one-byte items in a are turned into one four-byte item. What I
want is to be able to do this in place, rather than have tostring()
create a copy. I think fromstring() may create a copy as well, leaving a
possible total of three copies around at once. Does anyone know how many
copies will be around at once with this line of code?

-Chris
--
Christopher Barker, Ph.D.
cb...@jp...
http://www.jps.net/cbarker
Water Resources Engineering
Coastal and Fluvial Hydrodynamics
From: Scott R. <ra...@cf...> - 2000-11-03 15:43:44
Bernd Rinn wrote:
>
> Hello,
>
> does anyone know why the performance of FFT.real with fftpacklite.c is
> so unbalanced for n=2**i and different values of i? My example is:

Hi,

The problem is that your routine gives arrays of length 2**n - 1, not
2**n! So for the arrays where the CPU time is huge, you are FFTing a
length that is a prime number (or at least not easily factorable).
FFTPACK has to do a brute-force DFT instead of an FFT! (Ah, the beauty of
n*log(n)...)

You can see that this is the case by printing the lengths of the arrays:

```python
from Numeric import array, Float
from FFT import real_fft
from time import time

i = 1
while i < 20:
    n = 2**long(i)
    a = array(range(long(1), n), Float)
    print len(a)
    i = i + 1
```

What you should try instead is the following:

```python
from Numeric import arange, Float
from FFT import real_fft
from time import time

i = 1
while i < 20:
    n = 2**i
    a = arange(n, typecode=Float)
    anfang = time()
    b = real_fft(a)
    ende = time()
    print "i=", i, " time: ", ende-anfang
    i = i + 1
```

Which gives the following run-times on my Pentium 450 under Linux:

```
i=  1  time:  0.000313997268677
i=  2  time:  0.000239014625549
i=  3  time:  0.000229954719543
i=  4  time:  0.000240087509155
i=  5  time:  0.000240087509155
i=  6  time:  0.000257968902588
i=  7  time:  0.000322103500366
i=  8  time:  0.000348091125488
i=  9  time:  0.000599980354309
i= 10  time:  0.000900983810425
i= 11  time:  0.0018150806427
i= 12  time:  0.00341892242432
i= 13  time:  0.00806891918182
i= 14  time:  0.0169370174408
i= 15  time:  0.038006067276
i= 16  time:  0.0883399248123
i= 17  time:  0.199723005295
i= 18  time:  0.661148071289
i= 19  time:  0.976199030876
```

Hope this helps,

Scott
--
Scott M. Ransom                   Address: Harvard-Smithsonian CfA
Phone: (617) 495-4142                      60 Garden St. MS 10
email: ra...@cf...                         Cambridge, MA 02138
GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989
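A corollary of the above: when the data length is awkward, padding up to the next power of two via real_fft's n argument sidesteps the brute-force DFT path, as Bernd observed with n=2**20. A sketch, assuming that n larger than the data length zero-pads:

```python
# Sketch: pad a length-(2**19 - 1) array up to 2**19 so FFTPACK can use
# its fast factorizations instead of an O(n**2) DFT.  Assumes real_fft's
# n argument zero-pads when it exceeds the data length.
from Numeric import arange, Float
from FFT import real_fft

a = arange(2**19 - 1, typecode=Float)   # awkward, hard-to-factor length
b = real_fft(a, n=2**19)                # zero-padded to a power of two
```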
From: Paul F. D. <pau...@ho...> - 2000-11-03 15:36:07
```
>>> y=array([1,2,3], '1')
>>> y
array([1, 2, 3],'1')
>>> y.astype(Int32)
array([1, 2, 3],'i')
```

-----Original Message-----
From: num...@li... [mailto:num...@li...] On Behalf Of Chris Barker
Sent: Thursday, November 02, 2000 1:46 PM
Cc: num...@so...
Subject: [Numpy-discussion] "formstring()" in place?

I have a large array of type "1" (single bytes). I need to convert it to
Int32, in the manner that fromstring() would. Right now, I am doing:

    Array = fromstring(Array.tostring(),'f')

This works fine, but what concerns me is that I need to do this on
potentially HUGE arrays, and if I understand this right, I am going to
create a copy with tostring, and then another copy with fromstring, which
then gets referenced to Array, at which point the first original copy
gets de-referenced and should be deleted, and the temporary one gets
deleted at some point in this process. I don't know when stuff created in
the middle of a statement gets deleted, so I could potentially have three
copies of the data around at the same time, and at least two. Since it is
exactly the same C array, I'd like to be able to do this without making
any copies at all. Is it possible? It seems like it should be a simple
matter of changing the typecode and shape, but is this possible?

While I'm asking questions: can I byteswap in place as well?

The greater problem:

To give a little background, and to see if anyone has a better idea of
how to do what I am doing, I thought I'd run through the task that I
really need to do. I am reading a binary file full of a lot of data. I
have some control over the form of the file, but it needs to be compact,
so I can't just make everything the same large type. The file is
essentially a whole bunch of records, each of which is a collection of a
couple of different types, and which I would eventually like to get into
a couple of NumPy arrays. My first cut at the problem was to read each
record one at a time in a loop, and use the struct module to convert
everything. This worked fine, but was pretty darn slow, so I am now doing
it all with NumPy, like this (one example, I have more complex ones):

```python
num_bytes = 9  # number of bytes in a record: two longs and a char

# read all the data into a single byte array
data = fromstring(file.read(num_bytes*num_timesteps*num_LEs),'1')

# rearrange into 3-d array
data.shape = (num_timesteps,num_LEs,num_bytes)

# extract LE data:
LEs = data[:,:,:8]

# extract flag data
flags = data[:,:,8]

# convert LE data to longs
LEs = fromstring(LEs.tostring(),Int32)

if endian == 'L':  # byteswap if required
    LEs = LEs.byteswapped()

# convert to 3-d array
LEs.shape = (num_timesteps,num_LEs,2)
```

Anyone have any better ideas on how to do this?

Thanks,

-Chris
--
Christopher Barker, Ph.D.
cb...@jp...
http://www.jps.net/cbarker
Water Resources Engineering
Coastal and Fluvial Hydrodynamics
From: Bernd R. <Ber...@un...> - 2000-11-03 14:44:04
Hello,

does anyone know why the performance of FFT.real with fftpacklite.c is so
unbalanced for n=2**i and different values of i? My example is:

```python
from Numeric import array, Float
from FFT import real_fft
from time import time

i = 1
while i < 20:
    n = 2**long(i)
    a = array(range(long(1), n), Float)
    anfang = time()
    b = real_fft(a)
    ende = time()
    print "i=", i, " time: ", ende-anfang
    i += 1
```

and the result shows (on a Pentium-III-700 under Linux):

```
i=  1  time:  0.000182032585144
i=  2  time:  0.000133991241455
i=  3  time:  0.00012195110321
i=  4  time:  0.000123977661133
i=  5  time:  0.000131964683533
i=  6  time:  0.000155925750732
i=  7  time:  0.000362992286682
i=  8  time:  0.000240921974182
i=  9  time:  0.000506043434143
i= 10  time:  0.00064492225647
i= 11  time:  0.00177395343781
i= 12  time:  0.0025269985199
i= 13  time:  0.886229038239
i= 14  time:  0.0219050645828
i= 15  time:  0.0808279514313
i= 16  time:  0.327404975891
i= 17  time:  482.979220986
i= 18  time:  0.803207993507
i= 19  time:  7782.23972797
```

When I use an array a of length 2**19 and give the command

    b = real_fft(a, n=2**long(20))

the time drops from over two hours of CPU time to about 1.5 seconds.

I know that fftpacklite.c is not specially optimized, but isn't an FFT
method with vector lengths that are powers of 2 supposed to show a more
predictable run-time behavior? Could anyone point me to a free FFTPACK
FORTRAN package for Linux with g77 that performs better than the default
package?

Any hint would be greatly appreciated.

Best regards,

Bernd Rinn

P.S.: Please CC to Ber...@un... since I am not a member of the list.

--
Bernd Rinn
Fakultät für Physik
Universität Konstanz
Tel. 07531/88-3812, e-mail: Ber...@un...
PGP-Fingerprint: 1F AC 31 64 FF EF A9 67 6E 0D 4C 26 0B E7 ED 5C