From: Konrad H. <hi...@cn...> - 2003-03-17 17:14:58
On Monday 17 March 2003 16:05, chr...@ph... wrote:
> Is this a typo or do I do something strange?
>
> with best regards,
>
> Christiaan Kok
>
> /* C API address pointer */
> #if defined(NO_IMPORT) || defined(NO_IMPORT_ARRAY)
> extern void **PyArray_API;
> #else
> #if defined(PY_ARRAY_UNIQUE_SYMBOL)
> void **PyArray_API /* <--------------------------- semicolon here removed! */
> #else
> static void **PyArray_API;
> #endif
> #endif

There should be a semicolon there according to my understanding of C syntax! Check the C extension which you are compiling. It very probably defines PY_ARRAY_UNIQUE_SYMBOL to something before including arrayobject.h. I suspect that there is something wrong with that definition.

Konrad.
--
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
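For anyone hitting the same syntax error, a minimal sketch of the usual two-file setup for a Numeric extension follows. The module name "example" and the symbol "Example_PyArray_API" are made up for illustration; the macro plumbing is the #if chain quoted above. One plausible way to trigger the reported error is defining PY_ARRAY_UNIQUE_SYMBOL with a trailing semicolon, which turns the header's own declaration into a doubled semicolon that old compilers reject.

/* examplemodule.c -- the single source file that owns the API pointer.
   (A sketch only, not a verified build recipe.) */
#define PY_ARRAY_UNIQUE_SYMBOL Example_PyArray_API   /* no semicolon, no extra tokens */
#include "Python.h"
#include "arrayobject.h"

static PyMethodDef example_methods[] = {
    {NULL, NULL}
};

void initexample(void)
{
    (void) Py_InitModule("example", example_methods);
    import_array();    /* fills in the API pointer when the module is imported */
}

/* helper.c -- every other source file of the same extension: same unique
   symbol, plus NO_IMPORT_ARRAY so the header takes the 'extern' branch. */
#define PY_ARRAY_UNIQUE_SYMBOL Example_PyArray_API
#define NO_IMPORT_ARRAY
#include "Python.h"
#include "arrayobject.h"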
From: <chr...@ph...> - 2003-03-17 15:08:48
Dear people,

When trying to compile a C extension to Numeric 23.0 the compiler complains (with a syntax error) about a semicolon in line 284 of arrayobject.h. When I remove this semicolon my application seems to run.

System:
Windows 2000
Visual C++ 6.0
Numeric 23.0
Python 2.2.1

Is this a typo or do I do something strange?

with best regards,

Christiaan Kok

/* C API address pointer */
#if defined(NO_IMPORT) || defined(NO_IMPORT_ARRAY)
extern void **PyArray_API;
#else
#if defined(PY_ARRAY_UNIQUE_SYMBOL)
void **PyArray_API /* <--------------------------- semicolon here removed! */
#else
static void **PyArray_API;
#endif
#endif
From: Blair H. <b....@ir...> - 2003-03-16 23:25:22
I have upgraded from NumPy 21 to NumPy 23 and am surprised to find that my matrix assignment doesn't work like it used to. For example:

>>> from Matrix import *
>>> m = Matrix( [[1,2],[3,4]])
>>> m[1,1]
4
>>> m[1,1] = 2
Traceback (most recent call last):
  File "<interactive input>", line 1, in ?
  File "C:\PYTHON22\lib\site-packages\Numeric\Matrix.py", line 183, in __setitem__
    value = value[0,0]
IndexError: invalid index
>>>

Apparently, I should have written

>>> m[1,1] = Matrix( 2 )

Is this really intended, or am I confused? The change appears to have been introduced to NumPy in release 21.3. I note that similar code using an array does work:

>>> from Numeric import *
>>> m = array( [[1,2],[3,4]])
>>> m[1,1]
4
>>> m[1,1] = 3
>>> m[1,1]
3
From: Konrad H. <hi...@cn...> - 2003-03-12 18:20:18
On Wednesday 12 March 2003 18:06, Mathieu Malaterre wrote:
> I have been using some scripts with Numeric 21 that worked great but
> after an upgrade to (22 or 23), I can't find the equivalent. Could
> someone point me out some documentation, please.

After updating Numeric, you must recompile all extension modules that use Numeric. In your script, that's the netCDF module in Scientific Python. I suppose you didn't, because that gives exactly the type of errors that you see.

Konrad.
--
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
From: Sebastian H. <ha...@ms...> - 2003-03-12 17:50:33
> > You are using Numeric not numarray , right?
>
> Correct. I need this stuff for production use, so I just can't quite move to
> numarray yet.
>
> > In any case :
> > My thinking goes as follows:
> > I was looking for a wrapping-solution that would be most transparent to the
> > people that would write the C/C++ code (or even Fortran, BTW)
> > Since I already had some experience using SWIG, that's what I wanted to use
> > to handle the "numarray binding".
> > SWIG has a quite strong (meaning: flexible, general) way of handling data
> > types. They call it "type maps":
> > Once you have that in place (later more) the whole "binding" looks like
> > this: (an example)
>
> [Snip]
>
> Does your stuff handle non-contiguity transparently? I couldn't quite find a
> solution for that, which means that a **pointer I get can't be blindly passed
> to -say- a Numerical Recipes routine which expects to treat it like a matrix.
> If it's contiguous all is fine, but otherwise one needs to either make a
> copy or handle the 'skipping' by hand in the column-index loops (as shown in
> my examples).
>
> I tried to come up with a solution in C, but managed to convince myself that
> it just can't be done. I was thinking of some typecasting black magic where
> the pointers were recast for the pointer arithmetic to hide the column
> offsets, with the proper type still being returned at dereferencing time. I
> finally decided this was just impossible in C because the compiler wants to
> know the pointer sizes at compile time. But maybe I'm wrong and just didn't
> think about the problem hard enough.
>
> At any rate, I'd be interested in seeing your typemap stuff. Especially if it
> either works with Numeric or it can be modified for Numeric use with
> reasonably limited effort.

Fernando,

Sorry, I forgot to comment on the "non-contiguity" question...

First: my last post contained practically everything there is to the typemapping needed when you use SWIG. ("Practically" means that I have many versions of that for handling other types (like short, int, complex, ...) and for handling 2d arrays.) Do YOU know SWIG?

"Handle non-contiguity transparently": what exactly are the possible scenarios of "non-contiguous"? If, for example, in a 3d array only going along the first axis is non-contiguous but going along the other ones still is, I could imagine that it would not be too problematic to "forward" that scheme into the C side, that is: in a non-transparent way. The only alternative I see would be introducing some macros to translate each memory pointer. But that would also not be transparent (since the C side would have to use these "weird" macros) and it is likely to be slower, because of all the new pointer arithmetic (inside those macros). I would favor the first scenario.

Regards,
Sebastian
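One way to sidestep the contiguity question for Numeric, at the cost of a possible temporary copy, is to let PyArray_ContiguousFromObject hand the C side a well-behaved buffer. The sketch below is illustrative only (the function name sum2d is made up, and it is not part of either poster's code); it accepts any 2-d input, contiguous or not:

#include "Python.h"
#include "arrayobject.h"

static PyObject *sum2d(PyObject *self, PyObject *args)
{
    PyObject *input;
    PyArrayObject *array;
    double total = 0.0;
    int i, n;

    if (!PyArg_ParseTuple(args, "O", &input))
        return NULL;

    /* Returns a contiguous double array, copying only if the input
       is not already contiguous; we own the returned reference. */
    array = (PyArrayObject *)
        PyArray_ContiguousFromObject(input, PyArray_DOUBLE, 2, 2);
    if (array == NULL)
        return NULL;

    n = array->dimensions[0] * array->dimensions[1];
    for (i = 0; i < n; i++)
        total += ((double *)array->data)[i];

    Py_DECREF(array);              /* drop the (possibly temporary) copy */
    return PyFloat_FromDouble(total);
}

The trade-off is exactly the one described above: a copy costs memory and time for large non-contiguous views, but the C body stays a plain flat loop with no stride bookkeeping.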
From: Chris B. <Chr...@no...> - 2003-03-12 17:43:46
On Wednesday, March 12, 2003, at 09:06 AM, Mathieu Malaterre wrote:
> from Scientific.IO.NetCDF import *
> File "<stdin>", line 1, in ?
> ValueError: Invalid type for array

Did you re-build the NetCDF module? If not, that's probably your problem. Extensions that use Numeric (like NetCDF) have to be built against the version of Numeric that you are running them with. I had a very similar problem with NetCDF, and re-building it with the version of Numeric I was using solved it.

-Chris

Christopher Barker, Ph.D.
Oceanographer
NOAA/OR&R/HAZMAT           (206) 526-6959 voice
7600 Sand Point Way NE     (206) 526-6329 fax
Seattle, WA 98115          (206) 526-6317 main reception
Chr...@no...
From: Mathieu M. <Mat...@cr...> - 2003-03-12 17:09:45
Hi all,

I have been using some scripts with Numeric 21 that worked great but after an upgrade to (22 or 23), I can't find the equivalent. Could someone point me out some documentation, please. Here is a script that used to work with Numeric-21:

#----
from Scientific.IO.NetCDF import *
file = NetCDFFile('test.nc', 'w')
var = file.createVariable('zspace','l',())
var.dummy = 'foo'    #everything is fine !
var.bug = 8
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: Invalid type for array
var.bug = 18.0
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: Invalid type for array
var.bug = '213.7'    #everything is fine !
#----

Numerics don't want real type anymore?

Thanks for any help,
mathieu

--
Mathieu Malaterre
CREATIS
28 Avenue du Doyen LEPINE
B.P. Lyon-Montchat
69394 Lyon Cedex 03
http://www.creatis.insa-lyon.fr/~malaterre/
From: <chr...@ph...> - 2003-03-12 08:46:03
> Well, in case you (or others) find it useful, I'm including here a little
> library I wrote for accessing general Numeric 2-d arrays (contiguous or not)
> in an easy manner.
>
> Here's a snippet of a simple (included) example of a function to print an
> integer array:
>
> static PyObject *idisp(PyObject *self, PyObject *args)
> {
>   PyArrayObject *array;
>   int **arr;  // for the data area
>   int i,j,cs;
>
>   if (!PyArg_ParseTuple(args, "O!",&PyArray_Type,&array ) )
>     return NULL;
>
>   arr = imatrix_data(array,&cs);
>   for (i=0;i<array->dimensions[0];++i) {
>     for (j=0;j<cs*array->dimensions[1];j+=cs)
>       printf("%5d ",arr[i][j]);
>     printf("\n");
>   }
>   free(arr);
>
>   Py_INCREF(Py_None);
>   return Py_None;
> }
>
> You get the **arr pointer and you can then manipulate it as a[i][j]
> conveniently. The supplied example file may be enough for many to write their
> Numpy C extensions without much trouble.

Could someone also give an example of a small C code which has as input a numeric array and returns another numeric array? My code either coredumps or leaks memory.

wbr
Christiaan Kok
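No complete answer appears in this thread, but a minimal sketch of such a function for Numeric might look like the following. The name "twice" and its behaviour (return a new array with every element doubled) are purely illustrative; the points that usually cause the crashes and leaks described above are forgetting to check the conversion for failure and forgetting to Py_DECREF the temporary input array.

#include "Python.h"
#include "arrayobject.h"

static PyObject *twice(PyObject *self, PyObject *args)
{
    PyObject *input;
    PyArrayObject *in, *out;
    double *src, *dst;
    int i, n;

    if (!PyArg_ParseTuple(args, "O", &input))
        return NULL;

    /* Contiguous double view/copy of whatever was passed in; we own it. */
    in = (PyArrayObject *)
        PyArray_ContiguousFromObject(input, PyArray_DOUBLE, 0, 0);
    if (in == NULL)
        return NULL;

    /* Fresh output array with the same shape. */
    out = (PyArrayObject *)PyArray_FromDims(in->nd, in->dimensions, PyArray_DOUBLE);
    if (out == NULL) {
        Py_DECREF(in);
        return NULL;
    }

    n = 1;
    for (i = 0; i < in->nd; i++)
        n *= in->dimensions[i];

    src = (double *)in->data;
    dst = (double *)out->data;
    for (i = 0; i < n; i++)
        dst[i] = 2.0 * src[i];

    Py_DECREF(in);                 /* release the temporary input copy */
    return (PyObject *)out;        /* the caller receives our reference to 'out' */
}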
From: Janko H. <jh...@co...> - 2003-03-11 08:21:39
On Thu, 6 Mar 2003 09:37:19 -0800 "Paul Dubois" <pa...@pf...> wrote: > This is my last release as Head Nummie. The developers will now > choose a new victim, er, I mean leader, as set out in our charter. I > will still be an active developer and user. I want to thank all of > you for your help and civility over the years I have done this job. > And thank you to my supervisors over the years at Lawrence Livermore > National Laboratory for encouraging me to do it. > So let me take this opportunity to express my thankfulness for your leadership in the past years. Great job, steady development and clear headed leadership (:-) With regards, __Janko Hauser -- Dr. Janko Hauser Software Engineering c o m . u n i t G m b H online-schmiede seit 1994 http://www.comunit.de/ mailto:jh...@co... Eiffestr. 598 20537 Hamburg | Germany Fon 040 | 21 11 05 25 Fax 040 | 21 11 05 26 |
From: Michiel J. L. de H. <md...@im...> - 2003-03-11 02:35:19
Dear pythoneers, I noticed recently that a call to eigenvalues in numpy's LinearAlgebra does not return with Python run from Cygwin. The eigenvalues functions runs fine on Linux and Windows. The problem seems to occur in the routine int dlamc4_(integer *emin, doublereal *start, integer *base); in Src/dlapack_lite.c. To my surprise, by adding some printf statements, this routine can be made to work. So this seems to be related to optimization issues. Indeed, if I compile dlapack_lite.c without optimization, then eigenvalues works correctly without having to add any printf's. Now in the LAPACK FAQ, I found the following statement: > 1.26) Problems compiling dlamch.f? > > The routine dlamch.f (and its dependent subroutines dlamc1, dlamc2, > dlamc3, dlamc4, dlamc5) MUST be compiled without optimization. If > you downloaded the entire lapack distribution this will be taken > care of by the LAPACK/SRC/Makefile. However, if you downloaded a > specific LAPACK routine plus dependencies, you need to take care > that slamch.f (if you downloaded a single precision real or single > precision complex routine) or dlamch.f (if you downloaded a double > precision real or double precision complex routine) has been > included. When browsing through Src/dlapack_lite.c, it seems that some tricks are being used (such as calling DLAMC3) to avoid optimization to take place. So I can see two ways to resolve this bug: 1) Come up with more tricks to avoid optimization to take place when compiling dlamc4_. 2) Keep the dlamc* routines in a separate file, and let setup.py compile it without optimization. Any ideas / preferences on which way to go? Unfortunately I don't know much about optimization issues, so all suggestions are welcome. Michiel de Hoon University of Tokyo, Human Genome Center. -- Michiel de Hoon, Assistant Professor University of Tokyo, Institute of Medical Science Human Genome Center 4-6-1 Shirokane-dai, Minato-ku Tokyo 108-8639 Japan http://bonsai.ims.u-tokyo.ac.jp/~mdehoon |
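For what it is worth, a sketch of the kind of "trick" that option 1 refers to: force the intermediate result through a volatile variable so the optimizer cannot keep it in an extended-precision register or fold the addition away. Whether this would actually fix the Cygwin problem is untested here; doublereal is f2c's typedef for double, normally provided by f2c.h.

/* Illustrative only -- not a verified patch for Src/dlapack_lite.c. */
typedef double doublereal;            /* normally comes from f2c.h */

doublereal dlamc3_(doublereal *a, doublereal *b)
{
    /* The volatile store forces the sum to be rounded to double precision
       in memory, which is what the dlamc* machine-parameter probes rely on. */
    volatile doublereal ret_val = *a + *b;
    return ret_val;
}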
From: Sebastian H. <ha...@ms...> - 2003-03-10 20:32:45
<snip>
> Well, in case you (or others) find it useful, I'm including here a little
> library I wrote for accessing general Numeric 2-d arrays (contiguous or not)
> in an easy manner.
>
> Here's a snippet of a simple (included) example of a function to print an
> integer array:
>
> static PyObject *idisp(PyObject *self, PyObject *args)
> {
>   PyArrayObject *array;
>   int **arr;  // for the data area
>   int i,j,cs;
>
>   if (!PyArg_ParseTuple(args, "O!",&PyArray_Type,&array ) )
>     return NULL;
>
>   arr = imatrix_data(array,&cs);
>   for (i=0;i<array->dimensions[0];++i) {
>     for (j=0;j<cs*array->dimensions[1];j+=cs)
>       printf("%5d ",arr[i][j]);
>     printf("\n");
>   }
>   free(arr);
>
>   Py_INCREF(Py_None);
>   return Py_None;
> }
>
> You get the **arr pointer and you can then manipulate it as a[i][j]
> conveniently. The supplied example file may be enough for many to write their
> Numpy C extensions without much trouble.
>
> > Again: thanks so much.
> > BTW: Is there general interest in my SWIG typemaps. (SWIG is maybe the
> > easiest way to wrap C/C++ functions (and classes)
> > into Python (and/or Perl, Java, Ruby,...) ? I think especially for
> > numerical stuff, that "link" is of interest.
>
> I'd love to see them. So far I've either used high-level stuff like
> weave.inline() or just written the extensions by hand (as in the code I'm
> supplying here). I'd like to see this, especially if there is an easy way to
> handle contiguity issues with it.

You are using Numeric not numarray, right?

In any case, my thinking goes as follows: I was looking for a wrapping solution that would be most transparent to the people that would write the C/C++ code (or even Fortran, BTW). Since I already had some experience using SWIG, that's what I wanted to use to handle the "numarray binding". SWIG has a quite strong (meaning: flexible, general) way of handling data types. They call it "type maps". Once you have that in place (later more), the whole "binding" looks like this (an example):

C side:

double sebFunc(float *arr, int nx, int ny, int nz)
{
    double a = 0;
    for(int i=0; i<nx*ny*nz; i++)
        a += arr[i];
    return a;
}

Then there is the SWIG interface file (suggested file extension ".i"):

double sebFunc(float *array3d, int nx, int ny, int nz);

(This has, in first approximation, the same syntax as a normal C header file - just that SWIG "bites" on the argument-variable names, in this case (float *array3d, int nx, int ny, int nz), and realizes (because of my "type map") that this should be a Python (numeric) array.)

Then you call it from Python just like that:

import myModule
print myModule.sebFunc(array)

So the magic is all hidden in the typemap. For that I have a separate SWIG interface that gets #included into the above mentioned one. Here is that interface file (just the part for 3d float arrays):

%typecheck(SWIG_TYPECHECK_FLOAT) (float *array2d, int nx, int ny) {
    if(!PyArray_Check($input))                                      $1=0;
    else if(((PyArrayObject*)$input)->descr->type_num != tFloat32)  $1=0;
    else if(((PyArrayObject*)$input)->nd != 2)                      $1=0;
    else                                                            $1=1;
}

%typemap(python,in) (float *array3d, int nx, int ny, int nz) (PyArrayObject *temp=NULL) {
    debugPrintf("debug: float *array3d -> NA_InputArray\n");
    PyArrayObject *NAimg = NA_InputArray($input, tFloat32, NUM_C_ARRAY);
    if (!NAimg) {
        printf("**** no (float) numarray *****\n");
        return 0;
    }
    temp = NAimg;
    $1 = (float *) (NAimg->data + NAimg->byteoffset);
    switch(NAimg->nd) {
      case 1:  $2 = NAimg->dimensions[0];  $3 = 1;                      break;
      case 2:  $2 = NAimg->dimensions[1];  $3 = NAimg->dimensions[0];   break;
      default:
        debugPrintf(" **** numarray dim >2 (ns=%d)\n", NAimg->nd);
        _SWIG_exception(SWIG_RuntimeError, "numarray dim > 2");
        return 0;
    }
}

%typemap(python,freearg) (float *array3d, int nx, int ny, int nz) {
    Py_XDECREF(temp$argnum);
}

The first part (%typecheck) is needed so that SWIG can even handle overloaded functions correctly. (BTW, SWIG happily wraps my template functions too; just with classes (that keep a reference to an array) I'm not that sure yet [reference counting].) I think I don't have all the necessary error handling parts implemented yet. But this is already a VERY useful tool for me - and I am in fact working right now on convincing some (C and Fortran only) people to try this ;-) [once they see how cool it is to call their (Fortran or C) code directly from Python, maybe they start to realize that there is "something new out there" (emm, I meant Python)].

So what is the status on this list: are people familiar with (know and/or using) SWIG?

Cheers,
Sebastian
From: Todd M. <jm...@st...> - 2003-03-08 12:01:03
Sebastian Haase wrote:
> Thanks, SO MUCH ...
> I almost went crazy yesterday - and was really desperate by the time I wrote
> that email.
> Somehow I never needed that offset field until now. So: when I do the
> NA_InputArray call I get a "proper" C-array
> just that it does NOT necessarily start at NAimg->data
> but rather at
> NAimg->data + NAimg->byteoffset

The numarray-0.4 manual (available at http://prdownloads.sourceforge.net/numpy/numarray-0.4.pdf?download) documents how to write Python stubs using the "high level" API on Page 70. The thing you appear to be missing is a call to NA_OFFSETDATA, which gets a pointer to the data in an array by adding the buffer pointer and byteoffset together.

Todd

> Again: thanks so much.
> BTW: Is there general interest in my SWIG typemaps. (SWIG is maybe the
> easiest way to wrap C/C++ functions (and classes)
> into Python (and/or Perl, Java, Ruby,...) ? I think especially for
> numerical stuff, that "link" is of interest.
>
> Sebastian
>
> ----- Original Message -----
> From: "Francesc Alted" <fa...@op...>
> To: "Sebastian Haase" <ha...@ms...>; <Num...@li...>
> Sent: Thursday, March 06, 2003 9:40 PM
> Subject: Re: [Numpy-discussion] Please help - pointer to slice
>
> A Divendres 07 Març 2003 01:42, Sebastian Haase va escriure:
>> if I try to debug this by doing:
>> for z in range(nz):
>>     print repr(arr[z]._data)
>>
>> that also tells me that python / numarray thinks all slices are located at
>> the same address, that is:
>> every slice looks just like the full array.
>
> You can access the slices by taking into account the byte offset
> attribute. Look at the next example:
>
> In [20]: a=numarray.arange(100, shape=(10,10))
>
> In [21]: a._data
> Out[21]: <memory at 083e58e8 with size:400 held by object at 083e58d0>
>
> In [22]: a[1]._data
> Out[22]: <memory at 083e58e8 with size:400 held by object at 083e58d0>
>
> as you already know, both memory buffers point to the same address. I guess
> this is implemented in that way so as to not copy data unnecessarily.
>
> now, look at:
>
> In [23]: a._byteoffset
> Out[23]: 0
>
> In [24]: a[1]._byteoffset
> Out[24]: 40
>
> So, you can know where the data actually starts by looking at the
> _byteoffset property. In fact, you should always look at it in order to get
> proper results!
>
> In C, you can access this information by looking at the byteoffset field
> in the PyArrayObject struct (look at chapter 10 in the User's Manual).
>
> Hope that helps,
>
> --
> Francesc Alted
>
> -------------------------------------------------------
> This SF.net email is sponsored by: Etnus, makers of TotalView, The debugger
> for complex code. Debugging C/C++ programs can leave you feeling lost and
> disoriented. TotalView can help you find your way. Available on major UNIX
> and Linux platforms. Try it free. www.etnus.com
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
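Putting Todd's and Francesc's advice together, the core of a numarray extension stub might look roughly like the sketch below. The function name sum_float32 is made up, and the header/constant spellings follow the high-level API already used in this thread (NA_InputArray, tFloat32, NUM_C_ARRAY); treat it as an illustration rather than documented code.

#include "Python.h"
#include "libnumarray.h"   /* numarray's high-level C API; the module init
                              function is expected to call import_libnumarray() */

static PyObject *sum_float32(PyObject *self, PyObject *args)
{
    PyObject *obj;
    PyArrayObject *arr;
    float *data;
    double total = 0.0;
    long i, n = 1;

    if (!PyArg_ParseTuple(args, "O", &obj))
        return NULL;

    /* Well-behaved (possibly temporary) Float32 C array. */
    arr = NA_InputArray(obj, tFloat32, NUM_C_ARRAY);
    if (arr == NULL)
        return NULL;

    /* The crucial step: start from buffer + byteoffset, not ->data alone. */
    data = (float *) NA_OFFSETDATA(arr);

    for (i = 0; i < arr->nd; i++)
        n *= arr->dimensions[i];
    for (i = 0; i < n; i++)
        total += data[i];

    Py_DECREF(arr);        /* matches the Py_XDECREF in the freearg typemap above */
    return PyFloat_FromDouble(total);
}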
From: Todd M. <jm...@st...> - 2003-03-08 11:55:43
Sebastian Haase wrote: >Hi, >I am just looking into numarray.fromfuntion. >I have e.g. this: > >def func(x,y,z): > return 1.2*(x+y+z) > >a = na.fromfunction(func, (nz,ny,nx)) >b = a.astype(na.Float32) > >Is there a way to tell fromfunction to _directly_ generate a Float32 typed >array. > I don't think so. Numeric does not have this capability either, although it appears easy enough to add it. >As I see it only "standard" Python type are possible now. >Or: Is there a scalar conversion function like numarray.Float32( 3.141 ) ? > You can create rank-0 numarrays which are essentially typed scalars; these are not, however, widely used. >>> import numarray >>> a = numarray.array(3.141, type=numarray.Float32) > >Thanks, >Sebastian Haase > > > > >------------------------------------------------------- >This SF.net email is sponsored by: Etnus, makers of TotalView, The debugger >for complex code. Debugging C/C++ programs can leave you feeling lost and >disoriented. TotalView can help you find your way. Available on major UNIX >and Linux platforms. Try it free. www.etnus.com >_______________________________________________ >Numpy-discussion mailing list >Num...@li... >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > |
From: Fernando P. <fp...@co...> - 2003-03-08 02:00:45
Sebastian Haase wrote:
> Thanks, SO MUCH ...
> I almost went crazy yesterday - and was really desperate by the time I wrote
> that email.
> Somehow I never needed that offset field until now. So: when I do the
> NA_InputArray call I get a "proper" C-array
> just that it does NOT necessarily start at NAimg->data
> but rather at
> NAimg->data + NAimg->byteoffset

Well, in case you (or others) find it useful, I'm including here a little library I wrote for accessing general Numeric 2-d arrays (contiguous or not) in an easy manner.

Here's a snippet of a simple (included) example of a function to print an integer array:

static PyObject *idisp(PyObject *self, PyObject *args)
{
  PyArrayObject *array;
  int **arr;  // for the data area
  int i,j,cs;

  if (!PyArg_ParseTuple(args, "O!",&PyArray_Type,&array ) )
    return NULL;

  arr = imatrix_data(array,&cs);
  for (i=0;i<array->dimensions[0];++i) {
    for (j=0;j<cs*array->dimensions[1];j+=cs)
      printf("%5d ",arr[i][j]);
    printf("\n");
  }
  free(arr);

  Py_INCREF(Py_None);
  return Py_None;
}

You get the **arr pointer and you can then manipulate it as a[i][j] conveniently. The supplied example file may be enough for many to write their Numpy C extensions without much trouble.

> Again: thanks so much.
> BTW: Is there general interest in my SWIG typemaps. (SWIG is maybe the
> easiest way to wrap C/C++ functions (and classes)
> into Python (and/or Perl, Java, Ruby,...) ? I think especially for
> numerical stuff, that "link" is of interest.

I'd love to see them. So far I've either used high-level stuff like weave.inline() or just written the extensions by hand (as in the code I'm supplying here). I'd like to see this, especially if there is an easy way to handle contiguity issues with it.

Best,

f.
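The imatrix_data helper itself is not included in the archive. For readers who want the same row-pointer convenience, one way such a helper can be written against Numeric's strides is sketched below; this is a reconstruction of the idea, not Fernando's actual code, and it assumes the column stride is an exact multiple of sizeof(int) (true for views taken from int arrays).

/* Sketch: build a table of row pointers into a 2-d Numeric int array
   (contiguous or not); *cs receives the column step in units of int.
   The caller free()s the returned table, exactly as idisp() above does.
   Assumes arrayobject.h is already included in the same source file. */
#include <stdlib.h>

static int **imatrix_data(PyArrayObject *array, int *cs)
{
    int i, nrows = array->dimensions[0];
    int **rows = (int **) malloc(nrows * sizeof(int *));

    if (rows == NULL)
        return NULL;
    for (i = 0; i < nrows; i++)
        rows[i] = (int *) (array->data + i * array->strides[0]);
    *cs = (int) (array->strides[1] / sizeof(int));   /* column stride, in ints */
    return rows;
}

This is the "handle the skipping by hand" alternative mentioned earlier in the thread: no copy is made, but the inner loop has to step by cs instead of 1.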
From: Sebastian H. <ha...@ms...> - 2003-03-07 20:15:49
Hi,
I am just looking into numarray.fromfunction. I have e.g. this:

def func(x,y,z):
    return 1.2*(x+y+z)

a = na.fromfunction(func, (nz,ny,nx))
b = a.astype(na.Float32)

Is there a way to tell fromfunction to _directly_ generate a Float32 typed array? As I see it, only "standard" Python types are possible now. Or: is there a scalar conversion function like numarray.Float32( 3.141 )?

Thanks,
Sebastian Haase
From: Sebastian H. <ha...@ms...> - 2003-03-07 19:26:04
Thanks, SO MUCH ...
I almost went crazy yesterday - and was really desperate by the time I wrote that email. Somehow I never needed that offset field until now. So: when I do the NA_InputArray call I get a "proper" C-array, just that it does NOT necessarily start at NAimg->data but rather at NAimg->data + NAimg->byteoffset.

Again: thanks so much.

BTW: Is there general interest in my SWIG typemaps? (SWIG is maybe the easiest way to wrap C/C++ functions (and classes) into Python (and/or Perl, Java, Ruby, ...).) I think especially for numerical stuff, that "link" is of interest.

Sebastian

----- Original Message -----
From: "Francesc Alted" <fa...@op...>
To: "Sebastian Haase" <ha...@ms...>; <Num...@li...>
Sent: Thursday, March 06, 2003 9:40 PM
Subject: Re: [Numpy-discussion] Please help - pointer to slice

A Divendres 07 Març 2003 01:42, Sebastian Haase va escriure:
> if I try to debug this by doing:
> for z in range(nz):
>     print repr(arr[z]._data)
>
> that also tells me that python / numarray thinks all slices are located at
> the same address, that is:
> every slice looks just like the full array.

You can access the slices by taking into account the byte offset attribute. Look at the next example:

In [20]: a=numarray.arange(100, shape=(10,10))

In [21]: a._data
Out[21]: <memory at 083e58e8 with size:400 held by object at 083e58d0>

In [22]: a[1]._data
Out[22]: <memory at 083e58e8 with size:400 held by object at 083e58d0>

as you already know, both memory buffers point to the same address. I guess this is implemented in that way so as to not copy data unnecessarily.

now, look at:

In [23]: a._byteoffset
Out[23]: 0

In [24]: a[1]._byteoffset
Out[24]: 40

So, you can know where the data actually starts by looking at the _byteoffset property. In fact, you should always look at it in order to get proper results!

In C, you can access this information by looking at the byteoffset field in the PyArrayObject struct (look at chapter 10 in the User's Manual).

Hope that helps,

--
Francesc Alted
From: Francesc A. <fa...@op...> - 2003-03-07 05:40:29
A Divendres 07 Març 2003 01:42, Sebastian Haase va escriure:
> if I try to debug this by doing:
> for z in range(nz):
>     print repr(arr[z]._data)
>
> that also tells me that python / numarray thinks all slices are located at
> the same address, that is:
> every slice looks just like the full array.

You can access the slices by taking into account the byte offset attribute. Look at the next example:

In [20]: a=numarray.arange(100, shape=(10,10))

In [21]: a._data
Out[21]: <memory at 083e58e8 with size:400 held by object at 083e58d0>

In [22]: a[1]._data
Out[22]: <memory at 083e58e8 with size:400 held by object at 083e58d0>

as you already know, both memory buffers point to the same address. I guess this is implemented in that way so as to not copy data unnecessarily.

now, look at:

In [23]: a._byteoffset
Out[23]: 0

In [24]: a[1]._byteoffset
Out[24]: 40

So, you can know where the data actually starts by looking at the _byteoffset property. In fact, you should always look at it in order to get proper results!

In C, you can access this information by looking at the byteoffset field in the PyArrayObject struct (look at chapter 10 in the User's Manual).

Hope that helps,

--
Francesc Alted
From: Sebastian H. <ha...@ms...> - 2003-03-07 00:42:40
Hi,
I am using SWIG to wrap some C++ functions operating on numarray arrays. Everything works quite well -- up until yesterday... I have some 3d data and was thinking that if the actual operation I want to do is done "slice by slice", I could just loop over Z in Python and have my C function defined like this:

void DoSomething(float * array2d, int nx, int ny)

The Python loop looks like this:

arr = numarray.array(shape=(100,100,100), type=numarray.Float32)
for z in range(nz):
    DoSomething( arr[z] )

SOMEHOW the pointer that gets transferred to the C++ side is always the same as for the full (3d) array. If I try to debug this by doing:

for z in range(nz):
    print repr(arr[z]._data)

that also tells me that python / numarray thinks all slices are located at the same address, that is: every slice looks just like the full array.

Please help, I don't know what to do...
Sebastian Haase
From: Paul D. <pa...@pf...> - 2003-03-06 21:32:43
I tried to put the kinds module at cvsroot/numpy/Extras/kinds. I didn't succeed. We now have a cvsroot/numpy/kinds (and an Extras from which I have already removed the contents of kinds after it got installed without the 'kinds' part.) CVS just never has mapped to my brain. I don't get something about it. If another developer knows how to fix this, great. Otherwise that is where it is going to sit. |
From: Paul D. <pa...@pf...> - 2003-03-06 17:37:33
Numeric-23.0.tar.gz is available for download at
http://prdownloads.sourceforge.net/numpy/Numeric-23.0.tar.gz?download

The Windows zip and exe are ready to upload but I am having trouble with ftp. I will try later today if another developer doesn't beat me to it.

A special thanks to all those who contributed bug fixes, especially Travis Oliphant, and all of you bug reporters out there.

This is my last release as Head Nummie. The developers will now choose a new victim, er, I mean leader, as set out in our charter. I will still be an active developer and user. I want to thank all of you for your help and civility over the years I have done this job. And thank you to my supervisors over the years at Lawrence Livermore National Laboratory for encouraging me to do it.

-- Paul Dubois

Here comes the news:

Version 23 March 2003

Important notice: Two packages have been removed from the optional ones: PropertiedClasses, kinds.
MA has been rewritten to use standard property and will not work for ancient Pythons (pre 2.1, I think). Use the MA / Propertied Classes from Numeric 22 if you can't use this one.
The kinds package (subject of PEP-0242) will be released as a separate package shortly. PEP-0242 was withdrawn because this facility did not seem to be worth putting in the standard library, but kinds is correct as is.

[ 695200 ] Richard Everson (R.M...@ex...) has donated a dotblas package that gets Numeric to use optimized BLAS libraries for dot, innerproduct, and vdot --- a conjugate vector dot product he introduced. setup.py must be altered by the user in a manner similar to the alterations to use optimized BLAS for LinearAlgebra.

[675777] new-style classes as objects in a sequence were not being detected correctly by array_objecttype. Corrected array_objecttype to handle them. (Oliphant)

[contribution] Fernando Perez has donated a revised version of the tutorial file view.py that seems to be less likely to hang the interpreter.

[68392923] dimensions+scalar -> crash (jneb). Found divergent value of MAX_DIMS in ufuncobject.c; the value was 20 there and 40 in two other places. But 40 is ludicrous, we would never have that much memory available. Changed them all to 30.

[ unreported ] Changed PY_VERSION_HEX check for version 2.2 to 0x0202000 as it should have been so that true_division numeric ops can be supported.

[ 614808 ] Inconsistent use of tabs and spaces. Fixed as suggested by Jimmy Retzlaff:
LinearAlgebra.py
Matrix.py
RNG/__init__.py
RNG/Statistics.py

[ 621032 ] needless work in multiarraymodule.c. Fixes suggested by Greg Smith applied. Also recoded OBJECT_DotProduct to eliminate a warning error.

[ 630584 ] generalized_inverse of complex array. Fix suggested by Greg Smith applied.

[ 652061 ] PyArray_As2D doesn't check pointer. Fix suggested by Andrea Riciputi applied.

[ 655512 ] inverse_real_fft incorrect many sizes. Fix given by mbriest applied.

[unreported] a.real increased reference count of a and raised error when a is not complex. Fixed to apparent intended behavior of returning an array with the same data. (Oliphant)

Patch for 64-bit machines applied. Appears to work ok on 32 bit but don't have the machine to test the patch. (Dubois)

[ 627771 ] matrixmultiply broken for non-contig (fixed, test case added) (Greg Smith)

[unreported] Fixed ArrayPrinter when NaN's show up in Float32 precision.

[ 545336 ] Bug in RandomArray.randint. Changed the function ranf to type double (Chuck Harris). Harris is probably right that all floats should be double in this module, but it may be this has performance or storage consequences that would bite somebody. I support RNG, not this one. -- Dubois
From: Jochen <jo...@jo...> - 2003-03-06 16:35:29
On Thu, 06 Mar 2003 14:24:58 +0900 Michiel Jan Laurens de Hoon wrote:
Michiel> You probably all know Travis Oliphant's SpecialFuncs
Michiel> (previously called cephes) module, which contains a large
Michiel> number of special functions.

The special-funcs of GSL are wrapped in pygsl.

Greetings, Jochen
--
Einigkeit und Recht und Freiheit              http://www.Jochen-Kuepper.de
Liberté, Égalité, Fraternité                  GnuPG key: CC1B0B4D
Sex, drugs and rock-n-roll
From: Paul F D. <pa...@pf...> - 2003-03-06 14:14:36
FWIW, I am about to create an 'Extras' area in the Numeric cvs repository where non-core, distutils-enabled addons can be put by the numpy developers. These would be things that install as their own packages, not a part of Numeric or Numarray.

> -----Original Message-----
> From: num...@li... [mailto:num...@li...] On Behalf Of Michiel Jan Laurens de Hoon
> Sent: Wednesday, March 05, 2003 9:25 PM
> To: num...@li...
> Subject: [Numpy-discussion] Integrating cephes with numpy
>
> Hi everybody,
>
> You probably all know Travis Oliphant's SpecialFuncs (previously called
> cephes) module, which contains a large number of special functions.
> Currently it is not easy to install this module, since SpecialFuncs
> became a part of SciPy, which is great in itself but cannot be installed
> easily as other parts of SciPy depend on a host of other packages. As
> Scipy will get extended in the future, this problem will only get worse.
> Recently I have been asked several times for such a module for special
> functions, but I don't know where I can refer people to without qualms,
> especially for newbies.
>
> Since the 'cephes' part of SpecialFuncs is basically an extension of
> umathmodule.c in numpy, it seems that numpy would be the natural place
> for cephes. So I would suggest to integrate cephes with numpy, either by
> adding cephes' special functions to umathmodule.c or as a separate
> module (similar to the LinearAlgebra or RandomArray parts of numpy). In
> the process, we can solve some installation problems in cephes which
> seem to be recurring frequently (see the numpy mailing list for some
> examples).
>
> For the moment, I slapped together a version of the cephes module that
> can be installed more easily; however, I would think it is better to do
> this the right way and to avoid multiple variations of basically the
> same package floating around in cyberspace.
>
> Any opinions on this? If this seems like a good idea, I'd be happy to do
> some additional coding if needed to set this up, though Travis Oliphant
> has basically done everything already so I wouldn't think much further
> coding is needed.
>
> --Michiel de Hoon, University of Tokyo.
>
> --
> Michiel de Hoon, Assistant Professor
> University of Tokyo, Institute of Medical Science
> Human Genome Center
> 4-6-1 Shirokane-dai, Minato-ku
> Tokyo 108-8639
> Japan
> http://bonsai.ims.u-tokyo.ac.jp/~mdehoon
>
> -------------------------------------------------------
> This SF.net email is sponsored by: Etnus, makers of TotalView, The debugger
> for complex code. Debugging C/C++ programs can leave you feeling lost and
> disoriented. TotalView can help you find your way. Available on major UNIX
> and Linux platforms. Try it free. www.etnus.com
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
From: Pearu P. <pe...@ce...> - 2003-03-06 10:20:38
On Thu, 6 Mar 2003, Michiel Jan Laurens de Hoon wrote: > You probably all know Travis Oliphant's SpecialFuncs (previously called > cephes) module, which contains a large number of special functions. > Currently it is not easy to install this module, since SpecialFuncs > became a part of SciPy, which is great in itself but cannot be installed > easily as other parts of SciPy depend on a host of other packages. As > Scipy will get extended in the future, this problem will only get worse. Scipy subpackage 'special' can be installed standalone from scipy and all its dependencies that are irrelevant for the special package (just cd to scipy/special and do 'python setup_special.py install'). So, there should be no need for looking for a new/another home for cephes wrappers, especially because it would require double-maintaining basically of the same code. Pearu |
From: Michiel J. L. de H. <md...@im...> - 2003-03-06 05:20:57
Hi everybody, You probably all know Travis Oliphant's SpecialFuncs (previously called cephes) module, which contains a large number of special functions. Currently it is not easy to install this module, since SpecialFuncs became a part of SciPy, which is great in itself but cannot be installed easily as other parts of SciPy depend on a host of other packages. As Scipy will get extended in the future, this problem will only get worse. Recently I have been asked several times for such a module for special functions, but I don't know where I can refer people to without qualms, especially for newbies. Since the 'cephes' part of SpecialFuncs is basically an extension of umathmodule.c in numpy, it seems that numpy would be the natural place for cephes. So I would suggest to integrate cephes with numpy, either by adding cephes' special functions to umathmodule.c or as a separate module (similar to the LinearAlgebra or RandomArray parts of numpy). In the process, we can solve some installation problems in cephes which seem to be recurring frequently (see the numpy mailing list for some examples). For the moment, I slapped together a version of the cephes module that can be installed more easily; however, I would think it is better to do this the right way and to avoid multiple variations of basically the same package floating around in cyberspace. Any opinions on this? If this seems like a good idea, I'd be happy to do some additional coding if needed to set this up, though Travis Oliphant has basically done everything already so I wouldn't think much further coding is needed. --Michiel de Hoon, University of Tokyo. -- Michiel de Hoon, Assistant Professor University of Tokyo, Institute of Medical Science Human Genome Center 4-6-1 Shirokane-dai, Minato-ku Tokyo 108-8639 Japan http://bonsai.ims.u-tokyo.ac.jp/~mdehoon |
From: Fernando P. <fp...@co...> - 2003-02-28 22:37:05
Hi all, > Subject: [Numpy-discussion] Last call for v. 23 > > I am going to make a release of Numeric, 23.0. Fellow developers who are inspired > to fix a bug are urged to do so immediately. > > This will be a bug fix release. in the scipy mailing list there were some discussions about view.py as included in NumTut. It seems that many folks (including myself) have had problems with it, and they seem to be threading-related. The symptom is that once view is imported, the interactive interpreter essentially locks up, and typing becomes nearly impossible. I know next to nothing about threading, but in an attempt to fix the problem I stripped view.py bare of everything I didn't understand, until it worked :) Basically I removed all PIL and threading-related code, and left only the bare Tk code in place. Naive as this approach was, it seems to have worked. Some folks reported success, and David Ascher (the original author of view.py) suggested I submit this to the Numpy team as an update to the tutorial. There's a good chance the current view is just broken and nobody has bothered to use it in a long time. I'm attaching the new view here as a file, but if there is a different protocol I should follow, please let me know (patch, etc). As I said, this was the most simple-minded thing I could do to make it work. So if you are interested in accepting this, it might be wise to have a look at it first. On the upside, pretty much all I did was to _remove_ code, not to add anything. So the analysis should be easy (the new code is far simpler and shorter than the original). I've tested it personally under python 2.2.1 (the stock Redhat 8.0 install). Best, Fernando Perez. |