From: Sebastian H. <ha...@ms...> - 2002-12-03 23:08:12
|
> On Tue, Dec 03, 2002 at 02:34:06PM -0800, Sebastian Haase wrote:
> > Hi all,
> > I downloaded numarray 0.4 about 5 minutes after I got the announce but
> > my naive 'python2.2 ./setup.py build' gives this
> >
> > haase@baboon:~/numarray-0.4: python2.2 ./setup.py build
> > running build
> > running build_py
> > not copying Lib/ndarray.py (output up-to-date)
> [...]
> > not copying Lib/memmap.py (output up-to-date)
> > running build_ext
> > building '_conv' extension
> > error: file 'Src/_convmodule.c' does not exist
> >
> > What am I missing? I'm running Linux (debian woody) on i386.
>
> Looks like you have to run 'python2.2 ./setup.py install' instead. Looking at
> setup.py, something special is done when the target is 'install'.
>
> [I think this is a bad idea, as I like to build stuff as my user, and
> install as root. This requires me to build it as root.]

Thanks for the hint - I'll try that.
But I would consider that a bug, right?

Sebastian
|
From: <co...@ar...> - 2002-12-03 22:42:52
|
On Tue, Dec 03, 2002 at 02:34:06PM -0800, Sebastian Haase wrote:
> Hi all,
> I downloaded numarray 0.4 about 5 minutes after I got the announce but
> my naive 'python2.2 ./setup.py build' gives this
>
> haase@baboon:~/numarray-0.4: python2.2 ./setup.py build
> running build
> running build_py
> not copying Lib/ndarray.py (output up-to-date)
[...]
> not copying Lib/memmap.py (output up-to-date)
> running build_ext
> building '_conv' extension
> error: file 'Src/_convmodule.c' does not exist
>
> What am I missing? I'm running Linux (debian woody) on i386.

Looks like you have to run 'python2.2 ./setup.py install' instead. Looking at
setup.py, something special is done when the target is 'install'.

[I think this is a bad idea, as I like to build stuff as my user, and
install as root. This requires me to build it as root.]

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke          http://arbutus.physics.mcmaster.ca/cookedm/
|co...@ph...
|
From: <co...@ar...> - 2002-12-03 22:39:32
|
On Tue, Dec 03, 2002 at 11:03:20AM +0100, Konrad Hinsen wrote:
> co...@ar... (David M. Cooke) writes:
>
> > The idea is that a PyArrayObject has a member 'base', which is DECREF'd
> > when the array is deallocated. The idea is for when arrays are slices of
>
> Indeed, but this is an undocumented implementation feature. Use at your
> own risk.

Nope, documented implementation feature. From the C API documentation:

    PyObject * base
        Used internally in arrays that are created as slices of other
        arrays. Since the new array shares its data area with the old
        one, the original array's reference count is incremented. When
        the subarray is garbage collected, the base array's reference
        count is decremented.

Looking through Numeric's code, nothing requires base to be an array
object. Besides, Numeric isn't going to change substantially before
Numarray replaces it (although I don't know the analogue of this trick in
Numarray). The usefulness of this trick (IMHO) outweighs the small chance
of the interface changing.

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke          http://arbutus.physics.mcmaster.ca/cookedm/
|co...@ph...
|
From: Sebastian H. <ha...@ms...> - 2002-12-03 22:34:20
|
Hi all,
I downloaded numarray 0.4 about 5 minutes after I got the announce but
my naive 'python2.2 ./setup.py build' gives this

haase@baboon:~/numarray-0.4: python2.2 ./setup.py build
running build
running build_py
not copying Lib/ndarray.py (output up-to-date)
not copying Lib/numtest.py (output up-to-date)
not copying Lib/codegenerator.py (output up-to-date)
not copying Lib/ufunc.py (output up-to-date)
not copying Lib/testdata.py (output up-to-date)
not copying Lib/numarray.py (output up-to-date)
not copying Lib/ieeespecial.py (output up-to-date)
not copying Lib/recarray.py (output up-to-date)
not copying Lib/template.py (output up-to-date)
not copying Lib/arrayprint.py (output up-to-date)
not copying Lib/typeconv.py (output up-to-date)
not copying Lib/numinclude.py (output up-to-date)
not copying Lib/numarrayext.py (output up-to-date)
not copying Lib/_ufunc.py (output up-to-date)
not copying Lib/chararray.py (output up-to-date)
not copying Lib/numtestall.py (output up-to-date)
not copying Lib/numerictypes.py (output up-to-date)
not copying Lib/memmap.py (output up-to-date)
running build_ext
building '_conv' extension
error: file 'Src/_convmodule.c' does not exist

What am I missing? I'm running Linux (debian woody) on i386.

Thanks,
Sebastian
|
From: Konrad H. <hi...@cn...> - 2002-12-03 10:06:38
|
co...@ar... (David M. Cooke) writes:

> The idea is that a PyArrayObject has a member 'base', which is DECREF'd
> when the array is deallocated. The idea is for when arrays are slices of

Indeed, but this is an undocumented implementation feature. Use at your
own risk.

Konrad.
--
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------
|
From: <co...@ar...> - 2002-12-03 01:46:20
|
On Mon, Dec 02, 2002 at 11:59:22AM +0100, Joachim Saul wrote:
> Hi there,
>
> given the following (simplified) scenario:
>
> typedef struct {
>     PyObject_HEAD
>     float bar[10];
> } FooObject;
>
> I want to be able to set and retrieve the elements of bar from
> Python using e.g.
>
> >>> foo = Foo()
> >>> foo.bar[4] = 1.23
> >>> x = foo.bar[4]
>
> I have chosen an approach using 'PyArray_FromDimsAndData'. In fact
> I programmed it after studying 'arrayobject.c', namely the part in
> 'array_getattr' where the 'flat' attribute is accessed.
>
> foo_getattr(FooObject *self, char *name)
> {
>     if (!strcmp(name, "bar")) {
>         int n = 10;
>         PyObject *bar = PyArray_FromDimsAndData(1, &n,
>             PyArray_FLOAT, (char *)self->bar);
>         if (bar == NULL) return NULL;
>         return bar;
>     }
>     return Py_FindMethod(foo_methods, (PyObject *)self, name);
> }
>
> And it works! :-)
>
> BUT how about refcounts here? 'PyArray_FromDimsAndData' will
> return an array which only contains a reference to foo's original
> bar array; that's why I can both set and access the latter the way
> described. And no memory leak is created.
>
> But what if I create a reference to foo.bar, and later delete foo,
> i.e.
>
> >>> b = foo.bar
> >>> del foo
>
> Now the data pointer in b refers to freed data! In the mentioned
> 'array_getattr' this appears to be solved by increasing the
> refcount; in the above example this would mean 'Py_INCREF(self)'
> before returning 'bar'. Then if deleting 'foo', its memory is not
> freed because the refcount is not zero. But AFAICS in this case
> (as well as in the Numeric code) the INCREF prevents the object
> from EVER being freed. Who would DECREF the object?

Something similar came up a few weeks ago: how do you pass data owned by
something else as a Numeric array, while keeping track of when to delete
the data? It's so simple I almost kicked myself when I saw it, from the
code at http://pobox.com/~kragen/sw/arrayfrombuffer/ which allows you to
use memory-mapped files as arrays.

The idea is that a PyArrayObject has a member 'base', which is DECREF'd
when the array is deallocated. The idea is for when arrays are slices of
other arrays; deallocating the slice will decrease the reference count of
the original. However, we can subvert this by using our own base, that
knows how to deallocate our data. In your case, the DECREF'ing is all you
need, so you could use

foo_getattr(FooObject *self, char *name)
{
    if (!strcmp(name, "bar")) {
        int n = 10;
        PyObject *bar = PyArray_FromDimsAndData(1, &n,
            PyArray_FLOAT, (char *)self->bar);
        if (bar == NULL) return NULL;
        /***** new stuff here *******/
        Py_INCREF(self);
        ((PyArrayObject *)bar)->base = (PyObject *)self;
        /***********/
        return bar;
    }
    return Py_FindMethod(foo_methods, (PyObject *)self, name);
}

So, now with

>>> b = foo.bar
>>> del foo

b will still reference the original foo object. Now, do

>>> del b

and foo's data will be DECREF'd, freeing it if b had the only reference.

This can be extended: say you've allocated memory from some memory pool
that has to be freed with, say, 'my_pool_free'. You can create a Numeric
array from this without copying by

PyArrayObject *A = (PyArrayObject *)PyArray_FromDimsAndData(1, dims,
    PyArray_DOUBLE, (char *)data);
A->base = PyCObject_FromVoidPtr(data, my_pool_free);

Then A will be a PyArrayObject that, when the last reference is deleted,
will DECREF A->base, which will free the memory.

Easy, huh?

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke          http://arbutus.physics.mcmaster.ca/cookedm/
|co...@ph...
|
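The ownership idea described above survives in Numeric's successors: a NumPy array built over foreign memory keeps the owner alive through its `base` reference. A small illustration in modern NumPy (an analogy for the mechanism, not numarray or Numeric code):

```python
import numpy as np

# An array created over someone else's buffer holds a reference to the
# owner, so the data outlives any particular name bound to it. This is
# the same mechanism as Numeric's 'base' member discussed above.
def make_view():
    buf = bytearray(8 * 10)            # memory owned by the bytearray
    return np.frombuffer(buf, dtype=np.float64)

a = make_view()                        # no name for the buffer remains
print(a.shape)                         # (10,)
print(a.base is not None)              # True: the owner is kept alive
print(a.sum())                         # 0.0: the data is still valid
```

When the last reference to `a` goes away, the buffer's reference count drops and the memory can be freed, which is exactly the lifetime behavior the `base` trick provides in C.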
From: Perry G. <pe...@st...> - 2002-12-03 01:20:59
|
You are right on both counts. These are indeed coercion bugs and we will
fix them. The first should end up being an Int32 actually. Thanks for
pointing this out.

Perry Greenfield

> I was looking at the coercion of arrays in the new numarray. The
> following is not intuitive to me:
>
> >>> a = array(1, Int8)
> >>> b = array(1, UInt16)
> >>> c = a + b
> >>> c.type()
> UInt16
>
> Should the result not be a signed integer type?
>
> Also the following result is strange to me:
>
> >>> a = array(1, Float64)
> >>> b = array(1, Complex32)
> >>> c = a + b
> >>> c.type()
> Complex32
>
> Complex64 would seem to be the more appropriate type here for the result.
>
> Could somebody comment if these are bugs or not?
>
> Cheers, Peter
>
> --
> Dr. Peter J. Verveer
> Cell Biology and Cell Biophysics Programme
> European Molecular Biology Laboratory
> Meyerhofstrasse 1
> D-69117 Heidelberg
> Germany
> Tel. : +49 6221 387245
> Fax : +49 6221 387242
> Email: Pet...@em...
|
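For comparison, NumPy (numarray's eventual successor) implements the promotion Perry describes; note that numarray's Complex32/Complex64 names correspond to NumPy's complex64/complex128. A quick NumPy illustration (not actual numarray output):

```python
import numpy as np

# Signed + unsigned promotes to the smallest signed type that can hold
# both operands' ranges: int8 + uint16 -> int32, as Perry indicates.
a = np.array([1], dtype=np.int8)
b = np.array([1], dtype=np.uint16)
print((a + b).dtype)        # int32

# A 64-bit float mixed with a 64-bit complex promotes to a 128-bit
# complex (numarray's Complex64), so no float precision is lost.
c = np.array([1.0], dtype=np.float64)
d = np.array([1.0], dtype=np.complex64)
print((c + d).dtype)        # complex128
```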
From: Travis O. <oli...@ee...> - 2002-12-02 23:32:02
|
On 2 Dec 2002, Konrad Hinsen wrote:
> Joachim Saul <li...@js...> writes:
>
> > But what if I create a reference to foo.bar, and later delete foo,
> > i.e.
> >
> > >>> b = foo.bar
> > >>> del foo
> >
> > Now the data pointer in b refers to freed data! In the mentioned

Forgive me for jumping in. But why should the data be deleted when you do
this? Shouldn't del foo merely decrease the reference count of foo.bar?
Because there are still outstanding references to foo.bar (i.e. b), the
data itself shouldn't be freed.

Perhaps I don't understand the question well enough.

-Travis
|
From: <ve...@em...> - 2002-12-02 23:27:06
|
Hi,

I was looking at the coercion of arrays in the new numarray. The
following is not intuitive to me:

>>> a = array(1, Int8)
>>> b = array(1, UInt16)
>>> c = a + b
>>> c.type()
UInt16

Should the result not be a signed integer type?

Also the following result is strange to me:

>>> a = array(1, Float64)
>>> b = array(1, Complex32)
>>> c = a + b
>>> c.type()
Complex32

Complex64 would seem to be the more appropriate type here for the result.

Could somebody comment if these are bugs or not?

Cheers, Peter

--
Dr. Peter J. Verveer
Cell Biology and Cell Biophysics Programme
European Molecular Biology Laboratory
Meyerhofstrasse 1
D-69117 Heidelberg
Germany
Tel. : +49 6221 387245
Fax : +49 6221 387242
Email: Pet...@em...
|
From: Konrad H. <hi...@cn...> - 2002-12-02 11:40:27
|
Joachim Saul <li...@js...> writes:

> But what if I create a reference to foo.bar, and later delete foo,
> i.e.
>
> >>> b = foo.bar
> >>> del foo
>
> Now the data pointer in b refers to freed data! In the mentioned

And that is why the condition for using PyArray_FromDimsAndData is that
the data space passed is not freed before the end of the process.

> survive the actual 'foo' object. However, I want to program it the
> 'clean' way; any hints on how to do it properly would therefore be
> highly welcome.

I see only one clean solution: implement your own array-like object that
represents foo.bar. This object would keep a reference to foo and release
it when it is itself destroyed.

Konrad.
--
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------
|
From: Joachim S. <li...@js...> - 2002-12-02 11:00:22
|
Hi there,

given the following (simplified) scenario:

typedef struct {
    PyObject_HEAD
    float bar[10];
} FooObject;

I want to be able to set and retrieve the elements of bar from Python
using e.g.

>>> foo = Foo()
>>> foo.bar[4] = 1.23
>>> x = foo.bar[4]

I have chosen an approach using 'PyArray_FromDimsAndData'. In fact I
programmed it after studying 'arrayobject.c', namely the part in
'array_getattr' where the 'flat' attribute is accessed.

foo_getattr(FooObject *self, char *name)
{
    if (!strcmp(name, "bar")) {
        int n = 10;
        PyObject *bar = PyArray_FromDimsAndData(1, &n,
            PyArray_FLOAT, (char *)self->bar);
        if (bar == NULL) return NULL;
        return bar;
    }
    return Py_FindMethod(foo_methods, (PyObject *)self, name);
}

And it works! :-)

BUT how about refcounts here? 'PyArray_FromDimsAndData' will return an
array which only contains a reference to foo's original bar array; that's
why I can both set and access the latter the way described. And no memory
leak is created.

But what if I create a reference to foo.bar, and later delete foo, i.e.

>>> b = foo.bar
>>> del foo

Now the data pointer in b refers to freed data! In the mentioned
'array_getattr' this appears to be solved by increasing the refcount; in
the above example this would mean 'Py_INCREF(self)' before returning
'bar'. Then if deleting 'foo', its memory is not freed because the
refcount is not zero. But AFAICS in this case (as well as in the Numeric
code) the INCREF prevents the object from EVER being freed. Who would
DECREF the object? Or am I misunderstanding something here?

In my actual code I can perfectly live with the above solution because I
only need to access foo's data using 'foo.bar[i]' and probably never need
to create a reference to 'bar' which might survive the actual 'foo'
object. However, I want to program it the 'clean' way; any hints on how
to do it properly would therefore be highly welcome.

Cheers,
Joachim
|
From: Rob <ro...@py...> - 2002-11-28 12:44:02
|
Hi all,

I haven't mentioned it for almost a year now, since it never happened :),
but I really am this time supposed to have my site (see sig) in IEEE
Antennas and Propagation Society magazine. The Dec 02 issue. They goofed
up and gave their apologies as it was originally supposed to be in this
year's June issue.

Now you guys are giving up Numpy and starting Numarray :) I hope the last
Numpy distribution will still be available on the main site, so people
can run my programs. Later, I can go in and convert them to Numarray.

Rob.
--
-----------------------------
The Numeric Python EM Project
www.pythonemproject.com
|
From: Perry G. <pe...@st...> - 2002-11-27 20:47:25
|
> That's good news!!
> Since I just signed up to this list I have some (more general questions)
> 1) How active is this list ? Now I get like 1-2 emails a day (but 3 months
> ago or so I got like 20 ...)

Yes, it has been slower lately. (There are sometimes related discussions
on the scipy mailing lists, which appear to have more traffic lately.)

> 2) Are most people here talking about Numeric (NumPy) or numarray ?
> Who is actively writing/implementing numarray and is there a specific
> mailing list for that ?

No specific mailing list for numarray. I'd guess that currently the
largest user community for numarray is the astronomical one, primarily
because we are distributing software that requires it to the community.
Probably not many developers for it yet, but we are starting to look at
making scipy compatible with numarray, and settle some remaining
interface issues (but I'm going to wait until after Thanksgiving before
starting that).

> 3) I was just starting with some C code to generate numarray lists (a week
> ago) and now the main data struct (NDarray) just disappeared... is that
> good news !?
> (Maybe the question should be: What is a "first class" python
> object ? )

Good news. Probably not if you wrote code using it ;-), but we changed it
so that numarray would be more compatible with existing Numeric C
extensions, and that was the price for doing so. I think it is good news
for those that have existing C extensions for whenever they plan to
migrate to numarray. Todd should answer detailed questions about the C
interface, but he cleverly decided to go on vacation after releasing 0.4
until December 9.

> 4) In NDarray there was a special pointer (void *imag) for complex data.
> (without much documentation actually..)
> How are complex arrays handled in numarray 0.4? Examples would be nice !! ;-)

Writing up documentation for C-API issues is a big need and a high
priority.
|
From: Sebastian H. <ha...@ms...> - 2002-11-27 20:36:46
|
That's good news!!

Since I just signed up to this list I have some (more general) questions:

1) How active is this list? Now I get like 1-2 emails a day (but 3 months
ago or so I got like 20 ...)

2) Are most people here talking about Numeric (NumPy) or numarray? Who is
actively writing/implementing numarray, and is there a specific mailing
list for that?

3) I was just starting with some C code to generate numarray lists (a
week ago) and now the main data struct (NDarray) just disappeared... is
that good news!? (Maybe the question should be: What is a "first class"
python object?)

4) In NDarray there was a special pointer (void *imag) for complex data.
(without much documentation actually..) How are complex arrays handled in
numarray 0.4? Examples would be nice !! ;-)

Keep up all the good work.

Thanks,
Sebastian

----- Original Message -----
From: "Todd Miller" <jm...@st...>
Newsgroups: comp.lang.python.announce,comp.lang.python
To: <num...@li...>
Sent: Tuesday, November 26, 2002 10:44 AM
Subject: [Numpy-discussion] ANN: numarray-0.4 released

> Numarray 0.4
> ---------------------------------
>
> Numarray is an array processing package designed to efficiently
> manipulate large multi-dimensional arrays. Numarray is modelled after
> Numeric and features c-code generated from python template scripts,
> the capacity to operate directly on arrays in files, and improved type
> promotions.
>
> Version 0.4 is a relatively large update with these features:
>
> 1. C-basetypes have been added to NDArray and NumArray to accelerate
> simple indexing and attribute access *and* improve Numeric compatibility.
>
> 2. List <-> NumArray transformations have been sped up.
>
> 3. There's an ieeespecial module which should make it easier to find
> and manipulate NANs and INFs.
>
> 4. There's now a boxcar function in the Convolve package for doing
> fast 2D smoothing. Jochen Kupper also contributed a lineshape module
> which is also part of the Convolve package.
>
> 5. Bug fixes for every reported bug between now and July-02.
>
> 6. Since I still haven't fixed the add-on Packages packaging, I built
> windows binaries for all 4 packages so you don't have to build them
> from source yourself.
>
> But... basetypes (and reorganization) aren't free:
>
> 1. The "native" aspects of the numarray C-API have changed in
> backwards incompatible ways. In particular, the NDInfo struct is now
> completely gone, since it was completely redundant to the new
> basetypes which are modelled after Numeric's PyArrayObject. If you
> actually *have* a numarray extension that this breaks, and it bugs
> you, send it to me and I'll fix it for you. If there's enough
> response, I'll automate the process of updating extension wrappers.
> But I expect not.
>
> 2. I expect to hear about bugs which can cause numarray/Python to dump
> core. Of course, I have no clue where they are. So... there may be
> rapid re-releases to compensate.
>
> 3. Old pickles are not directly transferrable to numarray-0.4, but may
> instead require some copy_reg fuddling because basetypes change the
> pickle format. If you have old pickles you need to migrate, send me
> e-mail and I'll help you figure out how to do it.
>
> 4. Make *really* sure you delete any old numarray modules you have
> laying around. These can screw up numarray-0.4 royally.
>
> 5. Note for astronomers: PyFITS requires an update to work with
> numarray-0.4. This should be available shortly, if it is not already.
> My point is that you may be unable to use both numarray-0.4 and PyFITS
> today.
>
> WHERE
> -----------
>
> Numarray-0.4 windows executable installers, source code, and manual is
> here:
>
> http://sourceforge.net/project/showfiles.php?group_id=1369
>
> Numarray is hosted by Source Forge in the same project which hosts Numeric:
>
> http://sourceforge.net/projects/numpy/
>
> The web page for Numarray information is at:
>
> http://stsdas.stsci.edu/numarray/index.html
>
> Trackers for Numarray Bugs, Feature Requests, Support, and Patches are at
> the Source Forge project for NumPy at:
>
> http://sourceforge.net/tracker/?group_id=1369
>
> REQUIREMENTS
> --------------------------
>
> numarray-0.4 requires Python 2.2.0 or greater.
>
> AUTHORS, LICENSE
> ------------------------------
>
> Numarray was written by Perry Greenfield, Rick White, Todd Miller, JC
> Hsu, Paul Barrett, Phil Hodge at the Space Telescope Science
> Institute. Thanks go to Jochen Kupper of the University of North
> Carolina for his work on Numarray and for porting the Numarray manual
> to TeX format.
>
> Numarray is made available under a BSD-style License. See
> LICENSE.txt in the source distribution for details.
>
> --
> Todd Miller jm...@st...
|
From: Chris B. <Chr...@no...> - 2002-11-27 19:18:20
|
Marc Poinot wrote:
> I'm not sure this is a problem,

It's not.

> and when I display it I have some extra numbers
> at the end of the "correct" number.

What you are seeing is the best decimal representation of the binary
number that is stored in that double. While the extra bits in binary of
the double over the float should be zero, that does not mean that the
extra decimal digits will be zero also. In this case, you are trying to
store the value of 1.0 / 10.0. That value can not be represented exactly
in binary. The value:

0.10000000149

is as close as you can get with a C float. So you are getting the right
answer (subject to the limitations of floating point representation and
arithmetic), as demonstrated by your example:

> printf("%.12g\n",(float) dgv);
> 0.10000000149
> produces (this is a "CORRECT" behavior for printf, we're printing
> too many digits)

It depends what you mean by too many. The above example shows the best
decimal value you can get with 12 digits from your float value, which is
the same as what Python has in its double.

By the way, your four printf examples also demonstrate that you get
exactly the same results by casting a float to a double within C as when
you do it passing to Python (which you should expect: a Python Float is a
C double, after all).

By default, in a print statement, Python displays all the digits that are
required to reproduce the number. If you don't want to see all those
digits, do what you did in C:

>>> d = 0.10000000149
>>> print d
0.10000000149
>>> print "%g" % d
0.1
>>> print "%.12g" % d
0.10000000149

By the way, see:

http://www.python.org/doc/current/tut/node14.html

for more explanation.

-Chris
--
Christopher Barker, Ph.D.
Oceanographer
NOAA/OR&R/HAZMAT         (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception
Chr...@no...
|
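Chris's explanation is easy to reproduce today; here np.float32 stands in for the C float in Marc's extension (an illustration assuming NumPy is available; plain Python floats are already doubles):

```python
import numpy as np

f = np.float32(0.1)    # nearest single-precision value to 1/10
d = float(f)           # widen to double, like (double)dgv in the C code
print(repr(d))         # 0.10000000149011612: the exact stored value
print("%.12g" % d)     # 0.10000000149: matches the C printf output
print("%g" % d)        # 0.1: ask for fewer digits, get fewer digits
```

The widening itself adds no information; the extra decimal digits were always implied by the single-precision bit pattern.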
From: Marc P. <Mar...@on...> - 2002-11-27 13:15:01
|
I'm not sure this is a problem, but I'm looking for a solution for this
and I wonder if one could give a piece of advice:

I have a C extension using doubles and floats. I return a float casted to
double to Python from my extension, and when I display it I have some
extra numbers at the end of the "correct" number.

In the extension, dgv is a float (in this example dgv=0.1).

PyTuple_SET_ITEM(tp0, i, PyFloat_FromDouble((double)dgv));

I print it in Python:

print tuple[0]

Which produces:

0.10000000149

I get too many digits, because the print should not try to get more than
the 4-byte float. It looks like floatobject.c is setting a number
precision for printing, which is forced to 12 (#define PREC_STR 12). This
works if you use a "double", but not a "double" casted from a "float".
This problem occurs on both SGI and DEC.

With stdio:

printf("%.g\n", (float) dgv);
printf("%.g\n", (double)dgv);
printf("%.12g\n",(float) dgv);
printf("%.12g\n",(double)dgv);

produces (this is a "CORRECT" behavior for printf; we're printing too
many digits):

0.1
0.1
0.10000000149
0.10000000149

Any idea? How can I tell Python to forget the precision, or set it
globally?

Marcvs [alias Yes, I could only compute with integers, but...]
|
From: Todd M. <jm...@st...> - 2002-11-26 18:44:28
|
Numarray 0.4
---------------------------------

Numarray is an array processing package designed to efficiently
manipulate large multi-dimensional arrays. Numarray is modelled after
Numeric and features c-code generated from python template scripts, the
capacity to operate directly on arrays in files, and improved type
promotions.

Version 0.4 is a relatively large update with these features:

1. C-basetypes have been added to NDArray and NumArray to accelerate
simple indexing and attribute access *and* improve Numeric compatibility.

2. List <-> NumArray transformations have been sped up.

3. There's an ieeespecial module which should make it easier to find and
manipulate NANs and INFs.

4. There's now a boxcar function in the Convolve package for doing fast
2D smoothing. Jochen Kupper also contributed a lineshape module which is
also part of the Convolve package.

5. Bug fixes for every reported bug between now and July-02.

6. Since I still haven't fixed the add-on Packages packaging, I built
windows binaries for all 4 packages so you don't have to build them from
source yourself.

But... basetypes (and reorganization) aren't free:

1. The "native" aspects of the numarray C-API have changed in backwards
incompatible ways. In particular, the NDInfo struct is now completely
gone, since it was completely redundant to the new basetypes which are
modelled after Numeric's PyArrayObject. If you actually *have* a numarray
extension that this breaks, and it bugs you, send it to me and I'll fix
it for you. If there's enough response, I'll automate the process of
updating extension wrappers. But I expect not.

2. I expect to hear about bugs which can cause numarray/Python to dump
core. Of course, I have no clue where they are. So... there may be rapid
re-releases to compensate.

3. Old pickles are not directly transferrable to numarray-0.4, but may
instead require some copy_reg fuddling because basetypes change the
pickle format. If you have old pickles you need to migrate, send me
e-mail and I'll help you figure out how to do it.

4. Make *really* sure you delete any old numarray modules you have laying
around. These can screw up numarray-0.4 royally.

5. Note for astronomers: PyFITS requires an update to work with
numarray-0.4. This should be available shortly, if it is not already. My
point is that you may be unable to use both numarray-0.4 and PyFITS
today.

WHERE
-----------

Numarray-0.4 windows executable installers, source code, and manual are
here:

http://sourceforge.net/project/showfiles.php?group_id=1369

Numarray is hosted by Source Forge in the same project which hosts
Numeric:

http://sourceforge.net/projects/numpy/

The web page for Numarray information is at:

http://stsdas.stsci.edu/numarray/index.html

Trackers for Numarray Bugs, Feature Requests, Support, and Patches are at
the Source Forge project for NumPy at:

http://sourceforge.net/tracker/?group_id=1369

REQUIREMENTS
--------------------------

numarray-0.4 requires Python 2.2.0 or greater.

AUTHORS, LICENSE
------------------------------

Numarray was written by Perry Greenfield, Rick White, Todd Miller, JC
Hsu, Paul Barrett, Phil Hodge at the Space Telescope Science Institute.
Thanks go to Jochen Kupper of the University of North Carolina for his
work on Numarray and for porting the Numarray manual to TeX format.

Numarray is made available under a BSD-style License. See LICENSE.txt in
the source distribution for details.

--
Todd Miller jm...@st...
|
From: Konrad H. <hi...@cn...> - 2002-11-24 18:03:29
|
<ve...@em...> writes:

> I noticed that in Numeric and in numarray it is possible to create
> arrays with axes of zero length. For instance: zeros([1, 0]). There
> seems not be much that can be done with them. What is the reason for
> their existence?

They often result as special cases from some operations. Think of them as
the array equivalents of empty lists. Creating zero-size arrays
explicitly can be useful when suitable starting values for iterations are
needed.

> My real question is: When writing an extension in C, how to deal with such
> arrays? Should I treat them as empty arrays, that do not have any data?

Exactly.

Konrad.
|
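Konrad's "empty list" analogy can be made concrete; a NumPy sketch of the same behaviour (the spirit is the same for Numeric/numarray):

```python
import numpy as np

z = np.zeros((1, 0))        # second axis has length zero: no elements
print(z.size)               # 0

# Zero-size arrays are identity elements for concatenation, which makes
# them natural starting values for accumulation loops.
rows = np.concatenate([z, np.ones((1, 3))], axis=1)
print(rows.shape)           # (1, 3)
```

In C extension code, the analogous rule is simply to treat a zero-size array as valid but containing no data: loop bounds of zero, nothing to read or write.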
From: Konrad H. <hi...@cn...> - 2002-11-24 18:00:17
|
José Fonseca <j_r...@ya...> writes:

> files. This would make it much easier to install, but I'm personally
> against statically linked libraries, as they inevitably lead to code
> duplication in memory. For example, while I don't manage to build a

Me too.

Konrad.
|
From: F. <j_r...@ya...> - 2002-11-24 16:52:41
|
On Fri, Nov 22, 2002 at 01:33:08PM +0000, José Fonseca wrote:
> The source is available at
> http://jrfonseca.dyndns.org/work/phd/python/modules/arpack/dist/arpack-1.0.tar.bz2 .
> The ARPACK library is not included and must be obtained and compiled
> separately, and setup.py must be modified to reflect your system
> BLAS/LAPACK library.

Two header files, arpack.h and arpackmodule.h, were missing from the
above package. This has been corrected now. Thanks to Greg Whittier for
pointing that out. Slightly more detailed install instructions were also
added.

I'm also considering whether I should bundle ARPACK in the source package
and then use scipy_distutils to compile the Fortran source files. This
would make it much easier to install, but I'm personally against
statically linked libraries, as they inevitably lead to code duplication
in memory. For example, since I haven't managed to build a shared
version, the ATLAS and LAPACK libraries appear triplicated in my programs
- from Numeric, ARPACK and UMFPACK. This leads to huge code bloat and a
misuse of the processor code cache, resulting in lower performance.

What is the opinion of the other subscribers regarding this?

José Fonseca
|
From: <ve...@em...> - 2002-11-24 16:40:13
|
Hi all, I noticed that in Numeric and in numarray it is possible to create arrays with axes of zero length. For instance: zeros([1, 0]). There seems not to be much that can be done with them. What is the reason for their existence? My real question is: When writing an extension in C, how should I deal with such arrays? Should I treat them as empty arrays, that do not have any data? Cheers, Peter -- Dr. Peter J. Verveer Cell Biology and Cell Biophysics Programme EMBL Meyerhofstrasse 1 D-69117 Heidelberg Germany Tel. : +49 6221 387245 Fax : +49 6221 387242 Email: Pet...@em... |
From: F. <j_r...@ya...> - 2002-11-22 13:33:11
|
I've made a Numeric Python binding of the ARPACK library. ARPACK is a collection of Fortran77 subroutines designed to solve large-scale eigenvalue problems, available at http://www.caam.rice.edu/software/ARPACK/ . These bindings have the following features: - Correspondence for all ARPACK calls - In-place operation for all matrices - Easy access to ARPACK debugging control variables - Online help for all calls, with the correct Python/C 0-based indexing (automatically converted from the sources with the aid of a sed script) - Includes ports of [unfortunately not all of] the original examples. These bindings weren't generated with any automatic binding generation tool. Even though I initially tried both PyFortran and f2py, both proved inappropriate for handling the specifics of the ARPACK API. ARPACK uses a 'reverse communication interface' where, basically, the API successively returns to the caller, which must take some update steps and re-call the API with most arguments untouched. The intelligent (and silent) argument conversions made by the above tools made it very difficult to implement and debug even the simplest example. Also, for large-scale problems we wouldn't want any kind of array conversion/transposing happening behind the scenes, as that would completely kill performance. The source is available at http://jrfonseca.dyndns.org/work/phd/python/modules/arpack/dist/arpack-1.0.tar.bz2 . The ARPACK library is not included and must be obtained and compiled separately, and setup.py must be modified to reflect your system BLAS/LAPACK library. The bindings are API-centric. Nevertheless, a Python wrapper around these calls can easily be made, in which all kinds of type conversions and safety checks can be performed. I have done one myself for sparse matrix eigenvalue determination, using UMFPACK for sparse matrix factorization (for which a simple binding - just for double precision matrices - is available at http://jrfonseca.dyndns.org/work/phd/python/modules/umfpack/ ). 
I hope you find this interesting. Regards, José Fonseca |
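The 'reverse communication interface' mentioned above can be illustrated with a toy sketch (in modern Python/NumPy; `PowerRCI` is a hypothetical name for illustration, not part of the binding). The solver never receives the matrix: it repeatedly hands the caller a vector and asks for the operator to be applied to it, which is the control flow ARPACK's dsaupd/dnaupd routines impose on their callers.

```python
import numpy as np

class PowerRCI:
    """Toy power iteration in ARPACK's reverse-communication style:
    the solver never sees the matrix; the caller repeatedly applies
    the operator to `x` and feeds the result back via step()."""

    def __init__(self, n, tol=1e-10, maxiter=1000):
        self.x = np.ones(n) / np.sqrt(n)   # current (unit-norm) iterate
        self.tol = tol
        self.maxiter = maxiter
        self.eigval = None
        self.done = False

    def step(self, y):
        """The caller has computed y = A @ x; take one update step."""
        lam = self.x @ y                   # Rayleigh quotient estimate
        if self.eigval is not None and abs(lam - self.eigval) < self.tol:
            self.done = True               # converged
        self.eigval = lam
        self.x = y / np.linalg.norm(y)
        self.maxiter -= 1
        if self.maxiter <= 0:
            self.done = True               # give up

# Usage: the caller owns the operator, the solver owns the iteration.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
solver = PowerRCI(2)
while not solver.done:
    solver.step(A @ solver.x)             # "take an update step, re-call"
print(solver.eigval)                      # close to 2.0, the dominant eigenvalue
```

This inversion of control is exactly why silent argument copying by binding generators is fatal here: the caller must see, and write back into, the very same buffers across calls.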
From: John H. <jdh...@ac...> - 2002-11-20 22:36:10
|
If I import pygsl.rng before importing Numeric, I get an abort mother:~> python Python 2.2.2 (#1, Oct 15 2002, 08:14:58) [GCC 3.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import pygsl.rng >>> import Numeric python: Modules/gcmodule.c:366: delete_garbage: Assertion `((((PyGC_Head *)( op)-1))->gc.gc_refs >= 0)' failed. Abort If I import them in the other order, I have no problems. Numeric 22.0 pygsl version = "0.1a" gsl-1.2 Any ideas? Thanks, John Hunter |
From: Francesc A. <fa...@op...> - 2002-11-19 19:32:23
|
Announcing PyTables 0.2
-----------------------

What's new
----------
- Numerical Python arrays supported!
- Much improved documentation
- Programming API almost stable
- Improved navigability across the object tree
- Added more unit tests (there are almost 50)
- Dropped HDF5_HL dependency (a tailored version is now included in the sources)
- License changed from LGPL to BSD

What is
-------
The goal of PyTables is to enable the end user to easily manipulate scientific data tables and Numerical Python objects (new in 0.2!) in a persistent hierarchical structure. The foundation of the underlying hierarchical data organization is the excellent HDF5 library (http://hdf.ncsa.uiuc.edu/HDF5). Right now, PyTables provides limited support for the HDF5 functions, but I hope to add the more interesting ones (for PyTables' needs) in the near future. Nonetheless, this package is not intended to serve as a complete wrapper for the entire HDF5 API.

A table is defined as a collection of records whose values are stored in fixed-length fields. All records have the same structure, and all values in each field have the same data type. The terms "fixed-length" and strict "data types" may seem a strange requirement for an interpreted language like Python, but they serve a useful purpose if the goal is to save very large quantities of data (such as is generated by many scientific applications) in an efficient manner that reduces demand on CPU time and I/O.

In order to emulate records (C structs in HDF5) in Python, PyTables implements a special metaclass that detects errors in field assignments as well as range overflows. PyTables also provides a powerful interface to process table data. Quite a bit of effort has been invested in making browsing the hierarchical data structure a pleasant experience. PyTables implements just three (orthogonal) easy-to-use methods for browsing.

What is HDF5? 
-------------
For those who know nothing about HDF5: it is a general-purpose library and file format for storing scientific data, made at NCSA. HDF5 can store two primary objects: datasets and groups. A dataset is essentially a multidimensional array of data elements, and a group is a structure for organizing objects in an HDF5 file. Using these two basic constructs, one can create and store almost any kind of scientific data structure, such as images, arrays of vectors, and structured and unstructured grids. You can also mix and match them in HDF5 files according to your needs.

How fast is it?
---------------
Despite being an alpha version with lots of room for improvement (it's still CPU bound!), PyTables can read and write tables quite fast. If you want some (very preliminary) figures, just to know the orders of magnitude: on an AMD Athlon@900 it can currently read from 40000 up to 60000 records/s and write from 5000 up to 13000 records/s. Raw data speed in read mode ranges from 1 MB/s up to 2 MB/s, and it drops to the 200 KB/s - 600 KB/s range for writes. Go to http://pytables.sf.net/bench.html for a somewhat more detailed description of this small (and synthetic) benchmark. Anyway, this is only the beginning (premature optimization is the root of all evil, you know ;-).

Platforms
---------
I'm using Linux as the main development platform, but PyTables should be easy to compile/install on other UNIX machines. Thanks to Scott Prater, this package has passed all the tests on an UltraSparc platform with Solaris 7. It also compiles and passes all the tests on an SGI Origin2000 with MIPS R12000 processors running IRIX 6.5. If you are using Windows and you get the library to work, please let me know.

An example?
-----------
At the bottom of this message there is some code (less than 100 lines, with less than half of it being real code) that shows the basic capabilities of PyTables. 
Web site
--------
Go to the PyTables web site for more details: http://pytables.sf.net/

Final note
----------
This is the second alpha release, and probably the last alpha, so there is still time to suggest API additions/changes or the addition/change of any useful missing capability. Let me know of any bugs, suggestions, gripes, kudos, etc. you may have. -- Francesc Alted fa...@op...

*-*-*-**-*-*-**-*-*-**-*-*- Small code example *-*-*-**-*-*-**-*-*-**-*-*-*

"""Small but almost complete example showing the PyTables mode of use.

As a result of execution, a 'tutorial1.h5' file is created. You can look at it
with any generic HDF5 utility, like h5ls, h5dump or h5view.

"""

import sys
from Numeric import *
from tables import *

#'-**-**-**-**-**-**- user record definition -**-**-**-**-**-**-**-'

# Define a user record to characterize some kind of particles
class Particle(IsRecord):
    name      = '16s'  # 16-character String
    idnumber  = 'Q'    # unsigned long long (i.e. 64-bit integer)
    TDCcount  = 'B'    # unsigned byte
    ADCcount  = 'H'    # unsigned short integer
    grid_i    = 'i'    # integer
    grid_j    = 'i'    # integer
    pressure  = 'f'    # float (single-precision)
    energy    = 'd'    # double (double-precision)

print
print '-**-**-**-**-**-**- file creation -**-**-**-**-**-**-**-'

# The name of our HDF5 file
filename = "tutorial1.h5"
print "Creating file:", filename

# Open a file in "w"rite mode
h5file = openFile(filename, mode = "w", title = "Test file")

print
print '-**-**-**-**-**-**- group and table creation -**-**-**-**-**-**-**-'

# Create a new group under "/" (root)
group = h5file.createGroup("/", 'detector', 'Detector information')
print "Group '/detector' created"

# Create one table on it
table = h5file.createTable(group, 'readout', Particle(), "Readout example")
print "Table '/detector/readout' created"

# Get a shortcut to the record object in table
particle = table.record

# Fill the table with 10 particles
for i in xrange(10):
    # First, assign the values to the Particle record
    particle.name = 'Particle: %6d' % (i)
    particle.TDCcount = i % 256
    particle.ADCcount = (i * 256) % (1 << 16)
    particle.grid_i = i
    particle.grid_j = 10 - i
    particle.pressure = float(i*i)
    particle.energy = float(particle.pressure ** 4)
    particle.idnumber = i * (2 ** 34)  # This exceeds long integer range
    # Insert a new particle record
    table.appendAsRecord(particle)

# Flush the buffers for table
table.flush()

print
print '-**-**-**-**-**-**- table data reading & selection -**-**-**-**-**-'

# Read actual data from table. We are interested in collecting pressure values
# on entries where the TDCcount field is greater than 3 and pressure less than 50
pressure = [ x.pressure for x in table.readAsRecords()
             if x.TDCcount > 3 and x.pressure < 50 ]
print "Last record read:"
print x
print "Field pressure elements satisfying the cuts ==>", pressure

# Read also the names with the same cuts
names = [ x.name for x in table.readAsRecords()
          if x.TDCcount > 3 and x.pressure < 50 ]

print
print '-**-**-**-**-**-**- array object creation -**-**-**-**-**-**-**-'

print "Creating a new group called '/columns' to hold new arrays"
gcolumns = h5file.createGroup(h5file.root, "columns", "Pressure and Name")

print "Creating a Numeric array called 'pressure' under '/columns' group"
h5file.createArray(gcolumns, 'pressure', array(pressure),
                   "Pressure column selection")

print "Creating another Numeric array called 'name' under '/columns' group"
h5file.createArray('/columns', 'name', array(names),
                   "Name column selection")

# Close the file
h5file.close()
print "File '"+filename+"' created" |
From: Konrad H. <hi...@cn...> - 2002-11-19 13:28:51
|
> No. I want to set the memory zone of the array but once this zone is > set, I want numpy to manage it as if it was owner of the memory. That is the most frequent case for which there is no clean solution. There ought to be an array constructor that takes a pointer to a deallocation function which is called to free the data space. You can do myarray->flags |= OWN_DATA, then the data space will be freed using the standard free() function. But this is undocumented, and works only if the standard OS memory allocation calls were used to allocate the memory. Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hi...@cn... Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- |
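The undocumented pattern Konrad describes can be sketched as follows, using the old Numeric C API (`wrap_buffer` is a hypothetical helper name; `PyArray_FromDimsAndData` and the `OWN_DATA` flag are from Numeric's arrayobject.h). As he notes, this is safe only when the buffer was allocated with the standard malloc():

```c
#include "Python.h"
#include "arrayobject.h"

/* Sketch: wrap an existing malloc()'d buffer in a Numeric array and
 * let Numeric free it when the array object is deallocated. */
static PyObject *wrap_buffer(double *data, int n)
{
    int dims[1];
    PyArrayObject *a;

    dims[0] = n;
    a = (PyArrayObject *) PyArray_FromDimsAndData(1, dims, PyArray_DOUBLE,
                                                  (char *) data);
    if (a == NULL)
        return NULL;

    /* Undocumented: claim ownership, so free(data) happens in the
     * array's destructor.  Only valid if data came from malloc(). */
    a->flags |= OWN_DATA;
    return (PyObject *) a;
}
```

Memory allocated by other means (a Fortran allocator, mmap, a custom pool) cannot use this trick, which is exactly why a constructor taking a user-supplied deallocation function would be the clean solution.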