From: Travis O. <oli...@ee...> - 2006-10-30 23:02:46
David Huard wrote: > Ok, > I'll update numpy and give it another try tonight. > I just fixed some reference-count problems in f2py today. These were of the variety that there was a missing decref that would cause the reference count of certain often-used data-types to increase without bound and eventually wrap (to 0) in long-running processes using f2py. I suspect this is the fundamental problem in both cases. -Travis |
From: Matthew B. <mat...@gm...> - 2006-10-30 22:52:33
Hi, I notice that the value for: zeros((1,), dtype=object).dtype.hasobject is now 63, whereas previously it had been 1. Is this intended? Thanks, Matthew |
From: A. M. A. <per...@gm...> - 2006-10-30 22:49:42
On 30/10/06, Fernando Perez <fpe...@gm...> wrote: > On 10/30/06, Charles R Harris <cha...@gm...> wrote: > > > I suspect the real problem is that the refcount keeps going up. Even if it > > was unsigned it would eventually wrap to zero and with a bit of luck get > > garbage collected. So probably something isn't decrementing the refcount. > > Oops, my bad: I meant *unsigned long long*, so that the refcount is a > 64-bit object. By the time it wraps around, you'll have run out of > memory long ago. Having 32 bit ref counters can potentially mean you > run out of the counter before you run out of RAM on a system with > sufficient memory. Yes, this is a feature(?) of python as it currently stands (I checked 2.5) - reference counts are 32-bit signed integers, so if you have an object that has enough references, python will be exceedingly unhappy: http://mail.python.org/pipermail/python-dev/2002-September/028679.html It is of course possible that you actually have that many references to some object, but it seems to me you'd notice twenty-four gigabytes of pointers floating around... A. M. Archibald |
From: Fernando P. <fpe...@gm...> - 2006-10-30 22:41:32
On 10/30/06, Travis Oliphant <oli...@ee...> wrote: > Fernando Perez wrote: > > >On 10/30/06, David Huard <dav...@gm...> wrote: > > > > > >>Hi, > >>I have a script that crashes, but only if it runs over 9~10 hours, with the > >>following backtrace from gdb. The script uses PyMC, and repeatedly calls (> > >>1000000) likelihood functions written in fortran and wrapped with f2py. > >>Numpy: 1.0.dev3327 > >>Python: 2.4.3 > >> > >> > > > >This sounds awfully reminiscent of the bug I recently mentioned: > > > >http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3312099 > > > >We left a fresh run over the weekend, but my office mate is currently > >out of the office and his terminal is locked, so I don't know what the > >result is. I'll report shortly: we followed Travis' instructions and > >ran with a fresh SVN build which includes the extra warnings he added > >to the dealloc routines. You may want to try the same advice, perhaps > >with information from both of us the gurus may zero in on the problem, > >if indeed it is the same. > > > I talked about the reference counting issue. One problem is not > incrementing the reference count when it needs to be. The other problem > could occur if the reference-count was not decremented when it needed to > be and the reference count wrapped from MAX_LONG to 0. This could also > create the problem and would be expected for "long-running" processes. I just posted the log from that run in the other thread. I'm not sure if that helps you any though. I'm running the code again to see if we see your new warning fire, and will report back. Cheers, f |
From: Travis O. <oli...@ee...> - 2006-10-30 22:36:34
Fernando Perez wrote: >On 10/30/06, David Huard <dav...@gm...> wrote: > > >>Hi, >>I have a script that crashes, but only if it runs over 9~10 hours, with the >>following backtrace from gdb. The script uses PyMC, and repeatedly calls (> >>1000000) likelihood functions written in fortran and wrapped with f2py. >>Numpy: 1.0.dev3327 >>Python: 2.4.3 >> >> > >This sounds awfully reminiscent of the bug I recently mentioned: > >http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3312099 > >We left a fresh run over the weekend, but my office mate is currently >out of the office and his terminal is locked, so I don't know what the >result is. I'll report shortly: we followed Travis' instructions and >ran with a fresh SVN build which includes the extra warnings he added >to the dealloc routines. You may want to try the same advice, perhaps >with information from both of us the gurus may zero in on the problem, >if indeed it is the same. > I talked about the reference counting issue. One problem is not incrementing the reference count when it needs to be. The other problem could occur if the reference-count was not decremented when it needed to be and the reference count wrapped from MAX_LONG to 0. This could also create the problem and would be expected for "long-running" processes. -Travis |
From: Fernando P. <fpe...@gm...> - 2006-10-30 22:31:55
On 10/30/06, Travis Oliphant <oli...@ee...> wrote: > Fernando Perez wrote: > >This sounds awfully reminiscent of the bug I recently mentioned: > > > >http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3312099 > > > > > > It actually looks very much like it. I think the problem may be in f2py > or in one of the C-API calls where-in there is a reference-count problem > with the built-in data-type objects. > > NumPy won't try to free those anymore which will solve the immediate > problem, but there is still a reference-count problem somewhere. > > The reference to the data-type objects is consumed by constructors that > take PyArray_Descr * arguments. So, you often need to INCREF before > passing to those constructors. It looks like this INCREF is forgotten > in some extension module (perhaps in f2py or PyMC). It's possible it's > in NumPy itself, though I've re-checked the code lots of times looking > for that specific problem. As a data point, our code has almost no manual memory management in C, but lots and lots of f2py-generated wrappers, as well as a lot of weave.inline-generated code. We do have hand-written C extensions, but most of them operate on externally allocated arrays. The one little snippet where we manually manage memory is a copy of numpy's innerproduct() which I simplified and tuned for our purposes; it just does:

ret = (PyArrayObject *)PyArray_SimpleNew(nd, dimensions, ap1->descr->type_num);
if (ret == NULL) goto fail;

[ do computational loop to fill in ret array, no memory management here ]

return (PyObject *)ret;

fail:
Py_XDECREF(ret);
return NULL;

That's the full extent of our manual memory management, and I don't see any problem with it, but maybe there is: I copied this from numpy months ago and haven't really looked again. Cheers, f
From: Travis O. <oli...@ee...> - 2006-10-30 22:13:58
Fernando Perez wrote: >On 10/30/06, David Huard <dav...@gm...> wrote: > > >>Hi, >>I have a script that crashes, but only if it runs over 9~10 hours, with the >>following backtrace from gdb. The script uses PyMC, and repeatedly calls (> >>1000000) likelihood functions written in fortran and wrapped with f2py. >>Numpy: 1.0.dev3327 >>Python: 2.4.3 >> >> > >This sounds awfully reminiscent of the bug I recently mentioned: > >http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3312099 > > It actually looks very much like it. I think the problem may be in f2py or in one of the C-API calls where-in there is a reference-count problem with the built-in data-type objects. NumPy won't try to free those anymore which will solve the immediate problem, but there is still a reference-count problem somewhere. The reference to the data-type objects is consumed by constructors that take PyArray_Descr * arguments. So, you often need to INCREF before passing to those constructors. It looks like this INCREF is forgotten in some extension module (perhaps in f2py or PyMC). It's possible it's in NumPy itself, though I've re-checked the code lots of times looking for that specific problem. -Travis |
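The bug class Travis describes (a missing INCREF or DECREF on a shared singleton) is hard to spot in C, but its symptom is easy to observe from Python. The sketch below is purely illustrative, not numpy code: the helper name is made up, and a plain int stands in for a dtype singleton. It "forgets" to drop references the way a missing Py_DECREF would, and `sys.getrefcount` shows the counter climbing.

```python
import sys

def leak_references(obj, n):
    """Simulate a reference leak: store n extra references to obj."""
    return [obj] * n  # each list slot holds one new reference to obj

singleton = 1000  # stand-in for a shared object such as a dtype singleton
before = sys.getrefcount(singleton)
extra = leak_references(singleton, 10_000)  # keep the refs alive
after = sys.getrefcount(singleton)
print(after - before)  # 10000 -- the counter climbed by the number of leaked refs
```

In the real bug the extra references were never stored anywhere — the count was simply incremented and never decremented — but the visible effect on `sys.getrefcount` is the same: a counter that only goes up.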
From: Fernando P. <fpe...@gm...> - 2006-10-30 22:01:59
On 10/30/06, Charles R Harris <cha...@gm...> wrote: > I suspect the real problem is that the refcount keeps going up. Even if it > was unsigned it would eventually wrap to zero and with a bit of luck get > garbage collected. So probably something isn't decrementing the refcount. Oops, my bad: I meant *unsigned long long*, so that the refcount is a 64-bit object. By the time it wraps around, you'll have run out of memory long ago. Having 32 bit ref counters can potentially mean you run out of the counter before you run out of RAM on a system with sufficient memory. Cheers, f |
From: Charles R H. <cha...@gm...> - 2006-10-30 21:54:17
On 10/30/06, Fernando Perez <fpe...@gm...> wrote: > > On 10/23/06, Travis Oliphant <oli...@ie...> wrote: > > > I've placed them in SVN (r3384): > > > > arraydescr_dealloc needs to do something like. > > > > if (self->fields == Py_None) { > > print something > > incref(self) > > return; > > } > > Here is some more info. We left a long-running job over the weekend > with the prints you suggested. Oddly, something happened at the OS > level which killed our SSH connection to that machine, but the above > numpy dealloc() warning never printed (we logged this). > > What did happen is that the refcount you suggested we print: > > sys.getrefcount(numpy.dtype('float')) > > eventually seems to have wrapped around and gone negative. I'm > attaching the log file with those print statements, the key point is > that this happens eventually: > > PSVD Iteration 19 > Ref count 1989827662 > bar 444 > PSVD Iteration 0 > Ref count 2021353399 > PSVD Iteration 1 > Ref count 2143386207 > PSVD Iteration 2 > Ref count -2001245193 > PSVD Iteration 3 > Ref count -1915816437 > PSVD Iteration 4 > Ref count -1902698473 > > That refcount is for dtype('float') as indicated above. Is it not a > problem that this particular refcount goes negative? Eventually it > may continue increasing and hit a zero, point at which I imagine that > the bad dealloc will occur. > > Are refcounts stored in signed 32-bit ints? Why? I'd have naively > expected them to be stored in unsigned longs to avoid wraparound > problems, but maybe I'm completely missing the real problem here. I suspect the real problem is that the refcount keeps going up. Even if it was unsigned it would eventually wrap to zero and with a bit of luck get garbage collected. So probably something isn't decrementing the refcount. Chuck |
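The sign flip in the quoted log is exactly what a signed 32-bit counter does on overflow. A minimal pure-Python sketch, using `ctypes.c_int32` to emulate C int wraparound (the increment size below is arbitrary, chosen only to push the counter past 2**31 - 1):

```python
import ctypes

def as_int32(n):
    """Interpret n modulo 2**32 as a signed 32-bit C integer."""
    return ctypes.c_int32(n).value

count = 2_143_386_207        # last positive refcount seen in the log
count += 200_000_000         # keep incrementing, never decrementing
print(as_int32(count))       # negative: the counter wrapped past 2**31 - 1
```

Wrapping all the way back through zero — the point where the bogus dealloc would fire — takes roughly another 2**31 increments, which is consistent with Fernando's observation that the counter had gone negative but the dealloc warning had not yet printed.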
From: Lisandro D. <da...@gm...> - 2006-10-30 21:15:37
On 10/30/06, Jonathan Makem <jma...@qu...> wrote: > > Hi, > > I work with Abaqus 6.6.1 finite element software. To access the results > from a simulation, the Abaqus scripting interface is used in Python by > importing OdbAccess modules. These modules can only be accessed using the > version of Python that is installed with Abaqus. However, I cannot install > numpy for this version of Python. And I can't use Python 2.4 or 2.5 because > then I can't use the odbAcces modules. Is it possible to use numpy with the > Abaqus version of Python? If so, how? > Jonny. Could you specify which version of Python is used with ABAQUS? can you launch the ABAQUS-provided Python interpreter and do: >>> import sys >>> print sys.version Perhaps the 'odbAcces' module is a module provided by ABAQUS? Which platform are you using? Linux? Windows?
--
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594
From: Lisandro D. <da...@gm...> - 2006-10-30 21:08:28
On 10/30/06, Travis Oliphant <oli...@ie...> wrote: > > If anybody has a desire to see the array interface into Python, please > help by voicing an opinion on python-dev in the discussion about adding > data-type objects to Python. There are a few prominent people who > don't get why applications would need to share data-type information > about memory areas. I need help giving reasons why. > > -Travis Python-Dev is sometimes a hard place to get into. I remember posting some proposals or even bugs (considered bugs for me and others) and getting rather crude responses. Travis said "a few prominent people...". I consider Travis a *very* prominent person in the Python world. His **terrific** contribution to NumPy reveals a really smart, always-looking-ahead way of doing things. However, I've seen before strong and disparate opposition to his proposals in Python-Dev. Perhaps the reason for this is simple: few Python core developers are involved in scientific computing and do not have a clear idea of what is needed for it. I really believe that the NumPy/Scipy community should try to raise its voice on Python-Dev. Many NumPy/Scipy users/developers really want to run high-performance Python code. Python is being used in supercomputers, and some applications taking advantage of Python (SPaSM) have even won the Gordon Bell Performance Prize. A 25 Tflop/s application involving the Python programming language is really a good example of what can be achieved with Python and compiled code interaction. In short, I fully support Travis in his initiative to standardize access to low level binary data, and encourage others like me who really want this to post to Python-Dev. For my part, I will try to post my reasons in connection with my (small) experience developing MPI for Python.
Regards,
--
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594
From: Fernando P. <fpe...@gm...> - 2006-10-30 20:08:21
On 10/23/06, Travis Oliphant <oli...@ie...> wrote: > I've placed them in SVN (r3384): > > arraydescr_dealloc needs to do something like. > > if (self->fields == Py_None) { > print something > incref(self) > return; > } Here is some more info. We left a long-running job over the weekend with the prints you suggested. Oddly, something happened at the OS level which killed our SSH connection to that machine, but the above numpy dealloc() warning never printed (we logged this). What did happen is that the refcount you suggested we print: sys.getrefcount(numpy.dtype('float')) eventually seems to have wrapped around and gone negative. I'm attaching the log file with those print statements, the key point is that this happens eventually:

PSVD Iteration 19
Ref count 1989827662
bar 444
PSVD Iteration 0
Ref count 2021353399
PSVD Iteration 1
Ref count 2143386207
PSVD Iteration 2
Ref count -2001245193
PSVD Iteration 3
Ref count -1915816437
PSVD Iteration 4
Ref count -1902698473

That refcount is for dtype('float') as indicated above. Is it not a problem that this particular refcount goes negative? Eventually it may continue increasing and hit a zero, point at which I imagine that the bad dealloc will occur. Are refcounts stored in signed 32-bit ints? Why? I'd have naively expected them to be stored in unsigned longs to avoid wraparound problems, but maybe I'm completely missing the real problem here. We've started another run to see if we can get the actual crash to happen, will report. Cheers, f
From: David H. <dav...@gm...> - 2006-10-30 19:57:58
Ok, I'll update numpy and give it another try tonight. Regards, David 2006/10/30, Fernando Perez <fpe...@gm...>: > > On 10/30/06, David Huard <dav...@gm...> wrote: > > Hi, > > I have a script that crashes, but only if it runs over 9~10 hours, with > the > > following backtrace from gdb. The script uses PyMC, and repeatedly calls > (> > > 1000000) likelihood functions written in fortran and wrapped with f2py. > > Numpy: 1.0.dev3327 > > Python: 2.4.3 > > This sounds awfully reminiscent of the bug I recently mentioned: > > http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3312099 > > We left a fresh run over the weekend, but my office mate is currently > out of the office and his terminal is locked, so I don't know what the > result is. I'll report shortly: we followed Travis' instructions and > ran with a fresh SVN build which includes the extra warnings he added > to the dealloc routines. You may want to try the same advice, perhaps > with information from both of us the gurus may zero in on the problem, > if indeed it is the same. > > Note that I'm not positive it's the same problem, and our backtraces > aren't quite the same. But the rest of the scenario is similar: > low-level memory crash from glibc, very long run is needed to fire the > bug, potentially millions of calls to both numpy and to f2py-wrapped > in-house libraries. > > Cheers, > > f > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > |
From: Fernando P. <fpe...@gm...> - 2006-10-30 19:19:01
On 10/30/06, David Huard <dav...@gm...> wrote: > Hi, > I have a script that crashes, but only if it runs over 9~10 hours, with the > following backtrace from gdb. The script uses PyMC, and repeatedly calls (> > 1000000) likelihood functions written in fortran and wrapped with f2py. > Numpy: 1.0.dev3327 > Python: 2.4.3 This sounds awfully reminiscent of the bug I recently mentioned: http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3312099 We left a fresh run over the weekend, but my office mate is currently out of the office and his terminal is locked, so I don't know what the result is. I'll report shortly: we followed Travis' instructions and ran with a fresh SVN build which includes the extra warnings he added to the dealloc routines. You may want to try the same advice, perhaps with information from both of us the gurus may zero in on the problem, if indeed it is the same. Note that I'm not positive it's the same problem, and our backtraces aren't quite the same. But the rest of the scenario is similar: low-level memory crash from glibc, very long run is needed to fire the bug, potentially millions of calls to both numpy and to f2py-wrapped in-house libraries. Cheers, f |
From: Travis O. <oli...@ie...> - 2006-10-30 19:02:12
If anybody has a desire to see the array interface into Python, please help by voicing an opinion on python-dev in the discussion about adding data-type objects to Python. There are a few prominent people who don't get why applications would need to share data-type information about memory areas. I need help giving reasons why. -Travis |
From: Jonathan M. <jma...@qu...> - 2006-10-30 17:13:23
Hi, I work with Abaqus 6.6.1 finite element software. To access the results from a simulation, the Abaqus scripting interface is used in Python by importing OdbAccess modules. These modules can only be accessed using the version of Python that is installed with Abaqus. However, I cannot install numpy for this version of Python. And I can't use Python 2.4 or 2.5 because then I can't use the odbAcces modules. Is it possible to use numpy with the Abaqus version of Python? If so, how? Regards, Jonny. |
From: Francesc A. <fa...@ca...> - 2006-10-30 16:49:13
On Monday, 30 October 2006 at 14:58 +0100, Joris De Ridder wrote: > IMO, record arrays seem powerful, but also intimidating at a first glance. Agreed, especially if you start to nest datatypes. > I think many didactical examples will help getting them into common use. > I made some effort to get updated examples in the Numpy Example List: > > www.scipy.org/Numpy_Example_List#dtype > www.scipy.org/Numpy_Example_List#array > > Could people who already have some experience with it, have a look at > them and give me their opinion? Looks good. I've taken the liberty of adding some examples of nested types and recarrays. -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth
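For readers trying the nested-dtype examples Joris and Francesc are discussing, a minimal sketch (assumes a reasonably recent numpy is installed; the field names here are just illustrations):

```python
import numpy as np

# A record type with a 2x2 int16 sub-array field and a plain int32 field,
# matching the dtype([('f1', int16, (2,2)), ('f3', int32)]) example above.
dt = np.dtype([('f1', np.int16, (2, 2)), ('f3', np.int32)])

a = np.zeros(2, dtype=dt)
a[0]['f1'] = [[1, 2], [3, 4]]   # assign the nested sub-array field
a[0]['f3'] = 7

print(a.dtype.names)            # ('f1', 'f3')
print(a[0]['f1'].shape)         # (2, 2)
```

Each record carries the full 2x2 sub-array inline, which is what makes nested dtypes feel intimidating at first: indexing by field name peels off one level of structure at a time.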
From: Tom D. <tom...@al...> - 2006-10-30 16:32:42
Yes I definitely agree about the latter! On 10/29/06, Charles R Harris <cha...@gm...> wrote: > > > > On 10/29/06, Tom Denniston <tom...@al...> wrote: > > > > Oh. My mistake. I thought I had an array of 2 objects which were ints. > > I actually had an array of one list of 2 ints. It works properly if I > > construct the array properly. > > > > I think there is actually a bug here: > > In [61]: sort(array([3,2], dtype=object)) > Out[61]: array([2, 3], dtype=object) > > In [62]: argmax(array([2,3], dtype=object)) > Out[62]: 0 > > See, the sort works fine. I suspect argmax is using the wrong comparison > function. I was just pointing out that sometimes it is hard to know what is > going on with objects. > > > Chuck
From: David H. <dav...@gm...> - 2006-10-30 14:26:47
Hi, I have a script that crashes, but only if it runs over 9~10 hours, with the following backtrace from gdb. The script uses PyMC, and repeatedly calls (> 1000000) likelihood functions written in fortran and wrapped with f2py. Numpy: 1.0.dev3327 Python: 2.4.3 Does this backtrace give enough info to track the problem or do the gurus need more ? Thanks, David

*** glibc detected *** free(): invalid pointer: 0x00002aaaac1257e0 ***
Program received signal SIGABRT, Aborted.
[Switching to Thread 46912504440528 (LWP 25269)]
0x00002aaaab09011d in raise () from /lib/libc.so.6
(gdb) backtrace
#0 0x00002aaaab09011d in raise () from /lib/libc.so.6
#1 0x00002aaaab09184e in abort () from /lib/libc.so.6
#2 0x00002aaaab0c4e41 in __fsetlocking () from /lib/libc.so.6
#3 0x00002aaaab0ca90e in malloc_usable_size () from /lib/libc.so.6
#4 0x00002aaaab0cac56 in free () from /lib/libc.so.6
#5 0x00002aaaabff7770 in PyArray_FromArray (arr=0x1569500, newtype=0x2aaaac1257e0, flags=0) at arrayobject.c:7804
#6 0x00002aaaabfece56 in PyArray_FromAny (op=0x1569500, newtype=0x0, min_depth=0, max_depth=0, flags=0, context=0x0) at arrayobject.c:8257
#7 0x00002aaaabff40b1 in PyArray_MultiIterNew (n=2) at arrayobject.c:10253
#8 0x00002aaaabff44bc in _broadcast_cast (out=0x62b5, in=0x6, castfunc=0x2aaaabfbf5a0 <DOUBLE_to_FLOAT>, iswap=-1, oswap=6) at arrayobject.c:7445
#9 0x00002aaaabffe301 in PyArray_CastToType (mp=0x156dca0, at=<value optimized out>, fortran_=0) at arrayobject.c:7344
#10 0x00002aaaabffe785 in PyArray_FromScalar (scalar=0x1573b30, outcode=0x2aaaac1257e0) at scalartypes.inc.src:219
#11 0x00002aaaabfecff5 in PyArray_FromAny (op=0x1573b30, newtype=0x2aaaac1257e0, min_depth=0, max_depth=<value optimized out>, flags=0, context=0x0) at arrayobject.c:8260
#12 0x00002aaab6038b7b in array_from_pyobj (type_num=11, dims=0x7fffff8f6200, rank=1, intent=<value optimized out>, obj=0x1573b30) at build/src.linux-x86_64-2.4/fortranobject.c:653
#13 0x00002aaab6034aa9 in f2py_rout_flib_beta (capi_self=<value optimized out>, capi_args=<value optimized out>, capi_keywds=<value optimized out>, f2py_func=0x2aaab603e830 <beta_>) at build/src.linux-x86_64-2.4/PyMC/flibmodule.c:2601
#14 0x0000000000414490 in PyObject_Call ()
#15 0x0000000000475de5 in PyEval_EvalFrame ()
#16 0x00000000004bdf69 in PyDescr_NewGetSet ()
#17 0x00000000004143eb in PyIter_Next ()
#18 0x000000000046ba53 in _PyUnicodeUCS4_IsNumeric ()
#19 0x0000000000477ab1 in PyEval_EvalFrame ()
#20 0x00000000004783ff in PyEval_EvalCodeEx ()
#21 0x000000000047699b in PyEval_EvalFrame ()
#22 0x0000000000476ab6 in PyEval_EvalFrame ()
#23 0x0000000000476ab6 in PyEval_EvalFrame ()
#24 0x00000000004783ff in PyEval_EvalCodeEx ()
#25 0x000000000047699b in PyEval_EvalFrame ()
#26 0x00000000004783ff in PyEval_EvalCodeEx ()
#27 0x000000000047699b in PyEval_EvalFrame ()
#28 0x00000000004783ff in PyEval_EvalCodeEx ()
#29 0x000000000047699b in PyEval_EvalFrame ()
#30 0x00000000004783ff in PyEval_EvalCodeEx ()
#31 0x0000000000478512 in PyEval_EvalCode ()
#32 0x000000000049c222 in PyRun_FileExFlags ()
#33 0x000000000049c4ae in PyRun_SimpleFileExFlags ()
#34 0x0000000000410a80 in Py_Main ()
#35 0x00002aaaab07d49b in __libc_start_main () from /lib/libc.so.6
#36 0x000000000040ffba in _start ()
From: Joris De R. <jo...@st...> - 2006-10-30 13:59:07
On Friday 27 October 2006 19:06, Francesc Altet wrote: [FA]: for example: [FA]: [FA]: In [67]: dtype([('f1', int16)]) [FA]: Out[67]: dtype([('f1', '<i2')]) [FA]: [FA]: In [68]: dtype([('f1', int16, (2,2))]) [FA]: Out[68]: dtype([('f1', '<i2', (2, 2))]) [FA]: [FA]: In [69]: dtype([('f1', int16, (2,2)), ('f3', int32)]) [FA]: Out[69]: dtype([('f1', '<i2', (2, 2)), ('f3', '<i4')]) Thanks, Francesc, for the examples! IMO, record arrays seem powerful, but also intimidating at a first glance. I think many didactical examples will help getting them into common use. I made some effort to get updated examples in the Numpy Example List: www.scipy.org/Numpy_Example_List#dtype www.scipy.org/Numpy_Example_List#array Could people who already have some experience with it, have a look at them and give me their opinion? Cheers, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm |
From: Joris De R. <jo...@st...> - 2006-10-30 12:53:47
On Friday 27 October 2006 19:01, Francesc Altet wrote: [FA]: On Friday 27 October 2006 17:58, Joris De Ridder wrote: [FA]: > Hi, [FA]: > [FA]: > Is the following behaviour of astype() intentional in NumPy 1.0? [FA]: > [FA]: > >>> x = array([1,2,3]) [FA]: > >>> x.astype(None) [FA]: > [FA]: > array([ 1., 2., 3.]) [FA]: > [FA]: > That is, the int32 is converted to float64. [FA]: [FA]: Yes, I think the behaviour is intended. This is because 'float64' is the [FA]: default type in NumPy from some months ago (before the default was 'int_') OK, updated the astype() example in the www.scipy.org/Numpy_Example_List . J.
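The behaviour under discussion is easy to verify: passing `None` as the dtype selects numpy's default type, float64 (a quick check, assuming a post-1.0 numpy is installed):

```python
import numpy as np

x = np.array([1, 2, 3])   # integer array
y = x.astype(None)        # None means "the default dtype", i.e. float64
print(y.dtype)            # float64
```

So `astype(None)` is equivalent to `astype(np.float64)`, not a no-op, which is the surprise Joris documented in the Example List.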
From: Yves <yve...@gm...> - 2006-10-30 10:53:52
Forget it. N.isnan() does the job. YVES On Mon, Oct 30, 2006 at 11:47:57AM +0100, Yves wrote: > Hi, > > Is there a way of checking for nan's in an array, as comparing to N.nan > doesn't seem to work. > > In [1]: import numpy as N > > In [2]: a = N.asarray(0.)/0 > Warning: invalid value encountered in divide > > In [3]: a > Out[3]: nan > > In [4]: a==N.nan > Out[4]: False > > In [5]: b = a.copy() > > In [6]: a==b > Out[6]: False > > In [7]: b > Out[7]: nan > > In [8]: N.__version__ > Out[8]: '1.0.dev3390' > > > Many thanks, > YVES
From: Yves <yve...@gm...> - 2006-10-30 10:48:24
Hi, Is there a way of checking for nan's in an array, as comparing to N.nan doesn't seem to work. In [1]: import numpy as N In [2]: a = N.asarray(0.)/0 Warning: invalid value encountered in divide In [3]: a Out[3]: nan In [4]: a==N.nan Out[4]: False In [5]: b = a.copy() In [6]: a==b Out[6]: False In [7]: b Out[7]: nan In [8]: N.__version__ Out[8]: '1.0.dev3390' Many thanks, YVES |
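The gotcha in this thread is IEEE 754 semantics rather than numpy: NaN compares unequal to everything, including itself, so an explicit isnan test (or the self-inequality idiom) is required. A pure-Python illustration of the same behaviour:

```python
import math

nan = float('nan')

print(nan == nan)       # False -- NaN is unequal even to itself
print(math.isnan(nan))  # True  -- the explicit test
print(nan != nan)       # True  -- the classic portable idiom
```

numpy's `N.isnan()` is the elementwise analogue of `math.isnan` for arrays, which is why it works where `a == N.nan` cannot.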
From: David C. <da...@ar...> - 2006-10-30 08:27:31
Hi there, I would like to improve a bit the packaging of pyaudio, and for that, I need to do the following when configuring my package:

- to be able to retrieve a header file (sndfile.h). I need the full pathname, because I need to read its content
- to be able to retrieve a library file. I need the full pathname, including the library prefix and suffix/extension, because I need to load it with ctypes.

I've shamelessly copied and pasted code from numpy.distutils.system_info, and I manage to retrieve informations for my library, and to retrieve the full pathname for sndfile.h, but I don't know how to get the full shared library file. For now, my sndfile_info looks like that:

class sndfile_info(system_info):
    # variables to override
    section = 'sndfile'
    notfounderror = SndfileNotFoundError
    libname = 'sndfile'
    header = 'sndfile.h'

    def __init__(self):
        system_info.__init__(self)

    def calc_info(self):
        """ Compute the informations of the library """
        # Look for the shared library
        sndfile_libs = self.get_libs('sndfile_libs', self.libname)
        lib_dirs = self.get_lib_dirs()
        for i in lib_dirs:
            tmp = self.check_libs(i, sndfile_libs)
            if tmp is not None:
                info = tmp
                break
        else:
            return
        # Look for the header file
        include_dirs = self.get_include_dirs()
        inc_dir = None
        for d in include_dirs:
            p = self.combine_paths(d, self.header)
            if p:
                inc_dir = os.path.dirname(p[0])
                headername = os.path.abspath(p[0])
                break
        if inc_dir is not None:
            dict_append(info, include_dirs=[inc_dir], headername=headername)
        self.set_info(**info)
        return

Any help to get the full library name would be appreciated (I cannot use find_library from ctypes.util, because I have no guarantee that it will return the same one than distutils), Thanks, David
From: Charles R H. <cha...@gm...> - 2006-10-30 02:17:42
On 10/29/06, Tom Denniston <tom...@al...> wrote: > > Oh. My mistake. I thought I had an array of 2 objects which were ints. I > actually had an array of one list of 2 ints. It works properly if I > construct the array properly. > I think there is actually a bug here: In [61]: sort(array([3,2], dtype=object)) Out[61]: array([2, 3], dtype=object) In [62]: argmax(array([2,3], dtype=object)) Out[62]: 0 See, the sort works fine. I suspect argmax is using the wrong comparison function. I was just pointing out that sometimes it is hard to know what is going on with objects. Chuck |