From: Mark H. <ma...@hy...> - 2006-10-24 09:57:52
On Mon, 23 Oct 2006 at 11:50:27AM +0100, Mark Hymers spoke thus:
> Hi,
>
> I've just done a Mac OS X PPC build of the SVN trunk and am getting this
> failure too.
<snip>
> FAIL: Ticket #112
</snip>

I've just been looking into this a bit further (though I may be heading
down the wrong road) and have come across something which doesn't exactly
look right. Again, this is on a PPC Mac OS X 10.4 install:

In [1]: import numpy

In [2]: numpy.__version__
Out[2]: '1.0.dev3390'

In [3]: print numpy.finfo(numpy.float32).min, numpy.finfo(numpy.float32).max, numpy.finfo(numpy.float32).eps
-3.40282346639e+38 3.40282346639e+38 1.19209289551e-07

In [4]: print numpy.finfo(numpy.float64).min, numpy.finfo(numpy.float64).max, numpy.finfo(numpy.float64).eps
-1.79769313486e+308 1.79769313486e+308 2.22044604925e-16

In [5]: print numpy.finfo(numpy.float128).min, numpy.finfo(numpy.float128).max, numpy.finfo(numpy.float128).eps
Warning: overflow encountered in add
Warning: invalid value encountered in subtract
Warning: invalid value encountered in subtract
Warning: overflow encountered in add
Warning: invalid value encountered in subtract
Warning: invalid value encountered in subtract
9223372034707292160.0 -9223372034707292160.0 1.38178697010200053743e-76

Anyone got any comments or thoughts on this? Should I file it as a bug?

I just tested this on an x86 Linux box (running Debian, though that
should be irrelevant). numpy.float128 doesn't exist on x86 Linux, but
float96 does, and gives:

>>> print numpy.finfo(numpy.float96).min, numpy.finfo(numpy.float96).max, numpy.finfo(numpy.float96).eps
-1.18973149535723176502e+4932 1.18973149535723176502e+4932 1.08420217248550443401e-19

which seems right. Any ideas?

Cheers,

Mark

--
Mark Hymers <mark at hymers dot org dot uk>

"The relationship between journalists and politicians has often been
likened to that between a dog and a lamp post, although I have never
worked out who is supposed to be which."
     Nick Assinder, BBC Online Political Correspondent
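The checks above can be reproduced with a short, platform-guarded loop; this is a sketch assuming a current NumPy, where the extended-precision names (float96, float128) exist only on some platforms:

```python
import numpy as np

# Print machine limits for each float type; extended-precision types
# (float96 on 32-bit x86, float128 on PPC and others) may not exist.
for name in ('float32', 'float64', 'float96', 'float128'):
    t = getattr(np, name, None)
    if t is None:
        continue
    fi = np.finfo(t)
    print(name, fi.min, fi.max, fi.eps)
```

A healthy finfo should report min == -max and a small positive eps, which is exactly what the float128 output above violates.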
From: Michael S. <mic...@gm...> - 2006-10-24 06:50:21
I am currently running numpy rc2 (I haven't tried your reimplementation
yet, as I am still using Python 2.3). I am wondering whether the new
maskedarray is able to handle construction of arrays from masked scalar
values (not sure if this is the correct term). I ran across a situation
recently in which I was picking individual values from a masked array,
collecting them in a list, and then constructing an array from those
values. This does not work if any of the values chosen are masked. See
the example below.

On a more general note, I am interested to find out whether there are
any other languages that handle masked/missing data well and, if so, how
this is done. My only experience is with R, which I have found to be
quite good (there is a special value NA that signifies a masked value;
it can be mixed in with non-masked values when defining an array).

from numpy import *
a = ma.array([1,2,3], mask=[True, False, False])
print a[0], type(a[0])
print a[1], type(a[1])
print list(a)
a = ma.array(list(a))

-- output --

-- <class 'numpy.core.ma.MaskedArray'>
2 <type 'numpy.int32'>
[array(data = 999999, mask = True, fill_value=999999), 2, 3]
C:\Python23\Lib\site-packages\numpy\core\ma.py:604: UserWarning: Cannot
automatically convert masked array to numeric because data
    is masked in one or more locations.
  warnings.warn("Cannot automatically convert masked array to "\
Traceback (most recent call last):
  File "D:\eclipse\Table\scripts\testrecarray.py", line 23, in ?
    a = ma.array(list(a))
  File "C:\Python23\Lib\site-packages\numpy\core\ma.py", line 562, in __init__
    c = numeric.array(data, dtype=tc, copy=True, order=order)
TypeError: an integer is required

On 10/16/06, Pierre GM <pgm...@ma...> wrote:
> Folks,
> I just posted on the scipy/developers zone wiki
> (http://projects.scipy.org/scipy/numpy/wiki/MaskedArray) a reimplementation
> of the masked_array module, motivated by some problems I ran into while
> subclassing MaskedArray.
>
> The main differences with the initial numpy.core.ma package are that
> MaskedArray is now a subclass of ndarray and that the _data section can now
> be any subclass of ndarray (well, it should work in most cases; some tweaking
> might be required here and there). Apart from a couple of issues listed below,
> the behavior of the new MaskedArray class reproduces the old one. It is quite
> likely to be significantly slower, though: I was more interested in a clear
> organization than in performance, so I tended to use wrappers liberally. I'm
> sure we can improve that rather easily.
>
> The new module, along with a test suite and some utilities, is available
> here:
> http://projects.scipy.org/scipy/numpy/attachment/wiki/MaskedArray/maskedarray.py
> http://projects.scipy.org/scipy/numpy/attachment/wiki/MaskedArray/masked_testutils.py
> http://projects.scipy.org/scipy/numpy/attachment/wiki/MaskedArray/test_maskedarray.py
>
> Please note that it's still a work in progress (even if it seems to work quite
> OK when I use it). Suggestions, comments, improvements and general feedback
> are more than welcome!
>
> -------------------------------------------------------------------------
> Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job easier
> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
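For what it's worth, one workaround (sketched here against the modern numpy.ma API, not the numpy.core.ma of this thread) is to split the collected scalars back into explicit data and mask lists before reconstructing, testing each value against the ma.masked singleton:

```python
import numpy as np

a = np.ma.array([1, 2, 3], mask=[True, False, False])

# Collect individual values; masked positions come back as the
# np.ma.masked singleton rather than as plain integers.
vals = [a[i] for i in range(len(a))]

# Rebuild data and mask explicitly so construction never has to
# deal with masked scalars mixed into the list.
data = [0 if v is np.ma.masked else v for v in vals]
mask = [v is np.ma.masked for v in vals]
b = np.ma.array(data, mask=mask)
```

The fill value (0 here) is arbitrary, since those positions are masked anyway.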
From: David C. <da...@ar...> - 2006-10-24 06:29:58
Lars Friedrich wrote:
> Andrew,
>
> thanks for your fast answer.
>
> Am Montag, den 23.10.2006, 00:45 -0700 schrieb Andrew Straw:
>> It sounds like your hardware drivers may be buggy -- you should only get
>> segfaults, not (the Windows equivalent of) kernel panics, when your
>> userspace code accesses wrong memory.
>
> When I started to write this thing, I had a lot of bluescreens. I think
> they occurred when my Python program and the hardware driver were
> accessing the same memory location at the same time. I think the
> hardware uses some DMA technique.

I don't know anything about your device, but a driver directly accessing
a memory buffer from a userland program sounds like a bug to me. I am
far from being knowledgeable about OS programming, but kernel and user
space are two different address spaces, so I don't see how a user-land
program could directly access memory from kernel space (the buffer from
your hardware). There has to be a copy somewhere, or fancier methods of
sharing data; otherwise, this is really unsafe (using data from userland
in kernel land is sure to cause problems...).

>> But if you have buggy hardware drivers, I suppose it's possible that
>> locking the memory will help. This wouldn't be the domain of numpy,
>> however. In Linux, this is achieved with a system call such as mlock()
>> or mlockall(). You'll have to figure out the appropriate call in
>> Windows. Thinking about it, if your hardware drivers require the memory
>> to be locked, they should do it themselves.
>
> I am not really sure about the term "locking". Does that mean that this
> part is not paged, or that this part is not accessed by two entities at
> the same time? Or both?
>
> Would I do these mlock() calls in my C DLL? If yes, what would happen
> if I tried to access the numpy array from Python during the time it is
> locked?

In this context, it means preventing the memory manager from paging the
corresponding pages out to the hard drive. This is actually part of
POSIX (I cannot remember which part), not Linux-specific. Windows has a
similar API, but I remember having seen somewhere that it is not
reliable (i.e. it can swap pages out); I cannot find the corresponding
API right now.

> It is a camera with its own PCI frame-grabber card. I am using it in
> "continuous-acquisition mode", so I suppose the driver is ready for
> *long* uses... Anyway, if the bluescreens continue I will have to
> switch to the "single-frame-acquisition mode" and prepare a single
> buffer for every frame to grab.
>
> Of course, there would be a different way: I could allocate the buffer
> in C, in the DLL. After data retrieval using the hardware API I could
> then copy the data to some numpy array, still using C. But this would
> make my C-coded DLL longer, and thus harder to maintain.

I don't understand this either: how does allocating in C change the
problem?

David
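As a rough illustration of what "locking" means here, the POSIX mlock() call can be applied to a NumPy buffer from Python via ctypes. This is a sketch for Linux/Unix only; it may fail with EPERM or ENOMEM depending on RLIMIT_MEMLOCK, and it does nothing about the separate problem of concurrent access:

```python
import ctypes
import ctypes.util

import numpy as np

libc = ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True)

frame = np.zeros(4096, dtype=np.uint8)  # stand-in for a DMA buffer

# Pin the array's pages into physical RAM so the pager cannot move
# them; mlock returns 0 on success, -1 on failure (check errno).
rc = libc.mlock(ctypes.c_void_p(frame.ctypes.data),
                ctypes.c_size_t(frame.nbytes))
if rc != 0:
    print('mlock failed, errno =', ctypes.get_errno())
else:
    libc.munlock(ctypes.c_void_p(frame.ctypes.data),
                 ctypes.c_size_t(frame.nbytes))
```

As the thread notes, on Windows the equivalent (VirtualLock) has different semantics, and a driver that needs locked memory should really lock it itself.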
From: A. M. A. <per...@gm...> - 2006-10-24 06:22:25
On 24/10/06, Lars Friedrich <lfr...@im...> wrote:
> I am not really sure about the term "locking". Does that mean that this
> part is not paged, or that this part is not accessed by two entities at
> the same time? Or both?

There are two kinds of locking, and really, you probably want both. But
mlock() just ensures that the virtual memory stays in actual RAM.

> Is my way a common way? I mean, letting python/numpy do the memory
> allocation by creating a numpy array with zeros in it and passing its
> memory location to the hardware API?

It's not necessary to do it this way. I think a more usual approach
would be to create the buffer however is convenient in your C code, then
provide its address to numpy. You can then use the ndarray function from
Python to tell it how to interpret that buffer as an array. Since the C
code is creating the buffer, you can make sure it is in a special locked
area of memory, ensure that the garbage collector never comes calling
for it, or whatever you like.

If you're having problems with driver stability, though, you may be
safest having your C code copy the buffer into a numpy array in one
shot -- then you have complete control over when and how the DMA memory
is accessed. (In C, I'm afraid, but for this sort of thing C is
well-suited.)

A. M. Archibald
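The approach Archibald describes — letting C own the buffer and having numpy merely interpret it — can be sketched in current NumPy with np.frombuffer (the ndarray constructor with a buffer argument works too); here a ctypes array stands in for the hypothetical driver-allocated buffer:

```python
import ctypes

import numpy as np

# Stand-in for a buffer allocated and owned by C/driver code.
raw = (ctypes.c_uint16 * 8)()

# View the existing memory as an array: no copy is made, so writes
# through either side are visible to the other.
arr = np.frombuffer(raw, dtype=np.uint16)

raw[0] = 42
print(arr[0])  # reflects the write made through the ctypes buffer
```

The no-copy property is exactly what makes this convenient, and exactly why the one-shot copy Archibald recommends is the safer choice when the driver is flaky.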
From: Lars F. <lfr...@im...> - 2006-10-24 05:40:47
Andrew,

thanks for your fast answer.

Am Montag, den 23.10.2006, 00:45 -0700 schrieb Andrew Straw:
> It sounds like your hardware drivers may be buggy -- you should only get
> segfaults, not (the Windows equivalent of) kernel panics, when your
> userspace code accesses wrong memory.

When I started to write this thing, I had a lot of bluescreens. I think
they occurred when my Python program and the hardware driver were
accessing the same memory location at the same time. I think the
hardware uses some DMA technique.

> But if you have buggy hardware drivers, I suppose it's possible that
> locking the memory will help. This wouldn't be the domain of numpy,
> however. In Linux, this is achieved with a system call such as mlock()
> or mlockall(). You'll have to figure out the appropriate call in
> Windows. Thinking about it, if your hardware drivers require the memory
> to be locked, they should do it themselves.

I am not really sure about the term "locking". Does that mean that this
part is not paged, or that this part is not accessed by two entities at
the same time? Or both?

Would I do these mlock() calls in my C DLL? If yes, what would happen if
I tried to access the numpy array from Python during the time it is
locked?

> However, I'm not convinced this is the real issue. It seems at least
> equally likely that your hardware drivers were developed with a
> particular pattern of timing when accessing the buffers, but now you
> may be attempting to hold a buffer longer (preventing the driver
> writing to it) than the developer ever tested. It shouldn't
> blue-screen, but it does...
>
> I think it quite likely that you have some buggy hardware drivers. What
> hardware is it?

It is a camera with its own PCI frame-grabber card. I am using it in
"continuous-acquisition mode", so I suppose the driver is ready for
*long* uses... Anyway, if the bluescreens continue I will have to switch
to the "single-frame-acquisition mode" and prepare a single buffer for
every frame to grab.

Of course, there would be a different way: I could allocate the buffer
in C, in the DLL. After data retrieval using the hardware API I could
then copy the data to some numpy array, still using C. But this would
make my C-coded DLL longer, and thus harder to maintain.

Is my way a common way? I mean, letting python/numpy do the memory
allocation by creating a numpy array of zeros and passing its memory
location to the hardware API?

Thanks

Lars

--
Dipl.-Ing. Lars Friedrich
Optical Measurement Technology
Department of Microsystems Engineering -- IMTEK
University of Freiburg
Georges-Köhler-Allee 102
D-79110 Freiburg
Germany

phone: +49-761-203-7531
fax:   +49-761-203-7537
room:  01 088
email: lfr...@im...
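Lars's pattern — allocating with numpy and handing the address to the hardware API — looks roughly like the following ctypes sketch. The hardware call itself is hypothetical; a ctypes pointer write stands in for the driver filling the buffer:

```python
import ctypes

import numpy as np

# Allocate the frame buffer on the Python side.
frame = np.zeros((4, 6), dtype=np.uint16)

# Get a typed pointer to the array's memory, as one would pass to a
# hardware API expecting e.g. `unsigned short *buffer`.
ptr = frame.ctypes.data_as(ctypes.POINTER(ctypes.c_uint16))

# Stand-in for the driver writing into the buffer via the pointer.
ptr[0] = 500
print(frame[0, 0])
```

The caveats raised in this thread apply unchanged: nothing here stops the driver and Python from touching the memory at the same time, and the array must stay alive as long as the driver holds the pointer.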
From: David C. <da...@ar...> - 2006-10-24 03:11:03
Albert Strasheim wrote:
> Hey Travis
>
>> -----Original Message-----
>> From: num...@li... [mailto:numpy-
>> dis...@li...] On Behalf Of Travis Oliphant
>> Sent: Tuesday, October 24, 2006 12:32 AM
>> To: Discussion of Numerical Python
>> Subject: [Numpy-discussion] Release of 1.0 coming
>>
>> The long awaited day is coming....--- Wednesday is the target.
>>
>> Please submit problems before Tuesday (tomorrow). Nothing but bug-fixes
>> are being changed right now.
>
> Some Valgrind warnings that you might want to look at:
> http://projects.scipy.org/scipy/numpy/ticket/360
>
> Maybe faltet could provide some code to reproduce this problem:
> http://projects.scipy.org/scipy/numpy/ticket/355
>
> I think this ndpointer issue has been resolved (Stefan?):
> http://projects.scipy.org/scipy/numpy/ticket/340
>
> I think ctypes 1.0.1 is required for ndpointer to work, so we might
> consider some kind of version check + warning on import?

Yes, please. I got caught by this one: ctypes code not running anymore
with SVN numpy. Updating ctypes from 1.0.0 to 1.0.1 did the trick.

cheers,

David

> Maybe a Python at-exit handler can be used to avoid the add_docstring
> leaks described here:
> http://projects.scipy.org/scipy/numpy/ticket/195
>
> Also, what's the story with f2py? It seems Pearu is still making quite
> a few changes in the trunk as part of F2PY G3.
>
> Cheers,
>
> Albert
From: Robert K. <rob...@gm...> - 2006-10-24 01:26:50
Charles R Harris wrote:
> On 10/23/06, Tim Hochberg <tim...@ie...> wrote:
>
>     Albert Strasheim wrote:
>     > Hello all
>     >
>     > I'm trying to generate random 32-bit integers. None of the
>     > following seem to do the trick with NumPy 1.0.dev3383:
>     >
>     > In [32]: N.random.randint(-2**31, 2**31-1)
>     > ValueError: low >= high
>
> There should be a raw output from mtrand somewhere that gives random
> uint32 output which you might be able to cast somehow. Really, there
> should also be a signed output somewhere but I haven't looked closely
> at the mtrand interface.

There is RandomState.tomaxint(), which returns signed integers >= 0 and
<= sys.maxint. It didn't get exposed at the module level, for some
reason, though.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
From: Keith G. <kwg...@gm...> - 2006-10-24 00:26:58
On 10/23/06, Travis Oliphant <oli...@ie...> wrote:
> Keith Goodman wrote:
>> On 10/20/06, JJ <jos...@ya...> wrote:
>>
>>> My suggestion is to create a new attribute, such as .AR, so that the
>>> following could be used: M[K.AR,:]
>>
>> It would be even better if M[K,:] worked. Would such a patch be
>> accepted? (Not that I know how to make it.)
>
> What exactly do you want to work?

x

matrix([[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11],
        [12, 13, 14, 15]])

idx

matrix([[1],
        [3]])

I'd like (if it doesn't break the consistency of numpy):

--------------------------------------
x[idx, :]

to give

matrix([[ 4,  5,  6,  7],
        [12, 13, 14, 15]])

instead of

matrix([[[ 4,  5,  6,  7]],
        [[12, 13, 14, 15]]])
--------------------------------------
x[:, idx]

to give a 4x2 matrix instead of an error
--------------------------------------
x[x[:,0] > 4, :]

to give a 2x4 matrix instead of a 1x2 matrix
--------------------------------------

I'd also like a pony.
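Until then, the .A1 attribute mentioned elsewhere in this thread gives a workable spelling of the first case; a sketch with current NumPy (where np.matrix still exists but is discouraged):

```python
import numpy as np

x = np.matrix(np.arange(16).reshape(4, 4))
idx = np.matrix([[1], [3]])

# .A1 flattens the nx1 index matrix to a 1-d array, which fancy-indexes
# the rows and keeps the result a 2x4 matrix.
rows = x[idx.A1, :]
print(rows)
```

The second case is the transpose of the same trick (x[:, idx.A1]), and the boolean case works with x[x.A[:, 0] > 4, :].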
From: Charles R H. <cha...@gm...> - 2006-10-24 00:21:31
On 10/23/06, Tim Hochberg <tim...@ie...> wrote:
>
> Albert Strasheim wrote:
> > Hello all
> >
> > I'm trying to generate random 32-bit integers. None of the following
> > seem to do the trick with NumPy 1.0.dev3383:
> >
> > In [32]: N.random.randint(-2**31, 2**31-1)
> > ValueError: low >= high

There should be a raw output from mtrand somewhere that gives random
uint32 output which you might be able to cast somehow. Really, there
should also be a signed output somewhere but I haven't looked closely at
the mtrand interface.

Chuck
From: Stefan v. d. W. <st...@su...> - 2006-10-23 23:57:37
On Mon, Oct 23, 2006 at 05:28:05PM -0600, Travis Oliphant wrote:
> Yes it has. Fixed.
>
> > I think ctypes 1.0.1 is required for ndpointer to work, so we might
> > consider some kind of version check + warning on import?
>
> Not sure about that. It worked for me using ctypes 1.0.0.

You have to exercise ctypes beyond the normal unit tests for it to break
(my code did, the moment the update went into numpy). I can confirm that
it runs fine with ctypes 1.0.1.

Regards
Stéfan
From: Travis O. <oli...@ie...> - 2006-10-23 23:51:57
Keith Goodman wrote:
> On 10/20/06, JJ <jos...@ya...> wrote:
>
>> My suggestion is to create a new attribute, such as .AR, so that the
>> following could be used: M[K.AR,:]
>
> It would be even better if M[K,:] worked. Would such a patch be
> accepted? (Not that I know how to make it.)

What exactly do you want to work?

-Travis
From: Albert S. <fu...@gm...> - 2006-10-23 23:50:46
Hello all

> -----Original Message-----
> From: num...@li... [mailto:numpy-
> dis...@li...] On Behalf Of Travis Oliphant
> Sent: Tuesday, October 24, 2006 12:04 AM
> To: Discussion of Numerical Python
> Subject: Re: [Numpy-discussion] Strange results when sorting array
> with fields
>
> <snip>
> It turns out, it was not so difficult to implement this and it is now
> in SVN.
>
> So, VOID_compare now does something a little more intelligent when
> fields are defined, which means that record arrays can now be
> lexicographically sorted more easily than using lexsort and take (as
> long as the fields are ordered according to how you want the sort to
> proceed).

Thanks very much for this! I played around and it works like a charm.

Cheers,

Albert
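This behavior survives in current NumPy: sorting a structured array without an explicit `order` compares records field by field, in the order the fields are defined. A small sketch:

```python
import numpy as np

# Records sort by 'num' first, then by 'name', i.e. in field order.
a = np.array([(2, b'b'), (1, b'c'), (1, b'a')],
             dtype=[('num', 'i4'), ('name', 'S1')])
a.sort()
print(a)
```

To sort on a different field ordering, pass `order=('name', 'num')` instead of reordering the dtype.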
From: Travis O. <oli...@ie...> - 2006-10-23 23:47:15
JJ wrote:
> Hello.
> I have a suggestion that might make slicing using matrices more
> user-friendly. I often have a matrix of row or column numbers that I
> wish to use as a slice. If K was a matrix of row numbers (nx1) and M
> was a nxm matrix, then I would use ans = M[K.A.ravel(),:]

I had thought that something like that would be useful. It's called .A1

-Travis
From: David G. <Dav...@no...> - 2006-10-23 23:47:09
Hey, any chance it has something to do with running an up-to-date numpy
with an "out-of-date" Python? (2.3.4 is pretty old, isn't it?)

DG

Fernando Perez wrote:
> Hi all,
>
> two colleagues have been seeing occasional crashes from very
> long-running code which uses numpy. We've now gotten a backtrace from
> one such crash; unfortunately it uses a build from a few days ago:
>
> In [3]: numpy.__version__
> Out[3]: '1.0b5.dev3097'
>
> In [4]: scipy.__version__
> Out[4]: '0.5.0.2180'
>
> Because it takes so long to get the code to crash (several days of
> 100% CPU usage), I can't make a new one right now, but I'll be happy
> to restart the same run with a current SVN build if necessary, and
> post the results in a few days.
>
> In the meantime, here's a gdb backtrace we were able to get by setting
> MALLOC_CHECK_ to 2 and running the python process from within gdb:
>
> Program received signal SIGABRT, Aborted.
> [Switching to Thread 1073880896 (LWP 26280)]
> 0x40000402 in __kernel_vsyscall ()
> (gdb) bt
> #0  0x40000402 in __kernel_vsyscall ()
> #1  0x0042c7d5 in raise () from /lib/tls/libc.so.6
> #2  0x0042e149 in abort () from /lib/tls/libc.so.6
> #3  0x0046b665 in free_check () from /lib/tls/libc.so.6
> #4  0x00466e65 in free () from /lib/tls/libc.so.6
> #5  0x005a4ab7 in PyObject_Free () from /usr/lib/libpython2.3.so.1.0
> #6  0x403f6336 in arraydescr_dealloc (self=0x40424020) at arrayobject.c:10455
> #7  0x403fab3e in PyArray_FromArray (arr=0xe081cb0, newtype=0x40424020, flags=0)
>     at arrayobject.c:7725
> #8  0x403facc3 in PyArray_FromAny (op=0xe081cb0, newtype=0x0, min_depth=0,
>     max_depth=0, flags=0, context=0x0) at arrayobject.c:8178
> #9  0x4043bc45 in PyUFunc_GenericFunction (self=0x943a660, args=0xa9dbf2c,
>     mps=0xbfc83730) at ufuncobject.c:906
> #10 0x40440a04 in ufunc_generic_call (self=0x943a660, args=0xa9dbf2c)
>     at ufuncobject.c:2742
> #11 0x0057d607 in PyObject_Call () from /usr/lib/libpython2.3.so.1.0
> #12 0x0057d6d4 in PyObject_CallFunction () from /usr/lib/libpython2.3.so.1.0
> #13 0x403eabb6 in PyArray_GenericBinaryFunction (m1=Variable "m1" is not available.
>     ) at arrayobject.c:3296
> #14 0x0057b7e1 in PyNumber_Check () from /usr/lib/libpython2.3.so.1.0
> #15 0x0057c1e0 in PyNumber_Multiply () from /usr/lib/libpython2.3.so.1.0
> #16 0x005d16a3 in _PyEval_SliceIndex () from /usr/lib/libpython2.3.so.1.0
> #17 0x005d509e in PyEval_EvalCodeEx () from /usr/lib/libpython2.3.so.1.0
> #18 0x005d3d8f in _PyEval_SliceIndex () from /usr/lib/libpython2.3.so.1.0
> #19 0x005d509e in PyEval_EvalCodeEx () from /usr/lib/libpython2.3.so.1.0
> #20 0x00590e2e in PyFunction_SetClosure () from /usr/lib/libpython2.3.so.1.0
> #21 0x0057d607 in PyObject_Call () from /usr/lib/libpython2.3.so.1.0
> #22 0x00584d98 in PyMethod_New () from /usr/lib/libpython2.3.so.1.0
> #23 0x0057d607 in PyObject_Call () from /usr/lib/libpython2.3.so.1.0
> #24 0x005b584c in _PyObject_SlotCompare () from /usr/lib/libpython2.3.so.1.0
> #25 0x005aec2c in PyType_IsSubtype () from /usr/lib/libpython2.3.so.1.0
> #26 0x0057d607 in PyObject_Call () from /usr/lib/libpython2.3.so.1.0
> #27 0x005d2b7f in _PyEval_SliceIndex () from /usr/lib/libpython2.3.so.1.0
> #28 0x005d509e in PyEval_EvalCodeEx () from /usr/lib/libpython2.3.so.1.0
> #29 0x005d3d8f in _PyEval_SliceIndex () from /usr/lib/libpython2.3.so.1.0
> #30 0x005d509e in PyEval_EvalCodeEx () from /usr/lib/libpython2.3.so.1.0
> #31 0x005d3d8f in _PyEval_SliceIndex () from /usr/lib/libpython2.3.so.1.0
> #32 0x005d497b in _PyEval_SliceIndex () from /usr/lib/libpython2.3.so.1.0
> #33 0x005d497b in _PyEval_SliceIndex () from /usr/lib/libpython2.3.so.1.0
> #34 0x005d497b in _PyEval_SliceIndex () from /usr/lib/libpython2.3.so.1.0
> #35 0x005d509e in PyEval_EvalCodeEx () from /usr/lib/libpython2.3.so.1.0
> #36 0x005d5362 in PyEval_EvalCode () from /usr/lib/libpython2.3.so.1.0
> #37 0x005ee817 in PyErr_Display () from /usr/lib/libpython2.3.so.1.0
> #38 0x005ef942 in PyRun_SimpleFileExFlags () from /usr/lib/libpython2.3.so.1.0
> #39 0x005f0994 in PyRun_AnyFileExFlags () from /usr/lib/libpython2.3.so.1.0
> #40 0x005f568e in Py_Main () from /usr/lib/libpython2.3.so.1.0
> #41 0x080485b2 in main ()
>
> # End of BT.
>
> This code is running on a Fedora Core 3 box, with Python 2.3.4 and
> numpy/scipy compiled using gcc 3.4.4.
>
> I realize that it's extremely difficult to help with so little
> information, but unfortunately we have no small test that can
> reproduce the problem. Only our large research codes, when running
> for multiple days on a single run, cause this. Even very intensive
> uses of the same code which last only a few hours never show this.
>
> This code is a long-running iterative algorithm, so it's basically
> applying the same (complex) loop over and over until convergence,
> using numpy and scipy pretty extensively throughout.
>
> If super Travis (or anyone else) can have a Eureka moment from the
> above backtrace, that would be fantastic. If there's any other
> information you think I may be able to provide, I'll be happy to do my
> best.
>
> Cheers,
>
> f
From: Travis O. <oli...@ie...> - 2006-10-23 23:26:50
Albert Strasheim wrote:
> Hey Travis
>
>> -----Original Message-----
>> From: num...@li... [mailto:numpy-
>> dis...@li...] On Behalf Of Travis Oliphant
>> Sent: Tuesday, October 24, 2006 12:32 AM
>> To: Discussion of Numerical Python
>> Subject: [Numpy-discussion] Release of 1.0 coming
>>
>> The long awaited day is coming....--- Wednesday is the target.
>>
>> Please submit problems before Tuesday (tomorrow). Nothing but bug-fixes
>> are being changed right now.
>
> Some Valgrind warnings that you might want to look at:
> http://projects.scipy.org/scipy/numpy/ticket/360

Fixed.

> Maybe faltet could provide some code to reproduce this problem:
> http://projects.scipy.org/scipy/numpy/ticket/355

Looked at it and couldn't see what could be wrong. Need code to
reproduce the problem.

> I think this ndpointer issue has been resolved (Stefan?):
> http://projects.scipy.org/scipy/numpy/ticket/340

Yes it has. Fixed.

> I think ctypes 1.0.1 is required for ndpointer to work, so we might
> consider some kind of version check + warning on import?

Not sure about that. It worked for me using ctypes 1.0.0.

> Maybe a Python at-exit handler can be used to avoid the add_docstring
> leaks described here:
> http://projects.scipy.org/scipy/numpy/ticket/195

I'm not too concerned about this. Whether we release the memory right
before exiting or just let the OS do it when the process quits seems
rather immaterial. It would be a bit of work to implement, so the
cost/benefit ratio seems way too high.

> Also, what's the story with f2py? It seems Pearu is still making quite
> a few changes in the trunk as part of F2PY G3.

Pearu told me not to hold up NumPy 1.0 because f2py g3 is still a ways
away. His changes should not impact normal usage of f2py. I suspect
NumPy 1.0.1 will contain f2py g3.

-Travis
From: Tim H. <tim...@ie...> - 2006-10-23 23:24:34
Albert Strasheim wrote:
> Hello all
>
> I'm trying to generate random 32-bit integers. None of the following
> seem to do the trick with NumPy 1.0.dev3383:
>
> In [32]: N.random.randint(-2**31, 2**31-1)
> ValueError: low >= high
>
> In [43]: N.random.random_integers(-2**31, 2**31-1)
> OverflowError: long int too large to convert to int
>
> In [45]: N.random.randint(-2**31, 2**31-1)
> ValueError: low >= high
>
> Am I missing something obvious?

I don't think so. This doesn't help you any, but the problem is in
mtrand.pyx:

    diff = hi - lo - 1
    if diff < 0:
        raise ValueError("low >= high")

The variables diff, hi and lo are all signed C longs, which means the
interval can only ever be 2**31-1 or you overflow (this is the problem
that you are seeing). It appears that the underlying rk_interval works
on unsigned longs, so this is probably fixable with a little care. At
the moment I don't have time to dig into the ins and outs of this,
though.

The resulting error distribution is probably imperfect, but you could
instead use some variation on int(np.random.random() * 2**32 - 2**31).

-tim
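In current NumPy this limitation is gone (randint grew a dtype argument), and the full signed 32-bit range can also be covered by drawing unsigned words and reinterpreting the bits — a sketch against the modern API, not the mtrand of 2006:

```python
import numpy as np

# Draw uniform unsigned 32-bit words (high bound is exclusive), then
# reinterpret the same bits as signed int32: this covers the full
# [-2**31, 2**31 - 1] range uniformly, with no float rounding error.
u = np.random.randint(0, 2**32, size=1000, dtype=np.uint32)
s = u.view(np.int32)
print(s.dtype, s.min(), s.max())
```

Unlike the int(random() * 2**32 - 2**31) workaround, the bit-reinterpretation introduces no bias, since every 32-bit pattern is equally likely.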
From: Fernando P. <fpe...@gm...> - 2006-10-23 23:16:09
On 10/23/06, Travis Oliphant <oli...@ie...> wrote:
> Fernando Perez wrote:
>> If you point me to the right place in the sources, I'll be happy to
>> add something to my local copy, rebuild numpy and rerun with these
>> print statements in place.
>
> I've placed them in SVN (r3384):

[...]

Great, thanks. I'll rebuild everything from SVN.

> Tracking the reference count of the built-in data-type objects should
> not be too difficult. First, figure out which one is causing problems
> (if you still have the gdb traceback, then go up to the
> arraydescr_dealloc function and look at self->type_num and self->type).

Unfortunately we closed that gdb session.

> Then, put print statements throughout your code for the reference
> count of this data-type object.
>
> Something like,
>
> sys.getrefcount(numpy.dtype('float'))

OK, we'll log those into a file and will report after another multi-day
run. Thanks again for the help!

Cheers,

f
From: Travis O. <oli...@ie...> - 2006-10-23 23:04:38
|
Fernando Perez wrote:
> On 10/23/06, Travis Oliphant <oli...@ie...> wrote:
>> Fernando Perez wrote:
>>> Hi all,
>>>
>>> two colleagues have been seeing occasional crashes from very
>>> long-running code which uses numpy.  We've now gotten a backtrace from
>>> one such crash, unfortunately it uses a build from a few days ago:
>>>
>> This looks like a reference-count problem on the data-type objects
>> (probably one of the builtin ones is trying to be released).  The
>> reference count problem is probably hard to track down.
>>
>> A quick fix is to not allow the built-ins to be "freed" (the attempt
>> should never be made, but if it is, then we should just incref the
>> reference count and continue rather than die).
>>
>> Ideally, the reference count problem should be found, but otherwise
>> I'll just insert some print statements if the attempt is made, but not
>> actually do it as a safety measure.
>>
> If you point me to the right place in the sources, I'll be happy to
> add something to my local copy, rebuild numpy and rerun with these
> print statements in place.
>

I've placed them in SVN (r3384): arraydescr_dealloc needs to do
something like

    if (self->fields == Py_None) {
        /* print something */
        Py_INCREF(self);
        return;
    }

Most likely there is a missing Py_INCREF() before some call that uses
the data-type object (and consumes its reference count) --- do you have
any Pyrex code?  (It's harder to get it right with Pyrex.)

> I realize this is probably a very difficult problem to track down, but
> it really sucks to run a code for 4 days only to have it explode at
> the end.  Right now this is starting to be a serious problem for us as
> we move our codes into large production runs, so I'm willing to put in
> the necessary effort to track it down, though I'll need some guidance
> from our gurus.
>

Tracking the reference count of the built-in data-type objects should
not be too difficult.  First, figure out which one is causing problems
(if you still have the gdb traceback, then go up to the
arraydescr_dealloc function and look at self->type_num and self->type).
Then, put print statements throughout your code for the reference count
of this data-type object.  Something like

    sys.getrefcount(numpy.dtype('float'))

would be enough at a looping point in your code.

Good luck,

-Travis |
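Travis's suggestion of checking `sys.getrefcount(numpy.dtype('float'))` at a looping point can be wrapped in a small logging helper. The helper name and log format below are illustrative, not code from the thread:

```python
import sys
import numpy

def log_dtype_refcount(logfile, iteration, typename='float'):
    # numpy.dtype('float') always returns the same built-in descriptor
    # object, so getrefcount here observes the shared singleton whose
    # count is suspected of being decremented too often.
    count = sys.getrefcount(numpy.dtype(typename))
    logfile.write('iter=%d dtype=%s refcount=%d\n'
                  % (iteration, typename, count))
    logfile.flush()
```

Called once per outer loop iteration of the long-running job, a count that trends downward over days would corroborate the missing-Py_INCREF hypothesis.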
From: Albert S. <fu...@gm...> - 2006-10-23 23:02:21
|
Hey Travis

> -----Original Message-----
> From: num...@li... [mailto:numpy-
> dis...@li...] On Behalf Of Travis Oliphant
> Sent: Tuesday, October 24, 2006 12:32 AM
> To: Discussion of Numerical Python
> Subject: [Numpy-discussion] Release of 1.0 coming
>
> The long awaited day is coming....--- Wednesday is the target.
>
> Please submit problems before Tuesday (tomorrow).  Nothing but bug-fixes
> are being changed right now.

Some Valgrind warnings that you might want to look at:

http://projects.scipy.org/scipy/numpy/ticket/360

Maybe faltet could provide some code to reproduce this problem:

http://projects.scipy.org/scipy/numpy/ticket/355

I think this ndpointer issue has been resolved (Stefan?):

http://projects.scipy.org/scipy/numpy/ticket/340

I think ctypes 1.0.1 is required for ndpointer to work, so we might
consider some kind of version check + warning on import?

Maybe a Python at-exit handler can be used to avoid the add_docstring
leaks described here:

http://projects.scipy.org/scipy/numpy/ticket/195

Also, what's the story with f2py?  It seems Pearu is still making quite
a few changes in the trunk as part of F2PY G3.

Cheers,

Albert |
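The "version check + warning on import" idea for the ctypes dependency can be sketched roughly as follows. The function name, the required-version tuple, and the warning text are all assumptions for illustration, not what numpy eventually shipped:

```python
import warnings

def check_ctypes_version(required=(1, 0, 1)):
    # If ctypes is absent, ndpointer support is simply unavailable;
    # there is nothing to warn about at import time.
    try:
        import ctypes
    except ImportError:
        return
    # ctypes exposes a dotted version string such as "1.0.1".
    version = tuple(int(p) for p in ctypes.__version__.split('.')[:3])
    if version < required:
        warnings.warn('ctypes %s found; ndpointer requires at least %s'
                      % (ctypes.__version__,
                         '.'.join(str(p) for p in required)))
```

Run once at package import, this turns a confusing ndpointer failure into an explicit warning about the outdated dependency.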
From: Albert S. <fu...@gm...> - 2006-10-23 22:51:24
|
Hello all

I'm trying to generate random 32-bit integers.  None of the following
seem to do the trick with NumPy 1.0.dev3383:

In [32]: N.random.randint(-2**31, 2**31-1)
ValueError: low >= high

In [43]: N.random.random_integers(-2**31, 2**31-1)
OverflowError: long int too large to convert to int

In [45]: N.random.randint(-2**31, 2**31-1)
ValueError: low >= high

Am I missing something obvious?

Cheers,

Albert |
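One possible workaround (not proposed in the thread) is to sidestep `randint`'s bound checks entirely: draw raw random bytes and reinterpret them as signed 32-bit integers, which covers the full int32 range uniformly:

```python
import numpy as N

def random_int32(n):
    # N.random.bytes(k) returns k uniformly random bytes; viewing
    # 4*n of them as int32 yields n integers uniform over the full
    # [-2**31, 2**31 - 1] range.  Which value a given byte pattern
    # maps to depends on native byte order, but uniformity does not.
    return N.frombuffer(N.random.bytes(4 * n), dtype=N.int32)
```

This avoids both the `low >= high` check and the Python-int overflow that the bound arguments trip over.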
From: Fernando P. <fpe...@gm...> - 2006-10-23 22:44:26
|
On 10/23/06, Travis Oliphant <oli...@ie...> wrote:
> Fernando Perez wrote:
> > Hi all,
> >
> > two colleagues have been seeing occasional crashes from very
> > long-running code which uses numpy.  We've now gotten a backtrace from
> > one such crash, unfortunately it uses a build from a few days ago:
> >
> This looks like a reference-count problem on the data-type objects
> (probably one of the builtin ones is trying to be released).  The
> reference count problem is probably hard to track down.
>
> A quick fix is to not allow the built-ins to be "freed" (the attempt
> should never be made, but if it is, then we should just incref the
> reference count and continue rather than die).
>
> Ideally, the reference count problem should be found, but otherwise
> I'll just insert some print statements if the attempt is made, but not
> actually do it as a safety measure.

If you point me to the right place in the sources, I'll be happy to
add something to my local copy, rebuild numpy and rerun with these
print statements in place.

I realize this is probably a very difficult problem to track down, but
it really sucks to run a code for 4 days only to have it explode at
the end.  Right now this is starting to be a serious problem for us as
we move our codes into large production runs, so I'm willing to put in
the necessary effort to track it down, though I'll need some guidance
from our gurus.

Cheers,

f |
From: Fernando P. <fpe...@gm...> - 2006-10-23 22:41:27
|
On 10/23/06, Albert Strasheim <fu...@gm...> wrote:
> Hey Fernando
>
> Maybe you can give the code a spin under Valgrind.  It's going to be
> slow, but if the crash is being caused by memory corruption that
> happens all the time as the process is running, maybe Valgrind will
> show it.
>
> You need some Valgrind suppressions for Python.  It seems the 2.3
> source tree didn't contain these yet, so try the one from trunk:
> [...]

Thanks, Albert.  I can give it a try, though it will probably take ages
to run.  This already requires 3-4 days of non-stop execution to cause
a crash, and Valgrind can make execution times go up by a factor of 10.
I'd like to have some info before a month :)

Cheers,

f |
From: Travis O. <oli...@ie...> - 2006-10-23 22:30:51
|
The long awaited day is coming....--- Wednesday is the target.

Please submit problems before Tuesday (tomorrow).  Nothing but
bug-fixes are being changed right now.

-Travis |
From: Travis O. <oli...@ie...> - 2006-10-23 22:09:17
|
Fernando Perez wrote:
> Hi all,
>
> two colleagues have been seeing occasional crashes from very
> long-running code which uses numpy.  We've now gotten a backtrace from
> one such crash, unfortunately it uses a build from a few days ago:
>

This looks like a reference-count problem on the data-type objects
(probably one of the builtin ones is trying to be released).  The
reference count problem is probably hard to track down.

A quick fix is to not allow the built-ins to be "freed" (the attempt
should never be made, but if it is, then we should just incref the
reference count and continue rather than die).

Ideally, the reference count problem should be found, but otherwise
I'll just insert some print statements if the attempt is made, but not
actually do it as a safety measure.

-Travis |
From: Albert S. <fu...@gm...> - 2006-10-23 22:08:32
|
Hey Fernando

Maybe you can give the code a spin under Valgrind.  It's going to be
slow, but if the crash is being caused by memory corruption that
happens all the time as the process is running, maybe Valgrind will
show it.

You need some Valgrind suppressions for Python.  It seems the 2.3
source tree didn't contain these yet, so try the one from trunk:

http://svn.python.org/view/python/trunk/Misc/valgrind-python.supp?rev=47113&view=auto

I then run Valgrind as follows:

    valgrind \
        --tool=memcheck \
        --leak-check=yes \
        --error-limit=no \
        --suppressions=valgrind-python.supp \
        --num-callers=20 \
        --freelist-vol=536870912 \
        -v \
        python foo.py

I recommend using the latest Valgrind (3.2.1) from here:

http://www.valgrind.org/downloads/current.html#current

A build from source should be as simple as ./configure && make.

Cheers,

Albert

> -----Original Message-----
> From: num...@li... [mailto:numpy-
> dis...@li...] On Behalf Of Fernando Perez
> Sent: Monday, October 23, 2006 11:40 PM
> To: Discussion of Numerical Python
> Subject: [Numpy-discussion] Strange and hard to reproduce crash
>
> Hi all,
>
> two colleagues have been seeing occasional crashes from very
> long-running code which uses numpy.  We've now gotten a backtrace from
> one such crash, unfortunately it uses a build from a few days ago:
>
> In [3]: numpy.__version__
> Out[3]: '1.0b5.dev3097'
>
> In [4]: scipy.__version__
> Out[4]: '0.5.0.2180'
>
> Because it takes so long to get the code to crash (several days of
> 100% CPU usage), I can't make a new one right now, but I'll be happy
> to restart the same run with a current SVN build if necessary, and
> post the results in a few days.
> <snip> |