From: Fernando P. <fpe...@gm...> - 2006-10-30 22:41:32
On 10/30/06, Travis Oliphant <oli...@ee...> wrote:
> Fernando Perez wrote:
>
> > On 10/30/06, David Huard <dav...@gm...> wrote:
> >
> > > Hi,
> > > I have a script that crashes, but only if it runs over 9~10 hours, with
> > > the following backtrace from gdb. The script uses PyMC, and repeatedly
> > > calls (> 1000000) likelihood functions written in fortran and wrapped
> > > with f2py.
> > > Numpy: 1.0.dev3327
> > > Python: 2.4.3
> >
> > This sounds awfully reminiscent of the bug I recently mentioned:
> >
> > http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3312099
> >
> > We left a fresh run over the weekend, but my office mate is currently
> > out of the office and his terminal is locked, so I don't know what the
> > result is. I'll report shortly: we followed Travis' instructions and
> > ran with a fresh SVN build which includes the extra warnings he added
> > to the dealloc routines. You may want to try the same advice; perhaps
> > with information from both of us, the gurus may zero in on the problem,
> > if indeed it is the same.
>
> I talked about the reference counting issue. One problem is not
> incrementing the reference count when it needs to be. The other problem
> could occur if the reference count was not decremented when it needed to
> be and the reference count wrapped from MAX_LONG to 0. This could also
> create the problem and would be expected for "long-running" processes.

I just posted the log from that run in the other thread. I'm not sure if
that helps you any, though. I'm running the code again to see if we see
your new warning fire, and will report back.

Cheers,

f
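P.S. For anyone else chasing this: the second failure mode Travis describes (a reference that is never released, so the count climbs on every call until it eventually wraps) would look roughly like the sketch below in a hand-written extension wrapper. This is only an illustration of the pattern, with made-up names; it is not the actual PyMC code or the f2py-generated wrapper.

/* Hypothetical sketch of the leak pattern: the wrapper forgets one
 * Py_DECREF per call, so the input array's refcount grows on every
 * likelihood evaluation.  (A real module would call import_array()
 * in its init function; that boilerplate is omitted here.) */
#include <Python.h>
#include <numpy/arrayobject.h>

static PyObject *
likelihood(PyObject *self, PyObject *args)      /* name is made up */
{
    PyObject *input, *arr;
    double loglike = 0.0;

    if (!PyArg_ParseTuple(args, "O", &input))
        return NULL;

    /* PyArray_FROM_OTF returns a NEW reference: either the same object
     * with its count bumped, or a freshly converted array. */
    arr = PyArray_FROM_OTF(input, NPY_DOUBLE, NPY_IN_ARRAY);
    if (arr == NULL)
        return NULL;

    /* ... call into the Fortran routine here, accumulate loglike ... */

    /* BUG: the matching Py_DECREF(arr) is missing.  Each call leaks one
     * reference; in a tight loop running for hours the count can wrap,
     * the array is deallocated while still in use, and the interpreter
     * dies with a backtrace much like the one David posted. */
    /* Py_DECREF(arr);   <-- the fix */

    return PyFloat_FromDouble(loglike);
}

The opposite mistake (storing a borrowed reference without the matching Py_INCREF) tends to crash much sooner, since the object can be freed while something still points at it; the wrap-around case is the one that needs hours of iterations to show up, which matches what David is seeing.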