From: Dan W. <dc...@re...> - 2005-07-12 14:57:22
Hi,

I'm using pyOpenSSL for the Fedora Extras build system after discovering that m2crypto was (1) less stable and (2) more complicated. The code I've been developing is available here, and implements an XMLRPC server + client and an HTTP server + client with two-way client/server certificate verification:

http://cvs.fedora.redhat.com/viewcvs/extras-buildsys/?root=fedora

The interesting stuff is probably in the 'common' directory, including AuthedXMLRPCServer.py, XMLRPCServerProxy.py, HTTPServer.py, and HTTPSURLOpener.py. It's inspired, in part, by the pyOpenSSL examples, m2crypto's workarounds, RHN/up2date's use of pyOpenSSL, and some other random stuff. It mostly works; feel free to look it over for bugs or use it as an example.

On to the problem... Both the XMLRPC server/client and the HTTPS server/client have built-in tests that make heavy use of threads. The pyOpenSSL package in Fedora Core is _not_ built with OpenSSL thread safety; a patch that adds it is attached to this email. However, even with that patch, python falls over fairly quickly with segfaults on multi-CPU boxes, while single-CPU boxes work 90% of the time and segfault after a while. Turning off SSL in the test cases results in success. Debug builds of python running the test cases with SSL fail fairly quickly with this message:

  Fatal Python error: UNREF invalid object
  Abort

So I've thought of a number of possibilities here:

1) The pyOpenSSL locking patch I've applied isn't working correctly, or I've forgotten some bits
2) Maybe we need to grab Python locks in the pyOpenSSL locking patch in addition to the local pthreads lock
3) Perhaps pyOpenSSL needs to guard its calls into OpenSSL with Python locks too
4) Incorrect reference counting in pyOpenSSL?
5) Incorrect reference counting in python itself?

I've put a couple of rough sketches of what I mean by (1), (2), and (3) at the bottom of this mail.

I'd be very grateful if anyone has tips on how to debug this sort of thing, or has insights/ideas about threading, python, OpenSSL, and pyOpenSSL. I'd be happy to provide more condensed test cases than just the CVSweb link above, if that would help.

Thanks!
Dan
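
For anyone unfamiliar with what the locking patch has to do: OpenSSL itself is only thread-safe if the application installs locking and thread-ID callbacks. Below is a minimal sketch, assuming pthreads; it shows the standard CRYPTO_set_locking_callback() setup, not the exact contents of the attached patch.

/* Minimal sketch of the locking callbacks OpenSSL needs for thread
 * safety, assuming pthreads.  Standard setup, not the attached patch. */
#include <stdlib.h>
#include <pthread.h>
#include <openssl/crypto.h>

static pthread_mutex_t *ssl_locks;

/* OpenSSL calls this to lock/unlock one of its internal locks. */
static void locking_cb(int mode, int n, const char *file, int line)
{
    if (mode & CRYPTO_LOCK)
        pthread_mutex_lock(&ssl_locks[n]);
    else
        pthread_mutex_unlock(&ssl_locks[n]);
}

/* OpenSSL calls this to identify the current thread. */
static unsigned long id_cb(void)
{
    return (unsigned long) pthread_self();
}

void init_ssl_locking(void)
{
    int i;
    ssl_locks = malloc(CRYPTO_num_locks() * sizeof(pthread_mutex_t));
    for (i = 0; i < CRYPTO_num_locks(); i++)
        pthread_mutex_init(&ssl_locks[i], NULL);
    CRYPTO_set_id_callback(id_cb);
    CRYPTO_set_locking_callback(locking_cb);
}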
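And here's one way to read (2) and (3): in a C extension, code that blocks inside OpenSSL should release the GIL so other Python threads can run, and any OpenSSL callback that re-enters Python must reacquire it first. A hypothetical sketch follows; handshake_wrapper and verify_cb are made-up names for illustration, not pyOpenSSL's actual functions.

/* Hypothetical sketch of (2)/(3).  The function names are invented;
 * this is not pyOpenSSL's real code. */
#include <Python.h>
#include <openssl/ssl.h>

/* (3): release the GIL while OpenSSL blocks; OpenSSL's own locks
 * (see the first sketch) then protect its internal state. */
static int handshake_wrapper(SSL *ssl)
{
    int ret;
    Py_BEGIN_ALLOW_THREADS
    ret = SSL_do_handshake(ssl);
    Py_END_ALLOW_THREADS
    return ret;
}

/* (2): once the GIL has been dropped, any OpenSSL callback that
 * touches Python objects must grab it back first.
 * PyGILState_Ensure() is available since Python 2.3. */
static int verify_cb(int preverify_ok, X509_STORE_CTX *ctx)
{
    PyGILState_STATE gstate = PyGILState_Ensure();
    /* ... call back into the Python verify function here ... */
    PyGILState_Release(gstate);
    return preverify_ok;
}

Of course, if the real problem is a refcount bug, as in (4) or (5), none of this locking will help; the "UNREF invalid object" message from the debug build does smell like an object being DECREF'd one time too many.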