From: Christopher T K. <squ...@WP...> - 2004-07-01 20:36:26
(I originally posted this in comp.lang.python and was redirected here.)

In a quest to speed up numarray computations, I tried writing a 'threaded array' class for use on SMP systems that would distribute its workload across the processors. I hit a snag when I found out that the Python interpreter is not reentrant (the global interpreter lock permits only one thread at a time to execute Python bytecode), which effectively disables parallel processing in Python.

I've come up with two solutions to this problem, both involving numarray's C functions that perform the actual vector operations:

1) Surround the C vector operations with Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS, thus allowing the vector operations (which don't access Python structures) to run in parallel with the interpreter. Python glue code would take care of threading and locking.

2) Move the parallelization into the C vector functions themselves. This would likely perform worse, since a chain of vector operations couldn't be combined into one threaded operation.

I'd much rather do #1, but will playing around with the interpreter state like that cause any problems?

Update from the original posting: I've partially implemented method #1 for Float64s. Running on four 2.4GHz Xeons (possibly two physical CPUs with hyperthreading?), I get about a 30% speedup when dividing 10 million Float64s, but a small (<10%) slowdown for addition and multiplication. The operation was repeated 100 times, with the threads created outside of the loop (i.e. the threads weren't recreated for each iteration). Is there really that much overhead in Python? I can post the code I'm using and the numarray patch if it's requested.
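
In the meantime, here is a rough sketch of what the Python glue side of method #1 amounts to. This is illustrative, not the actual patch: the names threaded_op and NTHREADS are made up, it assumes numarray ufuncs accept an output array as a third argument (as Numeric's do), and for brevity it spawns fresh threads per call, whereas my measurements reused a thread pool:

import threading
import numarray

NTHREADS = 4   # number of worker threads (illustrative)

def _work(op, a, b, out, lo, hi):
    # Apply one slice of the binary vector operation.  numarray
    # slices are views, so writing into out[lo:hi] fills the
    # shared result array.  If the underlying C loop releases
    # the GIL (method #1), the slices run in parallel on SMP.
    op(a[lo:hi], b[lo:hi], out[lo:hi])

def threaded_op(op, a, b, out, nthreads=NTHREADS):
    # Split the arrays into nthreads contiguous chunks and run
    # op on each chunk in its own thread.
    n = len(a)
    step = (n + nthreads - 1) // nthreads   # ceiling division
    threads = [threading.Thread(target=_work,
                                args=(op, a, b, out, i, min(i + step, n)))
               for i in range(0, n, step)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

Without the Py_BEGIN_ALLOW_THREADS/Py_END_ALLOW_THREADS pair around the C loop, the threads above simply serialize on the GIL and you get no speedup at all.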
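
The timing loop was along these lines (again a sketch, assuming numarray's Numeric-style constructors; note that because this version recreates threads on every call, it will overstate the threading overhead relative to my numbers):

import time
import numarray

N = 10**7
a = numarray.arange(N, type=numarray.Float64) + 1.0
b = a + 1.0
out = numarray.zeros(N, numarray.Float64)

# Threaded division, repeated 100 times.
t0 = time.time()
for i in range(100):
    threaded_op(numarray.divide, a, b, out)
print(time.time() - t0)

# Single-threaded baseline for comparison.
t0 = time.time()
for i in range(100):
    numarray.divide(a, b, out)
print(time.time() - t0)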