Here's a little sample code.
from threading import Thread
from time import time
class Job( Thread ):
    def run( self ):
        v = 0L
        for i in xrange( 1, 200000 ):
            v = v + i
        self.value = v

def _test( n=1 ):
    jobs = []
    for i in range( n ):
        jobs.append( Job() )
    start = time()
    map( Job.start, jobs )
    map( Job.join, jobs )
    t = time() - start
    print "%d: %.3f/%.3f" % (n, t, n and t/n)

map( _test, range( 5 ))
Note that the example above doesn't get a speedup from multiple threads, even on a two CPU box.
Of course, if your threads are blocking on a JDBC call, you'll see a speedup... :-)
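To see why blocking work is the exception, here is a minimal sketch in modern (Python 3) syntax, using time.sleep as a stand-in for a blocking JDBC call (the blocking_call function is hypothetical, not from any JDBC API). While one thread sleeps, the others can run, so five 0.2-second "calls" overlap instead of taking a full second:

```python
from threading import Thread
from time import sleep, time

def blocking_call():
    # Stand-in for a blocking JDBC call: the thread just waits,
    # so other threads are free to run in the meantime.
    sleep(0.2)

threads = [Thread(target=blocking_call) for _ in range(5)]

start = time()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time() - start

# elapsed should be close to 0.2 s (the calls overlap),
# not the ~1.0 s that five sequential calls would take.
```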
These are microbenchmarks and are thus not worth the electrons it takes to view them.
Although Jython doesn't appear to have a "global interpreter lock" a la CPython, my 2-CPU NT box shows 1 CPU utilized for 1 thread, both CPUs fully utilized for two or more threads, but the times look like:
So, more threads use more CPU, but are slightly slower, even on a two-CPU box.
If you just run the 'run' method of all the jobs, instead of starting a new thread for each of them, you get better times:
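Concretely, that means calling run() as a plain method instead of start(), which keeps all the work on the calling thread with no thread-creation or context-switch overhead. A sketch in modern (Python 3) syntax, using the same Job loop as above:

```python
from threading import Thread
from time import time

class Job(Thread):
    def run(self):
        # Same CPU-bound loop as the threaded benchmark above.
        v = 0
        for i in range(1, 200000):
            v = v + i
        self.value = v

jobs = [Job() for _ in range(4)]

start = time()
for job in jobs:
    job.run()  # plain method call on the current thread; no new thread starts
sequential_time = time() - start
```

Since nothing runs concurrently, each job finishes before the next begins, and the total time is simply the sum of the per-job times, without any switching overhead.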
FWIW, the numbers for CPython are much worse (thanks to that global interpreter lock):
CPython only uses one CPU, but still incurs the overhead of thread switching - not a pretty picture.
These results were obtained on:
Jython 2.1a1 on java1.3.0 (JIT: null)
Sun JDK 1.3.0
WinNT 4.0 SP 6a
2x PIII 600
R Datta wrote:
> Hi all,
> I am working on a program to simulate stress testing (large number of
> users) on JDBC connections. I would like to multithread the JDBC operations.
> I would appreciate some input (samples, tips etc) with anyone that has
> tried multithreading within jython scripts.
> Raj Datta