[Orbit-python-list] Spec compliance, performance, memory leaks
From: Duncan G. <dg...@uk...> - 2001-11-29 12:22:04
Hi,
I've been having a quick look at ORBit-Python, and I discovered a
couple of mapping errors, a surprising lack of performance, and a
memory leak. Not bad for one little program :-). I'm using
ORBit-Python 0.3.1 and ORBit 0.5.12.
The IDL, server and client files are attached. The tests just do lots
of invocations, transmitting various kinds of sequences and arrays.
The tests can run in two ways. If you run pserver.py with no arguments
it prints an IOR. Give that IOR as an argument to pclient.py, and it
will do a cross-process test. If you run pserver.py with an argument
of -l, it will import pclient and run the tests in the same process.
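In outline the wiring looks something like this (just a sketch, not the
attached files themselves; everything beyond pserver.py/pclient.py --
the operation name, iteration count, and so on -- is made up for
illustration):

    # pclient.py, roughly.  In the real client the generated stubs are
    # imported and the reference is _narrow()ed first.
    import sys
    import CORBA

    def run_tests(obj):
        # invoke each test operation many times and report the elapsed time
        for i in range(1000):
            obj.op1(42)                 # operation name assumed

    if __name__ == "__main__":
        orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)
        obj = orb.string_to_object(sys.argv[1])   # IOR printed by pserver.py
        run_tests(obj)

    # pserver.py -l does the same thing in one process, roughly:
    #     import pclient
    #     pclient.run_tests(local_object_reference)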
First the mapping errors. One of the operations is:
    typedef octet octeta[200];
    octeta op2(in octeta a);
The CORBA mapping says "Sequences and arrays of octets and characters
are mapped to the string type for efficiency reasons", so the array of
octets should be passed as a string. When the test does that,
ORBit-Python says:
** WARNING **: Array type must be either list or tuple
and then freezes totally. Ctrl-C can't stop it -- you have to use
kill.
The tests also check sequence of octet, and ORBit-Python properly
accepts a string there. The octet array test is commented out in the
attached file.
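Concretely, the two cases look like this (a sketch of the calls; op2 is
from the IDL above, the sequence operation's name is assumed):

    data = "\0" * 200       # 200 octets as a Python string, per the mapping

    obj.op_octet_seq(data)  # sequence<octet>: ORBit-Python accepts this
    obj.op2(data)           # octet[200]: warns "Array type must be either
                            #   list or tuple" and then hangs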
Another of the operations tests unsigned longs. I noticed that
ORBit-Python is mapping unsigned long to Python int, which is wrong.
It should use Python long int, so values above 2**31 - 1 can be
represented.
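For example (sketch, operation name assumed):

    # 2**32 - 1 is a valid CORBA unsigned long but doesn't fit in a
    # 32-bit Python int, so the mapping has to use a Python long.
    value = obj.op_ulong(4294967295L)
    assert value == 4294967295L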
Now the performance. I was surprised that ORBit-Python was slower than
omniORBpy in almost all the tests, given that omniORBpy has lots of
locking overhead due to being multi-threaded, and because there's more
Python code involved. The results of running the server and client in
different processes on the same machine were as follows. Times are in
seconds, so lower is better.
                    ORBit-Python   omniORBpy
single long:            2.231        2.172
octet sequence:         2.309        2.495
short sequence:         6.460        3.655
short array:            6.459        3.624
long sequence:          8.731        3.679
long array:             8.619        3.663
ulong sequence:         9.168        9.742
ulong array:            9.106        9.710
double sequence:        8.817        3.944
double array:           8.377        3.980
omniORBpy's big spike in the times for ulong is because it is properly
using Python longs to represent them, and they are much slower.
ORBit-Python uses Python ints (once it has dealt with the initial
longs the test program gives it) so it doesn't have such a spike. It
seems ORBit-Python is quite slow with the larger types.
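For reference, each row is just a timed batch of invocations, along
these lines (a sketch; the labels, counts and example call are
assumptions, not the attached code):

    import time

    def time_case(label, call, arg, iterations=1000):
        start = time.time()
        for i in range(iterations):
            call(arg)
        print "%-16s %.3f" % (label + ":", time.time() - start)

    # e.g. time_case("long sequence", obj.op_long_seq, range(50))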
Running between two machines across the network results in a similar
spread of times:
                    ORBit-Python   omniORBpy
single long:            4.142        4.008
octet sequence:         5.003        5.869
short sequence:        11.842        8.346
short array:           11.308        8.340
long sequence:         16.562       11.296
long array:            16.358       11.319
ulong sequence:        17.069       16.585
ulong array:           16.903       16.541
double sequence:       16.650       11.903
double array:          16.589       11.983
Now the real surprise, running in-process using pserver.py -l:
                    ORBit-Python   omniORBpy
single long:            1.963        0.463
octet sequence:         2.025        0.477
short sequence:         6.195        0.906
short array:            6.017        0.901
long sequence:          8.261        0.886
long array:             7.981        0.897
ulong sequence:         8.789        1.226
ulong array:            8.518        1.231
double sequence:        8.399        0.913
double array:           8.098        0.916
Eek! It looks like ORBit-Python is still using the Unix socket
transport. Is it meant to do that?
Finally, I noticed that with ORBit-Python the client process grows
continuously throughout the run, so there is obviously a memory leak
somewhere. The same thing happens with the colocated test.
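For what it's worth, the growth is easy to watch by sampling the process
size between test batches, e.g. on Linux (a sketch; relies on /proc, so
Linux only):

    import os, string

    def rss_kb():
        # pull VmRSS out of /proc/<pid>/status
        for line in open("/proc/%d/status" % os.getpid()).readlines():
            if line[:5] == "VmRSS":
                return string.split(line)[1]
        return "?"

    # printed between batches, this number keeps climbing under ORBit-Python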
Hope that's all been of interest/use.
Cheers,
Duncan.