Squeak is an implementation of Smalltalk-80. The Linux port of its virtual
machine currently wraps every 'primitive' call (Smalltalk lingo for accessing
the native bits of the VM from the Smalltalk level) in a pair of gettimeofday()
calls in order to determine the time spent in native code - it's got something
to do with detecting slow native calls so that timing-sensitive VM-level work
can be adapted accordingly. Don't ask me for the gory details, I just know
it's there :-).
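For the curious, a minimal sketch of what that wrapping looks like - the names
(call_primitive, elapsed_usecs) are hypothetical and this is not the actual
Squeak VM source, just the pattern described above:

    #include <sys/time.h>

    /* Hypothetical sketch: bracket each primitive in two
     * gettimeofday() calls so the VM can see how long it
     * spent in native code. */
    static long elapsed_usecs;   /* accumulated time in primitives */

    void call_primitive(void (*primitive)(void))
    {
        struct timeval before, after;

        gettimeofday(&before, NULL);
        primitive();                      /* the native call itself */
        gettimeofday(&after, NULL);

        elapsed_usecs += (after.tv_sec  - before.tv_sec) * 1000000L
                       + (after.tv_usec - before.tv_usec);
    }

So every single primitive costs two system calls on top of the work it
actually does.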
In native Linux, a few thousand gettimeofday() calls don't really present much
of a problem - the system load is hardly noticeable. Under UML this changes a
bit: both the Squeak VM process and one of the UML (kernel handling?) threads
eat *lots* of CPU time, each around 50% (on a dual PIII Xeon). I presume this
has to do with all the context switching between the parties involved in
executing a gettimeofday() system call from UML.
Apart from fixing the VM (there is an itimer-based version as well, which is a
lot more gettimeofday()-friendly, that I'm hoping to try this week), is there
anything that can be done at the UML level to handle this? A different
implementation of the system call? Maybe - as was suggested on the Squeak
mailing list - mmap()ing a kernel page in which the kernel keeps a counter up
to date, plus a libc call that just reads the counter from that page, or
something along those lines?
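To make the suggestion concrete, here's a sketch of the user-space side. Note
that no such interface exists in UML today - /dev/timepage and the page layout
are pure assumption, standing in for whatever the kernel would actually
export:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/time.h>
    #include <unistd.h>

    /* Assumed layout of the shared page: a timeval the kernel
     * updates on every tick. */
    struct time_page {
        volatile struct timeval now;
    };

    static struct time_page *tp;

    int fast_gettimeofday(struct timeval *tv)
    {
        if (tp == NULL) {
            int fd = open("/dev/timepage", O_RDONLY);  /* hypothetical */
            if (fd < 0)
                return -1;
            tp = mmap(NULL, getpagesize(), PROT_READ, MAP_SHARED, fd, 0);
            close(fd);
            if (tp == (void *)MAP_FAILED)
                return -1;
        }
        *tv = tp->now;    /* plain memory read, no context switch */
        return 0;
    }

A real implementation would also need something like a sequence counter on
the page so a reader can detect a torn read while the kernel is updating the
timeval, but the basic idea is just that: turn a system call into a memory
read.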
Cees de Groot http://www.cdegroot.com <cg@...>
GnuPG 1024D/E0989E8B 0016 F679 F38D 5946 4ECD 1986 F303 937F E098 9E8B