Re: [Sablevm-developer] Threading support in SableVM
From: Chris P. <chr...@ma...> - 2004-02-20 08:12:59
Archie Cobbs wrote:
> Chris Pickett wrote:
>
>> "Locking any lock conceptually flushes all variables from a thread's
>> working memory, and unlocking any lock forces the writing out to main
>> memory of all variables that the thread has assigned."
>>
>> I'm pretty sure that when the spec says "working memory" it does not
>> mean "processor cache" but "thread-local heap".  SableVM doesn't have
>> thread-local heaps (we discussed it the other day), and to me a large
>> part of the JMM appears to be unimportant for SableVM.
>
> I think that's mistaken... by "working memory" they just mean
> a conceptual "working memory" that only the one thread has access to.
> I.e., the processor cache metaphor is a good one here.
> Thread-local heaps are a different topic I think.

Okay, I think I finally understand.  Thank you all for your patience ...
and I apologize for all the long emails (they are coming to an end, as I
think a reasonable solution is close).  I'll first explain my current
conception of things -- I'd be grateful for any comment as to whether
this sounds right or not.

In SableVM, all threads access the same heap, or "main memory".  At the
hardware level, when a heap memory location is read into the cache, this
is the same operation as bringing the value into the Java thread's
"working memory".  On a uniprocessor, there is only one cache, so in
effect all threads' "working memories" are visible to each other (and so
it is as if there are no working memories at all).  Okay, actually the
cache might consist of L1 and L2, but it doesn't matter w.r.t.
visibility.

On an SMP machine, the "main memory" is the non-cache heap memory,
visible to all processors.  Threads may reside on the same processor,
and if this is the case, their "working memories" are also visible to
each other, and no problems arise (functionally identical to the
uniprocessor case).  If they are on separate processors, then in order
to meet the requirements of the JMM, each time a lock is acquired by a
thread, ALL of the lines brought into the cache by the current thread as
a result of reading values from the Java heap must be flushed: they are
written back to main memory if they were touched by the thread,
otherwise they are simply invalidated.  When a lock is released, ONLY
those lines associated with the thread that have been modified since
acquiring the lock need flushing.

So ... assuming that's now correct, I think there are three things we
might consider doing (some or all of which may be gibberish).  Rough
sketches of (2) and (3) follow the list.

1) Flush the entire processor cache as part of each MONITORENTER and
MONITOREXIT, or when entering or leaving a synchronized method.  This
would involve calling / executing one of:

   a) WBINVD (not available in user mode),
   b) CLFLUSH on the entire cache (one line at a time),
   c) a kernel whole-cache flush routine,
   d) flooding the cache by reading in a bunch of non-Java-heap data.

2) Keep track of which Java heap addresses are read / written by a
thread, and flush only the cache lines that match those addresses as
part of MONITORENTER / MONITOREXIT, or when entering or leaving a
synchronized method.  This would involve calling / executing:

   a) CLFLUSH for each line, or
   b) a line-specific kernel cache flush routine.

3) Use the memory barrier instructions:

   a) MFENCE on each (Java only?) lock/unlock ensures that all loads and
   stores occurring before the lock/unlock are globally visible before
   any load or store that follows the MFENCE,

   b) *** while it appears that SFENCE (identical to MFENCE except only
   stores are serialized) might be appropriate for the unlock operation,
   this would mean a load operation depending on a store ordered before
   the SFENCE might occur out-of-order, which would be bad. ***
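To make (2) concrete, here is roughly what I'm imagining.  This is
completely untested; the tracking structure and all of the names are
invented for the example (nothing like this exists in SableVM today),
and it assumes an SSE2-capable x86 and GCC inline assembly, since that's
where CLFLUSH comes from:

#include <stddef.h>

#define MAX_TRACKED 1024   /* arbitrary bound, just for the sketch */

struct tracked_accesses
{
  void *addr[MAX_TRACKED];
  size_t count;
};

/* The interpreter would call this on every Java heap read / write it
   performs between lock operations. */
static void
track_access (struct tracked_accesses *tracked, void *addr)
{
  if (tracked->count < MAX_TRACKED)
    tracked->addr[tracked->count++] = addr;
  /* overflow would have to fall back to a whole-cache flush, i.e. (1) */
}

/* Called as part of MONITORENTER / MONITOREXIT. */
static void
flush_tracked_lines (struct tracked_accesses *tracked)
{
  size_t i;

  for (i = 0; i < tracked->count; i++)
    {
      /* CLFLUSH takes any byte address and writes back + invalidates
         the whole cache line containing it. */
      __asm__ __volatile__ ("clflush (%0)"
                            : : "r" (tracked->addr[i]) : "memory");
    }

  tracked->count = 0;
}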
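And for (3), something along these lines.  The "monitor" here is just a
pthread mutex so the example stands alone, and the function names are
again made up -- the only real point is where the MFENCE sits relative
to the lock and unlock:

#include <pthread.h>

static inline void
memory_barrier (void)
{
  /* MFENCE: no load or store after this point becomes globally visible
     before the loads and stores that precede it.  The "memory" clobber
     also keeps GCC itself from reordering accesses across the asm. */
  __asm__ __volatile__ ("mfence" : : : "memory");
}

struct monitor
{
  pthread_mutex_t mutex;
};

void
monitor_enter (struct monitor *monitor)
{
  pthread_mutex_lock (&monitor->mutex);
  memory_barrier ();   /* JMM "lock": later reads must not use stale values */
}

void
monitor_exit (struct monitor *monitor)
{
  memory_barrier ();   /* JMM "unlock": this thread's writes must be
                          visible before anyone else takes the lock */
  pthread_mutex_unlock (&monitor->mutex);
}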
Finally: after I wrote this, I looked again at question #118 of the
comp.programming.threads FAQ, and it seems to agree with what I've
written, and it also makes me think that method (3) is the best.

http://www.lambdacs.com/cpt/FAQ.html
(careful, it will probably kill Mozilla; lynx is better)

Cheers,
Chris