Re: [Sablevm-developer] Question about _svmf_exit_object_monitor()
From: Archie C. <ar...@de...> - 2003-03-21 17:47:15
Prof. Etienne M. Gagnon wrote:
> You're the second person to ask me about it. The first person was one
> of the members of my evaluation committee during my Ph.D. defense. :-)
> So, as you were not there, I'll restate my answer.

Wow, such eminent company... :-)

> > My question is: is this inflate operation strictly necessary? Won't
> > these other threads properly handle inflating the lock when they wake up?
>
> It is not *strictly* necessary, but it is highly desirable to avoid a
> potential pathological case. [Avoiding another pathological situation
> (i.e. busy-wait) was the reason for adding this more complex unlocking
> protocol!]
>
> Imagine that the thin lock owner thread holds a very large number of
> thin locks, and that it repeatedly acquires and releases one last
> thin lock, e.g.:
>
>   synchronized(o1){
>     synchronized(o2){
>       ...
>       ...
>       synchronized(o10000000){
>         for(i=0; i<1000000000000;i++){
>           synchronized(a){
>             do something short;
>           } // synch
>         } // loop
>         ...
>
> Assuming there is contention on o1, o2, ..., o10000000, when "a" is
> unlocked, the VM *must* visit the list of 10000000 contending threads.
> If we don't inflate o1, o2, ..., then we must keep the list of contending
> threads. On the next iteration, when we release "a" again, we must
> revisit this whole 10000000-element list (searching for threads
> contending on "a"). Of course, you can start thinking about how to
> maintain a more complex data structure so that you don't have to
> revisit the whole list, but then you end up with complex code in an
> operation that we want to be as simple and as fast as possible, as it
> is on the critical path of application execution. Inflating the
> locks, on the other hand, allows us to keep a very simple and
> straightforward implementation without having to worry about
> pathological cases.
>
> Personally, I think it would be more interesting to experiment and
> study the necessity or not of having a "lock deflation" policy to
> revert some fat locks to thin locks. Mind you that deflating a fat
> lock can be quite tricky to do, regardless of the policy, if you want
> to avoid over-synchronization or bugs in corner cases.
>
> Does this answer your question satisfactorily?

Yes, it does... thanks very much.

For my purposes, then, we can compromise: simply try to inflate the lock
when unlocking, but if an out-of-memory situation occurs, just leave the
lock deflated. This would permit the "unlock" operation to be treated as
"should always succeed unless there is a bug", yet we still address the
concern you raise above in all except out-of-memory situations. And in
those situations, performance is the least of your problems, so it's OK
to fall back.

I had also wondered about the question of deflating fat locks. It would
definitely be interesting to explore the tradeoffs. The main question is:
how can the VM tell (or guess) how often an object is contended? For
example, we could devote a few bits in the object's type descriptor that
indicate what percentage of this type's objects required inflating their
locks. Then we could deflate only if this value was "very low", etc.

Thanks,
-Archie

__________________________________________________________________________
Archie Cobbs     *     Precision I/O     *     http://www.precisionio.com
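
[Editor's note: a minimal C sketch of the compromise proposed in the reply
above. All names and types here (exit_thin_lock, fat_lock, thread, etc.)
are hypothetical stand-ins, not SableVM's actual code; the point is only
the control flow: the release itself never fails, and an out-of-memory
failure merely skips the inflation step.]

    #include <stddef.h>
    #include <stdlib.h>

    /* Hypothetical, simplified types -- not SableVM's actual data structures. */
    typedef struct fat_lock { int placeholder; /* mutex, wait queue, ... */ } fat_lock;
    typedef struct object   { int thin_word; fat_lock *fat; } object;
    typedef struct thread   { object *wanted; struct thread *next; } thread;

    /* Release a thin lock, then opportunistically inflate the locks that
     * other threads are contending on.  If allocating a fat lock fails
     * (out of memory), skip the inflation: the unlock itself still
     * succeeds, and only the performance benefit discussed above is lost. */
    void exit_thin_lock(object *obj, thread *contending)
    {
        obj->thin_word = 0;                   /* the release proper: cannot fail */

        for (thread *t = contending; t != NULL; t = t->next) {
            if (t->wanted->fat != NULL)
                continue;                     /* already inflated */

            fat_lock *fat = malloc(sizeof *fat);
            if (fat == NULL)
                continue;                     /* OOM: leave this lock thin */

            t->wanted->fat = fat;             /* "inflation"; details omitted */
        }
    }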
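
[Editor's note: a similarly hypothetical sketch of the per-type deflation
heuristic floated at the end of the message, using whole counters rather
than a few packed bits in the type descriptor; again, none of these names
exist in SableVM.]

    #include <stdint.h>

    /* Hypothetical per-type statistics -- a coarse, counter-based variant
     * of the "few bits in the type descriptor" idea, for illustration only. */
    typedef struct type_info {
        uint32_t instances_locked;    /* instances of this type that were locked */
        uint32_t instances_inflated;  /* how many of those ended up inflated     */
    } type_info;

    /* Record that an instance of this type had its lock inflated. */
    void note_inflation(type_info *t)
    {
        t->instances_inflated++;
    }

    /* Deflation heuristic: only bother deflating a fat lock if inflation
     * is rare for this type, say under 1% of locked instances. */
    int deflation_worthwhile(const type_info *t)
    {
        if (t->instances_locked == 0)
            return 0;
        return (t->instances_inflated * 100u) / t->instances_locked < 1;
    }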