From: Stephen D. <sd...@gm...> - 2006-10-04 19:33:45
On 10/4/06, Zoran Vasiljevic <zv...@ar...> wrote:
>
> On 04.10.2006, at 20:00, Stephen Deasey wrote:
>
> > On 10/4/06, Zoran Vasiljevic <zv...@ar...> wrote:
> >>
> >> On 04.10.2006, at 18:44, Stephen Deasey wrote:
> >>
> >>>> OK. I can accept that. Still, you put this into a frequently
> >>>> called procedure and there you go (literals are freed, AFAIK,
> >>>> when procedure scope is exited): again you have global locking.
> >>>
> >>> I'm not sure what you're saying here. What is freed, and when?
> >>
> >> The literal object "themutex" is gone after the procedure exits,
> >> AFAIK. This MIGHT need to be double-checked though, as I'm not
> >> 100% sure whether literals get garbage collected at that point
> >> OR at interp teardown.
> >
> > If that were true then my scheme would be useless. But I don't think
> > it is. I added a log line to GetArgs in nsd/tclthread.c:
> >
> >     if (Ns_TclGetOpaqueFromObj(objv[2], type, &addr) != TCL_OK
> >             && Ns_TclGetAddrFromObj(interp, objv[2], type, &addr) != TCL_OK) {
> >
> >         Ns_Log(Warning, "tclthread.c:GetArgs: objv[2] not an "
> >                "address object. Looking up in hash table...");
> >
> >         Tcl_ResetResult(interp);
> >         Ns_MasterLock();
> >         ...
> >
> > i.e. if the 'name' of the thread primitive given to the command does
> > not already contain a pointer to the underlying object, and it cannot
> > be converted to one by deserialising a pointer address, then take the
> > global lock and look the name up in the hash table. Log this case.
> >
> >     % proc lockunlock args {ns_mutex lock m; ns_mutex unlock m}
> >     % lockunlock
> >     [04/Oct/2006:18:39:25][8985.3086293920][-thread-1208673376-] Warning:
> >         tclthread.c:GetArgs: objv[2] not an address object. Looking up
> >         in hash table...
> >     % lockunlock
> >     % lockunlock
>
> Hmhmhmhmhm....
> look what I get after doing this change in tclthread.c
> (exactly the same spot):
>
>     if (Ns_TclGetOpaqueFromObj(objv[2], type, &addr) != TCL_OK
>             && Ns_TclGetAddrFromObj(interp, objv[2], type, &addr) != TCL_OK) {
>         Ns_Log(2, "MISS THE OBJ CACHE : %p", objv[2]);
>         Tcl_ResetResult(interp);
>         Ns_MasterLock();
>
> I do the following from the nscp line:
>
>     server1:nscp 9> proc lu count {ns_log notice LOCK.$count; ns_mutex lock m;
>                         ns_mutex unlock m; ns_log notice UNLOCK.$count}
>     server1:nscp 11> lu 1
>     server1:nscp 12> lu 2
>     server1:nscp 13> lu 3
>
> and this is what comes in the log:
>
>     [04/Oct/2006:20:25:22][3026.41967104][-nscp:3-] Notice: LOCK.1
>     [04/Oct/2006:20:25:22][3026.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f4ff8
>     [04/Oct/2006:20:25:22][3026.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f4ff8
>     [04/Oct/2006:20:25:22][3026.41967104][-nscp:3-] Notice: UNLOCK.1
>     [04/Oct/2006:20:25:24][3026.41967104][-nscp:3-] Notice: LOCK.2
>     [04/Oct/2006:20:25:24][3026.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f4ff8
>     [04/Oct/2006:20:25:24][3026.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f4ff8
>     [04/Oct/2006:20:25:24][3026.41967104][-nscp:3-] Notice: UNLOCK.2
>     [04/Oct/2006:20:25:27][3026.41967104][-nscp:3-] Notice: LOCK.3
>     [04/Oct/2006:20:25:27][3026.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f4ff8
>     [04/Oct/2006:20:25:27][3026.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f4ff8
>     [04/Oct/2006:20:25:27][3026.41967104][-nscp:3-] Notice: UNLOCK.3
>
> So, what now?

The control port is not typical: it's probably evaluating each line with
the "no-compile" flag. Neither is the runtest shell, which is what I was
using. If I run the test as a registered proc in a conn thread, with a
proc named "foo" and a mutex named "foo", things work fine -- no clash
between the name of the proc "foo" and the mutex "foo". The experiment
that does cause two log lines is a lockunlock proc and a broadcast proc
run in the same registered proc.
The single atom "m" is being shimmered between the mutex and cond types.
It looks like these names live in the same namespace because Tcl caches
identical strings in an object cache to save memory.

It's not the end of the world. You just need to make sure the names used
are unique, which is not a completely unreasonable thing to do. I mean,
nsv array names are unique too, and that's the alternative if you have to
store handles. But I don't know whether that makes this technique not
useful... Opinions? Other options?

> >> OTOH, when I say
> >>
> >>     set mutexhandle [ns_mutex create]
> >>
> >> I can do whatever I want with mutexhandle, it will always point to
> >> that damn mutex. I need not do some other lookup in addition.
> >> I can put it into nsv, transfer it over network back to myself...
> >
> > It's not that you *can* put it into an nsv array, it's that you *have*
> > to, because how else are you going to communicate the serialised
> > pointer which is a mutex handle?
>
> No way. I have to put it there. You need not necessarily
> put the mutex name there as it is "known".

This is true.

> > And you *have* to do it because what good is a mutex without more than
> > one thread?
>
> Correct.

> > And if you have more than one thread you have more than one interp,
> > each with its own global namespace. So how do you refer to a single
> > mutex from within each interp?
>
> Using the handle I put in the nsv, how else?

Right. So if handle management were implicit you wouldn't have to bother
shuffling handles between interps via nsv arrays. Which would be a good
thing.

> > nsv arrays *also* incur locking. Plus you have the effort of having to
> > cache the mutex in a thread local variable, or else you incur the nsv
> > locking cost each time.
>
> Right. But finer-grained locking, not the global lock.
> Instead, the nsv bucket is locked. The global lock is just my laziness.
Although if the handle is cached it doesn't matter, because the lock is
only taken once, the first time. And it has to be cached, or else there's
no point in doing this...