From: Stephen D. <sd...@gm...> - 2006-10-06 21:41:15
On 9/25/06, Zoran Vasiljevic <zv...@ar...> wrote:
> Unix is not Unix, as I see...
>
> Please note this (interesting) document:
>
>     http://developers.sun.com/solaris/articles/event_completion.html
>
> Mostly interesting, as there is now a very powerful and scalable
> notification interface on both the Solaris and Mac OS X (aka BSD)
> Unixes. Windows has it as well, with Linux hanging pretty much
> behind (unfortunately).
>
> I wonder if we should start making ifdefs or wrappers to be able
> to benefit from the corresponding event notification interface
> available on the current platform.
>
> Mainly this would affect poll() usage in the driver thread,
> but it could also be used all around the code where we have to
> wait for multiple "things" to happen.
>
> Does anybody have something to comment about that?

What's the goal? We can certainly abstract socket event IO, with poll()
as a fallback. But some of the interfaces you mention here can handle
waiting on other things, e.g. signals. Is this what you want?

There's a start at socket event IO in nsd/event.c. I didn't try wrapping
anything other than poll(). My main motivation was to create a nice
interface to clean up the main driver loop in nsd/driver.c, and also to
make something capable of handling the reader/writer stuff that's in
there, which is a little trickier due to the IO being spread among more
than one thread. I have some patches for it somewhere. IIRC, the locking
isn't flexible enough... But it's code that exists, so you can hack on
it if you think it's suitable.

> Ah yes... why "brave new world"? Because Unix is diverging, and OS
> vendors are adding new (incompatible) functionality on a monthly
> basis. I wonder how this will all look in 10 years...

You'll be using Linux like everyone else and it won't matter... :-)
From: Stephen D. <sd...@gm...> - 2006-10-06 21:26:05
On 9/24/06, Vlad Seryakov <vl...@cr...> wrote:
> Here are the doctools-generated HTML files from the source manpages:
>
>     http://www.crystalballinc.com/vlad/nsdocs/toc.html
>
> doctools does all the auto-referencing, so to make it pretty we just
> need a CSS style.
>
>     make build-doc
>
> calls dtplite and produces this under doc/html.

Groovy.

I was wondering, do we need to rename the source man pages or move them
into their own directories according to the section they belong to? At
the moment we just have section n, Tcl commands. But we need section 3
for the C API. We need to install them to different locations on the
host system, and it would be a real pain to have to list them all
individually in the Makefile. So, we need doc/src/man/3 and
doc/src/man/n, or we need to start using ns_memoize.n rather than
ns_memoize.man.

Oh, and how do we handle man pages which document more than one command,
such as ns_cache or nsv? It looks like when you refer to a command
within a man page source, it auto-links to a man page by the same name.
But the ns_cache_eval command should be described in the ns_cache.man
page, so the link doesn't work. Can doctools handle this?
From: Stephen D. <sd...@gm...> - 2006-10-06 21:12:52
On 10/4/06, Zoran Vasiljevic <zv...@ar...> wrote:
>
> On 03.10.2006, at 01:01, Stephen Deasey wrote:
>
> > I was also wondering about the ns_proxy send/wait/receive. Why are
> > wait and receive separate commands?
>
> Because you can wait, get the timeout, do something, and then go
> repeat the waiting. It makes sense. You can't achieve this with a
> longer wait timeout OR with a timed [ns_proxy eval].
> All right, you can argue: one can [ns_proxy receive proxy ?timeout?],
> in which case you have the same behaviour. Correct. But what
> difference would it make, really?

Two commands make it twice as hard to use. All that's needed is a
timeout switch:

    set result [ns_proxy wait -timeout 10 $handle]

> > Also, does there need to be so many timeouts? The waittimeout is
> > 100 msec. That means if all else goes well, but the wait takes 101
> > msec, the evaluation will not be successful. But looking at the
> > other defaults, the script evaluator was prepared to wait
> > (gettimeout 5000 + evaltimeout infinite + sendtimeout 1000 +
> > recvtimeout 1000), or between six seconds and forever... Wouldn't a
> > single timeout do?
>
> There are lots of "places" something can go "wrong" on the
> communication path. Hence so many timeouts. At every place you send
> something or receive something, there is a timeout. The timeout to
> send a chunk of data to the proxy isn't the same as the timeout to
> wait for the proxy to respond after feeding it some command to
> execute. Basically, one can hardwire "sane" values for those
> communication timeouts (and there are sane values set there as
> defaults), but somebody may come into the need of adjusting them
> during runtime. You however do not need to do that, as sane defaults
> are provided everywhere.

The caller has a time budget. That's the total amount of time they're
prepared to wait for a result. The underlying implementation may break
the process down into sub-tasks, but the caller doesn't really care, or
know about this.

If you look at the sub-tasks you might be able to say what a reasonable
timeout might be. But note: this is an optimisation. A single time
budget works for all the different sub-tasks; a special timeout for
some sub-task only allows that task to *fail* quicker. It's not free
though. You get the odd effect of failing with a timeout when there's
plenty of time left in the budget.

The error handling is also weird. As it's currently implemented,
there's a different error code for each kind of timeout failure. The
caller is forced to deal with all the different ways a timeout might
occur. With a generic NS_TIMEOUT errorCode this can be skipped, but now
you're losing information. I think it needs to keep stats on the
different failures, with an ns_proxy_stats command to track them. This
is the data you will use to help you size the pool according to load
and server ability.

It's interesting to note that an individual error may not actually be
an error. The goal is to size the pool according to the resources
available, for maximum performance. If a caller times out because there
are no handles, well, maybe the system is doing its job? On the other
hand, if 80% of the callers are failing due to timeouts, then you have
a problem. Maybe your pool is undersized, or maybe your server is
overloaded. It's the percentage of failures which determines whether
there's a problem with the system.

A single concept of timeout, with statistics kept on failures, would be
easier to implement and describe, would prevent spurious timeouts, and
would allow administrators to size the proxy pools.
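For illustration, the caller's side under such a single-budget scheme
might look like this (a sketch only -- the -timeout switch on this
hypothetical eval call and the generic NS_TIMEOUT errorCode are the
proposal under discussion, not the module's current interface):

    # One budget covers get/send/eval/recv; any timeout surfaces as
    # the same generic errorCode, with per-phase stats kept internally.
    if {[catch {
        set result [ns_proxy eval -timeout 30000 $pool {do_foo ...}]
    } errMsg]} {
        if {[lindex $::errorCode 0] eq "NS_TIMEOUT"} {
            ns_log warning "proxy call timed out: $errMsg"
        } else {
            return -code error $errMsg
        }
    }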
From: Zoran V. <zv...@ar...> - 2006-10-06 20:56:16
On 06.10.2006, at 22:53, Zoran Vasiljevic wrote:
> OK. So far I can go. I could imagine an API without explicit handle
> usage. I can't, however, imagine scratching them (handles)
> altogether.

What I mean by that is that the API should be able to allow both types
of usage.
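Both types of usage, side by side, might look something like this (a
sketch following the ideas floated in this thread -- the exact
subcommand names and the -pool switch are illustrative, not settled
syntax):

    # Handle-less: check-out, eval and check-in happen implicitly.
    set sum [ns_proxy eval -pool default {expr {1 + 2}}]

    # Explicit handle: reserve one slave and keep state across calls.
    set handle [ns_proxy get mypool]
    ns_proxy eval $handle {set fd [open /tmp/data r]}
    set data [ns_proxy eval $handle {read $fd}]
    ns_proxy release $handle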
From: Zoran V. <zv...@ar...> - 2006-10-06 20:53:30
On 06.10.2006, at 22:32, Stephen Deasey wrote:
>
> Or, maybe you need a special pool for special threads, and a default
> pool for generic threads. If there are only a couple of special
> threads then the pool can be small -- you don't need to size the pool
> according to the many generic threads just to make sure there's
> always one left over for a special thread.

So, if I can read between the lines, you say we should get rid of the
handles in ns_whateverwenamethething? Completely? Irreversibly? Just
plain trash handles altogether and always lock pool, get slave, send
command, wait for result, read result, lock pool, put slave back.

You lose "reservation" but gain "no starvation". This is a trade-off.
In such situations it is more opportune to have both. So you can do it
with or without handles. You can "reserve" and run your commands in the
handle, or you can run directly, without a handle. This can all be part
of the API.

I simply hate NOT being able to reserve a process, open a file in it,
and use that descriptor for a long time. I can easily imagine how I
could profit from that, and trashing handles altogether just cuts me
off from this capability.

OK. So far I can go. I could imagine an API without explicit handle
usage. I can't, however, imagine scratching them (handles) altogether.

The default pool is still giving me headaches. How would you configure
default pool options?
From: Stephen D. <sd...@gm...> - 2006-10-06 20:52:07
On 10/6/06, Zoran Vasiljevic <zv...@ar...> wrote:
>
> On 06.10.2006, at 22:21, Vlad Seryakov wrote:
>
> > I vote for ns_exec and putting it into the core
>
> OK. A couple of simple thoughts about that:
>
>     ns_exec ?-pool poolname? script ?arg ...?
>
> is nice and short. It uses the default pool (as Stephen is
> advocating).
>
> But... how would we create pools, configure pool-wide options, etc.?
> As I could not figure that out, I thought it would be better NOT to
> specify the action (i.e. exec) but the vehicle (the slave process),
> hence ns_slave. Now you can easily replace ns_proxy with ns_slave and
> all (most) subcommands read nicely...
>
> So we have now:
>
>     ns_process
>     ns_exec
>     ns_slave
>
> ns_slave and ns_process specify the thing, and subcommands the
> action. It is trivial to expand to include pool management commands,
> but the API is rather "large" or "clumsy".
>
> ns_exec specifies an action only. It is difficult to fit in any other
> "action", for pool management for example, but it is small and
> compact.
>
> Any other idea?
>
> (Sometimes it can be REALLY difficult to give the baby a name...)

ns_exec works for me as well. I guess the wording doesn't work so well
for the sub-commands, but it does explicitly say what's special about
this command, whereas you'd have to read the docs for ns_slave.

ns_process is a good idea, but it feels like a very generic word to me.
It could imply an individual OS process, but I also think of it more as
an action: to process: 'do stuff' to this thing: 'process the loan
application, Miss Higgins...!'
From: Stephen D. <sd...@gm...> - 2006-10-06 20:42:23
On 10/6/06, Zoran Vasiljevic <zv...@ar...> wrote:
>
> etc? Or ns_slave?
>
>     ns_slave eval arg
>     ns_slave config thepool option ?value option value ...?
>
> Hm... that's not bad. What do you think?
> I think ns_slave would be "opportune". People will of course
> immediately ask: where is ns_master? But I guess you can't dance at
> all weddings...

That's not a bad idea. Tcl already has a concept of slave interps. The
difference being that ours run out-of-process. There's a whole bunch of
other stuff that goes along with Tcl slave interps, but the core is the
same: you send a script to evaluate and a result comes back.

> Would you integrate that in the core code or would you leave this as
> a module? Actually, I keep asking myself why we still stick to
> modules when some functionality is obviously needed all the time
> (nslog or nscp, for example)...

I think we need more modules, not less... :-)
From: Zoran V. <zv...@ar...> - 2006-10-06 20:36:59
On 06.10.2006, at 22:21, Vlad Seryakov wrote:
> I vote for ns_exec and putting it into the core

OK. A couple of simple thoughts about that:

    ns_exec ?-pool poolname? script ?arg ...?

is nice and short. It uses the default pool (as Stephen is advocating).

But... how would we create pools, configure pool-wide options, etc.? As
I could not figure that out, I thought it would be better NOT to
specify the action (i.e. exec) but the vehicle (the slave process),
hence ns_slave. Now you can easily replace ns_proxy with ns_slave and
all (most) subcommands read nicely...

So we have now:

    ns_process
    ns_exec
    ns_slave

ns_slave and ns_process specify the thing, and subcommands the action.
It is trivial to expand to include pool management commands, but the
API is rather "large" or "clumsy".

ns_exec specifies an action only. It is difficult to fit in any other
"action", for pool management for example, but it is small and compact.

Any other idea?

(Sometimes it can be REALLY difficult to give the baby a name...)
From: Stephen D. <sd...@gm...> - 2006-10-06 20:32:49
On 10/6/06, Zoran Vasiljevic <zv...@ar...> wrote:
>
> On 06.10.2006, at 21:25, Stephen Deasey wrote:
>
> > But what I'm wondering is, why you need to do this with proxy
> > slaves? It seems like they don't have the same state problem that a
> > series of database statements do.
> >
> > It's possible to send multiple Tcl commands to a single proxy slave
> > at the same time:
> >
> >     ns_proxy eval {
> >         set f [do_foo ...]
> >         set b [do_bar ...]
> >         return [list $f $b]
> >     }
> >
> > (You can't do that with databases because they often forbid using a
> > statement-separating semicolon when using prepared statements
> > and/or bind variables. But Tcl doesn't have that restriction.)
>
> I do not need it. As you say, I can simply send all of them in one
> script. That's trivial of course.
>
> I'll give you another reason pro-handle: I'd like to allocate the
> handle in order to have it always (for a longer time) available. I
> "reserve" it, so to speak. If I can't do that then my code is less
> "predictable", in the sense that I might potentially wait a long time
> to get the chance to do something, as all proxies might be doing
> something else. Understand?

In this case, wouldn't you just size the pool so that there was always
a free handle?

If the pool doesn't have enough handles, and you do hang on to handles
in threads indefinitely, then you've basically got a first come, first
served system. The threads which ask first will get handles and the
rest will be permanently disappointed...

Or, maybe you need a special pool for special threads, and a default
pool for generic threads. If there are only a couple of special threads
then the pool can be small -- you don't need to size the pool according
to the many generic threads just to make sure there's always one left
over for a special thread.
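A two-pool split like that might be expressed in the config file along
these lines (a sketch only, using the server's ns_section/ns_param
configuration style; the module path and parameter names here are
illustrative, not the module's documented settings):

    ns_section "ns/server/server1/module/nsproxy/pool/default"
    ns_param   maxslaves  8     ;# sized for the many generic conn threads
    ns_param   gettimeout 5000  ;# ms to wait for a free slave

    ns_section "ns/server/server1/module/nsproxy/pool/special"
    ns_param   maxslaves  2     ;# small pool reserved for special threads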
From: Vlad S. <vl...@cr...> - 2006-10-06 20:22:43
I vote for ns_exec and putting it into the core.

Zoran Vasiljevic wrote:
> [...]
> And if we put ns_exec into the server and make it like this:
>
>     ns_exec eval arg ?arg ...?
>     ns_exec eval -pool thepool arg ?arg ...?
>     ns_exec config thepool option ?value option value ...?
>
> etc? Or ns_slave?
>
>     ns_slave eval arg
>     ns_slave config thepool option ?value option value ...?
>
> Hm... that's not bad. What do you think?
> [...]

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/
From: Zoran V. <zv...@ar...> - 2006-10-06 20:11:52
On 06.10.2006, at 21:25, Stephen Deasey wrote:
>
> But what I'm wondering is, why you need to do this with proxy slaves?
> It seems like they don't have the same state problem that a series of
> database statements do.
>
> It's possible to send multiple Tcl commands to a single proxy slave
> at the same time:
>
>     ns_proxy eval {
>         set f [do_foo ...]
>         set b [do_bar ...]
>         return [list $f $b]
>     }
>
> (You can't do that with databases because they often forbid using a
> statement-separating semicolon when using prepared statements and/or
> bind variables. But Tcl doesn't have that restriction.)

I do not need it. As you say, I can simply send all of them in one
script. That's trivial of course.

I'll give you another reason pro-handle: I'd like to allocate the
handle in order to have it always (for a longer time) available. I
"reserve" it, so to speak. If I can't do that then my code is less
"predictable", in the sense that I might potentially wait a long time
to get the chance to do something, as all proxies might be doing
something else. Understand?

> For example, any time budget you have for the statements as a whole
> must be split between each. So, if you decide each call should take
> only 30 secs, and the first call takes 1 sec but the second takes 31
> secs, you will get a timeout error with almost half your total time
> budget still to spend. But you would have been perfectly happy to
> wait 58 seconds in total, spread evenly between the two calls.
>
> Another problem might be the code you run in between the two eval
> calls. In the case of explicit handle management or the withhandle
> command, the handle is kept open even when it is not being used. If
> the code that runs between the two eval calls takes some time --
> perhaps because it blocks on IO, which may not be apparent to the
> caller -- then other threads may be prevented from using a proxy
> slave because the pool is empty. Handles which could have been
> returned to the pool and used by other callers are sitting idle in
> threads busy doing other things. This is a (temporary, mostly)
> resource leak.

Yes. This is true. And this is obviously a contra-handle argument, as
it may lead to starvation.

> So, apart from state (if this is needed), the withhandle command is a
> speed optimisation. If you know for a fact that you're going to have
> to make sequential calls to the proxy system, you can take the pool
> lock just once. Otherwise, with implicit handle management, you take
> and release the pool lock for each evaluation.
>
> Regardless, the withhandle command allows implicit handle management
> in the common case, and a nice, clear syntax for when you explicitly
> want to manage the handle for performance or state reasons.

I must think this over...

> > For b.
> > I do not care how we call it. We can call it ns_cocacola if you
> > like. The name contest is open...
>
> It's an annoying thing to have to even bother about... But it is
> important. If it's not clear, people will be confused; we've seen a
> lot of that. Confused people take longer to get up to speed. People
> like new hires, which costs real money.

So, what would we call the baby? Why not just simply ns_exec? Actually,
we could build this into the core server and not as a module... The
ns_exec could exec the same executable with different command-line args
that would select another main function, and not Ns_Main, for example.
I'm just thinking "aloud"...

> > For c.
> > I'd rather stick to explicit pool naming. I'd leave this "default"
> > to the programmer. The programmer might do something like (not 100%
> > right, but it serves the illustration purpose):
> >
> >     ns_proxy config default
> >     rename ::exec tcl::exec
> >     proc ::exec args {
> >         ns_proxy eval default $args
> >     }
> >
> > This covers overloading of the Tcl exec command. If you can
> > convince me that there are other benefits of having the default
> > pool I can think about them. I just do not see any at this point.
>
> A default only makes sense if it's built in. If it isn't, no one can
> rely on it and all code that uses proxy pools will have to create its
> own.

OK.

> Even with a built-in default, user code can certainly still create
> its own proxy pool(s). Nothing is being taken away.

True.

> If it was unlikely that the default would work for much/most code,
> then it would be wrong to have one. That would be hiding something
> from programmers that they should be paying attention to. It looks to
> me though like a default pool would work for most code. But this is
> just a convenience.
>
> To manage resources efficiently you need the default pool. I can
> imagine 'exec' using the default pool, and 3rd party modules doing
> something like:
>
>     ns_proxy eval -pool [ns_configvalue ... pool default] {
>         ....
>     }
>
> Which will just work. The site administrator can then allocate
> resources in a more fine-grained way, if needed, according to the
> local situation.

And if we put ns_exec into the server and make it like this:

    ns_exec eval arg ?arg ...?
    ns_exec eval -pool thepool arg ?arg ...?
    ns_exec config thepool option ?value option value ...?

etc? Or ns_slave?

    ns_slave eval arg
    ns_slave config thepool option ?value option value ...?

Hm... that's not bad. What do you think? I think ns_slave would be
"opportune". People will of course immediately ask: where is ns_master?
But I guess you can't dance at all weddings...

Still, I will have to think about the "handle" issue for some time
(over the weekend), as I'm still not 100% convinced...

Would you integrate that in the core code or would you leave this as a
module? Actually, I keep asking myself why we still stick to modules
when some functionality is obviously needed all the time (nslog or
nscp, for example)...
From: Stephen D. <sd...@gm...> - 2006-10-06 19:25:36
On 10/4/06, Zoran Vasiljevic <zv...@ar...> wrote:
>
> To summarize. What you object to is:
>
> a. handle management (should work without handles)
> b. name of the module (should not be proxy)
> c. invent default pool
> d. ?
>
> For a:
> I understand the syntactical need, but this is how I see that from
> the practical side. By having a handle, I can run "things" in the
> proxy (let's call it a slave process from now on) one after another,
> with a possible "state" between the runs. This might sometimes be
> needed. If I hide the handle, how could I force my two or three
> consecutive operations to be executed in the same slave?

That's a good point. The same problem arises in the higher-level db
APIs. If handle management is transparent, how do you send multiple
statements to a single db back end, such as to implement transactions
or "select for update" and so on?

    dbi_withhandle {
        dbi_dml "update foo ..."
        dbi_dml "update bar ..."
    }

But what I'm wondering is why you need to do this with proxy slaves. It
seems like they don't have the same state problem that a series of
database statements do.

It's possible to send multiple Tcl commands to a single proxy slave at
the same time:

    ns_proxy eval {
        set f [do_foo ...]
        set b [do_bar ...]
        return [list $f $b]
    }

(You can't do that with databases because they often forbid using a
statement-separating semicolon when using prepared statements and/or
bind variables. But Tcl doesn't have that restriction.)

I can imagine some kind of if-then between the two statements. When
does it become impossible to do this in the proxy slave? When do you
have to do this instead?:

    ns_proxy withhandle {
        set f [ns_proxy eval {do_foo ...}]
        ...
        if {$f ...} {
            set b [ns_proxy eval {do_bar ...}]
        }
    }

If it's not possible to send all the code to the proxy slave at once,
then you can use something like the withhandle command to batch
multiple calls to eval on the same handle.

This does have some disadvantages compared to running both statements
in a single call to 'eval'. For example, any time budget you have for
the statements as a whole must be split between each. So, if you decide
each call should take only 30 secs, and the first call takes 1 sec but
the second takes 31 secs, you will get a timeout error with almost half
your total time budget still to spend. But you would have been
perfectly happy to wait 58 seconds in total, spread evenly between the
two calls.

Another problem might be the code you run in between the two eval
calls. In the case of explicit handle management or the withhandle
command, the handle is kept open even when it is not being used. If the
code that runs between the two eval calls takes some time -- perhaps
because it blocks on IO, which may not be apparent to the caller --
then other threads may be prevented from using a proxy slave because
the pool is empty. Handles which could have been returned to the pool
and used by other callers are sitting idle in threads busy doing other
things. This is a (temporary, mostly) resource leak.

So, apart from state (if this is needed), the withhandle command is a
speed optimisation. If you know for a fact that you're going to have to
make sequential calls to the proxy system, you can take the pool lock
just once. Otherwise, with implicit handle management, you take and
release the pool lock for each evaluation.

Regardless, the withhandle command allows implicit handle management in
the common case, and a nice, clear syntax for when you explicitly want
to manage the handle for performance or state reasons.

> For b.
> I do not care how we call it. We can call it ns_cocacola if you like.
> The name contest is open...

It's an annoying thing to have to even bother about... But it is
important. If it's not clear, people will be confused; we've seen a lot
of that. Confused people take longer to get up to speed. People like
new hires, which costs real money.

> For c.
> I'd rather stick to explicit pool naming. I'd leave this "default" to
> the programmer. The programmer might do something like (not 100%
> right, but it serves the illustration purpose):
>
>     ns_proxy config default
>     rename ::exec tcl::exec
>     proc ::exec args {
>         ns_proxy eval default $args
>     }
>
> This covers overloading of the Tcl exec command. If you can convince
> me that there are other benefits of having the default pool I can
> think about them. I just do not see any at this point.

A default only makes sense if it's built in. If it isn't, no one can
rely on it and all code that uses proxy pools will have to create its
own.

Even with a built-in default, user code can certainly still create its
own proxy pool(s). Nothing is being taken away.

If it was unlikely that the default would work for much/most code, then
it would be wrong to have one. That would be hiding something from
programmers that they should be paying attention to. It looks to me
though like a default pool would work for most code. But this is just a
convenience.

To manage resources efficiently you need the default pool. I can
imagine 'exec' using the default pool, and 3rd party modules doing
something like:

    ns_proxy eval -pool [ns_configvalue ... pool default] {
        ....
    }

Which will just work. The site administrator can then allocate
resources in a more fine-grained way, if needed, according to the local
situation.
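For what it's worth, a withhandle-style wrapper is small enough to
sketch at the Tcl level (assuming get/release subcommands for explicit
check-out, as in the current module; the helper name is illustrative):

    # Hypothetical helper: check out a handle, run the body with the
    # handle in a caller-visible variable, and always return the
    # handle to the pool, even if the body throws an error.
    proc proxy_withhandle {pool varName body} {
        upvar 1 $varName handle
        set handle [ns_proxy get $pool]
        set code [catch {uplevel 1 $body} result]
        ns_proxy release $handle
        return -code $code $result
    }

    proxy_withhandle mypool h {
        set f [ns_proxy eval $h {do_foo ...}]
        set b [ns_proxy eval $h {do_bar ...}]
    }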
From: Vlad S. <vl...@cr...> - 2006-10-06 17:41:20
I am not using it, so I have nothing to add to the existing API. As for
the name, maybe ns_process or ns_exec?

Zoran Vasiljevic wrote:
> [...]
> So, how are we to proceed on this matter? Are there any
> comments/ideas about naming? Handle management?
> [...]
> The most important thing now is to freeze the API. The API is now
> well documented, and I invite everybody to read it and suggest a
> better one so we can vote. I have no vested interest. I will bow to
> the will of the majority.

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/
From: Zoran V. <zv...@ar...> - 2006-10-06 17:27:06
On 04.10.2006, at 10:03, Zoran Vasiljevic wrote:
> To summarize. What you object to is:
>
> a. handle management (should work without handles)
> b. name of the module (should not be proxy)
> c. invent default pool
> d. ?
>
> For a:
> I understand the syntactical need, but this is how I see that from
> the practical side. By having a handle, I can run "things" in the
> proxy (let's call it a slave process from now on) one after another,
> with a possible "state" between the runs. This might sometimes be
> needed. If I hide the handle, how could I force my two or three
> consecutive operations to be executed in the same slave?
>
> For b.
> I do not care how we call it. We can call it ns_cocacola if you like.
> The name contest is open...
>
> For c.
> I'd rather stick to explicit pool naming. I'd leave this "default" to
> the programmer. The programmer might do something like (not 100%
> right, but it serves the illustration purpose):
>
>     ns_proxy config default
>     rename ::exec tcl::exec
>     proc ::exec args {
>         ns_proxy eval default $args
>     }
>
> This covers overloading of the Tcl exec command. If you can convince
> me that there are other benefits of having the default pool I can
> think about them. I just do not see any at this point.

So, how are we to proceed on this matter? Are there any comments/ideas
about naming? Handle management?

What I'd say is (in a nutshell):

a. I think handles should be left as is (after all, you also open a
   file, get the handle and then use it to write/read, or?)

b. I do not like the name very much, but I can't think of a better one.

c. I would not "invent" a default pool. I would leave this to the
   programmer. The default pool just does not fit in, and it would make
   the API more (instead of less) complicated.

Overall, I'm more or less happy with what we have now. I could imagine
adding some more config options to limit the resources in the slave
(memory, number of open files, absolute execution time, etc.), but this
can be added during regular maintenance. The most important thing now
is to freeze the API. The API is now well documented, and I invite
everybody to read it and suggest a better one so we can vote. I have no
vested interest. I will bow to the will of the majority.

Cheers,
Zoran
From: Zoran V. <zv...@ar...> - 2006-10-05 08:19:47
On 05.10.2006, at 00:27, Mike wrote:
> <lots of interesting text cut>
>
> Zoran, Stephen,
> To me, what this discussion says most of all is that the current
> approaches to handling synchronization primitives in naviserver are
> way too complicated for human consumption.

Well, we have most (all?) of the basic blocks exported to the Tcl
level. This allows you (a Tcl programmer) to build just about anything
that a C programmer can do in that area. Unfortunately, using the same
idioms that a C programmer is used to, which may seem odd to you.

> Perhaps the "correct" approach is to actually examine real-world use
> of these primitives and find a "better" approach to the problem that
> can leverage the power of Tcl (something closer to the stuff Stephen
> was pointing out earlier) but not necessarily losing the power of
> free handles that Zoran believes are useful.

It is true that we've yet to find something like that. It isn't easy,
though. At the moment we are discussing whether handles or tags are
better, obviously neglecting the fact that the whole concept is
actually "foreign" to the Tcl programmer.

> I don't have the vaguest idea what that mechanism may be - but all
> you guys are doing right now is throwing around hypothetical use
> cases. I have yet to see a solid example in this thread that would
> lead me to believe one would ever want to use a mutex or condition
> variable anywhere in naviserver at all.

Ah... in some cases you have no choice! Every time you have some code
that may execute in two connection (or other) threads at the same time,
and that accesses the same resource (a file or a piece of memory), you
need to lock, and for that you need all those sync primitives.

In some cases there are "abstractions" or "helpers" like ns_job or nsv,
where most of those ugly locking issues are hidden from you. But you
can't abstract everything. From what I see, we are already very good:
there is nsv, which allows you to store/access scalar values across
threads without explicit locking. Then we have caches, which are
server-wide; also we have ns_job, so you can start parallel jobs and
wait for them, all without any special locking or synchronization of
your own. The only piece that I miss is the ability to park a thread as
a "script evaluator" in the way that the Tcl threading extension does.

So with all those tools, you can mostly avoid any synchronization of
your own. But you can't really avoid it for some special cases.
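As a concrete illustration of the nsv point: any connection thread can
share state like this, with the bucket locking handled entirely inside
the server (nsv_set/nsv_incr/nsv_get are the built-in shared-variable
commands; the array and key names are just examples):

    # No explicit mutex anywhere; each call locks the
    # array's bucket internally.
    nsv_set  counters pageviews 0
    nsv_incr counters pageviews
    set n [nsv_get counters pageviews]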
From: Mike <nee...@gm...> - 2006-10-04 22:27:21
<lots of interesting text cut>

Zoran, Stephen,

To me, what this discussion says most of all is that the current
approaches to handling synchronization primitives in naviserver are way
too complicated for human consumption.

Perhaps the "correct" approach is to actually examine real-world use of
these primitives and find a "better" approach to the problem that can
leverage the power of Tcl (something closer to the stuff Stephen was
pointing out earlier) but not necessarily losing the power of free
handles that Zoran believes are useful. I don't have the vaguest idea
what that mechanism may be - but all you guys are doing right now is
throwing around hypothetical use cases. I have yet to see a solid
example in this thread that would lead me to believe one would ever
want to use a mutex or condition variable anywhere in naviserver at
all.

(I don't mean to say this harshly - but I feel like the discussion you
guys are having right now is missing a bigger picture.)
From: Zoran V. <zv...@ar...> - 2006-10-04 19:51:28
On 04.10.2006, at 21:33, Stephen Deasey wrote:
> The control port is not typical. It's probably evaluating each line
> with the "no-compile" flag. Neither is the runtest shell, which is
> what I was using.

NO! It is FAR more complex than you think... I added this:

    proc lu count {
        ns_log notice LOCK.$count
        ns_mutex lock m
        ns_mutex unlock m
        ns_log notice UNLOCK.$count
    }

into tcl/util.tcl and restarted the server. Then I went to the control
port and did:

    server1:nscp 1> lu 1
    server1:nscp 2> lu 2
    server1:nscp 3> lu 3

and got:

    [04/Oct/2006:21:37:35][5931.41968128][-nscp:2-] Notice: LOCK.1
    [04/Oct/2006:21:37:35][5931.41968128][-nscp:2-] Error: MISS THE OBJ CACHE : 0x2f3858
    [04/Oct/2006:21:37:35][5931.41968128][-nscp:2-] Notice: UNLOCK.1
    [04/Oct/2006:21:37:38][5931.41968128][-nscp:2-] Notice: LOCK.2
    [04/Oct/2006:21:37:38][5931.41968128][-nscp:2-] Notice: UNLOCK.2
    [04/Oct/2006:21:37:38][5931.41968128][-nscp:2-] Notice: LOCK.3
    [04/Oct/2006:21:37:38][5931.41968128][-nscp:2-] Notice: UNLOCK.3

Now, you would naively (as I did) think all is green. It's not. Because
when I do the following (on the nscp line):

    server1:nscp 11> proc lu count {ns_log notice LOCK.$count; ns_mutex lock m; ns_mutex unlock m; ns_log notice UNLOCK.$count}

i.e. redefine the proc "lu", then I get the cache misses again. As you
might expect, because nscp is "probably" not compiling the proc.

All right, but when I log out of nscp and log in again, a completely
new thread is created and the "lu" procedure is again created from the
blueprint. AND... if I now repeat my test I get:

    [04/Oct/2006:21:39:08][5931.41967104][-nscp:3-] Notice: LOCK.1
    [04/Oct/2006:21:39:08][5931.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f34b0
    [04/Oct/2006:21:39:08][5931.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f34b0
    [04/Oct/2006:21:39:08][5931.41967104][-nscp:3-] Notice: UNLOCK.1
    [04/Oct/2006:21:39:10][5931.41967104][-nscp:3-] Notice: LOCK.2
    [04/Oct/2006:21:39:10][5931.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f34b0
    [04/Oct/2006:21:39:10][5931.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f34b0
    [04/Oct/2006:21:39:10][5931.41967104][-nscp:3-] Notice: UNLOCK.2
    [04/Oct/2006:21:39:12][5931.41967104][-nscp:3-] Notice: LOCK.3
    [04/Oct/2006:21:39:12][5931.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f34b0
    [04/Oct/2006:21:39:12][5931.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f34b0
    [04/Oct/2006:21:39:12][5931.41967104][-nscp:3-] Notice: UNLOCK.3

As you see... it is much more complicated, and I can't really explain
it without looking deeply into it. There is something happening there
which we still do not understand.

> But I don't know if that makes this technique not useful...

It is useful. But it has side-effects we have yet to understand.

> Right. So if handle management was implicit you wouldn't have to
> bother shuffling them between interps via nsv arrays. Which would be
> a good thing.

Of course. I never said anything different.

> The global lock is just my laziness. Although if the handle is cached
> it doesn't matter, because the lock is only taken once, the first
> time. And it has to be cached or else there's no point doing this...

It is cached *sometimes*, and we do not know why not, when not. At
least I don't, as I can reproduce the above behaviour easily.
From: Stephen D. <sd...@gm...> - 2006-10-04 19:33:45
On 10/4/06, Zoran Vasiljevic <zv...@ar...> wrote:
> [...]
> Hmhmhmhmhm.... look what I get after doing this change in
> tclthread.c (exactly the same spot):
>
>     if (Ns_TclGetOpaqueFromObj(objv[2], type, &addr) != TCL_OK
>         && Ns_TclGetAddrFromObj(interp, objv[2], type, &addr) != TCL_OK) {
>         Ns_Log(2, "MISS THE OBJ CACHE : %p", objv[2]);
>         Tcl_ResetResult(interp);
>         Ns_MasterLock();
>
> I do the following from the nscp line:
>
>     server1:nscp 9> proc lu count {ns_log notice LOCK.$count; ns_mutex lock m; ns_mutex unlock m; ns_log notice UNLOCK.$count}
>     server1:nscp 11> lu 1
>     server1:nscp 12> lu 2
>     server1:nscp 13> lu 3
>
> and this is what comes in the log:
> [...]
>
> So, what now?

The control port is not typical. It's probably evaluating each line
with the "no-compile" flag. Neither is the runtest shell, which is what
I was using.

If I run the test as a registered proc in a conn thread, with a proc
named "foo" and a mutex named "foo", things work fine -- no clash
between the name of the proc "foo" and the mutex "foo".

The experiment with a lockunlock proc and a broadcast proc run in the
same registered proc causes two log lines. The single atom "m" is being
shimmered between the mutex and cond types. Looks like these names live
in the same namespace, due to Tcl caching identical strings in an
object cache to save memory.

It's not the end of the world. You just need to make sure the names
used are unique, which is not a completely unreasonable thing to do. I
mean, nsv array names are unique too, which is the alternative if you
have to store handles.

But I don't know if that makes this technique not useful...

Opinions? Other options?

> >> OTOH, when I say
> >>
> >>     set mutexhandle [ns_mutex create]
> >>
> >> I can do whatever I want with mutexhandle; it will always point to
> >> that damn mutex. I need not do some other lookup in addition.
> >> I can put it into nsv, transfer it over the network back to
> >> myself...
> >
> > It's not that you *can* put it into an nsv array, it's that you
> > *have* to, because how else are you going to communicate the
> > serialised pointer which is a mutex handle?
>
> No way. I have to put it there. You need not necessarily put the
> mutex name there, as it is "known". This is true.
>
> > And you *have* to do it because what good is a mutex without more
> > than one thread?
>
> Correct.
>
> > And if you have more than one thread you have more than one interp,
> > each with its own global namespace. So how do you refer to a single
> > mutex from within each interp?
>
> Using the handle I put in the nsv, how else?

Right. So if handle management was implicit, you wouldn't have to
bother shuffling them between interps via nsv arrays. Which would be a
good thing.

> > nsv arrays *also* incur locking. Plus you have the effort of having
> > to cache the mutex in a thread-local variable, or else you incur
> > the nsv locking cost each time.
>
> Right. But finer-grained locking, not the global lock. Instead, the
> nsv bucket is locked.

The global lock is just my laziness. Although if the handle is cached
it doesn't matter, because the lock is only taken once, the first time.
And it has to be cached or else there's no point doing this...
From: Zoran V. <zv...@ar...> - 2006-10-04 19:03:11
On 04.10.2006, at 20:26, Stephen Deasey wrote:
>
> Or not. I don't know...

Not. It does not have anything to do with unique names. As far as I
recall, the procedure context stores a list of the literals used in the
proc. When the proc scope exits, the literal table is cleaned up and
all literal objects have their ref counts decremented (ultimately being
put back on the per-thread list of free objects). I'm not SURE that
this is so; I recall seeing this at the time I was chasing a very weird
bug resulting in wrong usage of Tcl objects. A peek into the Tcl
sources will surely reveal that, but even without that, my little
example illustrates exactly such behaviour. We might need to understand
how come you get different behaviour, but one way or another, the
objects get lost.

BUT... one way or another, what you did is perfectly OK, as there is no
other way one can do that. It is just that people need to be aware of
such a possibility and operate either on the handle of the mutex or on
its name. In our code I have done the same at the Tcl level. I invented
a "getlock" call which returned the mutex handle, caching it in between
in an nsv array. You could improve the code by implementing similar
bucket-based locking as done in nsv, in order to avoid using that one
single global lock.
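The getlock idea is simple enough to sketch. Creating the named mutexes
once at server startup (while still single-threaded) sidesteps the
creation race entirely; ns_mutex create returning a handle and the
nsv_* commands are the existing APIs, while the array layout and proc
name are just one way to do it:

    # At startup, e.g. in an init script, create each named lock once:
    foreach name {images sessions config} {
        nsv_set locks $name [ns_mutex create]
    }

    # At request time, any thread looks the handle up by name:
    proc getlock {name} {
        return [nsv_get locks $name]
    }

    ns_mutex lock [getlock images]
    # ... critical section ...
    ns_mutex unlock [getlock images]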
From: Zoran V. <zv...@ar...> - 2006-10-04 18:34:59
|
On 04.10.2006, at 20:00, Stephen Deasey wrote:

> On 10/4/06, Zoran Vasiljevic <zv...@ar...> wrote:
>>
>> On 04.10.2006, at 18:44, Stephen Deasey wrote:
>>
>>>> OK. I can accept that. Still, you put this into a frequently
>>>> called procedure and there you go (literals are freed, AFAIK,
>>>> when procedure scope is exited): again you have global locking.
>>>
>>> I'm not sure what you're saying here. What is freed, and when?
>>
>> The literal object "themutex" is gone after the procedure exits,
>> AFAIK. This MIGHT need to be double-checked though, as I'm
>> not 100% sure if the literals get garbage collected at that
>> point OR at interp teardown.
>
> If that were true then my scheme would be useless. But I don't think
> it is. I added a log line to GetArgs in nsd/tclthread.c:
>
>     if (Ns_TclGetOpaqueFromObj(objv[2], type, &addr) != TCL_OK
>         && Ns_TclGetAddrFromObj(interp, objv[2], type, &addr) != TCL_OK) {
>
>         Ns_Log(Warning, "tclthread.c:GetArgs: objv[2] not an"
>                " address object. Looking up in hash table...");
>
>         Tcl_ResetResult(interp);
>         Ns_MasterLock();
>         ...
>
> i.e. if the 'name' of the thread primitive given to the command does
> not already contain a pointer to the underlying object, and it cannot
> be converted to one by deserialising a pointer address, then take the
> global lock and look the name up in the hash table. Log this case.
>
>     % proc lockunlock args {ns_mutex lock m; ns_mutex unlock m}
>     % lockunlock
>     [04/Oct/2006:18:39:25][8985.3086293920][-thread-1208673376-] Warning:
>     tclthread.c:GetArgs: objv[2] not an address object. Looking up
>     in hash table...
>     % lockunlock
>     % lockunlock

Hmhmhmhmhm.... look what I get after doing this change in tclthread.c
(exactly the same spot):

    if (Ns_TclGetOpaqueFromObj(objv[2], type, &addr) != TCL_OK
        && Ns_TclGetAddrFromObj(interp, objv[2], type, &addr) != TCL_OK) {

        Ns_Log(2, "MISS THE OBJ CACHE : %p", objv[2]);

        Tcl_ResetResult(interp);
        Ns_MasterLock();

I do the following from the nscp line:

    server1:nscp 9> proc lu count {ns_log notice LOCK.$count; ns_mutex lock m; ns_mutex unlock m; ns_log notice UNLOCK.$count}
    server1:nscp 11> lu 1
    server1:nscp 12> lu 2
    server1:nscp 13> lu 3

and this is what comes out in the log:

    [04/Oct/2006:20:25:22][3026.41967104][-nscp:3-] Notice: LOCK.1
    [04/Oct/2006:20:25:22][3026.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f4ff8
    [04/Oct/2006:20:25:22][3026.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f4ff8
    [04/Oct/2006:20:25:22][3026.41967104][-nscp:3-] Notice: UNLOCK.1
    [04/Oct/2006:20:25:24][3026.41967104][-nscp:3-] Notice: LOCK.2
    [04/Oct/2006:20:25:24][3026.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f4ff8
    [04/Oct/2006:20:25:24][3026.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f4ff8
    [04/Oct/2006:20:25:24][3026.41967104][-nscp:3-] Notice: UNLOCK.2
    [04/Oct/2006:20:25:27][3026.41967104][-nscp:3-] Notice: LOCK.3
    [04/Oct/2006:20:25:27][3026.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f4ff8
    [04/Oct/2006:20:25:27][3026.41967104][-nscp:3-] Error: MISS THE OBJ CACHE : 0x2f4ff8
    [04/Oct/2006:20:25:27][3026.41967104][-nscp:3-] Notice: UNLOCK.3

So, what now?

> The first time the proc is run it is compiled and the name is indeed
> looked up in the hash of mutex names. And that is of course 'slow'
> because there's a lock around the hash table.
>
> But that is the *only* time this happens. On the second and third
> attempt, no locking or look up!
>
> What's interesting from the above is that there is only a single log
> line, but there are two literal "m" objects. Apparently Tcl is doing
> some optimising behind the scenes...

I believe quite the opposite is happening: the literal "m" gets lost,
and with it its saved address.

> Right. But it won't shimmer away, because it is a literal name and you
> have no need to manipulate it in any way that would cause it to
> shimmer, such as putting it in an nsv array.
>
> Now, I'm not sure what you're getting at with the "junklock" business.
> If you mean the user could have a typo in their code and an extra lock
> will be created behind their back, well, that's the nature of Tcl.
> Same goes for variables, right?
>
> Although you could define ns_mutex create to take a name and force
> people to use this in their initialisation code, and then in calls to
> lock and unlock you wouldn't create on demand, you'd just do the look
> up (first time only!), and throw an error if the name doesn't exist.
>
> But maybe I'm missing your point here...

Yes, you miss the point. The "junklock" is just yet another name; no
real "junk" was meant here.

>> OTOH, when I say
>>
>>     set mutexhandle [ns_mutex create]
>>
>> I can do whatever I want with mutexhandle, it will always point to
>> that damn mutex. I need not do some other lookup in addition.
>> I can put it into nsv, transfer it over the network back to myself...
>
> It's not that you *can* put it into an nsv array, it's that you *have*
> to, because how else are you going to communicate the serialised
> pointer which is a mutex handle?

There is no other way: I have to put it there. With names, you need
not necessarily put the mutex name there, as it is "known". This is
true.

> And you *have* to do it because what good is a mutex without more than
> one thread?

Correct.

> And if you have more than one thread you have more than one interp,
> each with its own global namespace. So how do you refer to a single
> mutex from within each interp?

Using the handle I put in the nsv, how else?

>> That's what I mean by removing the handle from users. If you
>> do that you need to do more, introduce more locks etc.
>> Nothing is for free...
>
> It is in fact more or less free. Compiling the Tcl source to byte
> code incurs a lock around the mutex hash table. It is a compile-time
> expense. At run time there is no locking.

Hmhmhm... not really, and not always. First we have to understand why
I get those misses of the cache...

> nsv arrays *also* incur locking. Plus you have the effort of having to
> cache the mutex in a thread-local variable, or else you incur the nsv
> locking cost each time.

Right. But that is finer-grained locking, not the global lock: only
the nsv bucket is locked.

> So I think this is in fact a case of a free lunch!

There is no such thing as a free lunch. This is not a theorem, this
is an axiom.

> As far as I can see, the only thing that will make this not true, is
> some real-world, non-contrived case where the mutex name has to
> shimmer.

See above...
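A minimal sketch of the pattern under discussion, assuming the
name-based ns_mutex variant being tested above; the proc and mutex
names are made up for illustration. A proc body is byte-compiled, so
its literal mutex name keeps the cached internal rep and only the
first call pays for the master-locked hash lookup, while code typed
at the nscp prompt is re-evaluated with fresh literals each time and
misses the cache on every call:

    # Sketch: name-based locking from a compiled proc body.
    # First call: "cfg_lock" is looked up (created on demand)
    # under the master lock; the pointer is cached in the literal.
    # Subsequent calls: plain pointer dereference, no locking.
    proc with_cfg_lock {script} {
        ns_mutex lock cfg_lock
        set rc [catch {uplevel 1 $script} result]
        ns_mutex unlock cfg_lock
        if {$rc} {
            return -code error $result
        }
        return $result
    }

    # Usage:
    with_cfg_lock {
        # ... code that must not run concurrently ...
    }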
From: Stephen D. <sd...@gm...> - 2006-10-04 18:26:46
|
On 10/4/06, Stephen Deasey <sd...@gm...> wrote:

> On 10/4/06, Zoran Vasiljevic <zv...@ar...> wrote:
>>
>> On 04.10.2006, at 18:44, Stephen Deasey wrote:
>>
>>>> OK. I can accept that. Still, you put this into a frequently
>>>> called procedure and there you go (literals are freed, AFAIK,
>>>> when procedure scope is exited): again you have global locking.
>>>
>>> I'm not sure what you're saying here. What is freed, and when?
>>
>> The literal object "themutex" is gone after the procedure exits,
>> AFAIK. This MIGHT need to be double-checked though, as I'm
>> not 100% sure if the literals get garbage collected at that
>> point OR at interp teardown.
>
> If that were true then my scheme would be useless. But I don't think
> it is. I added a log line to GetArgs in nsd/tclthread.c:
>
>     if (Ns_TclGetOpaqueFromObj(objv[2], type, &addr) != TCL_OK
>         && Ns_TclGetAddrFromObj(interp, objv[2], type, &addr) != TCL_OK) {
>
>         Ns_Log(Warning, "tclthread.c:GetArgs: objv[2] not an"
>                " address object. Looking up in hash table...");
>
>         Tcl_ResetResult(interp);
>         Ns_MasterLock();
>         ...
>
> i.e. if the 'name' of the thread primitive given to the command does
> not already contain a pointer to the underlying object, and it cannot
> be converted to one by deserialising a pointer address, then take the
> global lock and look the name up in the hash table. Log this case.
>
>     % proc lockunlock args {ns_mutex lock m; ns_mutex unlock m}
>     % lockunlock
>     [04/Oct/2006:18:39:25][8985.3086293920][-thread-1208673376-] Warning:
>     tclthread.c:GetArgs: objv[2] not an address object. Looking up
>     in hash table...
>     % lockunlock
>     % lockunlock
>
> The first time the proc is run it is compiled and the name is indeed
> looked up in the hash of mutex names. And that is of course 'slow'
> because there's a lock around the hash table.
>
> But that is the *only* time this happens. On the second and third
> attempt, no locking or look up!
>
> What's interesting from the above is that there is only a single log
> line, but there are two literal "m" objects. Apparently Tcl is doing
> some optimising behind the scenes...

Hmm.. what's most interesting about the above is: if Tcl notices two
literal "m" objects in the above code and coalesces them into one,
where does it put them? If it puts them in a global cache, not a
proc-local one, then we have problems:

    % proc lockunlock args {ns_mutex lock x; ns_mutex unlock x}
    % proc broadcast args {ns_cond broadcast x}
    % broadcast
    [04/Oct/2006:19:15:41][9066.3086289824][-thread-1208677472-] Warning:
    tclthread.c:GetArgs: objv[2] not an address object. Looking up
    in hash table...
    % broadcast
    % lockunlock
    [04/Oct/2006:19:15:52][9066.3086289824][-thread-1208677472-] Warning:
    tclthread.c:GetArgs: objv[2] not an address object. Looking up
    in hash table...
    % lockunlock
    % broadcast
    [04/Oct/2006:19:15:59][9066.3086289824][-thread-1208677472-] Warning:
    tclthread.c:GetArgs: objv[2] not an address object. Looking up
    in hash table...

Er, there's the object shimmering...

So to work efficiently, the name you give to the mutex must be
unique, as is the case with naming procs etc. It will still work
correctly with a non-unique name, but it will incur a locking
overhead should your name clash with some other literal, and should
that other literal be used in a way which would cause shimmering.

Damn.

So maybe we can force unique names by requiring mutexes to be
explicitly created, as mentioned before. i.e.:

init.tcl:

    ns_mutex create foo_lock

foo.tcl:

    proc foo_do args {
        ns_mutex lock foo_lock
        ...
    }

Or not. I don't know...
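A possible way to get unique names without burdening callers, sketched
under the assumption that ns_mutex create accepts a name as proposed
above (this is the proposal being discussed, not necessarily the
shipped API); prefixing names with a module tag keeps short literals
like "m" or "x" from colliding with unrelated literals:

    # init.tcl -- create named mutexes once, at startup
    ns_mutex create foo:lock
    ns_mutex create bar:lock

    # foo.tcl -- lock/unlock by name only; with explicit creation
    # a typo becomes an error instead of silently making a new mutex
    proc foo_do args {
        ns_mutex lock foo:lock
        # ... critical section ...
        ns_mutex unlock foo:lock
    }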
From: Stephen D. <sd...@gm...> - 2006-10-04 18:01:00
|
On 10/4/06, Zoran Vasiljevic <zv...@ar...> wrote:
>
> On 04.10.2006, at 18:44, Stephen Deasey wrote:
>
>>> OK. I can accept that. Still, you put this into a frequently
>>> called procedure and there you go (literals are freed, AFAIK,
>>> when procedure scope is exited): again you have global locking.
>>
>> I'm not sure what you're saying here. What is freed, and when?
>
> The literal object "themutex" is gone after the procedure exits,
> AFAIK. This MIGHT need to be double-checked though, as I'm
> not 100% sure if the literals get garbage collected at that
> point OR at interp teardown.

If that were true then my scheme would be useless. But I don't think
it is. I added a log line to GetArgs in nsd/tclthread.c:

    if (Ns_TclGetOpaqueFromObj(objv[2], type, &addr) != TCL_OK
        && Ns_TclGetAddrFromObj(interp, objv[2], type, &addr) != TCL_OK) {

        Ns_Log(Warning, "tclthread.c:GetArgs: objv[2] not an"
               " address object. Looking up in hash table...");

        Tcl_ResetResult(interp);
        Ns_MasterLock();
        ...

i.e. if the 'name' of the thread primitive given to the command does
not already contain a pointer to the underlying object, and it cannot
be converted to one by deserialising a pointer address, then take the
global lock and look the name up in the hash table. Log this case.

    % proc lockunlock args {ns_mutex lock m; ns_mutex unlock m}
    % lockunlock
    [04/Oct/2006:18:39:25][8985.3086293920][-thread-1208673376-] Warning:
    tclthread.c:GetArgs: objv[2] not an address object. Looking up
    in hash table...
    % lockunlock
    % lockunlock

The first time the proc is run it is compiled and the name is indeed
looked up in the hash of mutex names. And that is of course 'slow'
because there's a lock around the hash table.

But that is the *only* time this happens. On the second and third
attempt, no locking or look up!

What's interesting from the above is that there is only a single log
line, but there are two literal "m" objects. Apparently Tcl is doing
some optimising behind the scenes...

>> Right. But why would you ever do this? This was a corner-case example
>> I gave to show that it *could* happen, if you tried real hard and
>> looked at it funny, but you wouldn't actually do this, right?
>
> Correct.
>
>>> Unlike with handles
>>> where you get the "real thing" immediately and need not global
>>> lock.
>>
>> In the case of the thread objects this is the case, but they are
>> special (or weird). The internal rep of a thread object (mutex, cond
>> var etc.) is a serialised C pointer. Given the string you can
>> re-create the pointer (and if you try hard enough you can create an
>> invalid pointer and crash the server, hence weird).
>>
>> But for handles in general, such as nsproxy, this is not the case.
>>
>> Remember, if you can refer to thread objects by name then you don't
>> *need* to put the name in an nsv array, for example. And if you do,
>> ns_mutex create does in fact still return a handle.
>
> BUT: to be able to refer to them by name I need a lookup table.
> And if this is to be thread-wide I need to lock that table globally.

But only ONCE! The look up is cached.

>>> Keeping the handles "away" from the user, means you need to manage
>>> handles yourself because the underlying C code needs
>>> handles/pointers.
>>> Every time you do that, you need to lock. In this case the global
>>> lock.
>>
>> But you don't. That's the point. If this isn't the case, then I've
>> done something wrong and the code needs to be reverted.
>
> You don't?? You do!
>
>     ns_mutex create
>
> returns a handle. But I can say:
>
>     ns_mutex lock junklock
>
> and it will *hide* a newly created mutex and tag it with "junklock"
> in a thread-wide table. There you go.
> The user never "sees" that real mutex handle. He knows
> only "junklock". With some object tricks you "remember"
> the real handle so as to avoid a lookup in a locked table.
> But when this shimmers away, you're out of business.

Right. But it won't shimmer away, because it is a literal name and
you have no need to manipulate it in any way that would cause it to
shimmer, such as putting it in an nsv array.

Now, I'm not sure what you're getting at with the "junklock"
business. If you mean the user could have a typo in their code and an
extra lock will be created behind their back, well, that's the nature
of Tcl. Same goes for variables, right?

Although you could define ns_mutex create to take a name and force
people to use this in their initialisation code, and then in calls to
lock and unlock you wouldn't create on demand, you'd just do the look
up (first time only!), and throw an error if the name doesn't exist.

But maybe I'm missing your point here...

> OTOH, when I say
>
>     set mutexhandle [ns_mutex create]
>
> I can do whatever I want with mutexhandle, it will always point to
> that damn mutex. I need not do some other lookup in addition.
> I can put it into nsv, transfer it over the network back to myself...

It's not that you *can* put it into an nsv array, it's that you
*have* to, because how else are you going to communicate the
serialised pointer which is a mutex handle?

And you *have* to do it because what good is a mutex without more
than one thread?

And if you have more than one thread you have more than one interp,
each with its own global namespace. So how do you refer to a single
mutex from within each interp?

> That's what I mean by removing the handle from users. If you
> do that you need to do more, introduce more locks etc.
> Nothing is for free...

It is in fact more or less free. Compiling the Tcl source to byte
code incurs a lock around the mutex hash table. It is a compile-time
expense. At run time there is no locking.

nsv arrays *also* incur locking. Plus you have the effort of having
to cache the mutex in a thread-local variable, or else you incur the
nsv locking cost each time.

So I think this is in fact a case of a free lunch!

As far as I can see, the only thing that will make this not true, is
some real-world, non-contrived case where the mutex name has to
shimmer.
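For contrast, a sketch of the handle-based discipline Zoran is
defending, using stock nsv commands; the array and variable names are
illustrative. The handle is published once via nsv and cached in a
per-interp global, so the nsv bucket lock is paid only once per
thread:

    # At server startup (still single-threaded):
    nsv_set locks foo [ns_mutex create]

    # In connection threads:
    proc foo_lock {} {
        global _foo_mutex
        if {![info exists _foo_mutex]} {
            # once per thread/interp: nsv bucket lock
            set _foo_mutex [nsv_get locks foo]
        }
        ns_mutex lock $_foo_mutex
    }

    proc foo_unlock {} {
        global _foo_mutex
        ns_mutex unlock $_foo_mutex
    }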
From: Zoran V. <zv...@ar...> - 2006-10-04 17:13:35
|
On 04.10.2006, at 18:44, Stephen Deasey wrote:

>> OK. I can accept that. Still, you put this into a frequently
>> called procedure and there you go (literals are freed, AFAIK,
>> when procedure scope is exited): again you have global locking.
>
> I'm not sure what you're saying here. What is freed, and when?

The literal object "themutex" is gone after the procedure exits,
AFAIK. This MIGHT need to be double-checked though, as I'm not 100%
sure if the literals get garbage collected at that point OR at interp
teardown.

> Right. But why would you ever do this? This was a corner-case example
> I gave to show that it *could* happen, if you tried real hard and
> looked at it funny, but you wouldn't actually do this, right?

Correct.

>> Unlike with handles
>> where you get the "real thing" immediately and need not global
>> lock.
>
> In the case of the thread objects this is the case, but they are
> special (or weird). The internal rep of a thread object (mutex, cond
> var etc.) is a serialised C pointer. Given the string you can
> re-create the pointer (and if you try hard enough you can create an
> invalid pointer and crash the server, hence weird).
>
> But for handles in general, such as nsproxy, this is not the case.
>
> Remember, if you can refer to thread objects by name then you don't
> *need* to put the name in an nsv array, for example. And if you do,
> ns_mutex create does in fact still return a handle.

BUT: to be able to refer to them by name I need a lookup table. And
if this is to be thread-wide I need to lock that table globally.

>> Keeping the handles "away" from the user, means you need to manage
>> handles yourself because the underlying C code needs
>> handles/pointers.
>> Every time you do that, you need to lock. In this case the global
>> lock.
>
> But you don't. That's the point. If this isn't the case, then I've
> done something wrong and the code needs to be reverted.

You don't?? You do!

    ns_mutex create

returns a handle. But I can say:

    ns_mutex lock junklock

and it will *hide* a newly created mutex and tag it with "junklock"
in a thread-wide table. There you go. The user never "sees" that real
mutex handle. He knows only "junklock". With some object tricks you
"remember" the real handle so as to avoid a lookup in a locked table.
But when this shimmers away, you're out of business.

OTOH, when I say

    set mutexhandle [ns_mutex create]

I can do whatever I want with mutexhandle, it will always point to
that damn mutex. I need not do some other lookup in addition. I can
put it into nsv, transfer it over the network back to myself...

That's what I mean by removing the handle from users. If you do that
you need to do more, introduce more locks etc. Nothing is for free...

>> So, nothing is for free. By allowing the tcl programmer some freedom,
>> you charge him with some performance/concurrency.
>>
>> The best illustration:
>>
>>     lexxsrv:nscp 7> ns_mutex unlock themutex
>>     Connection closed by foreign host.
>>
>> Here no check is done: you get a core. But if themutex was
>> locked before, all would be fine.
>
> I guess this is a good example of trade offs -- performance over
> safety. But it's not a good example of the differences between names
> and handles, because you can do exactly the same thing:
>
>     % set m [ns_mutex create]
>     t0xfd3dd9 a0xa654800 ns:mutex
>     % ns_mutex unlock $m
>
> The trade off here is balanced: you can have one thing (safety) or
> you can have the other (performance). Your choice. For both handles
> and names.
>
> On the other hand, the choice between names and handles is not
> balanced. Assuming names actually work (you've sort of said they
> don't above), they are equally as fast. So the choice is fast and
> easy, or fast and hard.
>
> Fast and easy, please!
>
> There's also the question of whether names allow the same kind of
> expressive power as handles, in the nsproxy case for example. I
> think they do, but that's another thread.

Ah... the handle is an address of the "thing". The name is a tag. You
can tag the same address with several names, but only the address
matters, as that is how the lower level works. So, when you have a
name of a "thing", you need a table to associate it with an address
in order to find/use it. Right? If yes, then you need to protect
accesses to that table when the name you use refers to some
thread-wide thing. Right?

When you use handles DIRECTLY, the implementation is normally
responsible for making an as-fast-as-possible transformation to get
the thing's "address":

    t0xfd3dd9 a0xa654800 ns:mutex

Those 0x numbers are actually the hidden address of the mutex.

What I am saying is: every time you make something "nicer" or
"easier" to use, you need to do more work, potentially reducing
performance or concurrency or scalability. This case of tagging
thread sync primitives with names and using the names instead of
their handles is an example.

I do not say that what you've done is bad. Actually, it is the best
one can do. BUT: it costs concurrency, and people must know that it
might be unpredictable and dependent on some hidden implicit object
behaviour.

Bottom line: I do not object. I find it good in any case. It is just
important to know that one might run into concurrency issues at that
point. And it is not always 100% predictable unless you read between
the lines.
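To make the table argument concrete: a name is one extra indirection.
The real name-to-address table lives in C behind Ns_MasterLock(), but
an nsv array can stand in for it to show the shape of the lookup (the
array and mutex names below are made up):

    # startup: register names once, while still single-threaded
    nsv_set mutex_names themutex [ns_mutex create]

    # any thread, later: name -> handle (table lookup) -> address
    ns_mutex lock   [nsv_get mutex_names themutex]
    ns_mutex unlock [nsv_get mutex_names themutex]

A raw handle skips the first hop entirely, which is exactly the
concurrency cost being weighed in this exchange.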
From: Stephen D. <sd...@gm...> - 2006-10-04 16:44:35
|
On 10/4/06, Zoran Vasiljevic <zv...@ar...> wrote:
>
> On 04.10.2006, at 15:52, Stephen Deasey wrote:
>
>> Hmm... I think this does work. Here's how I read it:
>
> First time around, there will be 3 lookups.
> The next round should be a simple deref.
> All right, this might compute. If you have this in a proc
> body the "themutex" will be a literal object and will/should
> last "longer".

Not just proc bodies, but also sourced files and looping commands.
You have to go out of your way to evaluate a piece of Tcl code and
not have it compiled to byte code. Like use the nscp control
port... :-)

> I did it under the debugger in the nscp session and every
> time I get another object as "themutex". This is most probably
> because the code does not compile into bytecodes...
>
> OK. I can accept that. Still, you put this into a frequently
> called procedure and there you go (literals are freed, AFAIK,
> when procedure scope is exited): again you have global locking.

I'm not sure what you're saying here. What is freed, and when?

>> If one however does:
>>
>>     set m themutex
>>     set c thecond
>>
>>     ns_mutex lock $m
>>     while {} {
>>         ns_cond wait $c $m
>>     }
>>     ns_mutex unlock $m
>>
>> it WILL work. But how are you going to convey this information
>> to the Tcl programmer??? He's implicitly serializing his app
>> at the place he does not expect (lookup of the internal hash
>> table).
>
> Actually, the above might NOT work... :-)
>
> The first two init lines I haven't counted. It is just the
> remainder, and in fact most specifically the
>
>     ns_cond wait
>
> that was the problem, as it may awake and sleep again.
>
>> But I'm having a hard time coming up with real-world scenarios where
>> that might happen...
>
> This is OK. In the compiled code, things start to look somehow
> different.
> But also when you use the "themutex" to put it in the nsv array
> you lose the caching effect of the object.

Right. But why would you ever do this? This was a corner-case example
I gave to show that it *could* happen, if you tried real hard and
looked at it funny, but you wouldn't actually do this, right?

> Unlike with handles
> where you get the "real thing" immediately and need not global
> lock.

In the case of the thread objects this is the case, but they are
special (or weird). The internal rep of a thread object (mutex, cond
var etc.) is a serialised C pointer. Given the string you can
re-create the pointer (and if you try hard enough you can create an
invalid pointer and crash the server, hence weird).

But for handles in general, such as nsproxy, this is not the case.

Remember, if you can refer to thread objects by name then you don't
*need* to put the name in an nsv array, for example. And if you do,
ns_mutex create does in fact still return a handle.

> Keeping the handles "away" from the user, means you need to manage
> handles yourself because the underlying C code needs
> handles/pointers.
> Every time you do that, you need to lock. In this case the global
> lock.

But you don't. That's the point. If this isn't the case, then I've
done something wrong and the code needs to be reverted.

> So, nothing is for free. By allowing the tcl programmer some freedom,
> you charge him with some performance/concurrency.
>
> The best illustration:
>
>     lexxsrv:nscp 7> ns_mutex unlock themutex
>     Connection closed by foreign host.
>
> Here no check is done: you get a core. But if themutex was
> locked before, all would be fine.

I guess this is a good example of trade offs -- performance over
safety. But it's not a good example of the differences between names
and handles, because you can do exactly the same thing:

    % set m [ns_mutex create]
    t0xfd3dd9 a0xa654800 ns:mutex
    % ns_mutex unlock $m

The trade off here is balanced: you can have one thing (safety) or
you can have the other (performance). Your choice. For both handles
and names.

On the other hand, the choice between names and handles is not
balanced. Assuming names actually work (you've sort of said they
don't above), they are equally as fast. So the choice is fast and
easy, or fast and hard.

Fast and easy, please!

There's also the question of whether names allow the same kind of
expressive power as handles, in the nsproxy case for example. I think
they do, but that's another thread.

> In the Tcl threading extension I check the mutex for a locked state
> and return TCL_ERROR if a lock was attempted on an already locked
> mutex OR an attempt was made to unlock a never-locked mutex.
> This requires lots of low-level plumbing BUT it gives the Tcl
> coder maximum comfort.
>
> Neither way is "right" or "wrong". So you can't generally
> "avoid" using handles, as this will lead you to a situation
> where you must sacrifice some speed/concurrency to get
> the comfortable API.
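A script-level sketch of the checked behaviour Zoran describes from
the Tcl threading extension. The real checks are done in C with
low-level plumbing; the nsv bookkeeping below is purely illustrative
(it assumes the mutex_names table from the earlier sketch and an
ns_thread getid subcommand for a thread identifier), and it itself
costs a bucket lock per call, which is the safety-versus-performance
trade-off in a nutshell:

    # Checked wrappers: refuse to unlock a mutex this thread does
    # not hold, instead of letting the server dump core.
    proc checked_lock {name} {
        ns_mutex lock [nsv_get mutex_names $name]
        nsv_set mutex_owner $name [ns_thread getid]
    }

    proc checked_unlock {name} {
        if {![nsv_exists mutex_owner $name]
                || [nsv_get mutex_owner $name] ne [ns_thread getid]} {
            return -code error "mutex $name not locked by this thread"
        }
        nsv_unset mutex_owner $name
        ns_mutex unlock [nsv_get mutex_names $name]
    }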
From: Stephen D. <sd...@gm...> - 2006-10-04 15:51:41
|
On 10/4/06, Andrew Piskorski <at...@pi...> wrote:
> On Wed, Oct 04, 2006 at 12:05:48PM +0200, Zoran Vasiljevic wrote:
>> I've been thinking about that...
>> Basically, what you try to avoid is to replicate common
>> C-idioms in Tcl, because Tcl is not C after all.
>>
>> Would this include common threading paradigms like
>>
>>     cond wait cond mutex
>
> ns_cond has exactly the same semantics and usage style as the C
> pthread_cond_*() functions which underlie it, and thus is precisely
> as confusing and low-level as they are. This is annoying, especially
> when using ns_cond for the first time.
>
> However, it also has the important ADVANTAGE that the ns_cond
> implementation is simple, and that you know that ns_cond* and
> pthread_cond_* are in fact intended to work EXACTLY the same way.
>
> ns_cond has been extremely useful to me on the rare occasions when I
> needed it, but the average NaviServer user probably never uses it
> even once. And AOLserver (and thus NaviServer) has had ns_cond for
> many, many years. I bet the original ns_cond implementor couldn't
> think of a much better design, so he did the obvious simple thing:
> just wrap the ugly C APIs as is.
>
> Designing better, high-level, Tcl-ish APIs is non-trivial. And if
> you're going to design some better high-level API for waking up a
> thread, why even pollute your thinking with low-level "condition
> variable" stuff at all? Zoran, I think you've already got some nice
> message-passing style stuff in your Threads extension; maybe it
> would make more sense to add some friendlier "Hey, thread X, wake
> up, you've got work to do now!" style API there?
>
> And if so, I'd say leave ns_cond alone, it's just a low-level
> wrapper around pthread_cond_*, and that's fine.

ns_cond is a *literal* implementation of the underlying C API. An
ns_cond which takes a condition name rather than a handle is a
*faithful* implementation of the underlying C API. Which is entirely
different from the hypothetical ns_serialize command, which has no C
equivalent.

The reason that a name-based ns_cond makes sense as a low-level
building block, deviating from the C tradition of passing a handle
around, is that Tcl is a different environment. Each thread has its
own Tcl interp, yet each thread must use the same condition variable.

In C this is not a problem. Memory is shared, so you can use a global
variable to hold the condition var, and this is really easy. In Tcl,
you have to somehow communicate the handle to each interp, and this
is painful. Which leads to the crazy situation where it is easier to
use condition variables from C than it is in Tcl..!

So yes, we need to be careful when creating APIs that when we strive
for simplicity we do not make it difficult or impossible to do things
a little out of the ordinary. But we also need to remember that Tcl
and C are different, so that a low-level command does not necessarily
have to be a literal translation of the C API.
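For readers new to the idiom being debated, here is the canonical
condition-variable wait loop in Tcl form, assuming the name-based
ns_mutex/ns_cond variant discussed in this thread; the queue, mutex,
and condition names are illustrative. As with pthread_cond_wait(),
the predicate is re-tested in a loop because a waiter can wake
spuriously or lose the race to another consumer:

    # startup
    nsv_set queue items {}

    # consumer thread
    ns_mutex lock q_lock
    while {[llength [nsv_get queue items]] == 0} {
        ns_cond wait q_cond q_lock   ;# atomically unlocks, sleeps, relocks
    }
    set item [lindex [nsv_get queue items] 0]
    nsv_set queue items [lrange [nsv_get queue items] 1 end]
    ns_mutex unlock q_lock

    # producer thread
    ns_mutex lock q_lock
    nsv_lappend queue items "some-work"
    ns_cond broadcast q_cond
    ns_mutex unlock q_lock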