From: Magentus <mag...@gm...> - 2008-11-23 06:29:05
On Sat, 22 Nov 2008 15:05:53 +0200,
Twylite <tw...@cr...> wrote:
> Hi,
>> From: Magentus <mag...@gm...>
>> The [finally script] usage is trivial to implement using unset
>> traces (although not quite as clean, mostly since it uses a magic
>> variable name).
> This works for [proc] and [apply], but is not completely reliable.
> There is no guarantee that the magic finally variable will be the
> last to be unset, so a script like 'finally [list close $f]' is safe
> but 'finally { close $f }' may not behave as expected.
Granted. Which is why I generally attach it to the variable holding
the channel descriptor, and always use the [list ...] form. The
statement still stands, as usual, with some caveats.
> Also [try] is not a separate scope for variables, so it would have to
> have a special interaction with the magic finally variable such that
> [finally] scripts added inside the context of [try] are executed at
> the end of the [try].
No arguments that it's a bad way of adding finally scripts to [try]. I
was responding to the idea of a [finally] command in general. As said,
it's a nice idea, but trivial to implement (add: with care), and not
useful enough to worry about at this time. My apologies if the extra
comments didn't make that clear.
In short, I do think that a proper [finally] command would be handy,
and the one I offered can optionally be bound to a specific variable,
which makes it a lot safer; I was just saying it's not necessary.
Easier persistent local storage for a [proc] would be handy too (it can
sort of be achieved with continuations, although they're still much
more fiddly than necessary).
> Example:
> proc dostuff {} {
> set f [open {c:/boot.ini} r]
> trace add variable --finally--trap-- unset [list apply [list args { close $f ; puts done }]]
> }
> dostuff
> chan names ;# -> stdout stderr filed27ae8 stdin
As you said, a bad way of doing it. I tend to use [list] for ANYTHING
that's going to be deferred, unless I absolutely have to. That avoids a
whole lot of such problems.
> proc dostuff {} {
> set f [open {c:/boot.ini} r]
> trace add variable --finally--trap-- unset [list apply [list args [list close $f]]]
> }
> dostuff
> chan names ;# -> stdout stderr stdin
Certainly the way I'd do it. Mind you, I wouldn't waste an [apply] on
a hard-coded script like that, unless some of the core wizards here can
think of a reason why it's a good thing. (Which I would be most
interested in hearing.)
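One way to avoid a per-call [apply] here, sketched below with made-up names, is a small named helper whose trailing "args" swallows the three values (name, element, operation) that variable traces append to their callback:

```tcl
# Hypothetical helper: runs a deferred script, discarding the
# name/element/op arguments that the unset trace appends.
proc finalize {script args} {
    uplevel #0 $script
}

proc dostuff {} {
    set f [open /etc/hostname r]
    trace add variable --finally--trap-- unset [list finalize [list close $f]]
    set --finally--trap-- 1
    # ... work with $f ...
}
```

The trailing args cannot simply be dropped: a bare [list close $f] as the trace command would be invoked as `close $f name element op`, and [close] would error on the extra words.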
>> The [try] command for matching on something other than the return
>> code is excellent. Especially if it can match on return values as
>> well as errorcodes. How about this for a twist on the idea...
>>
>> try {
>> script
>> } catch {
>> var ?opts?
>> } then {
>> script
>> } handler .....and so on.....
>>
> This fits with extending [catch], e.g.
> catch { ... } em opts then { ... } handler {...}
> The feedback I've had so far on this approach has not been
> favorable. It seems that developers would prefer to keep the
> args/vars in the context of the handler body.
Hmmm..... Fair enough. My reasoning is this:
- It restricts and confuses the arguments to the individual handlers.
They're obvious, mostly redundant, and get in the way of other, more
useful, potentially optional arguments.
- What happens if each handler specifies a different set of variables?
Which ones will be defined when the code block completes? Or are they
only defined within the context of the handler being invoked? It's
confusing.
Having them specified up front makes it obvious that they're set within
the current scope, and hence will be available both to the invoked
handler and to code following the [try] block.
>> Regardless, why not have the handler clause evaluate an expression
>> in the context of a [dict with $opts]? Then you can use whatever
>> matching function you wish, the only minor pain is that you have to
>> use some ugly bracketing of the option names { ${-code} == 2 }.
>> But maybe there's a way around that, too, especially if the [dict
>> with] is doable read-only and non-destructively somehow.
> In a word, performance. I have been having conversations with other
> Tcl developers off-list, and proposed exactly this. It is
> unquestionably the most flexible option, but it forces a sequential
> consideration of each handler's expression, preventing any sort of
> heuristic to improve the performance of the construct. Since one of
> the uses of this [try] will be to build other language constructs,
> performance is something that deserves reasonable consideration.
> The tradeoff may be to have "pluggable handler matching" where some
> handlers can use exact matching ( O(1) time), some can use glob, some
> can use expr, etc. Doing this in a manner that maintains a simple
> syntax is quite difficult however.
This is pretty much exactly what I expected, and why I was thinking
that adding it later, to the standard already-specified
most-common-cases forms, would be optimal.
The simple [on] and [handle] cases are fast and efficient, and need
only support basic glob matching against the return and errorcode
values respectively. (A list-wise glob match would probably be useful
in a few places.) The [expr]-based matching is then reserved for making
curly cases readable, able to perform and/or conditionals as well as
extraction with [regexp] and every other form of matching known to
Tcl-kind.
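The list-wise glob match mentioned above could look something like this sketch (the proc name and the prefix-match semantics are my own assumptions, not anything specified in the thread):

```tcl
# Sketch: match each element of a pattern list against the
# corresponding element of an -errorcode list with [string match].
# A pattern longer than the errorcode cannot match.
proc errorcodeMatch {patterns errorcode} {
    if {[llength $patterns] > [llength $errorcode]} { return 0 }
    foreach p $patterns e $errorcode {
        if {![string match $p $e]} { return 0 }
    }
    return 1
}

errorcodeMatch {POSIX ENOENT *} {POSIX ENOENT {no such file}} ;# -> 1
errorcodeMatch {POSIX EACCES *} {POSIX ENOENT {no such file}} ;# -> 0
```

Matching element-wise rather than on the flattened string avoids false hits when an errorcode element itself contains spaces or glob characters.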
>> And finally for over-all syntax, what'd be wrong with tagging the
>> try clauses onto the end of the present [catch] command. Make the
>> options variable mandatory in this usage, and bring it into scope
>> for the evaluations as above.
> See above. I'm not necessarily against it, but it doesn't seem to be
> a popular option.
Yeah. I kind of got that myself. Just seems like a bit of duplication
to me. Nevermind.
>>> handle {code ?resultVar ?optionsVar??} { script }
>> Is there any actual practical use to putting code in the braces?
> Not that I'm aware of, no. My current thinking is that it will be
> outside the brackets, e.g.
> handle code/expr {?resultvar? ?optionsvar?} { body }
That would be _much_ preferable. I do think, though, that being able
to glob-match on a returned value is a requirement for being worth the
effort. Otherwise you'll have a bunch of branches, each with an
embedded [switch], and it's going to look worse, be less useful, and
probably be less efficient than what I've sometimes done:
switch -glob -- [catch {...} foo],$foo
hence my personal preference for moving the variables up top, and
having the syntax:
HANDLE errorcode-pattern {...}
ON return-code returnvalue-pattern {...}
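For reference, the catch+switch idiom mentioned above, spelled out in full (the proc, path handling, and patterns are made up for illustration):

```tcl
# The code,value idiom: the numeric return code from [catch] and the
# captured result are joined with a comma, so a single -glob [switch]
# can branch on both at once.
proc firstLine {path} {
    switch -glob -- [catch {open $path r} foo],$foo {
        0,* {
            # Success: $foo holds the channel.
            set line [gets $foo]
            close $foo
            return $line
        }
        1,*no such file* { return "missing: $path" }
        1,*              { return "error: $foo" }
    }
}
```

The comma is just a separator; since $foo is substituted into an already-parsed word, embedded spaces in the result don't break the subject string.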
One possible thought: a "return" (pending a better name) handler that
matches the return value, leaving that off the "on" handler.
So...
HANDLE errorcode-pattern {...}
RETURN returnvalue-pattern {...}
ON return-code {...}
might be better, on the basis that most of the return codes don't allow
you to specify a return value without producing them directly through
[return]. Further to that, the return-code could optionally be a list
of two words, with the return value pattern being the second, which
would allow the "on" form to handle it transparently without the
"return" form at all.
>> Something like a:
>> withvars {resultVar ?optionsVar?}
>> following the main try script indicating where to stash the
>> variables.
> One advantage of having the vars with the handler script is that it
> allows you to reuse handlers. e.g.
> set GENERAL_IO_HANDLER {{em opts} { log "Problem: $em" }}
> ...
> try {
> # some IO routine
> } handle error * {*}$GENERAL_IO_HANDLER
> And in this case it's no coincidence that the GENERAL_IO_HANDLER looks
> like an anonymous function that can be used with [apply]
I don't see any advantage to that at all. The handler won't be
compiled or anything of the kind any more than it would be without the
vars, and special magic is still going to need to be added to allow it
to be efficiently re-used with [apply] or [eval] or what-not.
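For what it's worth, the quoted handler shape can already be driven by [apply] today, outside any [try] machinery. A minimal sketch, with [puts] standing in for the quoted (nonstandard) [log] command:

```tcl
# A handler stored as an {argList body} pair, reusable via [apply].
set GENERAL_IO_HANDLER {{em opts} { puts "Problem: $em" }}

# Invoke it from a plain [catch]; $em is the message, $opts the
# return-options dict (8.5+ three-argument [catch]).
if {[catch {open /no/such/file r} em opts]} {
    apply $GENERAL_IO_HANDLER $em $opts
}
```

Whether [try] could cache the compiled lambda between invocations is exactly the "special magic" question raised above.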
>> For the blending with [if] option, there was chatter a while back
>> about fast [expr]-local variables intended mostly to hold partial
>> results during an expression; the main terms of the options dict
>> could quite readily be pre-loaded as [expr]-local variables.
> I'm very interested in the idea of extending [expr] in various ways,
> especially to make pattern matching easier and somehow bind the error
> options as variables into the expr. It's just not going to happen by
> 10 December, so we can't use any approach that relies on it.
Absolutely. Again, that's why I suggested having the [expr]-based
method in addition to basic glob-matched "handler" and "on code"
forms. Almost every place where I'd use the [try] structure falls into
one of two categories;
try {
... open a file and do stuff ...
} finally {
... close the file ...
}
and
try {
... do some stuff that might error ...
} handle "error BLAH:*" {
... handle error blah ...
} handle "error FOO:*" {
... handle error foo ...
} on break * {
... handle the aborted case ...
} on ok "* *" {
... handle two or more word return ...
} on ok "" {
... handle empty return ...
} on ok * {
... handle single-word or empty return ...
}
Without the [expr]-based match that's a little uglier than needed, but
still marginally better than the usual catch+switch method. The
pluggable handlers might let me do a [proc args]-style match, which
would be very useful for several other places as well as here (e.g.
useful continuations), but this would suffice for every use case I can
think of. The "return" form, or the optional second word of the "on"
form's code argument, would make that just a little bit neater...
--
Fredderic
Debian/unstable (LC#384816) on i686 2.6.23-z2 2007 (up 45 days, 22:33)