From: Jeff H. <je...@Ac...> - 2005-09-07 19:47:28
|
I have created an ActiveTcl-dev list for those interested in package and release management discussion about the ActiveTcl distribution. The list is intended for authors of packages that are included in the ActiveTcl distribution. It would discuss things like upcoming release timing (in case you want to finalize package changes), notable distribution changes (like threaded vs. non-threaded builds) and related issues. If you are an author of a package included in ActiveTcl, I would encourage signing up for the list (core maintainers may also be interested). Regards, Jeff Hobbs, The Tcl Guy http://www.ActiveState.com/, a division of Sophos |
From: Arjen M. <arj...@wl...> - 2005-09-07 06:41:35
|
Lars Hellström wrote: > I took the liberty of putting this entire "essay" on the Wiki for future reference and perhaps reverence :). > > Actually, this modularity is pretty straightforward to accomplish in Tcl, > if only one is prepared to give up on the traditional underspecified > notation. A notation that will work well is instead a prefix notation, > where one first specifies the "number system" (more properly: algebraic > structure) in force, then the operation within that system, and finally the > operands. Supposing that Z, Q, and C are the integers, rationals, and > complex numbers respectively, one might imagine calculations such as the > following (with % for prompt): > > % Z * 12 5 > 60 > % Q + 2/3 3/5 > 19/15 > % C / 1+2i 3+4i > 0.44+0.08i > % Q * [Q - 2/3 3/5] 5/4 > 1/12 > > (Realistic pure Tcl implementations would probably use {2 3} and {1 2} as > representations rather than 2/3 and 1+2i, but the more traditional forms of > these representations here make the examples easier to understand.) > > The point of having such "number system" commands is that they make it very > easy to generically implement e.g. vectors. Instead of having one codebase > for real vectors, another for complex vectors, etc., one can have just a > single [vectorspace] command that constructs a vector space on top of an > arbitrary base field[1]. To make C^2 a two-dimensional complex vector space > one might say > > vectorspace C^2 -scalars C -dim 2 > > after which the following would probably work: > > % C^2 + {{1.0 0.0} {0.0 0.0}} {{0.0 0.1} {0.0 2.0}} > {1.0 0.1} {0.0 2.0} > > (this result being (1.0+0.1i,2.0i)); whereas to make R^3 a > three-dimensional real vector space one would say > > vectorspace R^3 -scalars R -dim 3 > > and similarly to make W an eight-dimensional rational vector space > > vectorspace W -scalars Q -dim 8 > > etc. 
(Personally I more often need to define polynomials over some > particular ring than vectors, but the approach is the same in both cases.) > What the commands created by this [vectorspace] command do is simply that > they pick apart the vectors they are handed as arguments into components, > but rely on the specified -scalars command to implement the necessary > arithmetic with these components. That way, it is straightforward to build > complex vectors as vectors on top of complex numbers. > What a delightful solution, and with a few differences in the interface, I was able to concoct an actual script that will do just this: <http://wiki.tcl.tk/14686> Of course, the script is way too simple to be used robustly, but it is a start. Regards, Arjen |
From: <aku...@sh...> - 2005-09-07 03:16:10
|
Tcl/Tk 2005 Conference Schedule & Registration ============================================== The 12th Tcl/Tk Conference Schedules are available. The tutorials and paper presentation schedules have been finalized and are available at: http://www.tcl.tk/community/tcl2005/tut2005.html http://www.tcl.tk/community/tcl2005/schedule.html The abstracts for the selected papers are available at: http://www.tcl.tk/community/tcl2005/abstracts.html The conference dinner will be on Wednesday evening. Blueteam will be providing a social hour with drinks and munchies on Thursday evening. Registration is open for tutorials and technical sessions at: http://www.tcl.tk/community/tcl2005/reg.html Program Committee: ================== Donal Fellows University of Manchester Clif Flynt Noumena Corp. Ron Fox NSCL Michigan State University Jeff Hobbs ActiveState Corp. Steve Landers Digital Smarties Gerald Lester HMS Software Cyndy Lilagan Eolas Technologies Inc. Arjen Markus WL | Delft Hydraulics -- Sincerely, Andreas Kupries <aku...@sh...> <http://www.purl.org/NET/akupries/> ------------------------------------------------------------------------------- |
From: Techentin, R. W. <tec...@ma...> - 2005-09-06 16:42:02
|
Lars Hellström wrote: > > ... One of the reasons I haven't written much actual > code to back up these ideas is that [namespace ensemble] > requires Tcl 8.5, and it would be kind of a drag to code it > all up in one way now just to "have to" rewrite it all later. > Maybe if Tcllib had a forward-compatible [namespace ensemble] > emulation I could feel more inclined to proceed... That shouldn't be too hard to implement. Pick your favorite object system, and you can create objects with methods that act pretty much like ensembles. Here's a snit type that supports a [*] method. package require snit snit::type RealExpressions { method expr {args} { eval expr $args } method * {a b} { expr $a * $b } } RealExpressions Z package require tcltest tcltest::test numbers-Z-1 {multiply} { Z * 12 5 } {60} tcltest::test numbers-Z-2 {expr} { Z expr 12 * 5 } {60} tcltest::cleanupTests Bob -- Bob Techentin tec...@ma... Mayo Foundation (507) 538-5495 200 First St. SW FAX (507) 284-9171 Rochester MN, 55901 USA http://www.mayo.edu/sppdg/ |
From: Lars <lar...@re...> - 2005-09-06 16:23:39
|
At 16.58 +0200 2005-09-06, Neil Madden wrote: >Lars Hellström wrote: >> Actually, this modularity is pretty straightforward to accomplish in Tcl, >> if only one is prepared to give up on the traditional underspecified >> notation. A notation that will work well is instead a prefix notation, >> where one first specifies the "number system" (more properly: algebraic >> structure) in force, then the operation within that system, and finally the >> operands. Supposing that Z, Q, and C are the integers, rationals, and >> complex numbers respectively, one might imagine calculations such as the >> following (with % for prompt): >> >> % Z * 12 5 >> 60 >> % Q + 2/3 3/5 >> 19/15 >> % C / 1+2i 3+4i >> 0.44+0.08i >> % Q * [Q - 2/3 3/5] 5/4 >> 1/12 > >These are just straight-forward ensembles, right? Exactly! One of the reasons I haven't written much actual code to back up these ideas is that [namespace ensemble] requires Tcl 8.5, and it would be kind of a drag to code it all up in one way now just to "have to" rewrite it all later. Maybe if Tcllib had a forward-compatible [namespace ensemble] emulation I could feel more inclined to proceed... Some people would probably prefer to think of the "number systems" as objects and implement them in some OO system, but there's not much need to do so, as they typically wouldn't have much of an internal state to speak of. >e.g., spelling things >out a bit more these could be written as: > >integer multiply 12 5 >rational add 2/3 3/5 >etc. Yes, but the brevity of the formula is very seductive, so I'd expect one would have to try to compete with it. OTOH, even if these things are called Z, C, Q, etc. then they would of course typically be buried deep down in some namespace and imported directly into the context where they are needed. >This fits well with Tcl's model that commands determine the types of >their arguments, rather than the other way around. 
It does mean, >however, that you need to have uniquely named commands for each type -- >no overloading -- and that it is the duty of the programmer to decide >what type is the best interpretation (here, by specifying the number >system). In most sane situations, and IMHO, there is only one choice anyway. Procedures that were written with the intention that users should be able to throw arbitrary data at them, but the procedure still magically should be able to figure out what is The Right Thing To Do, could become more complicated to write, but only because the logic underlying that magic would have to be made explicit and put in the input stage. >You can reverse the situation by associating types with values I.e., include explicit type tags in the values. Certainly doable, but probably not quite what was discussed in the mail I replied to. >(or variables) and then using these types to determine the correct >command, e.g. TOOT and most other OO systems do this, and use a simple >dispatch mechanism -- first argument gets to choose the type, and thus >the correct operation. You could develop more complex dispatch >mechanisms, but without some form of static optimisation they can get a >bit slow. In TOOT I could do something like: > >set a [integer: 12] >set b [real: 4.2] > >puts [$a / $b] > >which would be equivalent to [integer / 12 4.2], and thus would probably >produce an integer result (although it might try to be clever and know >about other types too). That's the problem that the value=object approach to mathematics typically stumbles upon. OO is fine for unary operations, since these translate to methods that are applied to the object, but binary operations cannot be implemented without violating the opacity of at least one of the operands. If one (in a complicated expression) will anyway need to violate the object natures of all operands but one, then why bother with establishing one in the first place? 
By contrast, object = "number system" works pretty well (and, incidentally, often coincides with the category theoretical concept of object). >The point then is that we can do this stuff in Tcl today. We don't >particularly need any new machinery (although, for TOOT there are a few >things on my wish list). The only problem is with [expr] -- it is too >weak as a language to handle this stuff correctly. I'd rather just >jettison it completely... > >The other side of the coin is being able to specify interfaces, so that >we can be assured that [integer /] and [real /] (or whatever) are >roughly speaking the same operation This would be needed precisely to use [integer] and [real] in the same places. For those cases where one does want to do this (e.g. for coefficients of polynomials), one most likely wouldn't want to use /division/ (so there is little need to keep the two you mention the same), but certainly addition, subtraction, and multiplication. Typically the interfaces would correspond to established categories of algebraic structures: * a ring has addition, subtraction, multiplication. * a field has addition, subtraction, multiplication, and division. * an algebra has addition, subtraction, multiplication, and multiplication by scalar. * a group has multiplication and inversion. Etc. The tricky part is however what auxiliary operations to provide that don't feature prominently in the traditional definitions. An operation to test equality must often be provided explicitly. In rings, fields, and algebras there is always a zero element, which should probably be implemented as a nullary operation. Some rings have a one element (multiplicative identity), but not all do. Getting the interfaces right can be pretty tricky, so ideally one should have some method (declaration and versioning of interfaces?) that would prevent a codebase from getting completely useless if some part turns out to be wrong. Lars Hellström |
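The forward-compatible emulation Lars wishes for can be approximated in pure Tcl on pre-8.5 interpreters. The sketch below is illustrative only: neither [make-ensemble] nor the ::rational namespace exists in Tcllib, and the {numerator denominator} representation is the one suggested in the thread.

```tcl
# Hypothetical pre-8.5 stand-in for [namespace ensemble]: a dispatcher
# proc that forwards "$name $op arg..." to commands in a namespace.
proc make-ensemble {name ns} {
    proc $name {op args} [format {
        uplevel 1 [linsert $args 0 %s::$op]
    } $ns]
}

namespace eval ::rational {
    # Rationals as {numerator denominator} lists.
    proc + {a b} {
        foreach {p q} $a break; foreach {r s} $b break
        normalize [expr {$p*$s + $r*$q}] [expr {$q*$s}]
    }
    proc * {a b} {
        foreach {p q} $a break; foreach {r s} $b break
        normalize [expr {$p*$r}] [expr {$q*$s}]
    }
    proc normalize {p q} {
        # Reduce to lowest terms.
        set g [gcd $p $q]
        list [expr {$p/$g}] [expr {$q/$g}]
    }
    proc gcd {a b} {
        while {$b != 0} {foreach {a b} [list $b [expr {$a % $b}]] break}
        expr {abs($a)}
    }
}
make-ensemble Q ::rational
```

With this, [Q + {2 3} {3 5}] returns {19 15}, the list form of the 19/15 in the transcript above.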
From: Neil M. <ne...@Cs...> - 2005-09-06 15:00:27
|
Lars Hellström wrote: > Actually, this modularity is pretty straightforward to accomplish in Tcl, > if only one is prepared to give up on the traditional underspecified > notation. A notation that will work well is instead a prefix notation, > where one first specifies the "number system" (more properly: algebraic > structure) in force, then the operation within that system, and finally the > operands. Supposing that Z, Q, and C are the integers, rationals, and > complex numbers respectively, one might imagine calculations such as the > following (with % for prompt): > > % Z * 12 5 > 60 > % Q + 2/3 3/5 > 19/15 > % C / 1+2i 3+4i > 0.44+0.08i > % Q * [Q - 2/3 3/5] 5/4 > 1/12 These are just straight-forward ensembles, right? e.g., spelling things out a bit more these could be written as: integer multiply 12 5 rational add 2/3 3/5 etc. This fits well with Tcl's model that commands determine the types of their arguments, rather than the other way around. It does mean, however, that you need to have uniquely named commands for each type -- no overloading -- and that it is the duty of the programmer to decide what type is the best interpretation (here, by specifying the number system). You can reverse the situation by associating types with values (or variables) and then using these types to determine the correct command, e.g. TOOT and most other OO systems do this, and use a simple dispatch mechanism -- first argument gets to choose the type, and thus the correct operation. You could develop more complex dispatch mechanisms, but without some form of static optimisation they can get a bit slow. In TOOT I could do something like: set a [integer: 12] set b [real: 4.2] puts [$a / $b] which would be equivalent to [integer / 12 4.2], and thus would probably produce an integer result (although it might try to be clever and know about other types too). The point then is that we can do this stuff in Tcl today. 
We don't particularly need any new machinery (although, for TOOT there are a few things on my wish list). The only problem is with [expr] -- it is too weak as a language to handle this stuff correctly. I'd rather just jettison it completely... The other side of the coin is being able to specify interfaces, so that we can be assured that [integer /] and [real /] (or whatever) are roughly speaking the same operation -- satisfy some basic algebraic laws for instance. I personally like Haskell's type-class approach to this (and ad-hoc polymorphism in general), and that's the direction I intend to move TOOT in, when I get some free time. Cheers, -- Neil This message has been checked for viruses but the contents of an attachment may still contain software viruses, which could damage your computer system: you are advised to perform your own checks. Email communications with the University of Nottingham may be monitored as permitted by UK legislation. |
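On Tcl 8.5 and later, Neil's observation can be made concrete with [namespace ensemble] directly. A minimal sketch of a Z (integer) system; the names follow the examples in this thread and are not from any published package.

```tcl
# A "number system" command as a plain namespace ensemble (Tcl 8.5+).
# Exported procs become the ensemble's subcommands, so "Z * 12 5"
# dispatches to ::Z::*.
namespace eval ::Z {
    namespace export *
    namespace ensemble create
    proc + {a b} { expr {$a + $b} }
    proc - {a b} { expr {$a - $b} }
    proc * {a b} { expr {$a * $b} }
    proc / {a b} { expr {$a / $b} }
}
```

After this, [Z * 12 5] gives 60, exactly as in the interactive transcript quoted above.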
From: Techentin, R. W. <tec...@ma...> - 2005-09-06 13:59:34
|
Lars Hellström wrote: > > My apologies if the following is a bit long, but it concerns > a matter I have often thought about, and since a neighbouring > matter came up in the discussion last week I might just as > well put forth my findings on the matter. > Thank you very much for writing it. You have succinctly documented several important issues that we need to keep in mind for more general mathematics. I still wouldn't mind having a generalized expression parser, but now I've got a better understanding of how it ought to be applied. Bob -- Bob Techentin tec...@ma... Mayo Foundation (507) 538-5495 200 First St. SW FAX (507) 284-9171 Rochester MN, 55901 USA http://www.mayo.edu/sppdg/ |
From: Arjen M. <arj...@wl...> - 2005-09-06 13:09:35
|
Lars Hellström wrote: > > My apologies if the following is a bit long, but it concerns a matter I > have often thought about, and since a neighbouring matter came up in the > discussion last week I might just as well put forth my findings on the > matter. > I do think we quite agree here - speaking as a physical engineer with a love for mathematics. Unfortunately I had no time yet to read your mail in full, but indeed the overloading of notation which currently resides in the [expr] command should not lead to an overloading of the command itself, so to speak. I have a script that could be taken as a basis for dedicated [expr]-like commands - see earlier postings. I intend to enhance it with the ideas discussed in this and the other thread - just to find the time - sigh. Regards, Arjen |
From: Lars <lar...@re...> - 2005-09-06 12:46:53
|
My apologies if the following is a bit long, but it concerns a matter I have often thought about, and since a neighbouring matter came up in the discussion last week I might just as well put forth my findings on the matter. At 21.39 +0200 2005-08-30, Techentin, Robert W. wrote: >Andreas Kupries >> However the only extensibility hooks which are exposed are >> math functions. Operators are a whole different kettle of fish. >> I haven't looked at the parser code, but nevertheless believe >> that the operator priorities are hardwired. > >I wasn't thinking about adding new operators or priorities. Just redefining >existing operators so that they know how to handle a new "data type." The >BLT vector expression, for example, does different things for "double + >double" and "double + vector." Tcl_CreateMathFunc() lets me replace the >atan() function with one of my own cooking, and it would be nice if I could >replace the operators as well. > >Of course, more complexity is always more fun. :-) If I wanted to teach >[expr] to handle complex numbers, then I'd have to be able to specify >operands and parameters that went beyond int/wide/double. Same for vectors >and other stuff. I'd advise *against* aiming to extend [expr] to handle new types (as sketched above); in a way it already tries to handle too many. A better approach would be to have a [cexpr] for calculating with complex numbers, a [vexpr] for vectors, etc. The reason I say this is that in my opinion as a mathematician, the usual mathematical notations for operations are grossly unsuitable for the task of instructing a computer. The basic problem is that in mathematical writing the actual meaning of pretty much everything depends on the context (is e.g. * ordinary multiplication, inner product of vectors, multiplication of matrices, or what?) and the TRANslation of a FORmula to computer text generally fails to capture anything but minute traces of that context. 
As a consequence, there are a number of standard tricks around that are routinely employed for rediscovering that context. The most common trick is to have a fixed standard context which everything is relative to, and this is pretty much where [expr] is today. In the many languages where it is a task in its own right to even store any piece of data more complicated than a double, this is of course a very natural approach. It is however fundamentally impossible to rely upon in a system that can be extended. The most celebrated trick is polymorphism/overloading: the combination of argument types decide which of the identically named operations should be applied. Here one can furthermore distinguish between static ("compile-time") and dynamic ("run-time") variants, since the role of the programmer in specifying the context are somewhat different. Static polymorphism is actually something one sees quite a lot in mathematical formulae; it is for example traditional to denote vectors by bold letters, matrices by upper case letters, and scalars by lower case letters; one knows Au denotes matrix-vector product because A is a "matrix letter" and u is a "vector letter". In most computer languages this takes the form of explicit type declarations for variables, so that the choice of operation implementation can be guided by the types of the operand variables. Since Tcl doesn't type-tag variables however, static polymorphism isn't much of an option for us. What one could possibly do (and I tend to think that this is not a bad idea, but not something I particularly wish to engage in) is to introduce a little "math" language that was more powerful than that of [expr], and seek to make that extendable. If a little language has variables of its own, then these can be declared with types, and static polymorphism becomes possible. 
If one allows several statements per "math" Tcl command, then one may find that very mathematical Tcl procedures can be written with bodies entirely in the little "math" language, so there is no need to make tight coupling between general Tcl variables and the little language variable. If the little math language is implemented by a compiled extension (it probably should be, for speed reasons), then it could have its own bytecode engine so that we wouldn't need to petition the TCT for hooks into the core engine, etc. But that isn't the kind of suggestion that was discussed above. What remains for us is rather dynamic polymorphism, where the values of the operand determine the operation applied. This too exists in the present [expr], and has its most striking appearance for "/", where for example [expr 3/2] is quite different from [expr 3.0/2]. Even dynamic polymorphism faces some rather obvious obstacles in a typeless language like Tcl, since there is no existing type system and no visible tagging of the values to fall back on; it is in principle necessary that the values are repeatedly examined to see if they can be seen as belonging to a particular type. Efficiency-wise this can be a great burden, but there are probably ways to minimise the effects in practice. More problematic is that the type of a value will probably not be unique. A reasonable encoding of a complex number is as a two element list of doubles, but that is also the natural encoding of a vector in R^2, so how is {1.0 0.0}+{0.0 1.0 0.0} to be interpreted? Scalar (although complex) plus vector (automagically interpreted as "add this scalar to each vector element"), or an error because the vectors are of different lengths, or what? The ambiguities will get pretty severe once one starts combining code by different authors. My main critique of such dynamic polymorphism is however not the above, but that it is in almost all cases stupid. 
I, as a programmer, typically _know_ when I write a command _exactly_ which operation I want performed (e.g., is this going to be a vector-vector or matrix-vector product?), so a programming language that doesn't allow me (or even merely discourages me) to tell the computer this certainly has a built-in flaw. That the computer guesses my intentions when I write commands interactively is nice, but in a programmatic situation it is of little use and very likely to lead to unexpected run-time errors further on. It should be avoided whenever possible. If one has a [cexpr] command that is like [expr] but computes with complex numbers (upgrading doubles and integers whenever necessary, and preferably doesn't have a special case for integer division), then the ambiguities can be kept somewhat under control, since the author of this [cexpr] has taken on the task to sort the edge cases out. The same goes for a [vexpr] for vectors. Edge cases still exist, but they don't arise accidentally when different extensions are combined in the same interpreter, only when someone actually extends someone else's extensions. The downside of this is that if someone has done a [cexpr] for complex numbers and a [vexpr] for real vectors then there's no automatic way of combining these features to make a [cvexpr] for complex vectors---instead it will be necessary to duplicate the code. Sensible resolution of ambiguous expression interpretations probably requires doing it that way, but since vectorification and complexification are mostly orthogonal one might still wish for some kind of modularisation that would make it easy to combine the two. Ideally one should be able to pick a "complex" module and a "vector" module off the shelf that would trivially combine into a "complex vector" module! Actually, this modularity is pretty straightforward to accomplish in Tcl, if only one is prepared to give up on the traditional underspecified notation. 
A notation that will work well is instead a prefix notation, where one first specifies the "number system" (more properly: algebraic structure) in force, then the operation within that system, and finally the operands. Supposing that Z, Q, and C are the integers, rationals, and complex numbers respectively, one might imagine calculations such as the following (with % for prompt): % Z * 12 5 60 % Q + 2/3 3/5 19/15 % C / 1+2i 3+4i 0.44+0.08i % Q * [Q - 2/3 3/5] 5/4 1/12 (Realistic pure Tcl implementations would probably use {2 3} and {1 2} as representations rather than 2/3 and 1+2i, but the more traditional forms of these representations here make the examples easier to understand.) The point of having such "number system" commands is that they make it very easy to generically implement e.g. vectors. Instead of having one codebase for real vectors, another for complex vectors, etc., one can have just a single [vectorspace] command that constructs a vector space on top of an arbitrary base field[1]. To make C^2 a two-dimensional complex vector space one might say vectorspace C^2 -scalars C -dim 2 after which the following would probably work: % C^2 + {{1.0 0.0} {0.0 0.0}} {{0.0 0.1} {0.0 2.0}} {1.0 0.1} {0.0 2.0} (this result being (1.0+0.1i,2.0i)); whereas to make R^3 a three-dimensional real vector space one would say vectorspace R^3 -scalars R -dim 3 and similarly to make W an eight-dimensional rational vector space vectorspace W -scalars Q -dim 8 etc. (Personally I more often need to define polynomials over some particular ring than vectors, but the approach is the same in both cases.) What the commands created by this [vectorspace] command do is simply that they pick apart the vectors they are handed as arguments into components, but rely on the specified -scalars command to implement the necessary arithmetic with these components. That way, it is straightforward to build complex vectors as vectors on top of complex numbers. 
All it takes, really, is good specifications of the interfaces between the various modules. Primarily people must agree on what to call things ("+" or "plus" or "add"?). Secondarily there is the need for introspection. Sometimes a higher level operation must be implemented in different ways depending on what is available on the lower level, and then this "what is available" query must be possible to perform programmatically. But here I'm getting waaaaay ahead of things. My basic point is that one shouldn't be too concerned about getting [expr] to do fancy things, since it's not a very good way to do things anyway, it just happens to be one we're very used to. Lars Hellström PS: [1] An algebraic structure which supports addition, subtraction, multiplication, and division of arbitrary elements (except division by zero) while satisfying the usual laws of arithmetic is known as a /field/. This is not in any way related to "vector fields", which are just a traditional name for "vector-valued function". |
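A toy rendering of the [vectorspace] constructor described in the essay, assuming scalar commands that accept the prefix form `$cmd op a b`. It only wires up component-wise + and -, accepts but does not check -dim, and is an illustrative sketch rather than the script Arjen later posted on the wiki.

```tcl
# Build component-wise vector arithmetic on top of an arbitrary
# -scalars command, per the essay's interface.
proc vectorspace {name args} {
    array set opt {-scalars {} -dim 0}
    array set opt $args
    # The generated command splits its vector arguments into
    # components and delegates each component pair to the scalars.
    proc $name {op u v} [format {
        set w {}
        foreach x $u y $v {
            lappend w [%s $op $x $y]
        }
        return $w
    } $opt(-scalars)]
}

# A real-number scalar command to build on (also a sketch):
proc R {op a b} {
    switch -- $op {
        + { expr {$a + $b} }
        - { expr {$a - $b} }
        default { error "R: unsupported operation \"$op\"" }
    }
}
vectorspace R^3 -scalars R -dim 3
```

[R^3 + {1.0 2.0 3.0} {0.5 0.5 0.5}] then yields {1.5 2.5 3.5}, and swapping in a complex scalar command would give complex vectors with no change to [vectorspace] itself, which is the modularity the essay argues for.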
From: Will D. <wi...@wj...> - 2005-09-02 04:22:17
|
Yeah, I saw that; I've used that technique once or twice but it had never occurred to me to make it the basis for a library. I think there's a place for it, but I think it also needs to be used with care. The place I've used it is in the Preferences window for Notebook. The Preferences window contains a BWidgets "NoteBook" widget, which implements a tabbed notebook. You ask the widget to create named pages, which are frames; then you fill in the frames. I've used widgetadaptors to adapt the frames and fill them in with the behavior I want. Except that for some reason I stopped doing that; I'm now creating a standard snit::widget for each tab and just packing it into the tabbed notebook's frame. I'm not sure why I changed it, but there must have been a reason. The problem with using widgetadaptors to mix multiple features into a widget is that each adaptor adds another layer of indirection between the widget command (.foo) and the real Tk widget. If you do it too much, your GUI might be a bit sluggish. There's another way to get the same behavior; it might use a bit more memory, but it's architecturally simpler: define the features you want to mix-in as Snit macros. Here's a trivial example. Say you like your text widgets to have a "clear" subcommand that erases all text; ".text delete 1.0 end" is simply too much typing. You could define a widgetadaptor with the following method, and mix it in. method clear {} { $hull delete 1.0 end } *OR*, you could do this: snit::macro text_clear {} { method clear {} { $hull delete 1.0 end } } Then, whenever you define a customized text widget using a widgetadaptor, you can add a "clear" method like this: snit::widgetadaptor text_with_clear { . . . text_clear . . . } Of course, the macro could define more than one method; it could define a whole host of methods. And you could pass in the name of the component, instead of always using "hull". 
In short, there are a whole host of things you can implement this way, and then reuse in multiple widgets. And of course, this works with normal snit::types as well. Will On Sep 1, 2005, at 7:48 AM, Techentin, Robert W. wrote: > I see in this week's Tcl-URL a pointer to a new wiki page > (http://wiki.tcl.tk/14638) advocating creation of widget mixins as > snit > widgetadapters. I haven't done this myself, but I think the > concept looks > very good. Basically, instead of declaring a Bwidget-style > scrolled window > widget and adding your widget with a separate command... > > set sw [scrolledwindow $f.sw] > set txt [text $sw.mytext] > $sw setwidget $txt > > You create the scrolled object, which attaches to your widget. > > set txt [scrollbars [text $f.mytext]] > > And the neat thing is that you can mix-and-match mixins, like > > set ent [balloonhelp [scrollbars [entry $f.nameentry] -sides s] - > helptext > "Enter your name"] > > > Would mixins of this style, for functions like scrollbars, help, file > completion, history, etc. be a useful component of a snit widget > library? > Or would a better home be a new adapter library? Has anybody had much > experience creating these sorts of widgetadapters? > > Bob > -- > Bob Techentin tec...@ma... > Mayo Foundation (507) 538-5495 > 200 First St. SW FAX (507) 284-9171 > Rochester MN, 55901 USA http://www.mayo.edu/sppdg/ > _______________________________________________ > Snit mailing list > Sn...@li... > http://lists.wjduquette.com/listinfo.cgi/snit-wjduquette.com > ------------------------------------------------------------- will -at- wjduquette.com | Catch our weblog, http://foothills.wjduquette.com | The View from the Foothills |
From: Techentin, R. W. <tec...@ma...> - 2005-09-01 14:51:56
|
I see in this week's Tcl-URL a pointer to a new wiki page (http://wiki.tcl.tk/14638) advocating creation of widget mixins as snit widgetadapters. I haven't done this myself, but I think the concept looks very good. Basically, instead of declaring a Bwidget-style scrolled window widget and adding your widget with a separate command...

    set sw [scrolledwindow $f.sw]
    set txt [text $sw.mytext]
    $sw setwidget $txt

You create the scrolled object, which attaches to your widget.

    set txt [scrollbars [text $f.mytext]]

And the neat thing is that you can mix-and-match mixins, like

    set ent [balloonhelp [scrollbars [entry $f.nameentry] -sides s] -helptext "Enter your name"]

Would mixins of this style, for functions like scrollbars, help, file completion, history, etc., be a useful component of a snit widget library? Or would a better home be a new adapter library? Has anybody had much experience creating these sorts of widgetadapters?

Bob
--
Bob Techentin tec...@ma...
Mayo Foundation (507) 538-5495
200 First St. SW FAX (507) 284-9171
Rochester MN, 55901 USA http://www.mayo.edu/sppdg/
|
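A minimal sketch of the mixin-as-widgetadaptor style (assuming Snit; the `clearable` name and the use of `installhull` to adopt an already-created widget are illustrative, not taken from the wiki page itself):

```tcl
package require Tk
package require snit

# A behavior-only mixin: wraps an existing text-like widget in place
# and adds a [clear] subcommand, delegating everything else to it.
snit::widgetadaptor clearable {
    delegate method * to hull
    delegate option * to hull

    constructor {args} {
        installhull $win          ;# adopt the widget already living at $win
        $self configurelist $args
    }

    method clear {} {
        $hull delete 1.0 end
    }
}

# Mix-and-match usage, in the style shown above:
#   set txt [clearable [text .f.mytext]]
#   $txt clear
```

Geometry-changing mixins like `scrollbars` need more machinery (a container frame and repacking); each such stacked adaptor is also one more layer of indirection in front of the real Tk widget.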
From: Andreas K. <and...@Ac...> - 2005-08-31 17:24:39
|
> At 20.57 +0200 2005-08-30, Andreas Kupries wrote: > > > >(*) Note that the internal format I talked about is not the rcs > format, but a Tcl data structure (list). > > Yes, I was aware of this. As far as explained in the manpage, this list is > however just a parsed form of the rcs format diffs, so it keeps almost all > the pros and cons of the latter. Yes, as I then noted a bit later. > >> - The context and unified diff formats provide for including as > >> context some lines around each change. Such context lines are > > important for proper operation of the patch program. > > > >Hm. Ok, agreed. Keeping that information makes sense. It has to be optional > >however because there are formats which do not support this. > > My intention is that the Tcl format should work as a functional superset of > the various file formats, so that a FOO_diff -> Tcl_diff -> FOO_diff > conversion sequence should always reproduce an equivalent file. In the case > of features (such as extra lines of context) that are not supported by the > target format, it would be the responsibility of the corresponding > converter to drop that information. Yes, exactly. > >So feel free to post your suggestions, whenever you find time. > > OK, here is an outline, from small to large. > > For the individual hunks (as they are apparently called, i.e., each "range" Sometimes "chunks" as well, but I am getting off-track. > of lines in a diff), I've used a "line list" with the format > > type1 line1 type2 line2 ... typeN lineN > > where the varous line elements are actual lines, and the type elements > specify what kind of line it is. Such a list is easy to loop over using a > [foreach] command. > > The three basic line types are > > "-" Line only in file1 (thus: "is being removed"). > "+" Line only in file2 (thus: "is being added"). > "0" Line in both files I was confused at first about the apparent lack of line-numbers, but the "0" type explains it. 
You keep everything from both files which is unchanged as well. You need the line information only per hunk. In a very non-optimal degenerate representation you could even describe all changes in a single hunk containing basically both revisions of the whole file ... I believe I have read about something like that before, IIRC it was called a "weave"-format. Back to yours ... > A change from [some examples elided] > format. The rcs format diffs OTOH: > > d2 1 > a2 2 > The > Cool > > doesn't say what the deleted line was, so that needs something more. I was Oh, yes, when we import to our format from a rcs format diff. Bad. > thinking that "-?" could mean "line only in file1, but I don't know what > was in it". Then the above could be encoded as > > -? {} + The + Cool Yes, something like that. > whereas the rcs diff from file2 to file1 (recombining the lines) would be > > d2 2 > a3 1 > The Cool > > and correspond to > > -? {} -? {} + {The Cool} > As a general rule one could have that lines only in file1 have a type > beginning with -, lines only in file2 have a type beginning with +, and > lines in both files have a type beginning with 0. Diff consumers which do > not understand a particular type should treat it as the corresponding -+0 > type. Unterminated lines could e.g. have types "-/", "+/", and "0/" > respectively. > Well, that's the line-list contents. The diff hunks also contain line > numbers specifying where in the files these lines appear, so my idea has > been that a Tcl format hunk should be a list with the structure > > start1 end1 start2 end2 line-list ?attribute value ...? > Here the startK and endK are the numbers in fileK of the first and last > (inclusive range) lines this hunk. In principle it would be possible to > recompute the end line numbers by counting lines in the line-list, but many > diff formats list them explicitly anyway, and for e.g. 
a text editor > navigating to the relevant range in the file it saves a lot of trouble > knowing exact line numbers. > A reason for making the line numbers separate list elements is that this > makes it possible to do an [lsort -index] on a list of hunks. In my use of > these in docstrip::util, I found that to be *very* convenient. Another reason is that this redundancy allows us to sanity-check a hunk, i.e. to compare the stored end line-numbers against computed ones, and complain loudly if there is a difference. It makes it more difficult to edit hunks manually, which is not too bad a property either. > The optional "attribute value" stuff is for encoding additional bells and > whistles. I've noticed that in some diffs you can get "section headings" > (often C function names) above each hunk, so that one can quickly orient > oneself. This could then e.g. be put as a "heading" attribute of the hunk. Interesting. I did not know that about the diff formats. ... Other attributes I can think of which are wholly ours are checksums, i.e. like an md5 hash over the hunk data. Again something to make casual changes to the hunk more difficult. > Beyond that, I haven't got a clear suggestion. Obviously diffs > are lists of > hunks, so that's what one gets, but some diffs also include the names of > the files being compared, and that should be encoded too somehow. Yes. Always, superset. This is a necessary thing for the multi-file diffs you mention next. > There are also multifile diffs (directory diffs and patches made by concatenating > individual diffs) around, so the general hierarchy is probably something > like > > diff > | > +- file > | > +- hunk > | > +- line Yes. The context and unified formats list the file being modified before the hunks and may contain hunks for an arbitrary number of files. The regular patch format ... 
Might be possible > With that complicated a structure, it is getting easy to imagine a whole > bunch of commands that examine and modify the structure of a diff (count > files, count lines changed, get Nth file of diff as a separate diff, etc.), Also get a specific hunk in a specific file, or the nth hunk from the beginning, etc. > but perhaps their number can be reduced if one specifies that a "diff" is > always a list of "file diffs". Hm ... Maybe a hierarchy of objects/classes. A diff container returning filediff containers returning hunk objects. That way the commands we need are organized at various levels of the hierarchy. One application I would like to have would be a patch editor which allows me to edit a patch, or to separate it into several logically disjoint patches, etc. Down to the line level. Maybe even something scriptable ... Ok, off-track rambling. I mentioned in my response to Kevin's reminder regarding [struct::list longestcommonsubsequence] the possibility of handling binary files, and binary changes. We might be able to handle them with the proposed hunk format if we say that the line-numbers are not always line-numbers, but can be byte (or character) offsets into the file as well. We would need some additional indicator telling us the type of the numbers. The data part can be left unchanged, as we Tcl'ers are lucky enough to have no problems when it comes to binary data. ... I said 'or character offsets' as that will allow us to create and manipulate character-based patches of text files too. An interesting property of character and byte patches is that sequences of - and + blocks in a hunk can be merged (or split) with impunity. {- a - b} is the same as {- ab}. This is not possible for line-based blocks, because the merge of two blocks is not a single line anymore. 
I wonder if we should rename the 'rcs' module to 'patch', for handling the various patch formats, and then introduce a module 'diff' which contains functions to perform various diff operations, creating patches in the internal format proposed here. -- Andreas Kupries <and...@Ac...> Developer @ http://www.ActiveState.com, a division of Sophos Tel: +1 604 484 6491 |
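The end-line redundancy makes the sanity check discussed in this thread easy to write. A hedged sketch (the `hunkcheck` name and the hunk layout `{start1 end1 start2 end2 line-list}` follow the proposal in this thread, not any released tcllib API; Tcl 8.5 is assumed for `lassign`):

```tcl
# Verify that the stored end line numbers of a hunk agree with the
# line counts recomputed from its line-list.  Returns 1 if consistent.
proc hunkcheck {hunk} {
    lassign $hunk start1 end1 start2 end2 lines
    set n1 0; set n2 0
    foreach {type line} $lines {
        switch -glob -- $type {
            -* { incr n1 }
            +* { incr n2 }
            0* { incr n1; incr n2 }
        }
    }
    expr {($end1 - $start1 + 1) == $n1 && ($end2 - $start2 + 1) == $n2}
}

# Example (the "Tcl is / The Cool / Language" hunk from this thread):
#   hunkcheck {1 3 1 4 {0 {Tcl is} - {The Cool} + The + Cool 0 Language}}
#   returns 1
```

Using glob matching on the type means extended types such as "-?" or "+/" are counted like their basic -, +, 0 forms, as Lars suggests diff consumers should do.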
From: Andreas K. <and...@Ac...> - 2005-08-31 16:39:09
|
> In the discussion of 'diff' formats, it might be worthwhile also > to consider whether it would be desirable to wrap > ::struct::list::Llongestcommonsubsequence so that it can produce > diffs in the chosen format. This proc is Tcl's very own "diff engine". It should generate diffs in the internal format we choose; that way we can generate all types of diffs. Also to think about: should the wrapper be line-based? I remember that 'eskil', a variant of 'tkdiff', actually shows the exact words or char-sequences which were added/deleted/changed. I do not know how it does that; still, this might be/is an interesting thing for patches as well. IOW the main problem I see for the Llongestcommonsubsequence is IMHO choosing the tokenization. This also brings up the question: what about binary data? Is there a good tokenization which would allow us to handle this as well? Can we think of an internal format which is capable of representing binary patches? -- Andreas Kupries <and...@Ac...> Developer @ http://www.ActiveState.com, a division of Sophos Tel: +1 604 484 6491 |
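A hedged sketch of such a wrapper, emitting the flat "type line" list format discussed elsewhere in this thread. It assumes tcllib's `struct::list` with `longestCommonSubsequence` and `lcsInvertMerge` (the chunk shape `{type range1 range2}` is my reading of the struct::list manpage; double-check against your tcllib version), plus Tcl 8.5 for `lassign` and `{*}`:

```tcl
package require struct::list

# Compare two lists of lines and return a flat {type line ...} list,
# with "-" = only in lines1, "+" = only in lines2, "0" = in both.
proc linediff {lines1 lines2} {
    set lcs [struct::list longestCommonSubsequence $lines1 $lines2]
    set out {}
    set chunks [struct::list lcsInvertMerge $lcs \
                    [llength $lines1] [llength $lines2]]
    foreach chunk $chunks {
        lassign $chunk type r1 r2
        switch -- $type {
            unchanged { foreach l [lrange $lines1 {*}$r1] { lappend out 0 $l } }
            deleted   { foreach l [lrange $lines1 {*}$r1] { lappend out - $l } }
            added     { foreach l [lrange $lines2 {*}$r2] { lappend out + $l } }
            changed   {
                foreach l [lrange $lines1 {*}$r1] { lappend out - $l }
                foreach l [lrange $lines2 {*}$r2] { lappend out + $l }
            }
        }
    }
    return $out
}
```

Tokenization is then just a matter of what you split the input into: lines for a classic diff, words or characters for an eskil-style display, fixed-size byte blocks for binary data.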
From: <ke...@cr...> - 2005-08-31 15:41:17
|
In the discussion of 'diff' formats, it might be worthwhile also to consider whether it would be desirable to wrap ::struct::list::Llongestcommonsubsequence so that it can produce diffs in the chosen format. This proc is Tcl's very own "diff engine". -- 73 de ke9tv/2, Kevin KENNY GE Corporate Research & Development ke...@cr... P. O. Box 8, Bldg. K-1, Rm. 5B36A Schenectady, New York 12301-0008 USA |
From: Lars <lar...@re...> - 2005-08-31 15:20:37
|
At 20.57 +0200 2005-08-30, Andreas Kupries wrote: > >(*) Note that the internal format I talked about is not the rcs format, but >a Tcl data structure (list). Yes, I was aware of this. As far as explained in the manpage, this list is however just a parsed form of the rcs format diffs, so it keeps almost all the pros and cons of the latter. >> - The context and unified diff formats provide for including as >> context some lines around each change. Such context lines are > important for proper operation of the patch program. > >Hm. Ok, agreed. Keeping that information makes sense. It has to be optional >however because there are formats which do not support this. My intention is that the Tcl format should work as a functional superset of the various file formats, so that a FOO_diff -> Tcl_diff -> FOO_diff conversion sequence should always reproduce an equivalent file. In the case of features (such as extra lines of context) that are not supported by the target format, it would be the responsibility of the corresponding converter to drop that information. >> I do have a suggestion myself for what a format might look like, but I'll >> have to get back to you on that issue ... work is calling my attention. > >See above, my note about changing the API. This module is still below 1.0 >and can be modified as we see fit to support all the patch formats we care >about. > >So feel free to post your suggestions, whenever you find time. OK, here is an outline, from small to large. For the individual hunks (as they are apparently called, i.e., each "range" of lines in a diff), I've used a "line list" with the format type1 line1 type2 line2 ... typeN lineN where the various line elements are actual lines, and the type elements specify what kind of line it is. Such a list is easy to loop over using a [foreach] command. The three basic line types are "-" Line only in file1 (thus: "is being removed"). "+" Line only in file2 (thus: "is being added"). 
"0" Line in both files A change from Tcl is The Cool Language to Tcl is The Cool Language can thus be represented as the line-list 0 {Tcl is} - {The Cool} + The + Cool 0 Language but with fewer lines of context alternatively as - {The Cool} + The + Cool The "normal" format diff for the above is 2c2,3 < The Cool --- > The > Cool and it should be clear how to encode that with one - and two + lines. The corresponding "context" format diff is *************** *** 1,3 **** Tcl is ! The Cool Language --- 1,4 ---- Tcl is ! The ! Cool Language and this is more the 0 - + + 0 lines variant. Similarly with the "unified" format. The rcs format diffs OTOH: d2 1 a2 2 The Cool doesn't say what the deleted line was, so that needs something more. I was thinking that "-?" could mean "line only in file1, but I don't know what was in it". Then the above could be encoded as -? {} + The + Cool whereas the rcs diff from file2 to file1 (recombining the lines) would be d2 2 a3 1 The Cool and correspond to -? {} -? {} + {The Cool} As a general rule one could have that lines only in file1 have a type beginning with -, lines only in file2 have a type beginning with +, and lines in both files have a type beginning with 0. Diff consumers which do not understand a particular type should treat it as the corresponding -+0 type. Unterminated lines could e.g. have types "-/", "+/", and "0/" respectively. Well, that's the line-list contents. The diff hunks also contain line numbers specifying where in the files these lines appear, so my idea has been that a Tcl format hunk should be a list with the structure start1 end1 start2 end2 line-list ?attribute value ...? Here the startK and endK are the numbers in fileK of the first and last (inclusive range) lines this hunk. In principle it would be possible to recompute the end line numbers by counting lines in the line-list, but many diff formats list them explicitly anyway, and for e.g. 
a text editor navigating to the relevant range in the file it saves a lot of trouble knowing exact line numbers. A reason for making the line numbers separate list elements is that this makes it possible to do an [lsort -index] on a list of hunks. In my use of these in docstrip::util, I found that to be *very* convenient. The optional "attribute value" stuff is for encoding additional bells and whistles. I've noticed that in some diffs you can get "section headings" (often C function names) above each hunk, so that one can quickly orient oneself. This could then e.g. be put as a "heading" attribute of the hunk. Beyond that, I haven't got a clear suggestion. Obviously diffs are lists of hunks, so that's what one gets, but some diffs also include the names of the files being compared, and that should be encoded too somehow. There are also multifile diffs (directory diffs and patches made by concatenating individual diffs) around, so the general hierarchy is probably something like diff | +- file | +- hunk | +- line With that complicated a structure, it is getting easy to imagine a whole bunch of commands that examine and modify the structure of a diff (count files, count lines changed, get Nth file of diff as a separate diff, etc.), but perhaps their number can be reduced if one specifies that a "diff" is always a list of "file diffs". Lars Hellström |
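The proposed hunk structure maps straightforwardly onto the existing file formats. Here is a hedged sketch of a converter to a unified-diff hunk (the `hunk2unified` name is hypothetical, only the three basic line types are handled, and Tcl 8.5 is assumed for `lassign`):

```tcl
# Render one hunk {start1 end1 start2 end2 line-list} as a unified-diff
# hunk: an "@@ -start1,len1 +start2,len2 @@" header followed by the lines.
proc hunk2unified {hunk} {
    lassign $hunk start1 end1 start2 end2 lines
    set n1 [expr {$end1 - $start1 + 1}]
    set n2 [expr {$end2 - $start2 + 1}]
    set out "@@ -$start1,$n1 +$start2,$n2 @@\n"
    foreach {type line} $lines {
        switch -- $type {
            - { append out "-$line\n" }
            + { append out "+$line\n" }
            0 { append out " $line\n" }
        }
    }
    return $out
}

# The "Tcl is / The Cool / Language" example above:
#   hunk2unified {1 3 1 4 {0 {Tcl is} - {The Cool} + The + Cool 0 Language}}
# produces:
#   @@ -1,3 +1,4 @@
#    Tcl is
#   -The Cool
#   +The
#   +Cool
#    Language
```

A converter in the other direction, plus per-format ones for context and rcs diffs, would give the FOO_diff -> Tcl_diff -> FOO_diff round trip described earlier in the thread.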
From: Arjen M. <arj...@wl...> - 2005-08-31 14:11:00
|
"Techentin, Robert W." wrote: > > Arjen Markus wrote > > > > Andreas Kupries wrote: > > > > > > Tcl_ParseExpr () is already a public API as can be > > > seen from the Tcl_ prefix. > > > > I know that part of Tcl's core a bit. It is a > > recursive-descent thing, the priorities are however > > hard-coded at the moment. > > I don't know that the hardcoded priorities are too bad a problem. > > Andreas pointed me to the tclparser extension (part of tclpro), which adds > script access to Tcl_ParseExpr(). The parsing function accepts a string > expression and returns a nested list which is roughly equivalent to the > command form that I mentioned yesterday. I wrote a little post-process proc > to walk the parse tree, and I can generate command syntax (more or less) > from expressions. For some odd reason, it even works about right for lists > and lists-o-lists. (code follows) > > Maybe just exposing that parser would be enough for Tcl math > implementations. > Yes, that sounds very nice - and it would mean a lot less work! Regards, Arjen |
From: Techentin, R. W. <tec...@ma...> - 2005-08-31 13:56:03
|
Arjen Markus wrote
>
> Andreas Kupries wrote:
> >
> > Tcl_ParseExpr () is already a public API as can be
> > seen from the Tcl_ prefix.
>
> I know that part of Tcl's core a bit. It is a
> recursive-descent thing, the priorities are however
> hard-coded at the moment.

I don't know that the hardcoded priorities are too bad a problem.

Andreas pointed me to the tclparser extension (part of tclpro), which adds script access to Tcl_ParseExpr(). The parsing function accepts a string expression and returns a nested list which is roughly equivalent to the command form that I mentioned yesterday. I wrote a little post-process proc to walk the parse tree, and I can generate command syntax (more or less) from expressions. For some odd reason, it even works about right for lists and lists-o-lists. (code follows)

Maybe just exposing that parser would be enough for Tcl math implementations.

Bob
--
Bob Techentin tec...@ma...
Mayo Foundation (507) 538-5495
200 First St. SW FAX (507) 284-9171
Rochester MN, 55901 USA http://www.mayo.edu/sppdg/

    package require parser

    proc pexpr {string {tree ""} {result ""}} {
        if {[llength $tree] == 0} {
            set tree [parse expr $string {0 end}]
        }
        switch [lindex $tree 0] {
            subexpr {
                foreach item [lindex $tree 2] {
                    set result [pexpr $string $item $result]
                }
            }
            operator {
                foreach {start length} [lindex $tree 1] {break}
                set end [expr {$start+$length-1}]
                append result "\[[string range $string $start $end] "
            }
            command -
            text {
                foreach {start length} [lindex $tree 1] {break}
                set end [expr {$start+$length-1}]
                append result "[list [string range $string $start $end]] "
            }
            variable {
                foreach {start length} [lindex $tree 1] {break}
                set end [expr {$start+$length-1}]
                append result "[string range $string $start $end] "
            }
        }
        return $result
    }

    foreach e {
        {$a+1}
        {$a+($b+$c)*tan($d)}
        {{3 4}+{2 1}}
        {({3 4}+{2 1})*4}
        {{3 4}+7}
        {7+{3 4}}
        {{{1 2} {3 4}} + {{5 6} {7 8}}}
    } {
        puts "$e --> [pexpr $e]"
    }

    $a+1 --> [+ $a 1
    $a+($b+$c)*tan($d) --> [+ $a [* [+ $b $c [tan $d
    {3 4}+{2 1} --> [+ {3 4} {2 1}
    ({3 4}+{2 1})*4 --> [* [+ {3 4} {2 1} 4
    {3 4}+7 --> [+ {3 4} 7
    7+{3 4} --> [+ 7 {3 4}
    {{1 2} {3 4}} + {{5 6} {7 8}} --> [+ {{1 2} {3 4}} {{5 6} {7 8}}
|
From: Arjen M. <arj...@wl...> - 2005-08-31 07:16:27
|
Andreas Kupries wrote: > > > Bob Techentin wrote: > > > > > > Arjen Markus wrote: > > > > > > > > I suggest a new command - not a replacement for [expr], > > > > something like [cexpr] if you want to deal with complex > > > > numbers, [vexpr] if it is vectors you are interested > > > > in and the like. > > > > As I ponder this a bit, I've imagined general expression implementations > > that could be extended. You can already add a new math function to Tcl's > > [expr] command (from C). We're talking about complex numbers and vectors. > > And in a flash of insight (or hallucination?) I imagined units-compatible > > expressions for dimensional analysis or quantity-type safe > > operations, such > > as adding velocities or computing thermal coefficients of expansion. > > ([uexpr]?) > > > Tcl's expression parser seems to be buried pretty deep in the C code. > > Yes. > > > It looks like BLT implemented another expression parser (in C), with > > extensions to handle named vectors. Would it make sense to try to figure > > out a way to expose Tcl_ParseExpr() to the outside world > > Tcl_ParseExpr () is already a public API as can be seen from the Tcl_ > prefix. > > However the only extensibility hooks which are exposed are math functions. > Operators are whole different kettle of fish. I haven't looked at the parser > code, but nevertheless believe that the operator priorities are hardwired. I > actually don't know if the parser is recursive-descent, or some yacc-based > thing. Only the rec-descent would have a chance IMHO of being extensible to > new operators and priorities. Or something which maintains an explicit stack > and has an extensible table of operator/priorities/codegens. > I know that part of Tcl's core a bit. It is a recursive-descent thing, the priorities are however hard-coded at the moment. A more flexible set-up would certainly be possible and might not be that much work either. 
But that would make sense only if you have a decent way of prescribing the operator precedence. And I am not quite sure how to do that ... TIP #133 does mention an API for it. Regards, Arjen |
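An explicit-stack parser with an extensible operator table, as sketched above, can be quite small. A toy illustration (this is not Tcl_ParseExpr; the `topostfix` name and the pre-tokenized input are simplifying assumptions, and Tcl 8.5 is assumed for `dict` and `lreverse`):

```tcl
# Shunting-yard with a table-driven priority lookup: adding a new
# binary operator is just one more entry in the prio table.
set prio {+ 1 - 1 * 2 / 2}

proc topostfix {tokens} {
    global prio
    set out {}
    set stack {}
    foreach t $tokens {
        if {[dict exists $prio $t]} {
            # Pop operators of higher-or-equal priority (left associative).
            while {[llength $stack] && [dict get $prio [lindex $stack end]]
                                        >= [dict get $prio $t]} {
                lappend out [lindex $stack end]
                set stack [lrange $stack 0 end-1]
            }
            lappend stack $t
        } else {
            lappend out $t    ;# operand
        }
    }
    concat $out [lreverse $stack]
}

# topostfix {a + b * c}  returns: a b c * +
# topostfix {a * b + c}  returns: a b * c +
```

Prescribing precedence then reduces to maintaining the table, which is roughly the "extensible table of operator/priorities/codegens" idea; a real version would also need parentheses, associativity flags, and unary operators.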
From: Arjen M. <arj...@wl...> - 2005-08-31 07:09:44
|
Andreas Kupries wrote: > > > > > No, I suggest a new command - not a replacement for [expr], something > > like [cexpr] if you want to deal with complex numbers, [vexpr] if it > > is vectors you are interested in and the like. > > Which can lead to a veritable combinatory explosion if you have lots of > different things to work with. Vectors of complex numbers, matrices, > matrices of complex numbers, matrices of vectors, and so on. > It is exactly this explosion that makes me want to use specialised commands (even though the whole framework can be generic). Otherwise we would burden an innocent, if elaborate, command like [expr] with the knowledge of many many arithmetic systems. Returning a procedure rather than a raw string .... yes. And I would like to use as much of Tcl 8.5's possibilities as possible for this. Regards, Arjen |
From: Arjen M. <arj...@wl...> - 2005-08-31 06:57:29
|
"Techentin, Robert W." wrote: > > Arjen Markus wrote: > > > > I suggest a new command - not a replacement for [expr], something > > like [cexpr] if you want to deal with complex numbers, [vexpr] if it > > is vectors you are interested in and the like. > > > > The behaviour would be much like [expr], only the {} would be > > mandatory. A sketch: > > - If the expression is new, parse it and turn it into a sequence of > > Tcl commands (or a nesting) such that this new command evaluates > > the expression according to the arithmetic that was meant. > > For instance: > > > > set w [cexpr {$z*$z-4}] > > > > could be turned into: > > > > set w [- [* $z $z] [complex 4]] > > I did a little bit of that last year. > > I needed to do complex math on vectors to manipulate scattering parameters. > I used separate BLT vectors for the real and imaginary parts, so for S11, I > had vectors S11r and S11i. So I modified Richard Suchenwirth's "poor man's > expression parser" from the wiki to rearrange an expression like > > a + ( b + c ) * d * e * f > > Into an equivalent command form, > > + a [* [* [* [[+ b c]] d] e] f]] > > Then defined complex operator commands for +-*/. I ended up with two long > expression strings which could be fed directly into BLT's vector expr > command. It worked pretty well, but it only supported the four operators, > vectors and constants. No functions, and I'm sure it wouldn't pass > anybody's expression validation suite. > > I'd be delighted to see something that could manipulate vectors of complex > numbers. > I remember us discussing that ... Yes, it is exactly the sort of thing I have in mind. Regards, Arjen |
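The command form sketched above is easy to prototype. A hedged toy, representing complex numbers as {re im} pairs (the global `+`, `-`, `*` commands and the `complex` constructor are illustrative names, not an existing package; Tcl 8.5 is assumed for `lassign`):

```tcl
# Complex numbers as two-element lists {re im}; arithmetic operators
# as ordinary Tcl commands, so that [cexpr {$z*$z-4}] could compile to
# [- [* $z $z] [complex 4]].
proc complex {re {im 0}} { list $re $im }

proc + {a b} {
    lassign $a ar ai; lassign $b br bi
    list [expr {$ar + $br}] [expr {$ai + $bi}]
}

proc - {a b} {
    lassign $a ar ai; lassign $b br bi
    list [expr {$ar - $br}] [expr {$ai - $bi}]
}

proc * {a b} {
    lassign $a ar ai; lassign $b br bi
    list [expr {$ar*$br - $ai*$bi}] [expr {$ar*$bi + $ai*$br}]
}

# set z [complex 1 2]
# set w [- [* $z $z] [complex 4]]
# → {-7 4}, i.e. (1+2i)^2 - 4 = -7+4i
```

A [vexpr] for vectors, or a quaternion variant, would only swap in different operator bodies behind the same command names, which is what makes the specialised-command approach attractive despite the combinatorial worry.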
From: Andreas K. <and...@Ac...> - 2005-08-30 19:51:49
|
> Andreas Kupries > > > > Tcl_ParseExpr () is already a public API as can be seen from > > the Tcl_ prefix. > > Well, ok. So what? I don't think I can use it from Tcl. (Or can I?) tclparser package, in the tclpro project at SF. > We did, after all, start talking about new tcllib modules, which > might include math extensions beyond complex and linear algebra modules. > > However the only extensibility hooks which are exposed are > > math functions. Operators are whole different kettle of fish. > > I haven't looked at the parser code, but nevertheless believe > > that the operator priorities are hardwired. > > I wasn't thinking about adding new operators or priorities. Just redefining > existing operators so that they know how to handle a new "data type." Ok. That is an extension mode I had not thought about. With its own set of difficulties. Don Porter did, see his mail to me I forwarded to Tcllib-devel. > The BLT vector expression, for example, does different things for "double + > double" and "double + vector." Tcl_CreateMathFunc() lets me replace the > atan() function with one of my own cooking, and it would be nice > if I could replace the operators as well. True. > Of course, more complexity is always more fun. :-) If I wanted to teach > [expr] to handle complex numbers, then I'd have to be able to specify > operands and parameters that went beyond int/wide/double. Same > for vectors and other stuff. > > > Or are we better off creating an expression framework in pure-Tcl? > > > > Hm. I can currently not see this being fast, even with > > caching of expressions and code. > > Fast? Probably not. But maybe good enough to figure out what sort of > generalized extensible APIs look like. Ok, that might be possible. i.e. experimentation. -- Andreas Kupries <and...@Ac...> Developer @ http://www.ActiveState.com, a division of Sophos Tel: +1 604 484 6491 |
From: Andreas K. <and...@Ac...> - 2005-08-30 19:48:39
|
I wrote > > > This is more about expr extensibility in the areas of operand parsing > > > (numbers etc with dimensions), new operators (the parsing > > engine in general > > > and the handling of operator priorities). IMHO orthogonal to bignums. > > > > Not orthogonal at all. Most of the bignum support work involves > > going into the guts of the bytecode execution engine and making > > sure that operations like INST_ADD know how to add bignums as well > > as existing Tcl numeric types. > > > > If we wanted Tcl to be extensible to the degree of allowing extensions > > to define new numeric types that "just work" with [expr] operators, > > the hooks would need to go in at that level. > > I see. Ok. Feel free to correct me on tcllib-devel as well. > > > Because the number and depth of such hooks is so large, I suspect we'll > > just keep adding more numeric types into the core. > > :( -- Andreas Kupries <and...@Ac...> Developer @ http://www.ActiveState.com, a division of Sophos Tel: +1 604 484 6491 |
From: Andreas K. <and...@Ac...> - 2005-08-30 19:47:37
|
> > This is more about expr extensibility in the areas of operand parsing > > (numbers etc with dimensions), new operators (the parsing > engine in general > > and the handling of operator priorities). IMHO orthogonal to bignums. > > Not orthogonal at all. Most of the bignum support work involves > going into the guts of the bytecode execution engine and making > sure that operations like INST_ADD know how to add bignums as well > as existing Tcl numeric types. > > If we wanted Tcl to be extensible to the degree of allowing extensions > to define new numeric types that "just work" with [expr] operators, > the hooks would need to go in at that level. > > Because the number and depth of such hooks is so large, I suspect we'll > just keep adding more numeric types into the core. > > | Don Porter Mathematical and Computational Sciences Division | > | don...@ni... Information Technology Laboratory | > | http://math.nist.gov/~DPorter/ NIST | > |______________________________________________________________________| > -- Andreas Kupries <and...@Ac...> Developer @ http://www.ActiveState.com, a division of Sophos Tel: +1 604 484 6491 |
From: Techentin, R. W. <tec...@ma...> - 2005-08-30 19:42:39
|
Andreas Kupries > > Tcl_ParseExpr () is already a public API as can be seen from > the Tcl_ prefix. Well, ok. So what? I don't think I can use it from Tcl. (Or can I?) We did, after all, start talking about new tcllib modules, which might include math extensions beyond complex and linear algebra modules. > However the only extensibility hooks which are exposed are > math functions. Operators are whole different kettle of fish. > I haven't looked at the parser code, but nevertheless believe > that the operator priorities are hardwired. I wasn't thinking about adding new operators or priorities. Just redefining existing operators so that they know how to handle a new "data type." The BLT vector expression, for example, does different things for "double + double" and "double + vector." Tcl_CreateMathFunc() lets me replace the atan() function with one of my own cooking, and it would be nice if I could replace the operators as well. Of course, more complexity is always more fun. :-) If I wanted to teach [expr] to handle complex numbers, then I'd have to be able to specify operands and parameters that went beyond int/wide/double. Same for vectors and other stuff. > > Or are we better off creating an expression framework in pure-Tcl? > > Hm. I can currently not see this being fast, even with > caching of expressions and code. Fast? Probably not. But maybe good enough to figure out what sort of generalized extensible APIs look like. See ya, Bob -- Bob Techentin tec...@ma... Mayo Foundation (507) 538-5495 200 First St. SW FAX (507) 284-9171 Rochester MN, 55901 USA http://www.mayo.edu/sppdg/ |
From: Andreas K. <and...@Ac...> - 2005-08-30 19:11:02
|
> > Or are we better off creating an expression framework in pure-Tcl? > > Before you all get too grandiose with your [expr] plans, be sure to > follow all the developments in Tcl 8.5. A summary is in this > message to TCLCORE: > > http://sourceforge.net/mailarchive/message.php?msg_id=10644074 > > (Note the typo in the message - status in 2005, not 2004) > > Much of what you may want, may be done, in progress, or at least much > easier to accomplish with the improved feature set. Hm. Remember this is talk about lots of things outside of just big numbers, i.e vectors, matrices, complex numbers, quaternions, and whatever else can be seen as operand to mathematical operators. This is more about expr extensibility in the areas of operand parsing (numbers etc with dimensions), new operators (the parsing engine in general and the handling of operator priorities). IMHO orthogonal to bignums. Possibly even extensibility in the are of types, type conversion, explicit, and automatic. Definitely something which can't be done in a day. Still worthy to think about. -- Andreas Kupries <and...@Ac...> Developer @ http://www.ActiveState.com, a division of Sophos Tel: +1 604 484 6491 |