From: Richard F. <fa...@gm...> - 2022-07-28 23:06:41
The idea of symbols evaluating to themselves when they have no assigned values might have shown up most immediately before Macsyma in William A. Martin's Symbolic Mathematical Laboratory (MIT PhD thesis, 1967), one of the precursors to Macsyma. Another precursor, Engelman's MATHLAB, used the apostrophe to indicate evaluation! That is, the opposite of quoting: x+'y would take x literally and add it to the value assigned to y. The advantage of this convention: you could write something like integrate(x^2,x) and it looks normal.

Other people worked on related systems. In particular, Robert Fenichel at Harvard (who afterward joined the MIT faculty) wrote a framework for rule-driven transformations called FAMOUS. Dunno about automatic quoting there, but the time frame is also the early 1960s. There are also activities that tend not to be recounted in these emails because they did not happen at MIT (or Harvard) but at places like IBM Research (the Scratchpad system), Bell Labs (Altran), Stanford (Reduce), etc.

To pursue the idea of automatically quoting undefined variables: it doesn't seem too important. After all, if you build a CAS and get the error message "reference to undefined variable X" over and over, you probably consider fixing it by just figuring "uh, let's just call that the symbol X" and continuing to compute.
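That design decision is visible in Maxima to this day. A minimal session sketch (results shown in comments rather than as literal output, since the printed forms vary by version):

    a;       /* unbound, so the symbol evaluates to itself: a        */
    a : 5$
    a;       /* bound now, so it evaluates to 5                      */
    'a;      /* the apostrophe quotes, i.e. suppresses evaluation: a */
    kill(a)$ /* unbind a again                                       */

Note that Maxima's apostrophe is the reverse of the MATHLAB convention above: it suppresses evaluation rather than forcing it.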
While the occasional logician playing by rigid rules might object to that, the vast majority of mathematicians turn out to be quite sloppy with notation, trusting that anyone reading the material will know enough to disambiguate the statements.

RJF

On Thu, Jul 28, 2022 at 10:45 AM Stavros Macrakis <mac...@al...> wrote:

> Eduardo,
>
> You are approaching Lisp as though it were a form of the lambda calculus.
> But it is very different. For one thing, it has a well-defined execution
> order. For another, until Sussman and Steele rethought Lisp as Scheme,
> *lambda* was not well understood, and all practical Lisp implementations
> after the very earliest ones (which used a-lists) used shallow binding to
> implement dynamic scope, which makes first-class closures ("upward
> funarg") a special case that is difficult to implement and rarely used.
> If I remember correctly, McCarthy explicitly said that he didn't
> understand *lambda* and simply borrowed it as a notation for anonymous
> functions. (I thought he said that in his "History of Lisp
> <http://jmc.stanford.edu/articles/lisp.html>" paper, but I don't see it
> there.)
>
> I don't think that people thought about Maxima simplification as a kind
> of "reduction", although of course in writing simplifying routines you
> make them confluent and terminating.
>
> As for symbols evaluating to themselves if not bound to a value, I would
> guess that that was a hack that someone thought up early on. Was it
> Engelman? Martin? Moses? Fateman? I don't know. But I doubt that it
> resulted from any deep thought. Rather, I suspect that someone got tired
> of writing quotation marks all over the place and had an aha! moment
> that, if Maxima had its own language and interpreter rather than using
> the Lisp interpreter, it would be handy to have the symbol *a* evaluate
> to itself if not assigned rather than cause an error.
>
> Fateman, you were closer to these events. Any insights?
>
>      -s
>
> On Wed, Jul 27, 2022 at 3:51 AM Eduardo Ochs <edu...@gm...> wrote:
>
>> Hi list, especially Richard,
>>
>> I've read a part of Richard's PhD thesis,
>>
>>   https://apps.dtic.mil/sti/pdfs/AD0740132.pdf
>>
>> and it gave me the impression that this would be a good place to ask
>> this... so here it goes. But first a little story.
>>
>> In the middle of my master's degree, ages ago, I switched from trying
>> to do research in Maths (which I was finding boring) to trying to do
>> research in Logic (which looked much more fun). The people in the
>> Logic group at my university were working mostly on Proof Theory, and
>> every time they started to work on a new logical system they would
>> try to prove strong normalization for it. They would usually succeed,
>> because they had a lot of knowledge about which systems "looked like
>> something that they would be able to prove strong normalization
>> for"... systems that were not strongly normalizable were "bad" to
>> them, and were put in the box of the toys that they didn't want to
>> play with. I somehow managed to 1) not learn how to do strong
>> normalization proofs, and 2) focus more on Lambda Calculus than on
>> Proof Theory... so I know a lot of the terminology about reductions
>> and normalization, but only a few of the techniques.
>>
>> End of the little story.
>>
>> So: in Maxima we can change how our reductions work by changing lots
>> of flags, and in some contexts we can set, say, w to 42, and this
>> tells the system that from that point onwards every w should be
>> reduced to 42. And if we turn off most of the actions of the
>> simplifier we can make both of these expressions
>>
>>   expr1 : (x + y)*(x - y)
>>   expr2 : (x + y)*(x - y) - (x^2 - y^2)
>>
>> reduce to themselves (see the sketch below).
>>
>> I _guess_ that:
>>
>> 1. when the first Lisps were being created, people saw very quickly
>>    that evaluation should be different from β-reduction... in
>>    β-reduction and its variants a free variable like x reduces to
>>    itself, but it's better to define evaluation in a way in which
>>    evaluating an undefined variable yields an error... and:
>>
>> 2. when the first Computer Algebra Systems were being created, people
>>    saw very quickly that variables should be treated in other, more
>>    complex ways. Both expr1 and expr2 above are valid expressions
>>    that can be manipulated in many ways, but they can't be evaluated
>>    numerically or plotted, because they both depend on both x and
>>    y... and I guess that even the first CASs had simple functions,
>>    that didn't work in all cases, that could compare expr1 and expr2
>>    to 0, and answer that "expr1 is not 0 because expr1 depends on x
>>    and y and 0 doesn't", and the same for expr2 (ta-da).
>>
>> My question is: can you recommend good early references that discuss
>> this, i.e., that discuss how the notion of free/undefined variables
>> in CASs was invented?
>>
>> Thanks in advance!
>> Cheers =),
>>   Eduardo Ochs
>>   http://angg.twu.net/eev-maxima.html
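For reference, the behavior Eduardo describes can be reproduced in current Maxima. A minimal sketch of the flags he mentions (results shown in comments, not as literal output):

    w : 42$
    w + 1;          /* 43: from here on, every w "reduces" to 42      */
    kill(w)$        /* unbind w again                                 */

    simp : false$   /* switch off the general simplifier              */
    expr1 : (x + y)*(x - y);
    expr2 : (x + y)*(x - y) - (x^2 - y^2);
    /* Both expressions are now left alone, i.e. they "reduce to
       themselves"; nothing is expanded or collected until a rewrite
       is requested explicitly:                                       */

    simp : true$
    ratsimp(expr2); /* 0, once the simplifier is allowed to run       */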