From: Eduardo O. <edu...@gm...> - 2022-07-27 10:51:07
Hi list, especially Richard,

I've read part of Richard's PhD thesis, https://apps.dtic.mil/sti/pdfs/AD0740132.pdf, and it gave me the impression that this would be a good place to ask this... so here it goes. But first a little story.

In the middle of my master's degree, ages ago, I switched from trying to do research on Maths (which I was finding boring) to trying to do research on Logic (which looked much more fun). The people in the Logic group at my university worked mostly on Proof Theory, and every time they started working on a new logical system they would try to prove strong normalization for it - and they would usually succeed, because they had a lot of knowledge about which systems "looked like something that they would be able to prove strong normalization for"... Systems that were not strongly normalizable were "bad" to them, and were put in the box of toys that they didn't want to play with. I somehow managed to 1) not learn how to do strong normalization proofs, and 2) focus more on Lambda Calculus than on Proof Theory... so I know a lot of the terminology about reductions and normalization, but only a few of the techniques. End of the little story.

So: in Maxima we can change how our reductions work by changing lots of flags, and in some contexts we can set, say, w to 42, and this tells the system that from that point onwards every w should be reduced to 42. And if we turn off most of the actions of the simplifier we can make both of these expressions

  expr1 : (x + y)*(x - y);
  expr2 : (x + y)*(x - y) - (x^2 - y^2);

reduce to themselves.

I _guess_ that:

1. when the first Lisps were being created, people saw very quickly that evaluation should be different from β-reduction... in β-reduction and its variants a free variable like x reduces to itself, but it's better to define evaluation in a way in which evaluating an undefined variable yields an error; and

2.
when the first Computer Algebra systems were being created, people saw very quickly that variables should be treated in other, more complex ways - both expr1 and expr2 above are valid expressions that can be manipulated in many ways, but that can't be evaluated numerically or plotted, because they both depend on both x and y... and I guess that even the first CASs had simple functions, that didn't work in all cases, that could compare expr1 and expr2 to 0 and answer that "expr1 is not 0 because expr1 depends on x and y and 0 doesn't" - and the same for expr2 (ta-da).

My question is: can you recommend good early references that discuss this, i.e., that discuss how the notion of free/undefined variables in CASs was invented?

Thanks in advance!
  Cheers =),
    Eduardo Ochs
    http://angg.twu.net/eev-maxima.html
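P.S.: just to make that naive zero-test concrete, here is a tiny sketch in Python - hypothetical code, not anything from Maxima or from a real early CAS, and the names free_vars and naive_is_zero are made up - where expressions are nested tuples, variables are strings, and the test declares anything with a free variable to be nonzero without doing any simplification:

```python
# Expressions are encoded as nested tuples, e.g.
# ("*", ("+", "x", "y"), ("-", "x", "y")) for (x + y)*(x - y).
# Variables are strings, constants are Python numbers.

def free_vars(expr):
    """Collect the set of free variables of a tuple-encoded expression."""
    if isinstance(expr, str):            # a variable like "x"
        return {expr}
    if isinstance(expr, (int, float)):   # a numeric constant
        return set()
    _op, *args = expr                    # ("op", arg1, arg2, ...)
    return set().union(*(free_vars(a) for a in args))

def naive_is_zero(expr):
    # "expr is not 0 because expr depends on x and y and 0 doesn't":
    # anything with a free variable is declared nonzero outright.
    if free_vars(expr):
        return False
    return expr == 0                     # only handles literal constants

expr1 = ("*", ("+", "x", "y"), ("-", "x", "y"))
expr2 = ("-", expr1, ("-", ("^", "x", 2), ("^", "y", 2)))

print(free_vars(expr1))       # the set {'x', 'y'} (in some order)
print(naive_is_zero(expr1))   # False
print(naive_is_zero(expr2))   # False, even though expr2 is identically 0
```

Note that naive_is_zero(expr2) is False even though expr2 is identically 0 - the test never simplifies anything, it only looks at the free variables, which is exactly the "didn't work in all cases" part.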