From: Francois Maltey <fmaltey@ne...> - 2008-10-31 15:44:07

Alejandro Jakubi wrote:
> I wonder how it is done in Axiom the selection of roots of a
> polynomial with some property. As in this example, select the positive
> roots out of the list of three roots generated by:
>
> radicalSolve(p^3-p+1/10=0,p)

With fricas I get a mysterious result and a fuzzy bug. First I get 3
radical roots, second I get [p=3/20] and third I get [p=(10pl3+1)/10].

With openaxiom I test

  LR := radicalSolve (p^3-p+1/10=0,p)

then a map for a numerical value shows three real roots with

  map (eq +> (rhs eq)::Complex Float, LR)

I believe that the ... +/- 1.0e-20*%i is a rounding numerical error.
So LR.2 < LR.1 < LR.3 for my openaxiom.

I don't understand why map (eq +> numerical rhs eq) fails. But

  reduce ('+, map (eq +> (rhs eq)::Complex Float, LR))

is about 0 and

  reduce ('*, map (eq +> (rhs eq)::Complex Float, LR))

is around -0.1... perfect!

You can play with real positive radicals with the domain
RealClosure Fraction Integer, but it seems impossible here because
there are complex roots during the internal computation. The internal
axiom algebraic numbers don't know real properties and sign. That's
why I use the coercion to Complex Float above. Try

  RCFI := RealClosure Fraction Integer
  sqrt (3::RCFI)
  sqrt(3::RCFI)   -- and get an error
  sqrt 3
  sqrt (3)

Francois
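As a cross-check outside Axiom, the root selection Alejandro asks about can be sketched numerically. This is a Python illustration, not Axiom code; the bisection brackets and tolerance are assumptions of this sketch:

```python
# Hypothetical Python cross-check (not Axiom): locate the three real
# roots of p^3 - p + 1/10 by bisection and keep the positive ones.
def f(p):
    return p**3 - p + 0.1

def bisect(lo, hi, tol=1e-12):
    """Find a root of f in [lo, hi], assuming f changes sign there."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if (f(lo) < 0) == (f(mid) < 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# sign changes bracket the three real roots:
# f(-2) < 0 < f(0),  f(0.05) > 0 > f(0.5),  f(0.5) < 0 < f(2)
roots = [bisect(-2.0, 0.0), bisect(0.05, 0.5), bisect(0.5, 2.0)]
positive = [r for r in roots if r > 0]   # the selection asked about
```

Consistent with the thread: the sum of the three roots is about 0 and their product is about -0.1 (Vieta, since the cubic is p^3 + 0*p^2 - p + 1/10), and two of the roots are positive.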
From: Gabriel Dos Reis <gdr@cs...> - 2008-10-30 20:28:19

From: Gabriel Dos Reis <gdr@cs...> - 2008-10-29 21:42:38

"Bill Page" <bill.page@...> writes:

[...]

> > > These rational values always have an exact representation in the
> > > domain Fraction Integer and form a specific subset of the real
> > > numbers. Among other things, this makes equality in Float fully
> > > transitive, whereas in some other floating point systems this
> > > might not be the case.
> >
> > I do not understand what you mean by that. In the IEEE's system,
> > -0 compares equal to +0. The only way you get the sign is when you
> > ask for it.
>
> You are right and I was wrong.
>
> As one can see from the above code, Axiom works with an arbitrarily
> set fixed precision represented by 'bits()' and equality is defined
> in terms of 'zero?(x - y)'. The result of - depends on the value of
> 'bits()' and is therefore ultimately non-transitive --
>
>   a = b and b = c does not imply a = c
>
> for some values of a, b and c.

equality is not transitive if you take into account NaNs, but that is
not what I had in mind...

> > > Similarly, %plusInfinity and %minusInfinity are not
> > > values in Float.
> > >
> > > IEEE floats however are something different -- wouldn't you agree?
> >
> > It is as different as short integer is different from long long
> > integer :) Note that floating point systems are parameterized, and
> > not all of them are algebraically closed. The IEEE system, as an
> > algebra, is closed.
>
> But as an algebra what axioms does it obey?

The distinctive ones are the laws governing the projections (rounding
modes) -- see Goldberg's tutorial.

> > > What I was supposing is that maybe the domain DoubleFloat (which
> > > currently looks more or less just like a fixed precision version
> > > of Float) should be given semantics that more closely resemble
> > > IEEE 754.
> >
> > In OpenAxiom, DoubleFloat is supposed to reflect a (64-bit) double
> > precision IEEE-754 floating point datatype. Only Lisp
> > mis-abstractions get in the way; but thankfully SBCL and ECL
> > provide some (limited) ways to get around these over-abstractions.
>
> So, would it be a good idea to make Axiom domain Float also conform
> to IEEE 754?

IEEE 754 defines a fixed precision floating point system. I'm not sure
I want that for Float (which is trying to implement an arbitrary
precision floating point system).

-- Gaby
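The non-transitivity conceded above is easy to reproduce outside Axiom with any equality test of the form "the difference rounds to zero at the working precision". A Python sketch; the fixed threshold 2^-bits is an assumption standing in for Axiom Float's zero?(x - y) at bits() precision:

```python
# Sketch: equality as "difference is zero at the working precision".
# The threshold 2**-bits is an assumption of this illustration, not
# Axiom's actual rounding rule.
def feq(x, y, bits=10):
    return abs(x - y) < 2.0 ** (-bits)

a = 0.0
b = 2.0 ** (-11)
c = 2.0 ** (-10)
# a = b and b = c hold at this precision, yet a = c fails:
# equality defined this way is not transitive.
```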
From: Bill Page <bill.page@ne...> - 2008-10-29 20:39:39

On Wed, Oct 29, 2008 at 2:32 PM, Gabriel Dos Reis wrote:
>
> Bill Page writes:
>
> > Well in the context of this thread a specific example might be -0.
> > As I understand it, the reason that this value is not included in
> > the domain Float is because the representation used there has no
> > room for it. Instead, the values in Float are *exactly* the numbers
> >
> >   (1)  mantissa*base^exponent
> >
> > where base=2 and mantissa and exponent are values from domain
> > Integer.

Actually what I wrote above is entirely incorrect. If (1) were true
then the result would be exactly Fraction Integer, not Float. It is
true that the mantissa and exponent are values from domain Integer,
but there is a lot more than (1) going on! Here is the relevant code
extracted from float.spad.pamphlet:

  x = y == order x = order y and sign x = sign y and zero?(x - y)

  sign x == if x.mantissa < 0 then -1 else 1

  order(a) == LENGTH a.mantissa + a.exponent - 1

  x - y == normalize plus(x, negate y)

  negate x == [-x.mantissa, x.exponent]

  plus(x,y) ==
    mx := x.mantissa; my := y.mantissa
    mx = 0 => y
    my = 0 => x
    ex := x.exponent; ey := y.exponent
    ex = ey => [mx+my, ex]
    de := ex + LENGTH mx - ey - LENGTH my
    de > bits()+1 => x
    de < -(bits()+1) => y
    if ex < ey then (mx,my,ex,ey) := (my,mx,ey,ex)
    mw := my + shift2(mx, ex-ey)
    [mw, ey]

  normalize x ==
    m := x.mantissa
    m = 0 => 0
    e : I := LENGTH m - bits()
    if e > 0 then
      y := shift2(m, 1-e)
      if odd? y then
        y := (if y > 0 then y+1 else y-1) quo 2
        if LENGTH y > bits() then
          y := y quo 2
          e := e+1
      else y := y quo 2
      x := [y, x.exponent+e]
    x

  shift2(x,y) == sign(x)*shift(sign(x)*x, y)

  shift(x:%, n:I) == [x.mantissa, x.exponent+n]

  zero? a == zero?(a.mantissa)

> One of the differences between that representation and the one in
> IEEE is that in IEEE's system, you always get a sign component
> in {-1,+1}. In the Float's representation, you also get a sign
> component, but in {-1,0,+1}, and when the sign projection goes to -1
> or +1, the mantissa is required to be nonzero (whereas it can be zero
> in the IEEE system). It is not accurate that there is "no room" to
> represent -0. There is plenty of room -- it is just required to be
> ignored, therefore the implementation takes advantage of that. Notice
> that I'm trying to make a distinction between a specification and an
> implementation -- I don't take implementations as `God given'.
>

Thanks Gaby. You are right.

  sign x == if x.mantissa < 0 then -1 else 1

has no dependence on the value of the exponent, but sign could,
without affecting the concept of zero:

  zero? a == zero?(a.mantissa)

> > These rational values always have an exact representation in the
> > domain Fraction Integer and form a specific subset of the real
> > numbers. Among other things, this makes equality in Float fully
> > transitive, whereas in some other floating point systems this might
> > not be the case.
>
> I do not understand what you mean by that. In the IEEE's system,
> -0 compares equal to +0. The only way you get the sign is when you
> ask for it.
>

You are right and I was wrong.

As one can see from the above code, Axiom works with an arbitrarily
set fixed precision represented by 'bits()' and equality is defined in
terms of 'zero?(x - y)'. The result of - depends on the value of
'bits()' and is therefore ultimately non-transitive --

  a = b and b = c does not imply a = c

for some values of a, b and c.

> > Similarly, %plusInfinity and %minusInfinity are not
> > values in Float.
> >
> > IEEE floats however are something different -- wouldn't you agree?
>
> It is as different as short integer is different from long long
> integer :) Note that floating point systems are parameterized, and
> not all of them are algebraically closed. The IEEE system, as an
> algebra, is closed.
>

But as an algebra what axioms does it obey?

> > What I was supposing is that maybe the domain DoubleFloat (which
> > currently looks more or less just like a fixed precision version of
> > Float) should be given semantics that more closely resemble
> > IEEE 754.
>
> In OpenAxiom, DoubleFloat is supposed to reflect a (64-bit) double
> precision IEEE-754 floating point datatype. Only Lisp
> mis-abstractions get in the way; but thankfully SBCL and ECL provide
> some (limited) ways to get around these over-abstractions.
>

So, would it be a good idea to make Axiom domain Float also conform
to IEEE 754?

> Which reminds me that I need to get Bemol up and running, so that I
> can dump the Lisp stuff.
>

In case someone else forgot, on April 22, 2008 Gaby wrote:

> OpenAxiom has much less Lisp code for the interpreter and compiler
> than the others from the Axiom family. The goal is to remove any
> reference to Lisp. To succeed, there must be a Boot translator, and
> one that can translate to something other than Lisp -- either C++ or
> Java (anything with better support than Lisp). Fortunately, Boot is
> not complicated and the translator (called Bemol) written in C++ is
> progressing quite well. I hope that by OpenAxiom-2.0, the Bemol
> translator would be a viable alternative.

Regards,
Bill Page.
From: Gabriel Dos Reis <gdr@cs...> - 2008-10-29 18:32:09

"Bill Page" <bill.page@...> writes:

[...]

> > > However as I understand it, values in the domain FLOAT are
> > > to be taken as exact rationals that approximate real numbers in
> > > a well defined manner. Is this an accurate view? Are there
> > > specific changes that should be made to these floating point
> > > domains that would make their associated algebra more obvious?
> >
> > As far as I know all floating point systems define a subset of
> > rational numbers as approximation to the reals; they come with
> > projections (rounding mode) for delivering result of computations.
> > Also, see Language Independent Arithmetic, part 1.
>
> Well in the context of this thread a specific example might be -0. As
> I understand it, the reason that this value is not included in the
> domain Float is because the representation used there has no room for
> it. Instead, the values in Float are *exactly* the numbers
>
>   mantissa*base^exponent
>
> where base=2 and mantissa and exponent are values from domain
> Integer.

One of the differences between that representation and the one in IEEE
is that in IEEE's system, you always get a sign component in {-1,+1}.
In the Float's representation, you also get a sign component, but in
{-1,0,+1}, and when the sign projection goes to -1 or +1, the mantissa
is required to be nonzero (whereas it can be zero in the IEEE system).
It is not accurate that there is "no room" to represent -0. There is
plenty of room -- it is just required to be ignored, therefore the
implementation takes advantage of that. Notice that I'm trying to make
a distinction between a specification and an implementation -- I don't
take implementations as `God given'.

> These rational values always have an exact representation in the
> domain Fraction Integer and form a specific subset of the real
> numbers. Among other things, this makes equality in Float fully
> transitive, whereas in some other floating point systems this might
> not be the case.

I do not understand what you mean by that. In the IEEE's system, -0
compares equal to +0. The only way you get the sign is when you ask
for it.

> Similarly, %plusInfinity and %minusInfinity are not
> values in Float.
>
> IEEE floats however are something different -- wouldn't you agree?

It is as different as short integer is different from long long
integer :) Note that floating point systems are parameterized, and not
all of them are algebraically closed. The IEEE system, as an algebra,
is closed.

> What I was supposing is that maybe the domain DoubleFloat (which
> currently looks more or less just like a fixed precision version of
> Float) should be given semantics that more closely resemble
> IEEE 754-2008.

In OpenAxiom, DoubleFloat is supposed to reflect a (64-bit) double
precision IEEE-754 floating point datatype. Only Lisp mis-abstractions
get in the way; but thankfully SBCL and ECL provide some (limited)
ways to get around these over-abstractions.

Which reminds me that I need to get Bemol up and running, so that I
can dump the Lisp stuff.

-- Gaby
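The point that -0 compares equal to +0, and that the sign only shows when you ask for it, can be seen directly with IEEE doubles, e.g. from Python (standard IEEE behaviour, not anything Axiom-specific):

```python
# IEEE 754 signed zeros as visible from Python floats: comparison
# ignores the sign of zero, but the sign bit is observable, and it
# matters for branch cuts -- the subject of Kahan's paper cited below.
import math

zero_eq = (0.0 == -0.0)                   # True: -0 compares equal to +0
sign_of_neg = math.copysign(1.0, -0.0)    # -1.0: asking for the sign
sign_of_pos = math.copysign(1.0, 0.0)     # +1.0
# atan2 distinguishes the two zeros on the branch cut:
on_cut = math.atan2(0.0, -0.0)            # pi, not 0
off_cut = math.atan2(0.0, 0.0)            # 0
```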
From: Bill Page <bill.page@ne...> - 2008-10-29 17:48:24

On Wed, Oct 29, 2008 at 12:36 PM, Gabriel Dos Reis wrote:
> ...
> There is a huge ongoing debate in the Interval Computation community
> about the links between intervals and floating points.
>
> They are different systems trying to deal with computability with
> real numbers. The IEEE floating point systems are well defined
> algebraic systems. I'm not aware they are less correct or less
> mathematical than unspoken alternatives.
>

I'll take your word on that. :)

> For use of signed zeros, it might be enlightening to read
> ...
> For a more in-depth tutorial see Goldberg's classic paper
> ...

I really appreciate these references. Thanks.

> > However as I understand it, values in the domain FLOAT are
> > to be taken as exact rationals that approximate real numbers in
> > a well defined manner. Is this an accurate view? Are there specific
> > changes that should be made to these floating point domains that
> > would make their associated algebra more obvious?
>
> As far as I know all floating point systems define a subset of
> rational numbers as approximation to the reals; they come with
> projections (rounding mode) for delivering result of computations.
> Also, see Language Independent Arithmetic, part 1.
>

Well in the context of this thread a specific example might be -0. As
I understand it, the reason that this value is not included in the
domain Float is because the representation used there has no room for
it. Instead, the values in Float are *exactly* the numbers

  mantissa*base^exponent

where base=2 and mantissa and exponent are values from domain Integer.
These rational values always have an exact representation in the
domain Fraction Integer and form a specific subset of the real
numbers. Among other things, this makes equality in Float fully
transitive, whereas in some other floating point systems this might
not be the case. Similarly, %plusInfinity and %minusInfinity are not
values in Float.

IEEE floats however are something different -- wouldn't you agree?

What I was supposing is that maybe the domain DoubleFloat (which
currently looks more or less just like a fixed precision version of
Float) should be given semantics that more closely resemble
IEEE 754-2008.

Regards,
Bill Page.
From: Gabriel Dos Reis <gdr@cs...> - 2008-10-29 16:36:11

"Bill Page" <bill.page@...> writes:

> > Waldek Hebisch writes:
> >
> > > Martin, first I _really_ prefer to get exceptions in normal code.
> > > Ignoring exceptions is a great way to not see bugs. Also, things
> > > like infinities and (particularly nasty) "not a number" break
> > > normal mathematical reasoning.
> >
> > Martin Rubey writes:
> >
> > > Don't worry, I see your point. One thing though:
> > >
> > > > (basically IEEE defined a new formal system quite unlike
> > > > mathematical real numbers).
> > >
> > > Is it really different? I thought that computation with +infinity
> > > and -infinity was OK -- except for imagpart...
> > >
> > > I don't care about nan.
>
> On Wed, Oct 29, 2008 at 9:43 AM, Gabriel Dos Reis wrote:
> >
> > If you have +infinity and -infinity, then you get +0, -0 and NaN to
> > have an algebraically closed system. Most rants about NaNs and
> > signed zeros tend to reflect misunderstanding of the floating point
> > systems.
>
> I think the correct *algebraic* representation of the floating point
> variants (Float, DoubleFloat, and MachineFloat) is an important
> subject for panAxiom. Perhaps DFLOAT follows (more or less) the IEEE
> 754-2008 standard but I think it is clear that the domain Float in
> panAxiom implements something quite different. I am not well informed
> about the standards issues, but mathematically I think IEEE values
> like +infinity, -infinity, +0 and -0 make sense as limits when
> floating point values are taken as (possibly open) intervals of the
> real line.

There is a huge ongoing debate in the Interval Computation community
about the links between intervals and floating points.

They are different systems trying to deal with computability with real
numbers. The IEEE floating point systems are well defined algebraic
systems. I'm not aware they are less correct or less mathematical than
unspoken alternatives.

For use of signed zeros, it might be enlightening to read

  "Branch Cuts for Complex Elementary Functions, or Much Ado About
  Nothing's Sign Bit" in The State of the Art in Numerical Analysis,
  (eds. Iserles and Powell), Clarendon Press, Oxford, 1987.

by Prof. Kahan. Or you can access this

  http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf

freely online. For a more in-depth tutorial see Goldberg's classic
paper

  What Every Computer Scientist Should Know About Floating-Point
  Arithmetic

And despite the title, it is not just for "computer scientists" :)

  http://docs.sun.com/source/806-3568/ncg_goldberg.html

> However as I understand it, values in the domain FLOAT are
> to be taken as exact rationals that approximate real numbers in a
> well defined manner. Is this an accurate view? Are there specific
> changes that should be made to these floating point domains that
> would make their associated algebra more obvious?

As far as I know all floating point systems define a subset of
rational numbers as approximation to the reals; they come with
projections (rounding mode) for delivering result of computations.
Also, see Language Independent Arithmetic, part 1.

-- Gaby
From: Martin Rubey <martin.rubey@un...> - 2008-10-26 22:18:58

Martin Rubey <martin.rubey@...> writes:

> Oh, I found it:
>
>   )lisp (sb-ext::set-floating-point-modes :traps nil)
>
> this should go into
>
>   (defun set-initial-parameters()
>     (setf *read-default-float-format* 'double-float))
>
> shouldn't it?

no, I just saw that set-initial-parameters is ANSI Common Lisp,
whereas (sb-ext::set-floating-point-modes :traps nil) is sbcl
specific, of course. Waldek, where does it belong? (and how about ecl
and clisp...)

Martin
From: Martin Rubey <martin.rubey@un...> - 2008-10-26 22:04:02

Martin Rubey <martin.rubey@...> writes:

> I just checked ecl:
>
>   g(x:DFLOAT):DFLOAT == 10^155
>   draw(g, -1..1)
>
> fails, while 10^154 seems to work.

The failing routine is norm in CLIP, being called by iClipParametric.
I find the following behaviour:

gcl:

  (2) -> f := max()$DFLOAT

  (2)  1.7976931348623157E308
                                 Type: DoubleFloat
  (3) -> f^2

  (3)  #<inf>
                                 Type: DoubleFloat

sbcl:

  (8) -> f := max()$DFLOAT

  (8)  1.7976931348623157e308
                                 Type: DoubleFloat
  (9) -> f^2

  >> System error:
  arithmetic error FLOATING-POINT-OVERFLOW signalled

I'm not sure what to do about this. But I vaguely remember that Gaby
implemented something around "not a number", didn't you?

We could of course check in DoubleFloat itself whether we are going to
overflow, but that defeats the purpose of being efficient, doesn't it?

Oh, I found it:

  )lisp (sb-ext::set-floating-point-modes :traps nil)

this should go into

  (defun set-initial-parameters()
    (setf *read-default-float-format* 'double-float))

shouldn't it?

Martin
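The gcl/sbcl split Martin shows (a quiet #<inf> versus a FLOATING-POINT-OVERFLOW trap) mirrors how other runtimes choose a default policy. For comparison, a CPython sketch (illustration only, not Axiom code):

```python
# Overflow past the largest IEEE double: some operations trap, some
# quietly produce an infinity -- the same policy split as gcl vs sbcl.
import math
import sys

f = sys.float_info.max        # about 1.7976931348623157e308

try:
    f ** 2                    # CPython's float ** traps on overflow,
    trapped = False           # much like sbcl's default :traps setting
except OverflowError:
    trapped = True

product = f * 2.0             # ...but * quietly overflows to inf,
                              # much like gcl's #<inf>
```

Squaring 10^155 overflows (10^310 exceeds the double range) while squaring 10^154 does not (10^308 just fits), which is consistent with Martin's observation that 10^155 fails where 10^154 works.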
From: Gabriel Dos Reis <gdr@cs...> - 2008-10-24 12:37:42

Alfredo, Stefan --

OpenAxiom 1.3.0 (trunk) has basic support for TCP/IP stream client
sockets. The interface is still a bit rough but it is a start we can
use as basis to expand. For example, this

  buf := byteBuffer 1024   -- create a byte buffer of capacity 1024

  -- connect to NIST time server. Connection may fail, or succeed
  -- with a InetClientStreamSocket.
  s := connectTo(host "time-a.nist.gov", port 13)$InetClientStreamSocket
  s case "failed" => print "connection failed"

  -- read bytes from the server and convert it to string for output.
  n := readBytes!(s,buf)
  print(buf::String::OutputForm)

  -- when done, close the socket
  close! s

Over the weekend (hoping to have time), I'll try to write a tutorial,
more examples and improve the TCP/IP support.

Feedback most welcome.

-- Gaby
From: Gabriel Dos Reis <gdr@cs...> - 2008-10-22 14:19:24

For OpenAxiom-1.3.0 (trunk), the broken domain BinaryFile is removed
in favor of the specific

  - InputBinaryFile, for binary files open for input operations.
  - OutputBinaryFile, for binary files open for output operations.
  - InputOutputBinaryFile, for binary files open for both input and
    output operations.

-- Gaby
From: Gabriel Dos Reis <gdr@in...> - 2008-10-18 17:22:34

[ Stephen, I'm moving the discussion to open-axiom-devel@... as I
believe it raises questions more about the pattern match design
choices than its mere uses. I hope you would be able to follow and
contribute there. ]

On Fri, Oct 17, 2008 at 6:04 PM, Stephen Montgomery-Smith
<stephen@...> wrote:
> Gabriel Dos Reis wrote:
>>
>> On Thu, Oct 16, 2008 at 4:19 PM, Stephen Montgomery-Smith
>> <stephen@...> wrote:
>>>
>>> So the reason I am asking this is so that I persuade axiom to write
>>> x^y as Pow(x,y) (i.e. make it more C friendly).
>>>
>>> So I did this:
>>>   Pow:=operator 'Pow
>>>   powerrule := rule x^y == Pow(x,y)
>>>
>>> Then I get good results with this:
>>>
>>>   powerrule(sqrt(x+y))::InputForm
>>>   gives Pow(y + x,1/2)
>>>
>>> But then I was rather surprised at this:
>>>
>>>   powerrule(81)
>>>   gives Pow(3,Pow(2,2))
>>>
>>> It seems that its attempts to deconstruct '81' are rather
>>> aggressive.
>>
>> Pattern matching in OpenAxiom is semantics based, as opposed to
>> being just syntactical. That explains the decomposition of 81. I do
>> not think we have a purely syntactic rewrite system at the moment.
>>
>>> Then I tried this:
>>>   Mult:=operator 'Mult
>>>   multrule := rule x^y == Mult(x,y)
>>>
>>> The results here are inconsistent:
>>>   multrule(6)
>>>   gives 6
>>> but
>>>   multrule(81)
>>> as I would (now) expect gives Mult(3,Mult(2,2))
>>
>> Hmm, I'm not clear on why the result is inconsistent. As far as I
>> can see 'Pow' was replaced by 'Mult'. Am I overlooking something?
>
> My mistake. I meant
>   Mult:=operator 'Mult
>   multrule := rule x*y == Mult(x,y)
>
> I would expect
>   multrule(6)
> to give the prime factorization, but it doesn't.

You are absolutely right, the behaviour is inconsistent. I would count
that as a bug. And I'm no longer sure it was a good choice to use
semantics in the pattern matching machinery.

I'm going to explain what is happening -- that does not necessarily
mean that I completely agree with everything happening.

For Pow, the pattern matcher tries to use a semantics function to
determine a number x that would be raised to some power y to give 81
-- the system calls perfectNthRoot$IntegerRoots for that task. That is
using the semantics of integers, not just the syntax of the input
(Mathematica, which does structural matching, will not succeed in the
match). However, for Mult, no decomposition is done on the input if
the input itself does not suggest the decomposition, in the sense that
one of x or y must be an integer constant. Hence the behaviour you're
seeing.

There are possible ways to go. One solution would be to disallow the
use of semantics, and stick to structure. That would be consistent
with Mathematica, but one would need to assess the impact on the
OpenAxiom library itself (I would hope minimal or none, but hey). If
we do that, then there is a question of what to do with division.
Here is an example along the lines of what you showed earlier

  Div := operator 'Div
  divrule := rule x/y == Div(x,y)

Then divrule(2/3) gives Div(2,3), but divrule(8/2) gives 4.

Again, as you can see here, OpenAxiom is using semantics to perform
pattern matching. It figures out that 2/3 has type Fraction Integer.
However, all rules insist on operating over Expression Integer, not
Expression T for any T. Consequently, 2/3 is converted to Expression
Integer yielding a pair (numerator,denominator). Hence the result.
For 8/2, the system figures out that it is actually 4 (using semantics
again) and consequently the conversion to Expression Integer yields
the integer constant, so the match failed.

The short summary is that, although the Pow case seems easy to fix
(just disallow it), the similar Div case is trickier because by the
time we get into the pattern matcher, the conversion already happened.
On the other hand we cannot just disallow the Fraction Integer ->
Expression Integer conversion unconditionally in the hope of
preventing an ill-advised conversion for the pattern matcher.

Another option would be to continue with the idea of using semantics
to achieve pattern match. I believe that is a slippery slope and
probably ill-advised.

-- Gaby
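The structural alternative Gaby describes can be sketched with a toy expression tree: a purely syntactic rule fires only when a power node is literally present, so 81 is left alone while x^y is rewritten. All names here are hypothetical illustration, not OpenAxiom API:

```python
# Toy expression trees: ("pow", base, exp), ("mult", a, b), or atoms
# (integers and symbol strings).  A structural rewriter, unlike a
# semantic one, never factors 81 into 3^4 to force a match.
def rewrite_pow(expr):
    """Syntactically rewrite pow-nodes to ("Pow", ...) everywhere."""
    if isinstance(expr, tuple) and expr[0] == "pow":
        return ("Pow", rewrite_pow(expr[1]), rewrite_pow(expr[2]))
    if isinstance(expr, tuple):
        # other operators: recurse into arguments only
        return (expr[0],) + tuple(rewrite_pow(a) for a in expr[1:])
    return expr  # atoms pass through untouched
```

Under this regime powerrule(81) would simply return 81, matching the behaviour Stephen expected.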
From: Gabriel Dos Reis <gdr@in...> - 2008-10-18 04:07:19

On Fri, Oct 17, 2008 at 9:40 PM, Bill Page <bill.page@...> wrote:
> Gaby,
>
> OpenAxiom currently renders package calls in terms of the function
> $elt:
>
> (1) -> parseString("x^1.2")$InputForm
>
>   (1)  x^($elt(Float,float)(12,-1,10))
>                                  Type: InputForm
> (2) -> unparse(%)$InputForm
>
>   (2)  "x^($elt(Float,float)(12,-1,10))"
>                                  Type: String
>
> Although this works in the interpreter (when $ is properly escaped),

Fixed on 1.2 branch and mainline.

Thanks.

-- Gaby
From: Bill Page <bill.page@ne...> - 2008-10-18 02:41:01

Gaby,

OpenAxiom currently renders package calls in terms of the function
$elt:

(1) -> parseString("x^1.2")$InputForm

  (1)  x^($elt(Float,float)(12,-1,10))
                                 Type: InputForm
(2) -> unparse(%)$InputForm

  (2)  "x^($elt(Float,float)(12,-1,10))"
                                 Type: String

Although this works in the interpreter (when $ is properly escaped),

  "x^(_$elt(Float,float)(12,-1,10))",

I noticed that 'unparse' in FriCAS actually renders this as one would
usually write it in the interpreter:

(1) -> parse("x^1.2")$InputForm

  (1)  (^ x (($elt (Float) float) 12 -1 10))
                                 Type: InputForm
(2) -> unparse(%)$InputForm

  (2)  "x^float(12,-1,10)$Float"
                                 Type: String

If you agree with this approach, I will try to port this change from
FriCAS. Should InputForm render like this or just unparse?

Regards,
Bill Page.
From: Bill Page <bill.page@ne...> - 2008-10-17 16:34:38

Gaby,

I am concerned about the internal representation of the InputForm
value. I presume that the display (rendering) of InputForm is
essentially 1-1 with its representation -- syntax aside. Why is it so
much more complicated than it needs to be? Compare it to the following
InputForm values generated by parse:

(1) -> parseString("x^1.0")$InputForm

  (1)  x^($elt(Float,float)(1,0,10))
                                 Type: InputForm
(2) -> parseString("x^1.2")$InputForm

  (2)  x^($elt(Float,float)(12,-1,10))
                                 Type: InputForm

Why don't we get something like this back from (4), below? Also, I
note that '$elt(Float,float)' is not quite an accepted syntax for
package calls in the interpreter. The $ needs to be escaped:

(3) -> x^(_$elt(Float,float)(12,-1,10))

          5+-+
  (3)  x \|x
                                 Type: Expression Float

Regards,
Bill Page.

On Fri, Oct 17, 2008 at 11:51 AM, Gabriel Dos Reis wrote:
> On Thu, Oct 16, 2008 at 8:35 PM, Bill Page wrote:
>
>> Can anyone explain this odd result? Or this even one?
>>
>> (4) -> (x^1.0)::InputForm
>>
>>   (4)
>>   (/ (+ (+ (float 0 0 2) (* (float 1 0 2) x)) (float 0 0 2)) (float 1 0 2))
>>
>>                                  Type: InputForm
>
> Is your issue about the internal representation or about the display?
From: Gabriel Dos Reis <gdr@in...> - 2008-10-17 15:51:48

On Thu, Oct 16, 2008 at 8:35 PM, Bill Page <bill.page@...> wrote:
> Can anyone explain this odd result? Or this even one?
>
> (4) -> (x^1.0)::InputForm
>
>   (4)
>   (/ (+ (+ (float 0 0 2) (* (float 1 0 2) x)) (float 0 0 2)) (float 1 0 2))
>
>                                  Type: InputForm

Is your issue about the internal representation or about the display?

-- Gaby
From: Waldek Hebisch <hebisch@ma...> - 2008-10-17 04:15:07

> Whoever Japp is, he's doing a great job!

Jose Alfredo Portes ?

--
Waldek Hebisch
hebisch@...
From: Waldek Hebisch <hebisch@ma...> - 2008-10-17 03:54:32

Bill Page wrote:
>
> Try this:
>
> (1) -> x^1.2
>
>           5+-+
>   (1)  x \|x
>
>                                  Type: Expression Float
>
> (2) -> %::InputForm
>
>   (2)
>   (/
>     (+
>       (+ (float 0 0 2)
>          (* (+ (+ (float 0 0 2) (* (float 1 0 2) x)) (float 0 0 2))
>             (**
>               (/ (+ (+ (float 0 0 2) (* (float 1 0 2) x)) (float 0 0 2))
>                  (float 1 0 2))
>               (/ 1 (/ (float 5 0 2) (float 1 0 2)))))
>       )
>       (float 0 0 2))
>     (float 1 0 2))
>
>                                  Type: InputForm
>
> Using unparse or the most recent version of OpenAxiom this displays
> as:
>
>   (2)
>   (float(0,0,2) + (float(0,0,2) + float(1,0,2)*x + float(0,0,2))*((float(0,0,2)
>   + float(1,0,2)*x + float(0,0,2))/float(1,0,2))^(1/(float(5,0,2)/float(1,0,2)
>   )) + float(0,0,2))/float(1,0,2)
>
>                                  Type: InputForm
>
> (3) -> interpret(%)$InputForm
>
>           5+-+
>   (3)  x \|x
>
>                                  Type: Expression Float
>
> Can anyone explain this odd result? Or this even one?
>
> (4) -> (x^1.0)::InputForm
>
>   (4)
>   (/ (+ (+ (float 0 0 2) (* (float 1 0 2) x)) (float 0 0 2)) (float 1 0 2))
>
>                                  Type: InputForm

It looks like internal representation. Expression is a quotient of two
polynomials. The denominator is just the constant polynomial equal to
1. In the numerator you have two variables (x and the root), but you
also see explicitly zero coefficients (that is a bit strange because
internally the polynomial is sparse). The root has two arguments,
which again are expressions...

--
Waldek Hebisch
hebisch@...
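Waldek's description, an expression stored as a quotient of two polynomials where a plain polynomial carries the constant polynomial 1 as its denominator, can be mimicked with a toy sparse layout. This is a Python illustration only; the dict-of-exponents layout is an assumption, not the actual SPAD representation:

```python
# Toy model of "Expression = quotient of two polynomials".
# A polynomial is a sparse map from exponent to coefficient, so zero
# coefficients simply do not appear (unlike the printed InputForm,
# where explicit zero terms show up).
def expr(num, den=None):
    """An expression as (numerator, denominator) polynomial pair.
    A bare polynomial defaults to the constant polynomial 1 below."""
    return (num, den if den is not None else {0: 1})

x_plus_1 = expr({1: 1, 0: 1})                 # x + 1, denominator 1
ratio = expr({1: 1, 0: 1}, {1: 1, 0: -1})     # (x + 1)/(x - 1)
```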
From: Bill Page <bill.page@ne...> - 2008-10-17 01:43:23

Try this:

(1) -> x^1.2

          5+-+
  (1)  x \|x

                                 Type: Expression Float

(2) -> %::InputForm

  (2)
  (/
    (+
      (+ (float 0 0 2)
         (* (+ (+ (float 0 0 2) (* (float 1 0 2) x)) (float 0 0 2))
            (**
              (/ (+ (+ (float 0 0 2) (* (float 1 0 2) x)) (float 0 0 2))
                 (float 1 0 2))
              (/ 1 (/ (float 5 0 2) (float 1 0 2)))))
      )
      (float 0 0 2))
    (float 1 0 2))

                                 Type: InputForm

Using unparse or the most recent version of OpenAxiom this displays
as:

  (2)
  (float(0,0,2) + (float(0,0,2) + float(1,0,2)*x + float(0,0,2))*((float(0,0,2)
  + float(1,0,2)*x + float(0,0,2))/float(1,0,2))^(1/(float(5,0,2)/float(1,0,2)
  )) + float(0,0,2))/float(1,0,2)

                                 Type: InputForm

(3) -> interpret(%)$InputForm

          5+-+
  (3)  x \|x

                                 Type: Expression Float

Can anyone explain this odd result? Or this even one?

(4) -> (x^1.0)::InputForm

  (4)
  (/ (+ (+ (float 0 0 2) (* (float 1 0 2) x)) (float 0 0 2)) (float 1 0 2))

                                 Type: InputForm

Regards,
Bill Page.
From: Martin Rubey <martin.rubey@un...> - 2008-10-16 18:36:50

Whoever Japp is, he's doing a great job!

Many thanks,

Martin
From: Francois Maltey <fmaltey@ne...> - 2008-10-15 12:46:36

Hello,

I feel right about one argument functions without parenthesis. Of
course a function call must have a higher priority than the operators
+ - * / ^. Nobody reads sin x + y + 3 as sin (x+y+3), and + and - must
have the same priority.

How must we read 2^-3? The only way is 2^(-3). Axiom and mupad do so,
maple doesn't. But we must remain attentive: 8^2/3 = (8^2)/3, not
8^(2/3)...

I find too surprising the ^^^ : the variable ^ + : the variable +
++ : an error ++ : (2) + +++ : a comment

Allowing the user to add new operators would be more useful...
[maybe the ! factorial... ]

Maple doesn't allow using an internal function as a variable, so
sqrt - 1 has no sense. Indeed function names and variables are in the
same space, and sin is the code of the function sin...

For mupad sqrt - 1 was the function x +> sqrt (x) - 1 because mupad
allows operators over functions: f+g is the function x +> f(x)+g(x).

As axiom doesn't know operators over functions, sqrt -1 might be read
as sqrt(-1). Axiom already reads 2^-1 = 2^(-1) without problem.

I don't code axiom's kernel, I don't have the right advice.

Francois
From: Gabriel Dos Reis <gdr@in...> - 2008-10-10 18:44:52

On Fri, Oct 10, 2008 at 1:00 PM, Martin Rubey <martin.rubey@...>
wrote:
> If the changes are sufficiently local, maybe I could try to adapt
> them for FriCAS and we could try to work out things together?

I'll send something over the weekend.

-- Gaby
From: Gabriel Dos Reis <gdr@cs...> - 2008-10-10 18:23:25

Waldek Hebisch <hebisch@...> writes:

[...]

> In the past I looked at code handling closures and my impression
> was that the code was "broken by design". More precisely, to
> correctly handle closures one needs precise information about
> scopes (including information which symbols represent variables).
> It seems that part of scope information is never collected and
> part is already lost when closures are handled.

That part of information is collected. It is just that it is
overwritten by accident, not design.

-- Gaby
From: Martin Rubey <martin.rubey@un...> - 2008-10-10 18:00:22

"Gabriel Dos Reis" <gdr@...> writes:

> As I had indicated to Francois in private conversations over the last
> couple of weeks, I have a basic patch in an experimental local
> branch, but it causes regressions elsewhere. I have not had a chance
> to work it out mostly because of daytime job priorities. Silence
> isn't equivalent to inaction.

No, I didn't assume that. Does "experimental local branch" indicate
that it requires deeper changes to the interpreter? If the changes are
sufficiently local, maybe I could try to adapt them for FriCAS and we
could try to work out things together?

> > Monday morning would be wonderful...
>
> you're aware, by now, that I'm not very good at meeting preset hard
> deadlines...

Yes, sure. But hope dies last, and I wanted to make sure that you know
that it's relatively high priority for me, and that I find myself
unable to do anything about it, since I'm completely ignorant about
these language issues.

Would be wonderful if we could make progress on this...

Martin
From: Martin Rubey <martin.rubey@un...> - 2008-10-10 17:54:20

Waldek Hebisch <hebisch@...> writes:

> This is an old bug. IIRC in Issue Tracker we have a few reports
> that look like this problem.

OK, I found "#274 Can't get a parameter in an anonymous function"

> In the past I looked at code handling closures and my impression was
> that the code was "broken by design". More precisely, to correctly
> handle closures one needs precise information about scopes (including
> information which symbols represent variables). It seems that part of
> scope information is never collected and part is already lost when
> closures are handled.

Oh dear.

> I may be wrong and it is possible a simple fix will cure things.
> However, given that I do not believe in a simple fix to the closure
> problem I did not spend much time searching for such a fix.

OK. There is one thing that gave me hope however: providing type
information seems to be a cure:

(1) -> f3 z == (x: INT):INT +> gcd(x,z)
                                 Type: Void
(2) -> f3 5

  (2)  theMap(NIL,519)
                                 Type: (Integer -> Integer)
(3) -> % 25

  (3)  5
                                 Type: PositiveInteger

Do you have examples where typing things does not help? I'll add my
examples to IssueTracker and bugs2008, in any case.

Martin