From: Arthur Norman <acn1@ca...>  2012-09-29 12:39:36

I note that int(exp(-x^2),x,y,infinity) seems to yield sqrt(pi)*( - erf(y) + 1)/2 for me when I have specfn loaded, and then when I substitute y=0 it tidies up nicely. So I guess it will be possible, but Term is just about to start here and I will be rather loaded for the coming week; I hope to look into it after that.

Arthur

On Sat, 29 Sep 2012, Andrey G. Grozin wrote:

> In the current reduce (svn 1759), if I say
>
>   load_package specfn;
>   int(exp(-x^2),x,0,infinity);
>
> this produces
>
>   sqrt(pi)*erf(infinity)
>   ----------------------
>             2
>
> If I don't load specfn, I get
>
>   sqrt(pi)
>   --------
>      2
>
> So, loading specfn reduces the quality :( Can this be avoided?
>
> Andrey
From: Rainer Schöpf <rainer.schoepf@gm...>  2012-09-29 12:11:18

On Sat, 29 Sep 2012 at 18:30 +0700, Andrey G. Grozin wrote:

> In the current reduce (svn 1759), if I say
>
>   load_package specfn;
>   int(exp(-x^2),x,0,infinity);
>
> this produces
>
>   sqrt(pi)*erf(infinity)
>   ----------------------
>             2
>
> If I don't load specfn, I get
>
>   sqrt(pi)
>   --------
>      2
>
> So, loading specfn reduces the quality :( Can this be avoided?

Yes. The problem is these two rules in the specfn package:

  int(1/e^(~tt^2),~tt,0,~z) => erf(z)/2*sqrt(pi),
  int(1/e^(~tt^2),~tt,~z,infinity) => erfc(z)/2*sqrt(pi),

which are used even if z is infinity. The obvious solution is to replace them by

  int(1/e^(~tt^2),~tt,0,~z) => erf(z)/2*sqrt(pi) when z freeof infinity,
  int(1/e^(~tt^2),~tt,~z,infinity) => erfc(z)/2*sqrt(pi) when z freeof infinity

I'll make the change later today.

  Rainer
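[For readers wanting to experiment, the same guard idea can be tried at the user level with an ordinary rule list. This is only a sketch of the technique Rainer describes, not the actual specfn source; the name `gauss_rules` is illustrative.]

  % Hedged sketch: user-level rules with a freeof guard, so they
  % only fire when the integration bound is not infinity.
  gauss_rules := {
     int(1/e^(~tt^2), ~tt, 0, ~z) => erf(z)*sqrt(pi)/2
        when z freeof infinity,
     int(1/e^(~tt^2), ~tt, ~z, infinity) => erfc(z)*sqrt(pi)/2
        when z freeof infinity
  }$
  let gauss_rules;

  int(1/e^(x^2), x, 0, a);         % finite bound: rule applies
  int(1/e^(x^2), x, 0, infinity);  % guard fails; handled elsewhere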
From: Andrey G. Grozin <A.G.Grozin@in...>  2012-09-29 11:47:40

In the current reduce (svn 1759), if I say

  load_package specfn;
  int(exp(-x^2),x,0,infinity);

this produces

  sqrt(pi)*erf(infinity)
  ----------------------
            2

If I don't load specfn, I get

  sqrt(pi)
  --------
     2

So, loading specfn reduces the quality :( Can this be avoided?

Andrey
From: Arthur Norman <acn1@ca...>  2012-09-28 09:15:35

Jlisp is coded in Java, and all data types are handled as Java objects in a neat class hierarchy, with any and all overheads that Java introduces. Integer arithmetic is done using values that are boxed within a Java class, since there is no obvious way to hold immediate values. CSL is coded in C, and PSL is in Lisp that it compiles directly down to machine code. Both will handle modest-sized integers a LOT faster. While they still box floats, they have storage management tuned for what they expect to happen in Lisp and will have a lot less overhead than Jlisp. In general I always expect Jlisp to be substantially slower than the "real" Lisps.

However, one thing that MIGHT be causing confusion is that in Jlisp the time() function records wall-clock time, so if you test from the keyboard you get the time it takes you to type stuff in included. When I last looked, Java did not provide a useful CPU time enquiry.

But yes, on my machine I observe a factor of about 25 between Jlisp and CSL on that. And I see similar slowness if I go "precision 20" so that Reduce is using bigfloats, not native ones, so it is not just floating point as such. If I try a pure Lisp-level test of numerics, as in

  lisp;
  q := 0;
  on time;
  for i := 1:3000 do
    for j := 1:3000 do
      q := remainder(q + i*j, 1234567);

I see a factor of 20 in time difference between CSL and Jlisp, so this is down to the costs of general arithmetic. I know I put a lot of effort into CSL arithmetic, with around 15000 lines of C to do it. In Jlisp, because numbers have to be just a subclass of LispObject, everything is boxed. I use method dispatch to cope when I have a potential mix of LispSmallInteger, LispBigInteger and floating values, and I have little scope (that I can think of) to optimise for the most common cases. So at present my belief is that this is one of the worst cases of overhead because of use of Java, and I do not know how to speed it up.

I will provide some potentially less helpful comments, and your response may depend on how much of a show-stopper this is...

(a) There are Java profiling tools that I have hardly ever used and that I am not up to date on, but maybe they would let you really track down exactly where the time is going. If it is just in all the uses of "new" then you may be out of luck, but maybe it would give you more reliable info than my guesses here.

(b) Where speed matters I would always prefer CSL to Jlisp, and the "embedded" variant is aimed at being easier to build for interfacing than the regular version that often gets built with a GUI. One could IMAGINE wrapping that as Java native methods if necessary!

(c) It could be that manually inlining some methods in Jlisp would really help, but (a) would be needed first to track down what was the hottest spot to hit.

Arthur

On Fri, 28 Sep 2012, Simon Weitzhofer wrote:

> Dear All,
>
> I noticed that summing a lot of rounded numbers takes a lot of time in
> the jlisp version but not in the psl nor in the csl version.
> For example "<<on rounded; for i := 1:100000 sum sqrt(i)>>;" takes in the
> psl version about four seconds, in the csl version less than three
> seconds, but in the jlisp version more than one minute.
>
> Does anybody know or have an intuition why this takes so long? Is it
> possible to improve the speed of the sum significantly, and what can be
> done to do so?
>
> Many thanks,
> Simon
>
> PS: I tried "<<on rounded; a := for i := 1:100000 collect sqrt i;
> aeval(part(a,0) := plus)>>;" too, but this is even slower.
From: Simon Weitzhofer <simon@ge...>  2012-09-28 07:28:58

Dear All,

I noticed that summing a lot of rounded numbers takes a lot of time in the jlisp version but not in the psl nor in the csl version. For example "<<on rounded; for i := 1:100000 sum sqrt(i)>>;" takes in the psl version about four seconds, in the csl version less than three seconds, but in the jlisp version more than one minute.

Does anybody know or have an intuition why this takes so long? Is it possible to improve the speed of the sum significantly, and what can be done to do so?

Many thanks,
Simon

PS: I tried "<<on rounded; a := for i := 1:100000 collect sqrt i; aeval(part(a,0) := plus)>>;" too, but this is even slower.
From: Rainer Schöpf <rainer.schoepf@gm...>  2012-09-24 11:18:31

On Mon, 24 Sep 2012, Raffaele Vitolo wrote:

> Dear All,
>
> I am summing large numbers of monomials (~12000) indexed by an operator
> `c' as coefficient. The resulting expression will be of the type
>
>   c(1)*mon1 + c(2)*mon2 + ...
>
> Taking these sums seems to be extremely time-consuming: on my server it
> took 8.5 hours, and this is the slowest part of my computations. I used
> the following syntax:
>
>   % Loads the file with the monomials; it is a list
>   % `linoddt' of about 12000 monomials
>   in "kz3d_linoddt.red"$
>   % Counter for the coefficients:
>   ctel := 0 $
>   % Operator for the coefficients:
>   operator c $
>   % Sum of the monomials:
>   ct := (for each el in linoddt sum (c(ctel := ctel+1)*el))$
>
> If you are interested you might find the monomials here (~450k):
> http://poincare.unisalento.it/vitolo/tempo/kz3d_linoddt.red
>
> My question is: is it possible to speed up this sum? Or is this
> operation so intrinsically complex that it is not possible to do better?

Hello,

the main reason why this computation takes such a long time is that REDUCE tries to simplify the intermediate sum at every step of the for each loop. It is much faster to collect the terms of the sum individually and compute the sum afterwards:

  ctlist := for each el in linoddt collect (c(ctel := ctel+1)*el) $
  ct := (part(ctlist,0) := plus)$

It is even faster not to use an operator, but to construct the coefficients as variables, using mkid:

  ctlist := for each el in linoddt collect (mkid(c,ctel := ctel+1)*el) $
  ct := (part(ctlist,0) := plus)$

  Rainer Schöpf
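[The collect-then-sum idiom above can be tried on a small self-contained example; this is only a sketch, and the names `mons`, `terms` and `k` are illustrative stand-ins, not part of the original problem.]

  % Hedged sketch of both forms on a toy list of 100 monomials.
  operator k $
  mons := for i := 1:100 collect x^i $   % stand-in for linoddt

  n := 0 $
  % Slow form: the growing partial sum is resimplified after
  % every single addition.
  s1 := (for each el in mons sum (k(n := n+1)*el))$

  n := 0 $
  % Fast form: first build the plain list of terms, then replace
  % the list operator by plus so the whole sum is built in one go.
  terms := for each el in mons collect (k(n := n+1)*el)$
  s2 := (part(terms,0) := plus)$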
From: Raffaele Vitolo <raffaele.vitolo@un...>  2012-09-24 08:25:00

Dear All,

I am summing large numbers of monomials (~12000) indexed by an operator `c' as coefficient. The resulting expression will be of the type

  c(1)*mon1 + c(2)*mon2 + ...

Taking these sums seems to be extremely time-consuming: on my server it took 8.5 hours, and this is the slowest part of my computations. I used the following syntax:

  % Loads the file with the monomials; it is a list
  % `linoddt' of about 12000 monomials
  in "kz3d_linoddt.red"$
  % Counter for the coefficients:
  ctel := 0 $
  % Operator for the coefficients:
  operator c $
  % Sum of the monomials:
  ct := (for each el in linoddt sum (c(ctel := ctel+1)*el))$

If you are interested you might find the monomials here (~450k):
http://poincare.unisalento.it/vitolo/tempo/kz3d_linoddt.red

My question is: is it possible to speed up this sum? Or is this operation so intrinsically complex that it is not possible to do better?

Thanks in advance!

Raffaele Vitolo.
From: Tony Hearn <achearn@ve...>  2012-09-15 01:06:28

(Sorry if you receive this twice!) Does anyone know of a review of this? I have to give a talk about it in the near future. Thanks.
From: Rainer Schöpf <rainer.schoepf@gm...>  2012-09-12 17:30:12

Thanks for reporting this. Your copy of REDUCE is rather old; an up-to-date version still exhibits the problem, but gives the error message

  +++ Error attempt to take car of an atom: failed

I think I have found the cause of the problem; it seems to be an incorrect treatment of one particular case in the algorithm. As to why it works when you rename one of the variables, I suspect that this renaming leads to a different internal ordering of the variables when computing the Groebner base.

I have a preliminary solution; I'll get back to you after I have verified that it is indeed correct.

Regards,
  Rainer

On Wed, 12 Sep 2012 at 10:36 -0000, Jarmo Hietarinta wrote:

> This is distilled from an actual problem.
>
> Suppose I have already solved 6 equations using groesolve and now add the 7th:
>
> res3 := {c16=(c11**2*c6)/(a11*cp11),
>          b11=(bp11*cp11)/c11,
>          b6=c11/cp11,
>          a6=cp11/c11,
>          ap11=(a11*ap16*cp11)/c11,
>          apzz=(a11*cp11*cpzz)/(c11*cp6),
>          - ap16*cp11*cp6 + c11*c6};
>
> I now try to solve this with reduce as follows:
>
> Reduce (Free CSL version), 14-Apr-09 ...
>
> 1: off nat;
>
> 2: load groebner;
>
> 3: groebmonfac := 0;
>
> groebmonfac := 0$
>
> 4: res3 := {c16=(c11**2*c6)/(a11*cp11),
> 4: b11=(bp11*cp11)/c11,
> 4: b6=c11/cp11,
> 4: a6=cp11/c11,
> 4: ap11=(a11*ap16*cp11)/c11,
> 4: apzz=(a11*cp11*cpzz)/(c11*cp6),
> 4:  - ap16*cp11*cp6 + c11*c6};
>
> res3 := {c16=(c11**2*c6)/(a11*cp11),
>          b11=(bp11*cp11)/c11,
>          b6=c11/cp11,
>          a6=cp11/c11,
>          ap11=(a11*ap16*cp11)/c11,
>          apzz=(a11*cp11*cpzz)/(c11*cp6),
>          - ap16*cp11*cp6 + c11*c6}$
>
> % then groesolve tilts without giving any result at all; same with solve:
>
> 5: solve(res3);
>
> Unknowns: {a11,a6,ap11,ap16,apzz,b11,b6,bp11,c11,c16,c6,cp11,cp6,cpzz}
>
> 6: groesolve(res3);
>
> % but if after this I change 1 variable in the set:
>
> 7: sub(bp11=bp10,res3);
>
> {c16=(c11**2*c6)/(a11*cp11),
>  b11=(bp10*cp11)/c11,
>  b6=c11/cp11,
>  a6=cp11/c11,
>  ap11=(a11*ap16*cp11)/c11,
>  apzz=(a11*cp11*cpzz)/(c11*cp6),
>  - ap16*cp11*cp6 + c11*c6}$
>
> % then groesolve can solve it, but uses 12 seconds!
>
> 8: groesolve ws;
>
> {{a6=cp11/c11,
>   ap11=(a11*c6)/cp6,
>   apzz=(a11*cp11*cpzz)/(c11*cp6),
>   b11=(bp10*cp11)/c11,
>   b6=c11/cp11,
>   c16=(c11**2*c6)/(a11*cp11),
>   ap16=(c11*c6)/(cp11*cp6)}}$
>
> 9: showtime;
>
> Time: 12139 ms plus GC time: 220 ms
>
> What goes wrong?
>
> regards, Jarmo Hietarinta
>
> --
> Prof. Jarmo Hietarinta
> Department of Physics and Astronomy
> University of Turku, FIN-20014 Turku, Finland
> mobile: +358-40-722 5685
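[Since the suspicion above is that renaming a variable changes the internal variable ordering, one way to experiment is to pin the term order down explicitly with torder before calling groesolve. This is only a sketch: torder belongs to the groebner package, but the particular variable list and the choice of lex order here are illustrative, not a recommendation.]

  % Hedged sketch: fix the variable ordering explicitly, so that
  % renaming bp11 to bp10 cannot silently change it.
  load_package groebner;

  torder({ap16, cp11, cp6, c11, c6, a11, a6, ap11, apzz,
          b11, b6, bp11, c16, cpzz}, lex)$

  groesolve(res3);   % res3 as defined in the original post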
From: Jarmo Hietarinta <hietarin@ut...>  2012-09-12 10:52:05

This is distilled from an actual problem.

Suppose I have already solved 6 equations using groesolve and now add the 7th:

res3 := {c16=(c11**2*c6)/(a11*cp11),
         b11=(bp11*cp11)/c11,
         b6=c11/cp11,
         a6=cp11/c11,
         ap11=(a11*ap16*cp11)/c11,
         apzz=(a11*cp11*cpzz)/(c11*cp6),
         - ap16*cp11*cp6 + c11*c6};

I now try to solve this with reduce as follows:

Reduce (Free CSL version), 14-Apr-09 ...

1: off nat;

2: load groebner;

3: groebmonfac := 0;

groebmonfac := 0$

4: res3 := {c16=(c11**2*c6)/(a11*cp11),
4: b11=(bp11*cp11)/c11,
4: b6=c11/cp11,
4: a6=cp11/c11,
4: ap11=(a11*ap16*cp11)/c11,
4: apzz=(a11*cp11*cpzz)/(c11*cp6),
4:  - ap16*cp11*cp6 + c11*c6};

res3 := {c16=(c11**2*c6)/(a11*cp11),
         b11=(bp11*cp11)/c11,
         b6=c11/cp11,
         a6=cp11/c11,
         ap11=(a11*ap16*cp11)/c11,
         apzz=(a11*cp11*cpzz)/(c11*cp6),
         - ap16*cp11*cp6 + c11*c6}$

% then groesolve tilts without giving any result at all; same with solve:

5: solve(res3);

Unknowns: {a11,a6,ap11,ap16,apzz,b11,b6,bp11,c11,c16,c6,cp11,cp6,cpzz}

6: groesolve(res3);

% but if after this I change 1 variable in the set:

7: sub(bp11=bp10,res3);

{c16=(c11**2*c6)/(a11*cp11),
 b11=(bp10*cp11)/c11,
 b6=c11/cp11,
 a6=cp11/c11,
 ap11=(a11*ap16*cp11)/c11,
 apzz=(a11*cp11*cpzz)/(c11*cp6),
 - ap16*cp11*cp6 + c11*c6}$

% then groesolve can solve it, but uses 12 seconds!

8: groesolve ws;

{{a6=cp11/c11,
  ap11=(a11*c6)/cp6,
  apzz=(a11*cp11*cpzz)/(c11*cp6),
  b11=(bp10*cp11)/c11,
  b6=c11/cp11,
  c16=(c11**2*c6)/(a11*cp11),
  ap16=(c11*c6)/(cp11*cp6)}}$

9: showtime;

Time: 12139 ms plus GC time: 220 ms

What goes wrong?

regards, Jarmo Hietarinta

--
Prof. Jarmo Hietarinta
Department of Physics and Astronomy
University of Turku, FIN-20014 Turku, Finland
mobile: +358-40-722 5685