From: Richard F. <fa...@gm...> - 2023-03-25 16:12:29
I don't think it is nearly the same. Here are a few thoughts:

* There may be bugs in the algorithms programmed for a CAS, but these are presumed to be reproducible, and, at least in principle, they can be identified and fixed. An AI's errors may not be reproducible; identifying and fixing them seems difficult... wait for version n+1?

* The domain of application of a CAS is (mostly) specified by syntax and semantics, not by free-form natural language.

* Sometimes (not always) there are checks in a CAS (e.g. divide/multiply, integrate/differentiate, factor/ratsimp, evaluation at test points); a sketch of one such check in Maxima appears at the end of this post. Sometimes there are confirmations by physical checks, dimensional analysis, etc.

* The programs have authors who (usually) have knowledge of the subject. The AIs seem to hallucinate, even to the point of making up "original" URL sources.

Some of the ChatGPT results are not even subject to objective evaluation: is this a "good" essay? When the answer appears to be objectively correct, and even if it "shows the work", it is unlikely that this represents some ground-level understanding. For instance, it is possible to memorize "2+1=3" and "3+1=4", but is there a number N such that it will not know how to compute N+1? Arguably (perhaps provably -- I am not an expert on this!) there is such a number N. For a CAS, one could perhaps run out of memory to even store such a number N, but that would be noticed.

Anyway, my thoughts on this.

RJF

On Sat, Mar 25, 2023 at 8:17 AM Daniel Volinski via Maxima-discuss <max...@li...> wrote:

> Hi Jochen,
>
> You have exactly the same problem using a CAS like Maxima: you don't know
> whether the answer is correct or not. The programmer of some package may
> have made a mistake, or may not have considered an extreme case, and you
> get a wrong answer.
>
> But you learn to live with it: you test known cases and verify that the
> answer is correct; if it is, you test more complex examples. Eventually
> you have to decide whether to use it on a real case.
>
> Calling "don't use an AI" is like calling "don't use a CAS"; it's the same.
>
> Daniel Volinski
>
> On Saturday, March 25, 2023 at 16:59:48 GMT+3, Jochen Ziegenbalg
> <zie...@gm...> wrote:
>
> I hope that if I ask Maxima, I don't have the same credibility problem
> as I would with asking ChatGPT, n-th generation.
> If ChatGPT has integrated a CAS by then, how can I know that it will be
> using the CAS correctly?
> I would hate to have to go to some chat program to get the answer to
> mathematical problems.
> So: what is the future of Maxima in the light of this development?
>
> Best regards, Jochen
>
> https://jochen-ziegenbalg.github.io/materialien/
>
> On Sat, Mar 25, 2023 at 8:02 AM Francesco Pedulla' <me...@gm...> wrote:
>
> In any case, you have the same issue if you ask an expert in a field you
> do not know...
>
> Francesco
>
> On Sat, Mar 25, 2023 at 7:37 AM Jochen Ziegenbalg <zie...@gm...> wrote:
>
> My concern is that if you do not know the answer, you cannot judge
> whether it is right or not. And if you do know the answer, what sense
> does it make to ask ChatGPT & Co.? (Except out of curiosity, of course.)
>
> Jochen
>
> https://jochen-ziegenbalg.github.io/materialien/
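For concreteness, here is a minimal sketch in Maxima of the integrate/differentiate check mentioned above. The integrand x*exp(-x^2) is an arbitrary example chosen for illustration, not something from the thread:

    /* Integrate, then differentiate the result and compare with the
       original integrand; the difference should simplify to zero. */
    f : x*exp(-x^2);
    F : integrate(f, x);                /* Maxima returns -exp(-x^2)/2 */
    check : ratsimp(diff(F, x) - f);    /* 0 if the antiderivative is right */
    is(check = 0);                      /* => true when the check passes */

    /* A second line of defense: a numerical spot check at a test point. */
    float(subst(x = 1.3, diff(F, x) - f));   /* should be (near) 0.0 */

Of course, such a check only confirms internal consistency, not the correctness of the underlying algorithm, which is exactly the limitation discussed above.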