From: Jomi H. <jom...@gm...> - 2019-04-26 15:00:18
Hi Amandine,

Yes, vars can have annotations, but not terms. In other words, the functor can be a variable (for unification purposes).

Jomi

> On 26 Apr 2019, at 10:25, Amandine Mayima <am...@la...> wrote:
>
> Hello,
>
> Is it normal that beliefs in the form of Var can be added, but not in the form of Var(atom/Var)? The latter, on belief addition, are parsed as +Var; atom/Var;. When it is Var(atom1/Var2,atom2/Var3), it cannot be parsed. From the EBNF, I understand that it should be possible.
>
> In the same vein, I cannot use an internal action with an argument of the form Var(atom/Var): for example, I can do .fail_goal(goto(Place,Speed)) and I can do .fail_goal(TaskName), but I cannot do .fail_goal(TaskName(Place,Speed)).
>
> Best regards,
>
> Amandine
>
> _______________________________________________
> Jason-users mailing list
> Jas...@li...
> https://lists.sourceforge.net/lists/listinfo/jason-users

--
Jomi Fred Hubner
Department of Automation and Systems Engineering
Federal University of Santa Catarina
PO Box 476, Florianópolis, SC 88040-900 Brazil
http://jomi.das.ufsc.br
--
be good. be kind. be happy. (Conrad Anker)
From: Amandine M. <am...@la...> - 2019-04-26 13:25:34
Hello,

Is it normal that beliefs in the form of Var can be added, but not in the form of Var(atom/Var)? The latter, on belief addition, are parsed as +Var; atom/Var;. When it is Var(atom1/Var2,atom2/Var3), it cannot be parsed. From the EBNF, I understand that it should be possible.

In the same vein, I cannot use an internal action with an argument of the form Var(atom/Var): for example, I can do .fail_goal(goto(Place,Speed)) and I can do .fail_goal(TaskName), but I cannot do .fail_goal(TaskName(Place,Speed)).

Best regards,

Amandine
From: Jomi H. <jom...@gm...> - 2019-04-26 11:22:52
Thanks Amandine for the careful analysis of this bug! I've just committed the fix at github.

> On 25 Apr 2019, at 07:58, Amandine Mayima <am...@la...> wrote:
>
> Hello,
>
> I wanted to use the internal action .abolish with vars in the annotations. It took me a while to understand why it worked with one var but not several: in the function hasSubsetAnnot of the class Pred, the pAnnots iterator is not reset for the second var because of the i2reset variable, which had been set to true with the first var (line 361). When I remove the condition !i2reset, it works as I want.
>
> Example:
>
> BB: b(a)[annot1,annot2,source(self)] ; c(d)[annot1,source(self)]
>
> .abolish(_[annot1,annot2]) => b(a) is removed
> .abolish(_[Z]) (with Z=annot1) => c(d) is removed
> .abolish(_[annot1,Y]) (with Y=annot2) => b(a) is removed
> .abolish(_[Z,Y]) => b(a) is NOT removed, for 2 reasons I think: 1. for the second annotation Y, annot.isVar() && !i2Reset == false, and 2. the alphabetical order. What happens: it compares annot1 and Z, it doesn't unify, annot1 is removed from pAnnots; then it compares annot2 and Z, it unifies. Then Y is the new annot to be compared with, but only source is left in i2, which has not been reset.
>
> Best regards,
>
> Amandine
>
> _______________________________________________
> Jason-users mailing list
> Jas...@li...
> https://lists.sourceforge.net/lists/listinfo/jason-users

--
Jomi Fred Hubner
Department of Automation and Systems Engineering
Federal University of Santa Catarina
PO Box 476, Florianópolis, SC 88040-900 Brazil
http://jomi.das.ufsc.br
--
be good. be kind. be happy. (Conrad Anker)
From: Amandine M. <am...@la...> - 2019-04-25 11:27:55
Hello,

I wanted to use the internal action .abolish with vars in the annotations. It took me a while to understand why it worked with one var but not several: in the function hasSubsetAnnot of the class Pred, the pAnnots iterator is not reset for the second var because of the i2reset variable, which had been set to true with the first var (line 361). When I remove the condition !i2reset, it works as I want.

Example:

BB: b(a)[annot1,annot2,source(self)] ; c(d)[annot1,source(self)]

.abolish(_[annot1,annot2]) => b(a) is removed
.abolish(_[Z]) (with Z=annot1) => c(d) is removed
.abolish(_[annot1,Y]) (with Y=annot2) => b(a) is removed
.abolish(_[Z,Y]) => b(a) is NOT removed, for 2 reasons I think: 1. for the second annotation Y, annot.isVar() && !i2Reset == false, and 2. the alphabetical order. What happens: it compares annot1 and Z, it doesn't unify, annot1 is removed from pAnnots; then it compares annot2 and Z, it unifies. Then Y is the new annot to be compared with, but only source is left in i2, which has not been reset.

Best regards,

Amandine
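The iterator-reset issue described above can be illustrated outside Jason with a self-contained sketch. This is not the real hasSubsetAnnot code: annotations are modelled as plain strings, a "variable" is any capitalised name that unifies with anything, and the backtracking-free matching is deliberately simplified. The point it shows is the one from the bug report: the candidate iterator has to be restarted for every pattern element, because a stale iterator position left over from a previous variable makes later variables miss annotations that are still available.

```java
import java.util.*;

public class AnnotSubsetDemo {
    // Returns true if every pattern annotation can be bound to a distinct
    // fact annotation. A fresh iterator is created for EVERY pattern
    // element (the fix); reusing the previous iterator's position (the
    // reported bug) would make a second variable fail to match.
    static boolean hasSubset(List<String> pattern, List<String> facts) {
        List<String> remaining = new ArrayList<>(facts);
        for (String p : pattern) {
            boolean isVar = Character.isUpperCase(p.charAt(0));
            Iterator<String> it = remaining.iterator();  // fresh start each time
            boolean matched = false;
            while (it.hasNext()) {
                String f = it.next();
                if (isVar || f.equals(p)) {
                    it.remove();   // each fact annotation is consumed at most once
                    matched = true;
                    break;
                }
            }
            if (!matched) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> facts = Arrays.asList("annot1", "annot2", "source(self)");
        System.out.println(hasSubset(Arrays.asList("Z", "Y"), facts));       // true
        System.out.println(hasSubset(Arrays.asList("annot1", "Y"), facts));  // true
        System.out.println(hasSubset(Arrays.asList("annot3"), facts));       // false
    }
}
```

With the fresh iterator, _[Z,Y] now matches b(a)[annot1,annot2,source(self)] just like _[annot1,Y] does.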
From: Jomi H. <jom...@gm...> - 2019-03-14 22:33:35
Hi Francesco,

There are two internal actions related to that: .desire and .intend. See http://jason.sourceforge.net/doc/api/jason/stdlib/package-summary.html#package.description for details.

Just as an example, the code below continuously prints the current desires:

!start(1).
!start(2).
!show_desires.

+!start(X) <- .print("hello world.",X); .wait(1000); !start(X).

+!show_desires
   <- for (.desire(D)) { .print("my desire ",D) };
      .wait(1100);
      !show_desires.

HTH,

Jomi

> On 14 Mar 2019, at 17:23, Francesco Lanza <fra...@un...> wrote:
>
> Hello,
> I have questions for you regarding the current desire.
> How can I get the current desire that the agent is accomplishing?
> Which method or class do I have to use to get it?
>
> Thanks in advance
>
> Francesco Lanza
> PhD Student @ RoboticsLab
> Università degli Studi di Palermo
> Viale delle Scienze, Build 6, floor 3rd - 90128 Palermo
> Personal Email: fra...@gm...
> Institutional Email: fra...@un...
> Mobile: +39-327-3063214
> Skype: iciccio
>
> _______________________________________________
> Jason-users mailing list
> Jas...@li...
> https://lists.sourceforge.net/lists/listinfo/jason-users
From: Francesco L. <fra...@un...> - 2019-03-14 20:41:11
Hello,

I have questions for you regarding the current desire. How can I get the current desire that the agent is accomplishing? Which method or class do I have to use to get it?

Thanks in advance

Francesco Lanza
PhD Student @ RoboticsLab
Università degli Studi di Palermo
Viale delle Scienze, Build 6, floor 3rd - 90128 Palermo
Personal Email: fra...@gm...
Institutional Email: fra...@un...
Mobile: +39-327-3063214
Skype: iciccio
From: Amandine M. <am...@la...> - 2019-03-07 09:57:09
Ok, I understand. I implemented your suggestion. Thank you!

Amandine

On 3/5/19 5:23 PM, Jomi Hubner wrote:
> Hi Amandine,
>
> The Environment class is shared by all agents and has its own threads to execute the actions (as you noticed). Its instance may run in a different host, for instance.
>
> The AgArch is on the agent side; it is usually used to connect the agent to the environment. As you noticed, the agent thread runs the act() method, so it should be fast — usually just starting the execution of the action somewhere.
>
> Maybe the following pattern can be used in your case:
>
> @Override
> public void act(final ActionExec action) {
>     t = ….. some thread/executor/… // create a new thread or reuse one or an Executor
>     t.start/execute( new Runnable() … {
>         // do what is necessary…
>         // start…. wait for it to finish, ….
>         // if all ok
>         action.setResult(true);
>         actionExecuted(action); // place the intention that performed the action back in the queue of running intentions
>     }
> }
>
> In this case, only the intention performing the action is blocked until it is finished. Other intentions keep running as usual.
>
> HTH,
>
> Jomi
>
>> On 4 Mar 2019, at 13:40, Amandine Mayima <am...@la...> wrote:
>>
>> Hello,
>>
>> I'm implementing agents for a robot in a real environment, so I started to implement its actions in the act method of the robot's AgArch class. I realized it was blocking the execution of the agent's reasoning cycle because the action execution happens in the same thread. Whereas, when an action is implemented in the executeAction method of the Environment class, a new thread is created for it, so it does not block the reasoning cycle, which is the desired behavior, I think.
>>
>> So, should all the actions be implemented in the Environment class? If so, what is the act method of AgArch for? Or should they be in the AgArch, as I've done, but with some changes?
>>
>> Best regards,
>>
>> Amandine
>>
>> _______________________________________________
>> Jason-users mailing list
>> Jas...@li...
>> https://lists.sourceforge.net/lists/listinfo/jason-users
From: Jomi H. <jom...@gm...> - 2019-03-05 16:23:37
Hi Amandine,

The Environment class is shared by all agents and has its own threads to execute the actions (as you noticed). Its instance may run in a different host, for instance.

The AgArch is on the agent side; it is usually used to connect the agent to the environment. As you noticed, the agent thread runs the act() method, so it should be fast — usually just starting the execution of the action somewhere.

Maybe the following pattern can be used in your case:

@Override
public void act(final ActionExec action) {
    t = ….. some thread/executor/… // create a new thread or reuse one or an Executor
    t.start/execute( new Runnable() … {
        // do what is necessary…
        // start…. wait for it to finish, ….
        // if all ok
        action.setResult(true);
        actionExecuted(action); // place the intention that performed the action back in the queue of running intentions
    }
}

In this case, only the intention performing the action is blocked until it is finished. Other intentions keep running as usual.

HTH,

Jomi

> On 4 Mar 2019, at 13:40, Amandine Mayima <am...@la...> wrote:
>
> Hello,
>
> I'm implementing agents for a robot in a real environment, so I started to implement its actions in the act method of the robot's AgArch class. I realized it was blocking the execution of the agent's reasoning cycle because the action execution happens in the same thread. Whereas, when an action is implemented in the executeAction method of the Environment class, a new thread is created for it, so it does not block the reasoning cycle, which is the desired behavior, I think.
>
> So, should all the actions be implemented in the Environment class? If so, what is the act method of AgArch for? Or should they be in the AgArch, as I've done, but with some changes?
>
> Best regards,
>
> Amandine
>
> _______________________________________________
> Jason-users mailing list
> Jas...@li...
> https://lists.sourceforge.net/lists/listinfo/jason-users
From: Amandine M. <am...@la...> - 2019-03-04 16:41:04
Hello,

I'm implementing agents for a robot in a real environment, so I started to implement its actions in the act method of the robot's AgArch class. I realized it was blocking the execution of the agent's reasoning cycle because the action execution happens in the same thread. Whereas, when an action is implemented in the executeAction method of the Environment class, a new thread is created for it, so it does not block the reasoning cycle, which is the desired behavior, I think.

So, should all the actions be implemented in the Environment class? If so, what is the act method of AgArch for? Or should they be in the AgArch, as I've done, but with some changes?

Best regards,

Amandine
From: Stephen C. <ste...@ot...> - 2019-03-01 07:58:33
Thanks, Jomi. I see that decomposing the body of a plan in the way you illustrate below results in body terms that are wrapped as action(...), internalAction(...), achieve(...), test(...), etc., terms, and that the functors of the wrappers are the names of the members of jason.asSyntax.PlanBody.BodyType. That seems to do everything that I need.

I have a Jason metainterpreter that uses a custom internal action to decompose plan bodies. I'll update the metainterpreter to use this approach when I get a chance, and will post a link to the source here.

Regards,
Stephen

-----Original Message-----
From: Jomi Hubner <jom...@gm...>
Sent: Wednesday, 27 February 2019 9:24 AM
To: Stephen Cranefield <ste...@ot...>
Cc: jas...@li...
Subject: Re: [Jason-users] Jason lacks support for programmatically examining plan bodies

Hi again,

I improved a bit the support for meta-programming in Jason. The program

——
b(10).

!start.

@test
+!g1 : b(10) <- ?b(X); .print(X); !g(X).

+!g(X) <- .print(ok,X).

+!start <-
   .relevant_plans({+!g1}, [Plan|_]);
   Plan =.. [L,T,C,B];
   if (C) {
      .print("Start running plan ",L);
      !show_run(B);
   } else {
      .print("Context ",C," not satisfied!");
   }.

+!show_run({}) <- .print("end of execution!").
+!show_run({H; R}) <- .print("command ",H); H; !show_run(R).
——

will print:

——
[sample_agent] Start running plan test
[sample_agent] command ?b(_23)
[sample_agent] command .print(10)
[sample_agent] 10
[sample_agent] command !g(10)
[sample_agent] ok10
[sample_agent] end of execution!
——

The new version is on github, possibly with some bugs since I tested simple cases.

Thanks again for the feedback.

Jomi

> On 21 Feb 2019, at 22:45, Stephen Cranefield <ste...@ot...> wrote:
>
> Hi Jomi,
>
> Thanks for the suggestion and the quick reply. Perhaps a singleton plan body could unify with a literal that has a 'prefix' or 'goaltype' annotation. That would mean the output of your program below would then be:
>
> [a] a[prefix('')]
> [a] q[prefix('?')]
> [a] g2[prefix('!')]
>
> It's a bit ugly, but might be the simplest solution.
>
> Regards,
> Stephen
>
> -----Original Message-----
> From: Jomi Hubner <jom...@gm...>
> Sent: Friday, 22 February 2019 2:05 PM
> To: Stephen Cranefield <ste...@ot...>
> Cc: jas...@li...
> Subject: Re: [Jason-users] Jason lacks support for programmatically examining plan bodies
>
> Hi Stephen,
>
> Very nice request!
>
> I started something in that direction some time ago… but since it seemed no one was interested, I gave it up :-)
>
> I stopped with unification for the plan body Literals. The code
>
> ——
> !start.
>
> +!g1 <- a; ?q; !g2.
>
> +!start <-
>    .relevant_plans({+!g1}, [Plan|_]);
>    plan(Label,Trigger,Head,Body) = Plan;
>    !show(Body).
>
> +!show({H; R}) <- .print(H); !show(R).
> +!show({H}) <- .print(H,".").
> ——
>
> will print
>
> [a] a
> [a] q
> [a] g2.
>
> It uses unification for the plan body similarly to lists (with ; in place of |). However, there is no support to get the kind of formula in the body. We can discuss how to include it in the unification….
>
> The objective would be to get some plan, change it programmatically and load it into the plan library.
>
> Best,
>
> Jomi
>
>> On 21 Feb 2019, at 21:15, Stephen Cranefield <ste...@ot...> wrote:
>>
>> Jason lacks support for examining plan bodies from agent code. For example, consider the following code:
>>
>> !start.
>>
>> +!g1 <- a; ?q; !g2.
>>
>> +!start <-
>>    .relevant_plans({+!g1}, [Plan|_]);
>>    .print('Plan: ', Plan);
>>    plan(Label,Trigger,Head,Body) = Plan;
>>    Body =.. DecomposedBody;
>>    .print('Decomposed body: ', DecomposedBody).
>>
>> The output is:
>>
>> Plan: { @l__1 +!g1 <- a; ?q; !g2 }
>> Decomposed body: [default,;,[],[]]
>>
>> Note that this leverages the ability to unify a plan with a term of the form plan(_,_,_,_). That may be undocumented - I learned it from the source code. However, I don't know of any similar trick to get at the contents of the plan body. The univ operator (=..) didn't work above.
>>
>> It would be very useful if some stdlib internal actions could be provided for examining plan bodies, e.g.:
>>
>> .plan_body_term(+Index, +PlanBody, -SubgoalPrefix, -PlanBodyTerm).
>>
>> It would be even better if this could be done via unification, so that (for example) the following unification succeeded:
>>
>> plan(_, _, _, [ body_term('', a), body_term('?', q), body_term('!', g2) ]) = { +!g1 <- a; ?q; !g2 }.
>>
>> Regards,
>> Stephen
>>
>> _______________________________________________
>> Jason-users mailing list
>> Jas...@li...
>> https://lists.sourceforge.net/lists/listinfo/jason-users
>
--
Jomi Fred Hubner
Department of Automation and Systems Engineering
Federal University of Santa Catarina
PO Box 476, Florianópolis, SC 88040-900 Brazil
http://jomi.das.ufsc.br
--
be good. be kind. be happy. (Conrad Anker)
From: Jomi H. <jom...@gm...> - 2019-02-26 20:24:33
Hi again,

I improved a bit the support for meta-programming in Jason. The program

——
b(10).

!start.

@test
+!g1 : b(10) <- ?b(X); .print(X); !g(X).

+!g(X) <- .print(ok,X).

+!start <-
   .relevant_plans({+!g1}, [Plan|_]);
   Plan =.. [L,T,C,B];
   if (C) {
      .print("Start running plan ",L);
      !show_run(B);
   } else {
      .print("Context ",C," not satisfied!");
   }.

+!show_run({}) <- .print("end of execution!").
+!show_run({H; R}) <- .print("command ",H); H; !show_run(R).
——

will print:

——
[sample_agent] Start running plan test
[sample_agent] command ?b(_23)
[sample_agent] command .print(10)
[sample_agent] 10
[sample_agent] command !g(10)
[sample_agent] ok10
[sample_agent] end of execution!
——

The new version is on github, possibly with some bugs since I tested simple cases.

Thanks again for the feedback.

Jomi

> On 21 Feb 2019, at 22:45, Stephen Cranefield <ste...@ot...> wrote:
>
> Hi Jomi,
>
> Thanks for the suggestion and the quick reply. Perhaps a singleton plan body could unify with a literal that has a 'prefix' or 'goaltype' annotation. That would mean the output of your program below would then be:
>
> [a] a[prefix('')]
> [a] q[prefix('?')]
> [a] g2[prefix('!')]
>
> It's a bit ugly, but might be the simplest solution.
>
> Regards,
> Stephen
>
> -----Original Message-----
> From: Jomi Hubner <jom...@gm...>
> Sent: Friday, 22 February 2019 2:05 PM
> To: Stephen Cranefield <ste...@ot...>
> Cc: jas...@li...
> Subject: Re: [Jason-users] Jason lacks support for programmatically examining plan bodies
>
> Hi Stephen,
>
> Very nice request!
>
> I started something in that direction some time ago… but since it seemed no one was interested, I gave it up :-)
>
> I stopped with unification for the plan body Literals. The code
>
> ——
> !start.
>
> +!g1 <- a; ?q; !g2.
>
> +!start <-
>    .relevant_plans({+!g1}, [Plan|_]);
>    plan(Label,Trigger,Head,Body) = Plan;
>    !show(Body).
>
> +!show({H; R}) <- .print(H); !show(R).
> +!show({H}) <- .print(H,".").
> ——
>
> will print
>
> [a] a
> [a] q
> [a] g2.
>
> It uses unification for the plan body similarly to lists (with ; in place of |). However, there is no support to get the kind of formula in the body. We can discuss how to include it in the unification….
>
> The objective would be to get some plan, change it programmatically and load it into the plan library.
>
> Best,
>
> Jomi
>
>> On 21 Feb 2019, at 21:15, Stephen Cranefield <ste...@ot...> wrote:
>>
>> Jason lacks support for examining plan bodies from agent code. For example, consider the following code:
>>
>> !start.
>>
>> +!g1 <- a; ?q; !g2.
>>
>> +!start <-
>>    .relevant_plans({+!g1}, [Plan|_]);
>>    .print('Plan: ', Plan);
>>    plan(Label,Trigger,Head,Body) = Plan;
>>    Body =.. DecomposedBody;
>>    .print('Decomposed body: ', DecomposedBody).
>>
>> The output is:
>>
>> Plan: { @l__1 +!g1 <- a; ?q; !g2 }
>> Decomposed body: [default,;,[],[]]
>>
>> Note that this leverages the ability to unify a plan with a term of the form plan(_,_,_,_). That may be undocumented - I learned it from the source code. However, I don't know of any similar trick to get at the contents of the plan body. The univ operator (=..) didn't work above.
>>
>> It would be very useful if some stdlib internal actions could be provided for examining plan bodies, e.g.:
>>
>> .plan_body_term(+Index, +PlanBody, -SubgoalPrefix, -PlanBodyTerm).
>>
>> It would be even better if this could be done via unification, so that (for example) the following unification succeeded:
>>
>> plan(_, _, _, [ body_term('', a), body_term('?', q), body_term('!', g2) ]) = { +!g1 <- a; ?q; !g2 }.
>>
>> Regards,
>> Stephen
>>
>> _______________________________________________
>> Jason-users mailing list
>> Jas...@li...
>> https://lists.sourceforge.net/lists/listinfo/jason-users

--
Jomi Fred Hubner
Department of Automation and Systems Engineering
Federal University of Santa Catarina
PO Box 476, Florianópolis, SC 88040-900 Brazil
http://jomi.das.ufsc.br
--
be good. be kind. be happy. (Conrad Anker)
From: Stephen C. <ste...@ot...> - 2019-02-22 01:45:41
Hi Jomi,

Thanks for the suggestion and the quick reply. Perhaps a singleton plan body could unify with a literal that has a 'prefix' or 'goaltype' annotation. That would mean the output of your program below would then be:

[a] a[prefix('')]
[a] q[prefix('?')]
[a] g2[prefix('!')]

It's a bit ugly, but might be the simplest solution.

Regards,
Stephen

-----Original Message-----
From: Jomi Hubner <jom...@gm...>
Sent: Friday, 22 February 2019 2:05 PM
To: Stephen Cranefield <ste...@ot...>
Cc: jas...@li...
Subject: Re: [Jason-users] Jason lacks support for programmatically examining plan bodies

Hi Stephen,

Very nice request!

I started something in that direction some time ago… but since it seemed no one was interested, I gave it up :-)

I stopped with unification for the plan body Literals. The code

——
!start.

+!g1 <- a; ?q; !g2.

+!start <-
   .relevant_plans({+!g1}, [Plan|_]);
   plan(Label,Trigger,Head,Body) = Plan;
   !show(Body).

+!show({H; R}) <- .print(H); !show(R).
+!show({H}) <- .print(H,".").
——

will print

[a] a
[a] q
[a] g2.

It uses unification for the plan body similarly to lists (with ; in place of |). However, there is no support to get the kind of formula in the body. We can discuss how to include it in the unification….

The objective would be to get some plan, change it programmatically and load it into the plan library.

Best,

Jomi

> On 21 Feb 2019, at 21:15, Stephen Cranefield <ste...@ot...> wrote:
>
> Jason lacks support for examining plan bodies from agent code. For example, consider the following code:
>
> !start.
>
> +!g1 <- a; ?q; !g2.
>
> +!start <-
>    .relevant_plans({+!g1}, [Plan|_]);
>    .print('Plan: ', Plan);
>    plan(Label,Trigger,Head,Body) = Plan;
>    Body =.. DecomposedBody;
>    .print('Decomposed body: ', DecomposedBody).
>
> The output is:
>
> Plan: { @l__1 +!g1 <- a; ?q; !g2 }
> Decomposed body: [default,;,[],[]]
>
> Note that this leverages the ability to unify a plan with a term of the form plan(_,_,_,_). That may be undocumented - I learned it from the source code. However, I don't know of any similar trick to get at the contents of the plan body. The univ operator (=..) didn't work above.
>
> It would be very useful if some stdlib internal actions could be provided for examining plan bodies, e.g.:
>
> .plan_body_term(+Index, +PlanBody, -SubgoalPrefix, -PlanBodyTerm).
>
> It would be even better if this could be done via unification, so that (for example) the following unification succeeded:
>
> plan(_, _, _, [ body_term('', a), body_term('?', q), body_term('!', g2) ]) = { +!g1 <- a; ?q; !g2 }.
>
> Regards,
> Stephen
>
> _______________________________________________
> Jason-users mailing list
> Jas...@li...
> https://lists.sourceforge.net/lists/listinfo/jason-users
From: Jomi H. <jom...@gm...> - 2019-02-22 01:05:13
Hi Stephen,

Very nice request!

I started something in that direction some time ago… but since it seemed no one was interested, I gave it up :-)

I stopped with unification for the plan body Literals. The code

——
!start.

+!g1 <- a; ?q; !g2.

+!start <-
   .relevant_plans({+!g1}, [Plan|_]);
   plan(Label,Trigger,Head,Body) = Plan;
   !show(Body).

+!show({H; R}) <- .print(H); !show(R).
+!show({H}) <- .print(H,".").
——

will print

[a] a
[a] q
[a] g2.

It uses unification for the plan body similarly to lists (with ; in place of |). However, there is no support to get the kind of formula in the body. We can discuss how to include it in the unification….

The objective would be to get some plan, change it programmatically and load it into the plan library.

Best,

Jomi

> On 21 Feb 2019, at 21:15, Stephen Cranefield <ste...@ot...> wrote:
>
> Jason lacks support for examining plan bodies from agent code. For example, consider the following code:
>
> !start.
>
> +!g1 <- a; ?q; !g2.
>
> +!start <-
>    .relevant_plans({+!g1}, [Plan|_]);
>    .print('Plan: ', Plan);
>    plan(Label,Trigger,Head,Body) = Plan;
>    Body =.. DecomposedBody;
>    .print('Decomposed body: ', DecomposedBody).
>
> The output is:
>
> Plan: { @l__1 +!g1 <- a; ?q; !g2 }
> Decomposed body: [default,;,[],[]]
>
> Note that this leverages the ability to unify a plan with a term of the form plan(_,_,_,_). That may be undocumented - I learned it from the source code. However, I don't know of any similar trick to get at the contents of the plan body. The univ operator (=..) didn't work above.
>
> It would be very useful if some stdlib internal actions could be provided for examining plan bodies, e.g.:
>
> .plan_body_term(+Index, +PlanBody, -SubgoalPrefix, -PlanBodyTerm).
>
> It would be even better if this could be done via unification, so that (for example) the following unification succeeded:
>
> plan(_, _, _, [ body_term('', a), body_term('?', q), body_term('!', g2) ]) = { +!g1 <- a; ?q; !g2 }.
>
> Regards,
> Stephen
>
> _______________________________________________
> Jason-users mailing list
> Jas...@li...
> https://lists.sourceforge.net/lists/listinfo/jason-users
From: Stephen C. <ste...@ot...> - 2019-02-22 00:15:51
Jason lacks support for examining plan bodies from agent code. For example, consider the following code:

!start.

+!g1 <- a; ?q; !g2.

+!start <-
   .relevant_plans({+!g1}, [Plan|_]);
   .print('Plan: ', Plan);
   plan(Label,Trigger,Head,Body) = Plan;
   Body =.. DecomposedBody;
   .print('Decomposed body: ', DecomposedBody).

The output is:

Plan: { @l__1 +!g1 <- a; ?q; !g2 }
Decomposed body: [default,;,[],[]]

Note that this leverages the ability to unify a plan with a term of the form plan(_,_,_,_). That may be undocumented - I learned it from the source code. However, I don't know of any similar trick to get at the contents of the plan body. The univ operator (=..) didn't work above.

It would be very useful if some stdlib internal actions could be provided for examining plan bodies, e.g.:

.plan_body_term(+Index, +PlanBody, -SubgoalPrefix, -PlanBodyTerm).

It would be even better if this could be done via unification, so that (for example) the following unification succeeded:

plan(_, _, _, [ body_term('', a), body_term('?', q), body_term('!', g2) ]) = { +!g1 <- a; ?q; !g2 }.

Regards,
Stephen
From: Jomi H. <jom...@gm...> - 2019-02-19 19:11:45
|
Hi Amandine,

sorry for the long delay in answering (I was on vacation).

You are right, the latest version of Jason performs up to 5 actions in each reasoning cycle (RC). This is due to the results of Maicon Zatelli's thesis about performance improvements in the RC, see http://dx.doi.org/10.1007/978-3-319-33509-4_33. Basically, for each "output action" there is one "RC time" (say 1 second), so 1 action/second in the original Jason. If we do 5 actions each RC, we have (roughly) 5 actions/second. However, while doing the 5 actions the agent stops perceiving, thus losing reactiveness. So we cannot increase the number of actions per RC without compromising reactiveness (an important feature of the BDI approach). The value 5 was chosen empirically and can be configured (http://jason.sourceforge.net/doc/tech/concurrency.html#_configuration).

Cheers,

Jomi

> On 30 Jan 2019, at 07:35, Amandine Mayima <am...@la...> wrote:
>
> Hello,
>
> According to what I understood from the documentation and the description of the reasoning cycle, one action/formula is performed per reasoning cycle: "in one reasoning cycle, at most one formula of one of the intentions is executed." But, seeing that the behavior of my program was not going this way, I dug into the code and saw that it could perform up to 5 actions/formulae. Why this choice, compared to the documentation? I couldn't find any mention of it. How was this number chosen?
>
> I dug into the code with the Eclipse debugging tool. See the example below: after each execution of the internal action (applyExecInt()), the intention is put back in C.I by the function updateIntention(), line 754 of the transition system. Then, at line 202 of CentralisedAgArch, i is less than ca for 5 actions because ca=5 (equal to the cycleAct variable), and canSleepAct returns false because C.hasRunningIntention() == true because of updateIntention().
>
> It is the same with environment actions, see the second example. If the action is short, the action feedback comes back before the next evaluation of canSleepAct, so hasFeedbackAction = true, then canSleepAct = false, and the reasoning cycle goes on for 4 more actions.
>
> Best regards,
>
> Amandine
>
> First example with internal actions:
>
> [bob] ---------- START NEW REASONING CYCLE -------------
> [bob] percepts: []
> [bob] Selected event +!start[source(self)]
> [bob] Selected option (@l__3[source(self)] +!start <- .print("1"); .print("2"); .print("3"); .print("4"); .print("5"); .print("6"); .print("7"); .print("8"); .print("9"); .print("10"); .print("11").,{}) for event +!start[source(self)]
> [bob] Selected intention intention 1:
>   +!start[source(self)] <- ... .print("1"); .print("2"); .print("3"); .print("4"); .print("5"); .print("6"); .print("7"); .print("8"); .print("9"); .print("10"); .print("11") / {}
> [bob] 1
> [bob] Selected intention intention 1:
>   +!start[source(self)] <- ... .print("2"); .print("3"); .print("4"); .print("5"); .print("6"); .print("7"); .print("8"); .print("9"); .print("10"); .print("11") / {}
> [bob] 2
> [bob] Selected intention intention 1:
>   +!start[source(self)] <- ... .print("3"); .print("4"); .print("5"); .print("6"); .print("7"); .print("8"); .print("9"); .print("10"); .print("11") / {}
> [bob] 3
> [bob] Selected intention intention 1:
>   +!start[source(self)] <- ... .print("4"); .print("5"); .print("6"); .print("7"); .print("8"); .print("9"); .print("10"); .print("11") / {}
> [bob] 4
> [bob] Selected intention intention 1:
>   +!start[source(self)] <- ... .print("5"); .print("6"); .print("7"); .print("8"); .print("9"); .print("10"); .print("11") / {}
> [bob] 5
> [bob] ---------- START NEW REASONING CYCLE -------------
> [bob] Selected intention intention 1:
>   +!start[source(self)] <- ... .print("6"); .print("7"); .print("8"); .print("9"); .print("10"); .print("11") / {}
> [bob] 6
> [bob] Selected intention intention 1:
>   +!start[source(self)] <- ... .print("7"); .print("8"); .print("9"); .print("10"); .print("11") / {}
> etc.
>
> Second example with internal and environment actions:
>
> [tow] Selected event +p(b)[source(percept)]
> [tow] Selected option (@l__3[source(self)] +p(b) : (p(a) & ~p(a)) <- act; act2; act3.,{}) for event +p(b)[source(percept)]
> [tow] Selected intention intention 1:
>   +p(b)[source(percept)] <- ... act; act2; act3 / {}
> [tow] ---------- START NEW REASONING CYCLE -------------
> [Environment] act
> [tow] Selected event +p(a)[source(percept)]
> [tow] Selected option (@l__1[source(self)] +p(a) <- .print("bouh"); .print("hey"); .print("Noticed p(a).").,{}) for event +p(a)[source(percept)]
> [tow] Selected intention intention 2:
>   +p(a)[source(percept)] <- ... .print("bouh"); .print("hey"); .print("Noticed p(a).") / {}
> [tow] bouh
> [tow] Selected intention intention 1:
>   +p(b)[source(percept)] <- ... act2; act3 / {}
> [tow] Selected intention intention 2:
>   +p(a)[source(percept)] <- ... .print("hey"); .print("Noticed p(a).") / {}
> [Environment] act2
> [tow] hey
> [tow] Selected intention intention 2:
>   +p(a)[source(percept)] <- ... .print("Noticed p(a).") / {}
> [tow] Noticed p(a).
> [tow] Returning from IM l__1[source(self)], te=+p(a)[source(percept)] unif={}
> [tow] Selected intention intention 1:
>   +p(b)[source(percept)] <- ... act3 / {}
> [tow] ---------- START NEW REASONING CYCLE -------------
>
> _______________________________________________
> Jason-users mailing list
> Jas...@li...
> https://lists.sourceforge.net/lists/listinfo/jason-users

--
Jomi Fred Hubner
Department of Automation and Systems Engineering
Federal University of Santa Catarina
PO Box 476, Florianópolis, SC 88040-900 Brazil
http://jomi.das.ufsc.br

--
be good. be kind. be happy.
(Conrad Anker)
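[Editorial note: the throughput/reactiveness tradeoff discussed in this thread can be sketched schematically. This is an illustrative simulation only, not Jason's actual implementation; the names (CycleSketch, run, actionsPerCycle) are invented for the sketch. It models the behaviour described above: one perception step per reasoning cycle, then up to N intention steps before perceiving again.]

```java
// Schematic model of the Jason reasoning-cycle tradeoff discussed in this
// thread. Illustrative only; names and structure are NOT Jason's real code.
public class CycleSketch {

    /**
     * Run until all pending actions are executed; return the number of
     * reasoning cycles (i.e. perception steps) that were needed.
     */
    static int run(int pendingActions, int actionsPerCycle) {
        int cycles = 0;
        while (pendingActions > 0) {
            cycles++;                                       // one perceive step per cycle
            int burst = Math.min(actionsPerCycle, pendingActions);
            pendingActions -= burst;                        // up to N intention steps per cycle
        }
        return cycles;
    }

    public static void main(String[] args) {
        // Original Jason: 1 action per RC -> 10 actions take 10 cycles,
        // so the agent perceives between every pair of actions (max reactiveness).
        System.out.println(run(10, 1));   // 10
        // Default since the change discussed here: up to 5 actions per RC ->
        // the same 10 actions take only 2 cycles (~5x throughput), but the
        // agent perceives only twice, so it may miss up to 4 actions' worth
        // of environment changes before reacting.
        System.out.println(run(10, 5));   // 2
    }
}
```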
From: Amandine M. <am...@la...> - 2019-01-30 09:35:44
Hello,

According to what I understood from the documentation and the description of the reasoning cycle, one action/formula is performed per reasoning cycle: "in one reasoning cycle, at most one formula of one of the intentions is executed." But, seeing that the behavior of my program was not going this way, I dug into the code and saw that it could perform up to 5 actions/formulae. Why this choice, compared to the documentation? I couldn't find any mention of it. How was this number chosen?

I dug into the code with the Eclipse debugging tool. See the example below: after each execution of the internal action (applyExecInt()), the intention is put back in C.I by the function updateIntention(), line 754 of the transition system. Then, at line 202 of CentralisedAgArch, i is less than ca for 5 actions because ca=5 (equal to the cycleAct variable), and canSleepAct returns false because C.hasRunningIntention() == true because of updateIntention().

It is the same with environment actions, see the second example. If the action is short, the action feedback comes back before the next evaluation of canSleepAct, so hasFeedbackAction = true, then canSleepAct = false, and the reasoning cycle goes on for 4 more actions.

Best regards,

Amandine

First example with internal actions:

[bob] ---------- START NEW REASONING CYCLE -------------
[bob] percepts: []
[bob] Selected event +!start[source(self)]
[bob] Selected option (@l__3[source(self)] +!start <- .print("1"); .print("2"); .print("3"); .print("4"); .print("5"); .print("6"); .print("7"); .print("8"); .print("9"); .print("10"); .print("11").,{}) for event +!start[source(self)]
[bob] Selected intention intention 1:
  +!start[source(self)] <- ... .print("1"); .print("2"); .print("3"); .print("4"); .print("5"); .print("6"); .print("7"); .print("8"); .print("9"); .print("10"); .print("11") / {}
[bob] 1
[bob] Selected intention intention 1:
  +!start[source(self)] <- ... .print("2"); .print("3"); .print("4"); .print("5"); .print("6"); .print("7"); .print("8"); .print("9"); .print("10"); .print("11") / {}
[bob] 2
[bob] Selected intention intention 1:
  +!start[source(self)] <- ... .print("3"); .print("4"); .print("5"); .print("6"); .print("7"); .print("8"); .print("9"); .print("10"); .print("11") / {}
[bob] 3
[bob] Selected intention intention 1:
  +!start[source(self)] <- ... .print("4"); .print("5"); .print("6"); .print("7"); .print("8"); .print("9"); .print("10"); .print("11") / {}
[bob] 4
[bob] Selected intention intention 1:
  +!start[source(self)] <- ... .print("5"); .print("6"); .print("7"); .print("8"); .print("9"); .print("10"); .print("11") / {}
[bob] 5
[bob] ---------- START NEW REASONING CYCLE -------------
[bob] Selected intention intention 1:
  +!start[source(self)] <- ... .print("6"); .print("7"); .print("8"); .print("9"); .print("10"); .print("11") / {}
[bob] 6
[bob] Selected intention intention 1:
  +!start[source(self)] <- ... .print("7"); .print("8"); .print("9"); .print("10"); .print("11") / {}
etc.

Second example with internal and environment actions:

[tow] Selected event +p(b)[source(percept)]
[tow] Selected option (@l__3[source(self)] +p(b) : (p(a) & ~p(a)) <- act; act2; act3.,{}) for event +p(b)[source(percept)]
[tow] Selected intention intention 1:
  +p(b)[source(percept)] <- ... act; act2; act3 / {}
[tow] ---------- START NEW REASONING CYCLE -------------
[Environment] act
[tow] Selected event +p(a)[source(percept)]
[tow] Selected option (@l__1[source(self)] +p(a) <- .print("bouh"); .print("hey"); .print("Noticed p(a).").,{}) for event +p(a)[source(percept)]
[tow] Selected intention intention 2:
  +p(a)[source(percept)] <- ... .print("bouh"); .print("hey"); .print("Noticed p(a).") / {}
[tow] bouh
[tow] Selected intention intention 1:
  +p(b)[source(percept)] <- ... act2; act3 / {}
[tow] Selected intention intention 2:
  +p(a)[source(percept)] <- ... .print("hey"); .print("Noticed p(a).") / {}
[Environment] act2
[tow] hey
[tow] Selected intention intention 2:
  +p(a)[source(percept)] <- ... .print("Noticed p(a).") / {}
[tow] Noticed p(a).
[tow] Returning from IM l__1[source(self)], te=+p(a)[source(percept)] unif={}
[tow] Selected intention intention 1:
  +p(b)[source(percept)] <- ... act3 / {}
[tow] ---------- START NEW REASONING CYCLE -------------
From: Jomi F. H. <jom...@gm...> - 2018-11-19 11:23:08
Hi,

we improved the document describing how to run Jason applications with Gradle:

https://github.com/jason-lang/jason/blob/master/doc/tutorials/getting-started/shell-based.adoc#with-gradle

Gradle manages dependencies and versions far better than the other options and does not require the user to download Jason. Eclipse projects can be created with "Import Gradle Project".

Jomi
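[Editorial note: for reference, a build script along these lines is what the tutorial describes. This is a sketch only: the repository URL and the org.jason-lang:jason coordinates are inferred from the maven URL mentioned elsewhere in this list (http://jacamo.sourceforge.net/maven2/org/jason-lang/jason/2.4-SNAPSHOT/), the run task is a hypothetical helper, and the tutorial linked above is authoritative.]

```groovy
// Minimal sketch of a build.gradle for a Jason project (illustrative only).
apply plugin: 'java'

repositories {
    mavenCentral()
    // Jason snapshots have been published here (URL seen in this list):
    maven { url 'http://jacamo.sourceforge.net/maven2' }
}

dependencies {
    // group/artifact inferred from the repository path org/jason-lang/jason
    compile 'org.jason-lang:jason:2.4-SNAPSHOT'
}

// Hypothetical helper task to launch a MAS; RunCentralisedMAS is the entry
// point visible in the stack traces posted on this list. The actual task
// definition in the tutorial may differ.
task runMas(type: JavaExec) {
    main = 'jason.infra.centralised.RunCentralisedMAS'
    args = ['main.mas2j']
    classpath = sourceSets.main.runtimeClasspath
}
```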
From: Jomi H. <jom...@gm...> - 2018-11-18 23:09:24
Hi Stephen,

for sure this new syntax has to be included in the plugin… I likely forgot to inform the plugin developer (Maicon Zatelli, in CC) of these changes. The suggested internal action is included in the GitHub / local maven repository.

Best,

Jomi

ps.: you can always run projects with the new syntax using Gradle or shell commands.

> On 18 Nov 2018, at 19:53, Stephen Cranefield <ste...@ot...> wrote:
>
> I would like to be able to use the version 2.2 syntax extensions ('@#AtomInSingleQuotes' and elif clauses in if-then-else statements) with the Eclipse plugin. However, Eclipse considers these as errors and will not let me run any code that uses them.
>
> I have the latest Jason-2.4 snapshot jar in the Java Build Path in my project properties. Do I need to do anything else to get the Eclipse extension to use an up-to-date parser? I am using Eclipse 2018-09 (4.9.0) and version 1.0.21.201704271508 of the Jasonide_feature.
>
> By the way, it would be useful to have a .version(V) internal action, so users can be sure what version of Jason they are running.
>
> Regards,
> Stephen
>
> _______________________________________________
> Jason-users mailing list
> Jas...@li...
> https://lists.sourceforge.net/lists/listinfo/jason-users
From: Stephen C. <ste...@ot...> - 2018-11-18 21:53:32
I would like to be able to use the version 2.2 syntax extensions ('@#AtomInSingleQuotes' and elif clauses in if-then-else statements) with the Eclipse plugin. However, Eclipse considers these as errors and will not let me run any code that uses them.

I have the latest Jason-2.4 snapshot jar in the Java Build Path in my project properties. Do I need to do anything else to get the Eclipse extension to use an up-to-date parser? I am using Eclipse 2018-09 (4.9.0) and version 1.0.21.201704271508 of the Jasonide_feature.

By the way, it would be useful to have a .version(V) internal action, so users can be sure what version of Jason they are running.

Regards,
Stephen
From: Jomi H. <jom...@gm...> - 2018-11-01 12:25:54
Hi Stephen,

thanks for reporting the problem; as Rafael said, it is a bug! I fixed it and committed the fix to GitHub.

Best,

Jomi

p.s.: in case you want to get it from a maven repository: http://jacamo.sourceforge.net/maven2/org/jason-lang/jason/2.4-SNAPSHOT/

> On 31 Oct 2018, at 00:52, Stephen Cranefield <ste...@ot...> wrote:
>
> The following asl code fails to compile in Jason 2.3:
>
> @foo(1)
> +!g <- .print("Plan ", foo(1), " was called").
>
> @foo(2)
> +!g <- .print("Plan ", foo(2), " was called").
>
> I get this exception:
>
> [test_plan_labels] as2j: error parsing "<...>\test_plan_labels.asl"
> jason.JasonException: There already is a plan with label foo(2)
>     at jason.asSyntax.PlanLibrary.add(PlanLibrary.java:152)
>     at jason.asSyntax.PlanLibrary.add(PlanLibrary.java:128)
>     at jason.asSyntax.parser.as2j.agent(as2j.java:215)
>     at jason.asSemantics.Agent.parseAS(Agent.java:419)
>     at jason.asSemantics.Agent.parseAS(Agent.java:404)
>     at jason.asSemantics.Agent.load(Agent.java:174)
>     at jason.asSemantics.Agent.create(Agent.java:121)
>     at jason.infra.centralised.CentralisedAgArch.createArchs(CentralisedAgArch.java:72)
>     at jason.infra.centralised.RunCentralisedMAS.createAgs(RunCentralisedMAS.java:481)
>     at jason.infra.centralised.RunCentralisedMAS.create(RunCentralisedMAS.java:202)
>     at jason.infra.centralised.RunCentralisedMAS.main(RunCentralisedMAS.java:79)
>
> I read on page 58 of the Jason book that "as a label, it is normal practice to use a predicate of arity 0". However, the earlier part of that sentence says "we can use a complex predicate", and the example shown at the top of that page has this label: @shopping(1)[chance_of_success(0.7),usual_payoff(0.9),source(ag1),expires(autumn)]. The book does not say that when a plan label's arity is greater than 0, there must only be one instance of a plan label with the label's functor, so I presume that the exception above is due to a bug, not the code being in error.
>
> Regards,
> Stephen
>
> _______________________________________________
> Jason-users mailing list
> Jas...@li...
> https://lists.sourceforge.net/lists/listinfo/jason-users

--
Jomi Fred Hubner
Department of Automation and Systems Engineering
Federal University of Santa Catarina
PO Box 476, Florianópolis, SC 88040-900 Brazil
http://jomi.das.ufsc.br

--
be good. be kind. be happy.
(Conrad Anker)
From: Stephen C. <ste...@ot...> - 2018-10-31 03:52:45
The following asl code fails to compile in Jason 2.3:

@foo(1)
+!g <- .print("Plan ", foo(1), " was called").

@foo(2)
+!g <- .print("Plan ", foo(2), " was called").

I get this exception:

[test_plan_labels] as2j: error parsing "<...>\test_plan_labels.asl"
jason.JasonException: There already is a plan with label foo(2)
    at jason.asSyntax.PlanLibrary.add(PlanLibrary.java:152)
    at jason.asSyntax.PlanLibrary.add(PlanLibrary.java:128)
    at jason.asSyntax.parser.as2j.agent(as2j.java:215)
    at jason.asSemantics.Agent.parseAS(Agent.java:419)
    at jason.asSemantics.Agent.parseAS(Agent.java:404)
    at jason.asSemantics.Agent.load(Agent.java:174)
    at jason.asSemantics.Agent.create(Agent.java:121)
    at jason.infra.centralised.CentralisedAgArch.createArchs(CentralisedAgArch.java:72)
    at jason.infra.centralised.RunCentralisedMAS.createAgs(RunCentralisedMAS.java:481)
    at jason.infra.centralised.RunCentralisedMAS.create(RunCentralisedMAS.java:202)
    at jason.infra.centralised.RunCentralisedMAS.main(RunCentralisedMAS.java:79)

I read on page 58 of the Jason book that "as a label, it is normal practice to use a predicate of arity 0". However, the earlier part of that sentence says "we can use a complex predicate", and the example shown at the top of that page has this label: @shopping(1)[chance_of_success(0.7),usual_payoff(0.9),source(ag1),expires(autumn)]. The book does not say that when a plan label's arity is greater than 0, there must only be one instance of a plan label with the label's functor, so I presume that the exception above is due to a bug, not the code being in error.

Regards,
Stephen
From: Jomi H. <jom...@gm...> - 2018-10-26 11:24:13
Regarding “self-improving”, Felipe Meneguzzi has published several papers on that topic, some of them integrating planning and Jason. E.g. http://www.di.unito.it/~baldoni/DALT-2007/papers/paper_3.pdf

> On 25 Oct 2018, at 19:17, Jonatan Knud Lauritsen via Jason-users <jas...@li...> wrote:
>
> Hello!
>
> I am trying to build a self-improving Jason agent (aka a Goedel machine - http://people.idsia.ch/~juergen/goedelmachine.html). I have read the first 4 chapters of the Jason book (and am continuing to read) and, as far as I can understand, at any time AgentSpeak can receive new class names (classes), new object names (instances), and new functions (predicates and relations) coming from external percepts.
>
> What I don't understand clearly, and what I wanted to ask, is this:
>
> 1) Can an AgentSpeak agent create a new class/object/function name internally? E.g. can it have a construction +new_name(some_part_of_belief_base), where new_name is some function that returns the new class/object/function name in string format? E.g. maybe the agent has an extensive knowledge base about judicial knowledge and this function new_name creates the new class name "servitude" (possibly together with a relevant definition). There is a special field in artificial intelligence called "concept generation" - can AgentSpeak generate such concepts internally? I am not sure, but I have some intuition that AgentSpeak allows only boolean-valued functions/predicates/relations - is that so? If so, then my new_name function cannot be made inside AgentSpeak. But maybe I can implement my concept-generation function externally, receive new class/object/function names as external/perceived events, and integrate them into the belief base? Is that possible?
>
> 2) I have not found in the Jason book any notion of updating the plan library. Can an AgentSpeak agent infer a new plan or receive a new plan from an external source? This is a very important question, because plans are nothing more/less than the skills of the agent, and the agent should be able to update its skills if it is to become self-learning, self-improving.
>
> I guess that 1)+2) are sufficient for implementing self-improving agents. So - are 1) and 2) possible?
>
> And additionally - is there already some work on self-improving, self-modifying, self-evolving agents?
>
> p.s. I am trying to follow https://global.oup.com/academic/product/anatomy-of-the-mind-9780199794553?cc=us&lang=en& - the book that forms the basis of the Clarion cognitive architecture. Sometimes I wonder why all the cognitive architectures (Clarion, OpenCog, ACT-R, Soar) invest so many resources in building specialised software and why they do not simply write rules for an already available BDI agent platform.
>
> Regards,
> Jonatan
>
> _______________________________________________
> Jason-users mailing list
> Jas...@li...
> https://lists.sourceforge.net/lists/listinfo/jason-users

--
Jomi Fred Hubner
Department of Automation and Systems Engineering
Federal University of Santa Catarina
PO Box 476, Florianópolis, SC 88040-900 Brazil
http://jomi.das.ufsc.br

--
be good. be kind. be happy.
(Conrad Anker)
From: Jomi H. <jom...@gm...> - 2018-10-26 11:18:42
Dear Jonatan,

a Jason agent can fully control its mind :-) and influence others. By mind I mean beliefs, intentions, and plans. To add plans, simply use the + operator:

    +x(bal) <- + { +!g : ok <- .print(ok) }.

The demo available at demos/communication shows how plans can be exchanged by communication.

HTH,

Jomi

> On 26 Oct 2018, at 07:57, Jonatan Knud Lauritsen via Jason-users <jas...@li...> wrote:
>
> Rafael already gave a good answer to my similar question several months ago: https://sourceforge.net/p/jason/mailman/jason-users/?viewmonth=201806
>
> I just wanted to make 2 things clear:
>
> 1) Can AgentSpeak generate a plan inside its own program? I guess that plans can be received only from an external reasoner. I am hesitating about how to distribute the load - how deeply to involve AgentSpeak in the reasoning process - do it in AgentSpeak or in an external service.
>
> 2) How, practically, can I receive a new plan and add it to the plan library? I guess that I can use a new variable new_plan_variable to read a plan from an external service using the command perceive(new_plan_variable). What should I do next to add this new plan to the library? Is there a command add_new_plan(new_plan_variable)?
>
> Jonatan
>
> ----- Forwarded message -----
> From: Jonatan Knud Lauritsen <jon...@ya...>
> To: Jason-users <jas...@li...>
> Cc: Jonatan Knud Lauritsen <jon...@ya...>
> Sent: Friday, 26 October 2018, 01:17:45 EEST
> Subject: Is it possible to update (internally or externally) plan library during the agent life?
>
> Hello!
>
> I am trying to build a self-improving Jason agent (aka a Goedel machine - http://people.idsia.ch/~juergen/goedelmachine.html). I have read the first 4 chapters of the Jason book (and am continuing to read) and, as far as I can understand, at any time AgentSpeak can receive new class names (classes), new object names (instances), and new functions (predicates and relations) coming from external percepts.
>
> What I don't understand clearly, and what I wanted to ask, is this:
>
> 1) Can an AgentSpeak agent create a new class/object/function name internally? E.g. can it have a construction +new_name(some_part_of_belief_base), where new_name is some function that returns the new class/object/function name in string format? E.g. maybe the agent has an extensive knowledge base about judicial knowledge and this function new_name creates the new class name "servitude" (possibly together with a relevant definition). There is a special field in artificial intelligence called "concept generation" - can AgentSpeak generate such concepts internally? I am not sure, but I have some intuition that AgentSpeak allows only boolean-valued functions/predicates/relations - is that so? If so, then my new_name function cannot be made inside AgentSpeak. But maybe I can implement my concept-generation function externally, receive new class/object/function names as external/perceived events, and integrate them into the belief base? Is that possible?
>
> 2) I have not found in the Jason book any notion of updating the plan library. Can an AgentSpeak agent infer a new plan or receive a new plan from an external source? This is a very important question, because plans are nothing more/less than the skills of the agent, and the agent should be able to update its skills if it is to become self-learning, self-improving.
>
> I guess that 1)+2) are sufficient for implementing self-improving agents. So - are 1) and 2) possible?
>
> And additionally - is there already some work on self-improving, self-modifying, self-evolving agents?
>
> p.s. I am trying to follow https://global.oup.com/academic/product/anatomy-of-the-mind-9780199794553?cc=us&lang=en& - the book that forms the basis of the Clarion cognitive architecture. Sometimes I wonder why all the cognitive architectures (Clarion, OpenCog, ACT-R, Soar) invest so many resources in building specialised software and why they do not simply write rules for an already available BDI agent platform.
>
> Regards,
> Jonatan
>
> _______________________________________________
> Jason-users mailing list
> Jas...@li...
> https://lists.sourceforge.net/lists/listinfo/jason-users

--
Jomi Fred Hubner
Department of Automation and Systems Engineering
Federal University of Santa Catarina
PO Box 476, Florianópolis, SC 88040-900 Brazil
http://jomi.das.ufsc.br

--
be good. be kind. be happy.
(Conrad Anker)
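[Editorial note: to make the plan-addition mechanism concrete, a small AgentSpeak sketch follows. The belief and plan names here (learned_plan, skill) are invented for illustration; the + operator on a plan term is the mechanism Jomi shows above, and .add_plan is Jason's standard internal action for adding a plan, which - to the best of our knowledge - also accepts a plan given as a string, e.g. one received via communication or perception.]

```asl
// Adding a plan directly with the + operator, as in Jomi's example above:
+x(bal) <- + { +!g : ok <- .print(ok) }.

// Adding a plan received from outside (illustrative names): when a belief
// learned_plan(P) arrives, where P is a plan as a string, add it to the
// plan library with the standard internal action .add_plan:
+learned_plan(P) <- .add_plan(P).

// Another agent could then send, for instance:
//   .send(bob, tell, learned_plan("+!skill <- .print(\"new skill acquired\")."))
// and from that point on bob can react to the new triggering event +!skill.
```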
From: Jonatan K. L. <jon...@ya...> - 2018-10-26 11:08:15
Rafael already gave a good answer to my similar question several months ago: https://sourceforge.net/p/jason/mailman/jason-users/?viewmonth=201806

I just wanted to make 2 things clear:

1) Can AgentSpeak generate a plan inside its own program? I guess that plans can be received only from an external reasoner. I am hesitating about how to distribute the load - how deeply to involve AgentSpeak in the reasoning process - do it in AgentSpeak or in an external service.

2) How, practically, can I receive a new plan and add it to the plan library? I guess that I can use a new variable new_plan_variable to read a plan from an external service using the command perceive(new_plan_variable). What should I do next to add this new plan to the library? Is there a command add_new_plan(new_plan_variable)?

Jonatan

----- Forwarded message -----
From: Jonatan Knud Lauritsen <jon...@ya...>
To: Jason-users <jas...@li...>
Cc: Jonatan Knud Lauritsen <jon...@ya...>
Sent: Friday, 26 October 2018, 01:17:45 EEST
Subject: Is it possible to update (internally or externally) plan library during the agent life?

Hello!

I am trying to build a self-improving Jason agent (aka a Goedel machine - http://people.idsia.ch/~juergen/goedelmachine.html). I have read the first 4 chapters of the Jason book (and am continuing to read) and, as far as I can understand, at any time AgentSpeak can receive new class names (classes), new object names (instances), and new functions (predicates and relations) coming from external percepts.

What I don't understand clearly, and what I wanted to ask, is this:

1) Can an AgentSpeak agent create a new class/object/function name internally? E.g. can it have a construction +new_name(some_part_of_belief_base), where new_name is some function that returns the new class/object/function name in string format? E.g. maybe the agent has an extensive knowledge base about judicial knowledge and this function new_name creates the new class name "servitude" (possibly together with a relevant definition). There is a special field in artificial intelligence called "concept generation" - can AgentSpeak generate such concepts internally? I am not sure, but I have some intuition that AgentSpeak allows only boolean-valued functions/predicates/relations - is that so? If so, then my new_name function cannot be made inside AgentSpeak. But maybe I can implement my concept-generation function externally, receive new class/object/function names as external/perceived events, and integrate them into the belief base? Is that possible?

2) I have not found in the Jason book any notion of updating the plan library. Can an AgentSpeak agent infer a new plan or receive a new plan from an external source? This is a very important question, because plans are nothing more/less than the skills of the agent, and the agent should be able to update its skills if it is to become self-learning, self-improving.

I guess that 1)+2) are sufficient for implementing self-improving agents. So - are 1) and 2) possible?

And additionally - is there already some work on self-improving, self-modifying, self-evolving agents?

p.s. I am trying to follow https://global.oup.com/academic/product/anatomy-of-the-mind-9780199794553?cc=us&lang=en& - the book that forms the basis of the Clarion cognitive architecture. Sometimes I wonder why all the cognitive architectures (Clarion, OpenCog, ACT-R, Soar) invest so many resources in building specialised software and why they do not simply write rules for an already available BDI agent platform.

Regards,
Jonatan
From: Jonatan K. L. <jon...@ya...> - 2018-10-25 22:38:12
Hello!

I am trying to build a self-improving Jason agent (aka a Goedel machine - http://people.idsia.ch/~juergen/goedelmachine.html). I have read the first 4 chapters of the Jason book (and am continuing to read) and, as far as I can understand, at any time AgentSpeak can receive new class names (classes), new object names (instances), and new functions (predicates and relations) coming from external percepts.

What I don't understand clearly, and what I wanted to ask, is this:

1) Can an AgentSpeak agent create a new class/object/function name internally? E.g. can it have a construction +new_name(some_part_of_belief_base), where new_name is some function that returns the new class/object/function name in string format? E.g. maybe the agent has an extensive knowledge base about judicial knowledge and this function new_name creates the new class name "servitude" (possibly together with a relevant definition). There is a special field in artificial intelligence called "concept generation" - can AgentSpeak generate such concepts internally? I am not sure, but I have some intuition that AgentSpeak allows only boolean-valued functions/predicates/relations - is that so? If so, then my new_name function cannot be made inside AgentSpeak. But maybe I can implement my concept-generation function externally, receive new class/object/function names as external/perceived events, and integrate them into the belief base? Is that possible?

2) I have not found in the Jason book any notion of updating the plan library. Can an AgentSpeak agent infer a new plan or receive a new plan from an external source? This is a very important question, because plans are nothing more/less than the skills of the agent, and the agent should be able to update its skills if it is to become self-learning, self-improving.

I guess that 1)+2) are sufficient for implementing self-improving agents. So - are 1) and 2) possible?

And additionally - is there already some work on self-improving, self-modifying, self-evolving agents?

p.s. I am trying to follow https://global.oup.com/academic/product/anatomy-of-the-mind-9780199794553?cc=us&lang=en& - the book that forms the basis of the Clarion cognitive architecture. Sometimes I wonder why all the cognitive architectures (Clarion, OpenCog, ACT-R, Soar) invest so many resources in building specialised software and why they do not simply write rules for an already available BDI agent platform.

Regards,
Jonatan