| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2011 | | | (3) | (1) | (2) | | | | | | | |
| 2012 | | | | (1) | | | | (2) | | | | |
| 2013 | | | | | (1) | | | | | | | |
| 2014 | | | | | (3) | | | | | | | |
| 2015 | | | | | (1) | | | | | (1) | | |
From: Igor S. <ish...@ya...> - 2015-10-27 15:17:54
Hello!

I'm only just getting familiar with MMLF, and so far I like the library very much, although there are some issues with it:

1) mmlf/framework/experiment.py:89: keepOberservers=[registerToObservables] (pip version only)

2) A StateSpace (created via oldStyleSpace) with string dimensionValues doesn't work:

    Traceback (most recent call last):
      File "/usr/local/bin/run_mmlf", line 62, in <module>
        startSingleWorld(configPath=options.config, episodes=float(options.episodes))
      File "/usr/local/bin/run_mmlf", line 36, in startSingleWorld
        world.run(episodes)
      File "/home/ishalyminov/.local/lib/python2.7/site-packages/mmlf/framework/world.py", line 210, in run
        self.iServ.run(numOfEpisodes)
      File "/home/ishalyminov/.local/lib/python2.7/site-packages/mmlf/framework/interaction_server.py", line 127, in run
        self.loopIteration()
      File "/home/ishalyminov/.local/lib/python2.7/site-packages/mmlf/framework/interaction_server.py", line 85, in loopIteration
        self.world.agentPollMethod(agentCommandObject)
      File "/home/ishalyminov/.local/lib/python2.7/site-packages/mmlf/framework/world.py", line 190, in agentPollMethod
        result = method(**argDict)
      File "/home/ishalyminov/.local/lib/python2.7/site-packages/mmlf/agents/td_agent.py", line 118, in setState
        super(TDAgent, self).setState(state)
      File "/home/ishalyminov/.local/lib/python2.7/site-packages/mmlf/agents/agent_base.py", line 131, in setState
        self.state = self.stateSpace.parseStateDict(state)
      File "/home/ishalyminov/.local/lib/python2.7/site-packages/mmlf/framework/spaces.py", line 213, in parseStateDict
        map(lambda name: self[name], sorted(stateDict.keys())))
      File "/home/ishalyminov/.local/lib/python2.7/site-packages/mmlf/framework/state.py", line 61, in __new__
        obj = numpy.asarray(inputArray, dtype=numpy.float64).view(subtype)
      File "/usr/lib/python2.7/dist-packages/numpy/core/numeric.py", line 235, in asarray
        return array(a, dtype, copy=False, order=order)
    ValueError: could not convert string to float:

--
Best Regards,
Igor
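The failure in (2) can be reproduced outside MMLF: the last frames of the traceback show that mmlf/framework/state.py views every state as a numpy float64 array, which cannot hold string dimension values. A minimal standalone reproduction (illustrative sketch, not MMLF code):

```python
import numpy

# Same coercion as mmlf/framework/state.py line 61 in the traceback above:
# any string-valued dimension fails the float64 conversion.
numpy.asarray(["left"], dtype=numpy.float64)
# raises ValueError: could not convert string to float
```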
From: Davide C. <dav...@gm...> - 2015-05-22 18:49:00
Greetings,

I was following the tutorial about "Writing an Agent", so I created a new Python file (prova_agent.py) in mmlf/agents by copying random_agent.py, replacing all occurrences of RandomAgent with ProvaAgent, and replacing the AgentName with "RoundRobin12" (I just added 12 to the original name). After that I saved the file and ran the MMLF GUI with ./mmlf_gui. The problem is that in the explorer, in the Agent box, I see only the default agents, so I cannot test the new one. What am I doing wrong?

Davide Carosini
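For reference, a minimal sketch of the kind of module this renaming produces. The AgentBase import is inferred from other messages in this archive; the AgentClass/AgentName module-level attributes follow the pattern the report implies, and the class body details are assumptions, not the verified random_agent.py layout:

```python
# prova_agent.py -- sketch of the renamed copy described above (details assumed).
from mmlf.agents.agent_base import AgentBase  # base class, per agent_base.py above

class ProvaAgent(AgentBase):
    """Verbatim copy of RandomAgent with the class name replaced."""

    DEFAULT_CONFIG_DICT = {}  # hypothetical: the copied defaults would stay unchanged

    def getAction(self):
        # ... body unchanged from random_agent.py ...
        return super(ProvaAgent, self).getAction()

# Module-level registration names, as described in the report:
AgentClass = ProvaAgent
AgentName = "RoundRobin12"
```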
From: Jan H. M. <jh...@in...> - 2014-05-21 16:07:53
Hi,

the scikits.ann package is an optional dependency. Most of the MMLF should work without it; only some function approximators require it. Otherwise you can ignore the warning.

On 21 May 2014 17:11:02 CEST, AbdElMoniem Bayoumi <abd...@im...> wrote:

> Thanks, it worked, but what about installing the scikits.ann package? Is there any alternative for it? It gives me the following errors when I use "sudo easy_install scikits.ann":
>
> fatal error: stdexcept: No such file or directory
>
> error: Setup script exited with error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -Iscikits/ann -I/usr/local/include -I/usr/lib/python2.7/dist-packages/numpy/core/include -I/usr/include/python2.7 -c build/src.linux-x86_64-2.7/scikits/ann/ANN_wrap.c -o build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/scikits/ann/ANN_wrap.o" failed with exit status 1

--
This message was sent from my Android mobile phone with K-9 Mail.
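The usual pattern for such an optional dependency is an import guard; a minimal illustrative sketch (this is not MMLF's actual import code):

```python
# Optional-dependency guard, sketched for scikits.ann (illustrative only).
try:
    from scikits import ann  # approximate-nearest-neighbour package
    HAVE_ANN = True
except ImportError:
    ann = None
    HAVE_ANN = False  # function approximators that need it can check this and warn
```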
From: AbdElMoniem B. <abd...@im...> - 2014-05-21 15:11:30
Thanks, it worked. But what about installing the scikits.ann package? Is there any alternative for it? It gives me the following errors when I use "sudo easy_install scikits.ann":

    fatal error: stdexcept: No such file or directory

    error: Setup script exited with error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -Iscikits/ann -I/usr/local/include -I/usr/lib/python2.7/dist-packages/numpy/core/include -I/usr/include/python2.7 -c build/src.linux-x86_64-2.7/scikits/ann/ANN_wrap.c -o build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/scikits/ann/ANN_wrap.o" failed with exit status 1

AbdElMoniem M. Bayoumi

On Wed, May 21, 2014 at 4:40 PM, Jan Hendrik Metzen <jh...@in...> wrote:

> Hi AbdElMoniem,
> more viewers should become available once you have initialized a world (by pressing "Init World").
>
> HTH,
> Jan
From: Jan H. M. <jh...@in...> - 2014-05-21 14:58:50
Hi AbdElMoniem,

more viewers should become available once you have initialized a world (by pressing "Init World").

HTH,
Jan

> Hi,
>
> I am new to using the Maja Framework. I have installed it and it is working, but it does not provide me any of the viewers mentioned in the documentation, except the float stream viewer.
>
> I don't know if there is anything missing to be installed or not, but there is one point to mention concerning the scikits.ann package: I couldn't install it. However, I don't think that this package would affect the GUI viewers.
>
> I hope that anyone can help, and thanks in advance.
>
> Best Regards,
>
> AbdElMoniem M. Bayoumi

--
Jan Hendrik Metzen, Dr.rer.nat.
Team Leader of Team "Sustained Learning"

Universität Bremen und DFKI GmbH, Robotics Innovation Center
FB 3 - Mathematik und Informatik
AG Robotik
Robert-Hooke-Straße 1
28359 Bremen, Germany

Tel.: +49 421 178 45-4123
Switchboard: +49 421 178 45-6611
Fax: +49 421 178 45-4150
E-Mail: jh...@in...
Homepage: http://www.informatik.uni-bremen.de/~jhm/

Further information: http://www.informatik.uni-bremen.de/robotik
From: Issam <iss...@gm...> - 2013-05-04 11:33:42
Hi MMLF,

Is it possible to load the mountain car world from the library itself (e.g. mmlf.loadWorld(mountaincar)) without using the command prompt? The problem is that I'm using 64-bit Anaconda, so the MMLF installer failed to find a Python distribution on my computer; however, the MMLF library itself integrates with Python.

Thank you.

Best regards,
--Issam
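A sketch of what such a programmatic run might look like, pieced together from calls visible elsewhere in this archive: world.run(episodes) appears in the run_mmlf traceback above, while the loadWorld keyword name and the "mountain_car" identifier are unverified assumptions based on the call proposed in this question.

```python
# Hypothetical sketch only -- the keyword name and world identifier are
# assumptions, not verified against the MMLF API.
import mmlf

world = mmlf.loadWorld(worldName="mountain_car")
world.run(100)  # world.run(episodes), as seen in the run_mmlf traceback above
```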
From: Jan H. M. <jh...@in...> - 2012-08-02 06:27:23
Hello Ralph,

thanks for your interest in the MMLF. Your application sounds quite interesting.

I would not consider this behaviour a bug; the semantics of self.reward are as follows: the agent obtains reward via giveReward(reward) and accumulates it in the self.reward attribute. self.reward is initially 0, and it is reset to 0 whenever AgentBase.getAction() or AgentBase.nextEpisodeStarted() is called (i.e. whenever the agent has to choose an action or an episode terminates). The reason for these semantics is that in some settings the agent might get several rewards before it can act/learn the next time.

To get the behaviour you want, you can simply make the following changes:

* In td_agent.py, replace "self.reward += reward" in TDAgent.giveReward() with "self.reward = reward".
* In agent_base.py, replace the statements "self.reward = 0" in AgentBase.__init__, AgentBase.getAction, and AgentBase.nextEpisodeStarted with "self.reward = None".

This should do the job. You then have to make sure that the _train method is never called while self.reward is None, since it expects a numeric reward and would otherwise crash.

Best regards,
Jan

--
Dipl. Inf. Jan Hendrik Metzen
Universität Bremen
FB 3 - Mathematik und Informatik
AG Robotik
Robert-Hooke-Straße 5
28359 Bremen, Germany

Visiting address (building Unicom 1):
Mary-Somerville-Str. 9
28359 Bremen, Germany

Phone: +49 (0)421 178 45-4123
Fax: +49 (0)421 178 45-4150
E-Mail: jh...@in...
Homepage: http://www.informatik.uni-bremen.de/~jhm/

Further information: http://www.informatik.uni-bremen.de/robotik
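The changes described above, condensed into a minimal self-contained sketch; these are not the real MMLF classes, the method names merely mirror AgentBase/TDAgent for illustration:

```python
# Sketch of the modified reward handling (illustrative, not actual MMLF code).
class AgentSketch(object):
    def __init__(self):
        self.reward = None            # was: self.reward = 0

    def giveReward(self, reward):
        self.reward = reward          # was: self.reward += reward

    def getAction(self):
        # Guard: _train expects a numeric reward, so skip it when None.
        if self.reward is not None:
            self._train(self.reward)
        self.reward = None            # was: self.reward = 0
        return "someAction"

    def _train(self, reward):
        pass  # the TD update would happen here
```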
From: Yuan, J. <jia...@jp...> - 2012-08-02 00:07:15
Dear MMLF-support team,

We are trying to design some reinforcement learning models for our equity trading algorithms. We have been doing some tests on MMLF, and it seems pretty convenient so far.

However, we noticed that a None reward set in the environment seems to be automatically transformed into reward=0 in the agent. More specifically, in the evaluateAction(self, actionObject) function we sometimes return the resultDict with reward set to None. However, if I set a breakpoint inside getAction(self) in td_agent.py and check the value of self.reward, it turns out that self.reward has been transformed into 0 automatically.

The reason we need to set the reward to None is that our action space depends on the state. Since evaluateAction may sometimes pass us an actionObject that is outside the current state's action space, we have to reject these steps and do not wish the (state, action) pairs to appear in the eligibility trace.

If possible, can you help me check this?

Thanks a lot,

Ralph

________________________________

Jiangchuan Yuan | Linear Quantitative Research | Electronic Client Solutions | Global Equities | J.P. Morgan | 383 Madison Avenue, New York, NY, 10179 | T: +1 (212) 622-5624 | jia...@jp... | jpmorgan.com
From: Robert F. <rmf...@gm...> - 2012-04-09 16:13:22
Please can you add me to the mailing list?
From: Jan H. M. <jh...@in...> - 2011-05-09 11:10:33
Hello trex,

sorry for the late reply (I was on vacation last week).

The "defaultStateDimDiscretizations" option is actually only relevant for continuous state spaces. For discrete ones, just use the TabularStorage function approximator (not CMAC). If you are dealing with a continuous state space and use e.g. the CMAC function approximator, "defaultStateDimDiscretizations" determines how many tiles per dimension are used; i.e., a uniform tiling consisting of defaultStateDimDiscretizations^n tiles is used (where n is the number of dimensions). See also: http://www.incompleteideas.net/sutton/book/ebook/node88.html#SECTION04232000000000000000

The whole "defaultStateDimDiscretizations" design is admittedly not very clean, and we plan to redesign it in the future.

Regards,
Jan

--
Dipl. Inf. Jan Hendrik Metzen
Universität Bremen
FB 3 - Mathematik und Informatik
AG Robotik
Robert-Hooke-Straße 5
28359 Bremen, Germany

Visiting address (building Unicom 1):
Mary-Somerville-Str. 9
28359 Bremen, Germany

Phone: +49 (0)421 178 45-4123
Fax: +49 (0)421 178 45-4150
E-Mail: jh...@in...

Further information: http://www.informatik.uni-bremen.de/robotik
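To make the tiling arithmetic concrete: with d = defaultStateDimDiscretizations and n state dimensions, the uniform grid has d**n tiles, and a state maps to a tile as in the following sketch (illustrative only, not MMLF's CMAC implementation):

```python
import numpy as np

def tile_coordinates(state, lows, highs, d):
    """Map a continuous state to integer tile coordinates in a d**n grid."""
    state, lows, highs = map(np.asarray, (state, lows, highs))
    ratios = (state - lows) / (highs - lows)               # normalize to [0, 1]
    coords = np.minimum((ratios * d).astype(int), d - 1)   # clamp the upper edge
    return tuple(coords)

# Example: 2-d state space, d = 10 -> 10**2 = 100 tiles.
print(tile_coordinates([0.3, -0.7], lows=[0, -1], highs=[1, 1], d=10))  # (3, 1)
```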
From: trex 2. <27...@gm...> - 2011-05-01 21:21:15
Hi,

I was just wondering what the option "defaultStateDimDiscretizations" does. The documentation says it is "The default “resolution” the agent uses for every dimension", but that's not very clear to me. Suppose I have a discrete 1-d state space of 10 items and I'm using a CMAC function approximator: what should I set this value to?

Thanks,
T
From: 279 t. <27...@gm...> - 2011-04-03 06:51:08
I was wondering if there is some way to store the graphs generated by the various viewers in files. For example, if I'm running a FloatStream viewer, would it be possible to store the final graph in a file after I stop the world? I see that MMLF by default stores a lot of the results in PDF files, especially the learnt policy, traces etc., but I didn't come across any graphs in the results it stored.

Thanks,
T
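A possible workaround sketch, under the assumption that the viewer draws with matplotlib; this is not verified against the FloatStream viewer's implementation, and the file name is an illustrative placeholder:

```python
# Hypothetical workaround: if the viewer plots via matplotlib, the current
# figure can be written to disk before the GUI is closed.
import matplotlib.pyplot as plt

fig = plt.gcf()                        # figure the viewer last drew into (assumption)
fig.savefig("float_stream_final.pdf")  # placeholder file name
```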
From: <Jan...@gm...> - 2011-03-24 08:45:37
From: Jan H. M. <jh...@in...> - 2011-03-24 08:43:12
--
Dipl. Inf. Jan Hendrik Metzen
Universität Bremen
FB 3 - Mathematik und Informatik
AG Robotik
Robert-Hooke-Straße 5
28359 Bremen, Germany

Visiting address (building Unicom 1):
Mary-Somerville-Str. 9
28359 Bremen, Germany

Phone: +49 (0)421 178 45-4123
Fax: +49 (0)421 178 45-4150
E-Mail: jh...@in...

Further information: http://www.informatik.uni-bremen.de/robotik