From: Lukasz Stafiniak <lukstafi@gm...> - 2013-03-26 22:05:40

On Tue, Mar 26, 2013 at 11:00 PM, Lukasz Kaiser <lukaszkaiser@...> wrote:
> Hi,
>
> finally I compiled and tested the long-overdue Toss release 0.9.
> It is on sourceforge now, so feel free to download and check
> if everything is ok :).
>
> Best!
> Lukasz

Bravo!
From: Lukasz Kaiser <lukaszkaiser@gm...> - 2013-03-26 22:01:35

Hi,

finally I compiled and tested the long-overdue Toss release 0.9. It is on sourceforge now, so feel free to download and check if everything is ok :).

Best!
Lukasz
From: Lukasz Stafiniak <lukstafi@gm...> - 2013-03-25 09:27:40

A paper on a learning system using traditional RL applied to multiple video games. http://cs229.stanford.edu/proj2012/JohnsonRobertsFisherLearningToPlay2DVideoGames.pdf 
From: Lukasz Stafiniak <lukstafi@gm...> - 2013-03-25 09:24:44

Potential Toss developers might benefit from this functional programming course. Note though that it was not prepared with Toss in mind. http://www.ii.uni.wroc.pl/~lukstafi/pmwiki/index.php?n=Functional.Functional
From: Lukasz Stafiniak <lukstafi@gm...> - 2013-03-01 11:22:04

Sorry about the noise, unintentional ;)

On Fri, Mar 1, 2013 at 12:05 PM, Google+ <noreply2d0bf1c8@...> wrote:
From: Lukasz Stafiniak <lukstafi@gm...> - 2013-02-11 20:45:22

Hi,

I'd like to remind you of current and upcoming courses relevant to Toss:

(1) https://www.coursera.org/course/aiplan "Artificial Intelligence Planning" covers "Toss with single player" scenarios. It has already got interesting with backward planning and plan-space planning.

(2) https://www.coursera.org/course/ggp "General Game Playing" starts in April.

(3) https://www.coursera.org/course/machlearning "Machine Learning" covers the relevant topic "Learning sets of rules and logic programs" in week 3. I thought it would mention "probabilistic logic programming" (since it is by the author of Markov Logic Networks), but that is absent from its topics. (No date yet.)
From: Lukasz Kaiser <lukaszkaiser@gm...> - 2013-01-21 22:54:47

Hi.

We have all been thinking a lot recently about a more quantitative Toss, and I am writing this mail to sum up what has been said so far and to open a thread for further thought and discussion.

The problem starts with the fact that the world is not always discrete. We want to do vision, so we need various quantities. These can be probabilities, colors, movements and other things. Of course we have supported dynamics and quantities in Toss almost from the start, but it is neither efficient enough nor mature enough.

Let's start with vision. Lukasz Stafiniak suggested using an algorithm called Condensation, see e.g. the attached paper. This algorithm is based on a population of samples for which a stochastic model of dynamics is known. One uses the model to choose where to look and resample, speaking very crudely. On a very high level, Condensation looks to me quite similar to the CMA-ES algorithm for minimizing functions, the one I want to use for selection of continuous moves. A good paper that also describes the difference to particle swarm optimization is attached, but the key point is again the same: we have a number of samples, this time all Gaussian, which move to minimize the function. Ok, there is no dynamics model and no adjusting to observations here, but the idea is the same: a number of samples used to represent a distribution that all together give a nice model.

Now, to track the dynamics in Condensation one needs to have some model of the dynamics already. If we don't have one but want to create it, we could use methods from another paper Lukasz Stafiniak recently linked. They show how multi-modal symbolic regression (MMSR, attached as well) can be used to derive a model from the training data. We would need to track and derive at the same time, but, except for the added computational cost, which might be big, this seems to be doable, at least in principle.

But MMSR internally again uses evolutionary optimization methods; in some sense it is another layer on top, like a recursive call. This is another thing that made me think: maybe we need a probabilistic quantitative model of relational structures, and then we could build it all on top of that in a clear, systematic way?

About adding quantities: there is also an internal need in Toss to do this in a better way. Let me just recall that we already have 12 cases in our main formula type, and another 9 in real_expr. At some point each of those things was needed, but it is starting to be hard to use, and I cannot tell why And and Or are built over a list but Plus and Times are only binary. And we still lack min, max, sin, cos, things that will be needed for dynamics. So we will need to rework this part, and I think we should first think and discuss; maybe we will find a better universal model for all this stuff.

One thing I see now is that we could assume *every* relation in the structure to be quantitative, always. It is easy to include true and false, e.g. as true = +infty, false = -infty. Then and/or become min/max, and real-valued functions will no longer be needed in the structure type: they will simply be predicates. This also unifies formula and real_expr into one type, and I think that would directly make quite a few things easier in our code. But I don't want to make a small restructuring only; let's try to think a bit deeper and find the right model, for quantities like colors or positions, Boolean values, and probabilities too.

One thing to watch in this context is Stuart Russell's lecture about probabilistic relational structures. It will be streamed live from Paris, tomorrow 6pm Paris time, see here: http://colloquium.lip6.fr/ It is hard to tell what the lecture will be about exactly, but attached is a paper on BLOG, a probabilistic logic over relational structures developed by Russell. I suggest reading it before the lecture :).

Best!
Lukasz

P.S. As to time planning: I think we will do a release quite soon, before we move to implement any of the ideas discussed in this thread.
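The unification suggested above (every relation real-valued, true = +infty, false = -infty, and = min, or = max) could be sketched roughly as below. The type and all names are hypothetical illustrations, not the actual Toss Formula/real_expr types:

```ocaml
(* Hypothetical unified quantitative expression type: Boolean connectives
   become min/max over the extended reals, so formulas and real
   expressions share one representation. Illustrative sketch only. *)
type qexpr =
  | Const of float                 (* true = infinity, false = neg_infinity *)
  | Pred of string * string list   (* every relation is real-valued *)
  | Min of qexpr list              (* conjunction *)
  | Max of qexpr list              (* disjunction *)
  | Plus of qexpr list             (* n-ary, like And/Or, unlike current Plus *)
  | Times of qexpr list

(* Evaluate an expression given a real-valued interpretation of predicates. *)
let rec eval interp = function
  | Const c -> c
  | Pred (r, args) -> interp r args
  | Min es -> List.fold_left min infinity (List.map (eval interp) es)
  | Max es -> List.fold_left max neg_infinity (List.map (eval interp) es)
  | Plus es -> List.fold_left ( +. ) 0.0 (List.map (eval interp) es)
  | Times es -> List.fold_left ( *. ) 1.0 (List.map (eval interp) es)
```

Note that an empty Min evaluates to infinity (true) and an empty Max to neg_infinity (false), matching the usual conventions for empty conjunctions and disjunctions.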
From: Lukasz Stafiniak <lukstafi@gm...> - 2013-01-18 09:52:15

On Fri, Jan 18, 2013 at 12:57 AM, Lukasz Kaiser <lukaszkaiser@...> wrote:
> Dear Thomas,
>
>> In your mini OCaml tutorial, you explain how to compile a project using
>> ocamllex, menhir and js_of_ocaml using ocamlbuild, but your explanations make
>> the assumption that the files needed for js_of_ocaml are in
>> /opt/local/lib/ocaml/site-lib. Unfortunately I installed js_of_ocaml using
>> OPAM and don't know where it is installed (and don't want to, because it may
>> change if I switch to another version of ocaml/js_of_ocaml).
>
> indeed - we make some assumptions because it is very hard to
> anticipate all possible ways of installing js_of_ocaml. If you are
> under a POSIX system (e.g. Linux or Mac OS) then maybe it is
> enough if you do "locate js_of_ocaml" in the terminal.

Łukasz: This is not a principled solution...

Thomas: Bear in mind that the tutorial is not intended to describe the best OCaml practices, but rather to invite people to learn Toss and perhaps contribute to it. We have considered better support for packaging but have not had the resources to pursue it yet.

Regards.
From: Lukasz Kaiser <lukaszkaiser@gm...> - 2013-01-17 23:58:17

Dear Thomas,

> In your mini OCaml tutorial, you explain how to compile a project using
> ocamllex, menhir and js_of_ocaml using ocamlbuild, but your explanations make
> the assumption that the files needed for js_of_ocaml are in
> /opt/local/lib/ocaml/site-lib. Unfortunately I installed js_of_ocaml using
> OPAM and don't know where it is installed (and don't want to, because it may
> change if I switch to another version of ocaml/js_of_ocaml).

indeed - we make some assumptions because it is very hard to anticipate all possible ways of installing js_of_ocaml. If you are under a POSIX system (e.g. Linux or Mac OS) then maybe it is enough if you do "locate js_of_ocaml" in the terminal. (You must have locate installed; it often comes by default, but not always.) This should show you the path to js_of_ocaml on your system. A shorter option with locate is e.g. "locate pa_js.cmo"; you can also search your system for the pa_js.cmo file in many other ways.

I hope this helps you find your js_of_ocaml. I'd be happy to add a more general solution to the tutorial, but the OPAM website does not seem to give any direct hints about where the OPAM packages are placed, so I don't know how to do this for now. I still hope you will locate your packages without problems; do not hesitate to write if you need anything.

Best regards!
Lukasz Kaiser
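For what it is worth, OPAM and findlib can usually answer this question directly; the commands below are a sketch assuming ocamlfind and OPAM are installed (neither was part of the original advice):

```shell
# Ask findlib where the js_of_ocaml package is installed:
ocamlfind query js_of_ocaml

# Or ask OPAM for its library directory (packages live in subdirectories):
opam config var lib
```

With ocamlbuild, passing -use-ocamlfind and tagging targets with package(js_of_ocaml) in the _tags file avoids hard-coding any installation path at all.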
From: Thomas HUET <thomasynchrotron@gm...> - 2013-01-17 20:06:05

Hello,

In your mini OCaml tutorial, you explain how to compile a project using ocamllex, menhir and js_of_ocaml using ocamlbuild, but your explanations make the assumption that the files needed for js_of_ocaml are in /opt/local/lib/ocaml/site-lib. Unfortunately I installed js_of_ocaml using OPAM and don't know where it is installed (and don't want to, because it may change if I switch to another version of ocaml/js_of_ocaml). How can I compile your mini tutorial with my installation of js_of_ocaml?

Thomas HUET
From: Lukasz Stafiniak <lukstafi@gm...> - 2013-01-08 23:52:23

"A hybrid dynamical system is a mathematical model suitable for describing an extensive spectrum of multimodal, time-series behaviors, ranging from bouncing balls to air traffic controllers. This paper describes multi-modal symbolic regression (MMSR): a learning algorithm to construct nonlinear symbolic representations of discrete dynamical systems with continuous mappings from unlabeled time-series data. MMSR consists of two sub-algorithms: clustered symbolic regression, a method to simultaneously identify distinct behaviors while formulating their mathematical expressions, and transition modeling, an algorithm to infer symbolic inequalities that describe binary classification boundaries. These sub-algorithms are combined to infer hybrid dynamical systems as a collection of apt mathematical expressions. MMSR is evaluated on a collection of four synthetic data sets and outperforms other multimodal machine learning approaches in both accuracy and interpretability, even in the presence of noise. Furthermore, the versatility of MMSR is demonstrated by identifying and inferring classical expressions of transistor modes from recorded measurements."

(I haven't looked into it yet.) http://jmlr.csail.mit.edu/papers/v13/ly12a.html
From: Lukasz Stafiniak <lukstafi@gm...> - 2013-01-03 10:42:26

I suggest we have a look at:
http://www.robots.ox.ac.uk/~misard/condensation.html
http://www.robots.ox.ac.uk/~misard/abstracts/thesis.html
http://link.springer.com/article/10.1023%2FA%3A1008078328650

I got there from http://en.wikipedia.org/wiki/Video_tracking

Happy New Year.

"The problem of tracking curves in dense visual clutter is challenging. Kalman filtering is inadequate because it is based on Gaussian densities which, being unimodal, cannot represent simultaneous alternative hypotheses. The Condensation algorithm uses "factored sampling", previously applied to the interpretation of static images, in which the probability distribution of possible interpretations is represented by a randomly generated set. Condensation uses learned dynamical models, together with visual observations, to propagate the random set over time. The result is highly robust tracking of agile motion. Notwithstanding the use of stochastic methods, the algorithm runs in near real-time."
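To make the factored-sampling idea concrete, here is a minimal particle-filter step in the spirit of Condensation, for a 1-D state. Everything here (the names, the toy dynamics and likelihood) is an illustrative assumption, not code from the paper:

```ocaml
(* One Condensation-style step: propagate samples through a stochastic
   dynamics model, weight them by the observation likelihood, then
   resample in proportion to weight ("factored sampling"). Toy sketch. *)

let pi = 4.0 *. atan 1.0

(* Crude standard-normal sampler (Box-Muller transform). *)
let gauss () =
  let u1 = Random.float 1.0 +. 1e-12 and u2 = Random.float 1.0 in
  sqrt (-2.0 *. log u1) *. cos (2.0 *. pi *. u2)

let step ~dynamics ~likelihood particles =
  (* 1. predict: move each sample under the stochastic dynamics *)
  let moved = List.map dynamics particles in
  (* 2. weight each sample by how well it explains the observation *)
  let weighted = List.map (fun p -> (p, likelihood p)) moved in
  let total = List.fold_left (fun s (_, w) -> s +. w) 0.0 weighted in
  (* 3. resample: draw a new population proportionally to the weights *)
  let pick () =
    let r = Random.float total in
    let rec go acc = function
      | [ (p, _) ] -> p
      | (p, w) :: rest -> if acc +. w >= r then p else go (acc +. w) rest
      | [] -> assert false
    in
    go 0.0 weighted
  in
  List.map (fun _ -> pick ()) particles
```

Iterating step with a learned dynamics and an image-based likelihood is, very roughly, what Condensation does; the real algorithm tracks curves parameterized by shape-space vectors rather than a scalar state.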
From: Lukasz Kaiser <lukaszkaiser@gm...> - 2012-11-16 14:47:52

At italkproject.org you can read more about the ITALK project; they use the iCub robot to test their hypotheses. What I found interesting for Toss is this paper: http://www.tech.plym.ac.uk/SoCCE/ITALK/documents/ITALKroadmap2010.pdf especially Section VI B and C (p. 20-21, also attached). It discusses the relationship between action, social games and language, which I compare to the problem we have of integrating Term with the rest of Toss.

Lukasz
From: Lukasz Stafiniak <lukstafi@gm...> - 2012-11-13 22:45:03

http://jveness.info/publications/veness_phd_thesis_final.pdf

"ABSTRACT: This thesis is split into two independent parts. The first is an investigation of some practical aspects of Marcus Hutter's Universal Artificial Intelligence theory [29]. The main contributions are to show how a very general agent can be built and analysed using the mathematical tools of this theory. Before the work presented in this thesis, it was an open question as to whether this theory was of any relevance to reinforcement learning practitioners. This work suggests that it is indeed relevant and worthy of future investigation. The second part of this thesis looks at self-play learning in two-player, deterministic, adversarial turn-based games. The main contribution is the introduction of a new technique for training the weights of a heuristic evaluation function from data collected by classical game tree search algorithms. This method is shown to outperform previous self-play training routines based on Temporal Difference learning when applied to the game of Chess. In particular, the main highlight was using this technique to construct a Chess program that learnt to play master-level Chess by tuning a set of initially random weights from self-play games."

It is developed from the paper we looked at quite some time ago.
From: Lukasz Stafiniak <lukstafi@gm...> - 2012-11-05 07:48:06

http://cs.jhu.edu/~jason/papers/#filardoeisner2012iclp "Arithmetic circuits arise in the context of weighted logic programming languages, such as Datalog with aggregation, or Dyna. A weighted logic program defines a generalized arithmetic circuit—the weighted version of a proof forest, with nodes having arbitrary rather than boolean values. In this paper, we focus on finite circuits. We present a flexible algorithm for efficiently *querying* node values as they change under *updates* to the circuit's inputs. Unlike traditional algorithms, ours is agnostic about which nodes are tabled (materialized), and can vary smoothly between the traditional strategies of forward and backward chaining. Our algorithm is designed to admit future generalizations, including cyclic and infinite circuits and propagation of delta updates." 
From: Lukasz Stafiniak <lukstafi@gm...> - 2012-10-30 22:30:42

Hi,

I recommend updating Tuareg to the newest revision: http://forge.ocamlcore.org/scm/?group_id=43
From: Lukasz Kaiser <lukaszkaiser@gm...> - 2012-09-21 23:20:16

> Nullary (with regard to subterms) terms are translated as predicates
> over a single element, unary terms as relations between the element
> corresponding to the term and the element corresponding to the
> subterm, arity-N terms into arity-(N+1) relations. All supertypes are
> translated as predicates / relations whose first argument is the same
> element as the element generated for the whole term. That's the idea
> behind translating terms to structures: a new element for each subterm
> unless it is shared, but no new elements for supertypes. If you
> recall, in the "formal" notation we represent terms by "f (supertypes
> ; subterms)", and supertypes are of this form as well, so they can
> introduce more subterms.

You are absolutely right - I somehow misunderstood the previous mail very badly. Do you think we could get back formulas from the structures generated by terms relatively easily? That would be a good motivation, at least for me, to finally think about implementing this translation :). But I am still not entirely sure how this will help with formulas (even though now I think I start to see the point).

Best!
Lukasz
From: Lukasz Stafiniak <lukstafi@gm...> - 2012-09-21 23:14:47

On Sat, Sep 22, 2012 at 12:54 AM, Lukasz Kaiser <lukaszkaiser@...> wrote:
>
> I'm afraid that I do not fully understand - why are these quantified variants
> of translating to structures, and why is that easier? I am surely in favour
> of starting with the easier thing!

Nullary (with regard to subterms) terms are translated as predicates over a single element, unary terms as relations between the element corresponding to the term and the element corresponding to the subterm, arity-N terms into arity-(N+1) relations. All supertypes are translated as predicates / relations whose first argument is the same element as the element generated for the whole term. That's the idea behind translating terms to structures: a new element for each subterm unless it is shared, but no new elements for supertypes. If you recall, in the "formal" notation we represent terms by "f (supertypes ; subterms)", and supertypes are of this form as well, so they can introduce more subterms.
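The scheme described here could be sketched as follows. The types and names are hypothetical, and the sketch deliberately ignores supertypes and shared subterms, both of which the real translation must handle:

```ocaml
(* Toy translation of terms to relational structures: each subterm gets a
   fresh element, and an arity-N functor f yields an (N+1)-ary fact whose
   first component is the element for the whole term. Supertypes and
   sharing are left out of this sketch. *)
type term = T of string * term list

let translate t =
  let counter = ref 0 in
  let fresh () = incr counter; !counter in
  let facts = ref [] in
  let rec go (T (f, subterms)) =
    let e = fresh () in                 (* new element for this subterm *)
    let sub_es = List.map go subterms in
    facts := (f, e :: sub_es) :: !facts;
    e
  in
  ignore (go t);
  List.rev !facts
```

For example, translate (T ("succ", [T ("zero", [])])) yields [("zero", [2]); ("succ", [1; 2])]: element 1 for the whole term, element 2 for the subterm, and one fact per functor.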
From: Lukasz Kaiser <lukaszkaiser@gm...> - 2012-09-21 22:55:03

> These are quantified variants of translating terms to structures... So
> it would be natural to share code between translating to formulas and
> translating to structures. And start with the latter since it's simpler.

I'm afraid that I do not fully understand - why are these quantified variants of translating to structures, and why is that easier? I am surely in favour of starting with the easier thing!

Lukasz
From: Lukasz Stafiniak <lukstafi@gm...> - 2012-09-21 22:34:13

On Sat, Sep 22, 2012 at 12:19 AM, Lukasz Kaiser <lukaszkaiser@...> wrote:
> (2) typed formulas; I mean support for multi-typed structures, where the
> universe is divided into many subsets of different types and formulas are
> implicitly only about the subset; the guards in the formula could then be
> automatically derived from the types, so e.g. you could say
> "ex x parent(x, y) and ex z drives_bicycle(y, z)" and automatically it would
> become "ex x (Person(x) and parent(x, y) and ex z (Object(z) and ...))".
> (3) finally, support for functional formulas and constants! It is very disturbing
> to write every time "ex zero (is_zero(zero) and ex one (succ(zero, one) and
> succ(one, x)))" when you want to say "x = succ(succ(zero))". We could try
> to put this inside Formula and Structure.ml, but I think a lot can be done
> just by preprocessing. Still, it is too complex for FormulaParser I think,
> and it could be a good test for some features of Term.

These are quantified variants of translating terms to structures... So it would be natural to share code between translating to formulas and translating to structures. And start with the latter, since it's simpler.
From: Lukasz Kaiser <lukaszkaiser@gm...> - 2012-09-21 22:20:36

Hi.

> I see no (short term) value in doing that without a "vision". I think
> I will do a tests/Polish.trs or some other test to expose the
> "hierarchical" / inheritance aspect of new terms. I finally have an
> idea of which "supertypes" to display, and how :) it is related to
> exposing GLB for use from the trs level. The default for printing
> terms will be to display those superclasses that are more specific
> than the corresponding declared supertypes. So I'm only thinking about
> finishing the Speagram work...

I think a Polish trs is a very nice idea, independent of everything else :). But I am really convinced that doing formulas in Term is also important. And yes, it is not a "vision", and it has nothing visionary in itself. But it is integration, and visions without integration become irrelevant very fast.

Still, I do have a few points which I think could be done with Term when basic integration with Toss is ready. These are very preliminary suggestions, so do not treat them too seriously, but maybe at least one is important.

(1) a language interface to Toss, finally (using SGRS and Term).

(2) typed formulas; I mean support for multi-typed structures, where the universe is divided into many subsets of different types and formulas are implicitly only about the subset; the guards in the formula could then be automatically derived from the types, so e.g. you could say "ex x parent(x, y) and ex z drives_bicycle(y, z)" and automatically it would become "ex x (Person(x) and parent(x, y) and ex z (Object(z) and ...))".

(3) finally, support for functional formulas and constants! It is very disturbing to write every time "ex zero (is_zero(zero) and ex one (succ(zero, one) and succ(one, x)))" when you want to say "x = succ(succ(zero))". We could try to put this inside Formula and Structure.ml, but I think a lot can be done just by preprocessing. Still, it is too complex for FormulaParser I think, and it could be a good test for some features of Term.

I think I could also see a few more use cases just for Formulas, but the main point, I hope you can see it, is to start really *using* the new Term features. It will surely reveal some problems, and maybe we will learn other nice ways of doing some things, but I think Term is ready to start being really used :).

Best!
Lukasz
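Point (3) is essentially a flattening transformation, and could be prototyped as a preprocessing pass along these lines. The function and the relational encoding (the result as the last argument of each relation, zero(v) in place of is_zero(v)) are illustrative assumptions:

```ocaml
(* Toy preprocessing for "x = t" where t is a functional term: introduce a
   fresh variable for each subterm and emit one relational conjunct per
   functor, with the result as the last argument. Illustrative only. *)
type fterm = F of string * fterm list

let flatten_eq x t =
  let counter = ref 0 in
  let fresh () = incr counter; Printf.sprintf "v%d" !counter in
  (* [go target t] returns conjuncts forcing [target] to equal [t]. *)
  let rec go target (F (f, args)) =
    let arg_vars = List.map (fun _ -> fresh ()) args in
    let defs = List.concat (List.map2 go arg_vars args) in
    defs @ [ Printf.sprintf "%s(%s)" f (String.concat ", " (arg_vars @ [target])) ]
  in
  let conjuncts = go x t in
  let bound = List.init !counter (fun i -> Printf.sprintf "v%d" (i + 1)) in
  (bound, conjuncts)
```

For example, flatten_eq "x" (F ("succ", [F ("succ", [F ("zero", [])])])) yields the bound variables ["v1"; "v2"] and the conjuncts ["zero(v2)"; "succ(v2, v1)"; "succ(v1, x)"], i.e. "ex v1, v2 (zero(v2) and succ(v2, v1) and succ(v1, x))".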
From: Lukasz Stafiniak <lukstafi@gm...> - 2012-09-21 19:20:29

On Fri, Sep 21, 2012 at 2:40 PM, Lukasz Kaiser <lukaszkaiser@...> wrote:
>
> This looks nice - it will finish the term part without sharing, right?
> But the next step I was talking about does not require any vision
> at all. I meant exactly replacing the current FormulaParser with Term,
> i.e. just writing a trs specification for formulas, a function to get them
> back in ocaml, and actually replacing the parser with Term.
> What do you think about that?

I see no (short-term) value in doing that without a "vision". I think I will do a tests/Polish.trs or some other test to expose the "hierarchical" / inheritance aspect of the new terms. I finally have an idea of which "supertypes" to display, and how :) it is related to exposing GLB for use from the trs level. The default for printing terms will be to display those superclasses that are more specific than the corresponding declared supertypes. So I'm only thinking about finishing the Speagram work...
From: Lukasz Kaiser <lukaszkaiser@gm...> - 2012-09-21 12:40:53

Hi.

> I don't have a vision of how Toss specifications should look, so
> I don't see short-term benefits. I'll finish basing rewriting on
> ISA-matching (it only needs a revision of associating rules with
> functors), and expose Greatest Lower Bound so that the implemented
> machinery is available for use; leaving only the explicit sharing and
> term<->structure translations not implemented.

This looks nice - it will finish the term part without sharing, right? But the next step I was talking about does not require any vision at all. I meant exactly replacing the current FormulaParser with Term, i.e. just writing a trs specification for formulas, a function to get them back in ocaml, and actually replacing the parser with Term. What do you think about that?

Lukasz
From: Lukasz Stafiniak <lukstafi@gm...> - 2012-09-20 19:06:55

On Mon, Sep 17, 2012 at 6:30 PM, Lukasz Kaiser <lukaszkaiser@...> wrote:
>
> a hard look at where we are with Toss, thinking about a release,
> and it seems to me that the new things (Term and Diagram) have
> one big problem - lack of integration. Of course, sharing will help
> integrate terms with structures a lot, but I think we should start
> by using Term as a parser for formulas. What do you think?
> Formulas are in fact the biggest part of any toss file, so if we
> move with formulas to term-based parsing, that will be a lot.
> And it could finally allow some type-checking inside formulas!

I don't have a vision of how Toss specifications should look, so I don't see short-term benefits. I'll finish basing rewriting on ISA-matching (it only needs a revision of associating rules with functors), and expose Greatest Lower Bound so that the implemented machinery is available for use; leaving only the explicit sharing and term<->structure translations not implemented.

Good luck!