toss-devel Mailing List for Toss (Page 7)
Status: Beta
Brought to you by: lukaszkaiser
From: Lukasz S. <luk...@gm...> - 2012-06-07 15:54:28
|
*Term rewriting as type inference*
'Person' is type. 'Name' is type. 'John' is name. Variable Kate is person. Variable Johnie is person. Johnie is John. 'Son of' person is person. Son of Kate is Johnie. What name is son of Kate? ;-) |
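The little dialogue above can be mimicked with a toy rewrite engine. This is a hypothetical Python sketch, not Speagram's actual machinery; the dictionaries and helper names are assumptions made purely for illustration:

```python
# Toy model of "term rewriting as type inference" (illustrative only):
# "isa" links tokens to their types, "son of" is a function given by a
# ground rewrite rule, and "Johnie is John" is an equality/reduction.
isa = {}                     # token -> type, one "isa" link per token

def declare(token, ty):
    isa[token] = ty

declare("John", "Name")
declare("Kate", "Person")
declare("Johnie", "Person")

son_of = {"Kate": "Johnie"}          # 'Son of Kate is Johnie.'
reduces_to = {"Johnie": "John"}      # 'Johnie is John.'

def normalize(term):
    """Apply reduction rules until a normal form is reached."""
    while term in reduces_to:
        term = reduces_to[term]
    return term

# "What name is son of Kate?" -- rewrite, then read off the inferred type.
answer = normalize(son_of["Kate"])
print(answer, isa[answer])   # John Name
```

The answer "John" is found by rewriting, and the question's type ("name") is answered by following the token's "isa" link — which is the sense in which rewriting doubles as type inference here.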
|
From: Lukasz S. <luk...@gm...> - 2012-06-07 15:41:01
|
*Concepts and tokens*
We can have a "lifting" operator that replaces token nodes in parsed text with their direct "isa" concept nodes. (Another possibility is a selective operator that lifts only one token, that of the following term, something like "concept of" -- but it seems to make less sense.)
Boy is young. -- Introduces a boy and a young age and an "isa" between them.
Generally boy is young. -- "Generally" is a "lifting" operator: it adds "young" to the superclasses of "boy".
We should also have a binding operator. Recall that introducing a variable creates a token and gives it a name. It turns out that this is equivalent to binding: giving a name to the token of the term that follows.
Variable Johnie is boy. -- where "boy" is primarily a type (i.e. class, i.e. category).
Variable Johnie is son of Kate. -- where "son of" is primarily a function.
So perhaps "Let ... be ..." syntax would be lighter (it's shorter only by two key presses). I write "primarily" above because the only difference is that "son of" has reduction rules (rewrite rules) and "boy" doesn't. |
|
From: Lukasz S. <luk...@gm...> - 2012-06-07 15:04:06
|
On Wed, Jun 6, 2012 at 11:20 PM, Lukasz Kaiser <luk...@gm...> wrote:
> The problem I wanted to address before is the following: most (if not all)
> our disambiguation problems come from associativity and precedence of
> operators. Arbitrary resulting-term-based disambiguation function, which
> we have now, can of course handle it - but looks like an overkill and is far
> too inefficient. So I simply think that we should handle these basic things
> (precedence and associativity at least) already during parsing. Now: whether
> we do it by constraints or adding a subterm-based mechanism, I don't know,
> but we should do it before generating a lot of useless resulting terms.
> Anyway, this looks like a quite simple technical issue, I hope you agree.

Yes, it is easy with subtyping, and perhaps even easier to understand. Basically, you use the standard EBNF grammar symbols as intermediate types (classes, categories) in the type hierarchy. Since the hierarchy is transitive and allows multiple inheritance, there should be no problem with adding nodes in between for special purposes. (Although that might require some support...)

'Numeral' is a number. ...
'Sum' is a number.
'Below sum' is a number. ...
Below sum '+' below sum is a sum.
Sum '+' below sum is a sum.
Sum '-' below sum is a sum. ...
'Product' is a below sum. ... |
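The graded nonterminals ('sum', 'below sum', 'product') fix precedence and left-associativity during parsing itself, which is the point of the mail. The same idea can be sketched as a recursive-descent parser; the function names and tokenizer below are illustrative assumptions, not Toss or Speagram code:

```python
# Sketch: encoding precedence levels as "grammar types".  A sum may only
# have a "below sum" on its right, so '+'/'-' are left-associative and
# bind looser than '*' -- exactly one parse comes out, no post-hoc
# disambiguation needed.
import re

def tokenize(s):
    return re.findall(r"\d+|[+*-]", s)

def parse_sum(toks, i):
    # sum := sum ('+'|'-') below_sum | below_sum   (left-associative loop)
    left, i = parse_product(toks, i)
    while i < len(toks) and toks[i] in "+-":
        op = toks[i]
        right, i = parse_product(toks, i + 1)
        left = (op, left, right)
    return left, i

def parse_product(toks, i):
    # 'Product' is a below sum: '*' binds tighter than '+' and '-'
    left = int(toks[i]); i += 1
    while i < len(toks) and toks[i] == "*":
        left = ("*", left, int(toks[i + 1]))
        i += 2
    return left, i

tree, _ = parse_sum(tokenize("1-2-3*4"), 0)
print(tree)   # ('-', ('-', 1, 2), ('*', 3, 4))
```

The single resulting tree shows both effects at once: "1-2" groups before the second "-", and "3*4" is parsed as a unit below the subtraction.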
|
From: Lukasz S. <luk...@gm...> - 2012-06-07 10:45:04
|
On Thu, Jun 7, 2012 at 12:43 PM, Lukasz Stafiniak <luk...@gm...> wrote:
> everywhere. The relations that originally were over AB, now are over A
> and "override" the same relations over A.

Erratum: "override" the same relations over B. |
|
From: Lukasz S. <luk...@gm...> - 2012-06-07 10:43:29
|
*Tokens vs. classes and default inheritance*
(Suggested by "Word Grammar".)
My initial thinking was:
Syntax definitions introduce classes with multiple inheritance and
parsing introduces token nodes plus relations over tokens. A class is
both a concept node and a relation over this node and token nodes
(from the definition) -- the relation provides attributes specific to
the class. Inheritance are binary "isa" relations between the concept
node and some token nodes (when we say "concept A inherits from
concept B" the graph actually looks like "concept node A isa token
node isa concept node B").
A variable definition introduces a token node and an "isa" relation
between this node and some other token node (which of course "isa" the
concept node for the concept the variable stands for).
The intermediate tokens are both natural: they are introduced by
parsing, and necessary: they allow for one concept to "isa" another
concept in a specific way.
But "Word Grammar" only has tokens "at the bottom", intermediate
tokens don't make sense pretty much "by definition". This is easy to
fix: we flatten "concept node A isa token node AB isa concept node B"
into "concept node A isa concept node B" and replace AB by A
everywhere. The relations that originally were over AB, now are over A
and "override" the same relations over A.
E.g. A="Nil", B="List", List-length(B, "Natural number"),
List-length(AB, "0") is replaced by List-length(A, "0").
*Parsing*
Parsing a variable finds the node that variable definition introduced.
Parsing anything else introduces a token node ("parsed token") that
"isa" the concept node of the corresponding syntax definition, and a
tuple over this node and the token nodes of parsed constituents for
the relation introduced by the syntax definition. It also introduces
the concrete inferred type for the token: it could introduce an "isa"
from the "parsed token" to the token standing for the inferred type,
but since we don't like "isa token", we introduce these relations
directly over the "parsed token": they "override" the corresponding
relations over the syntax definition node.
Do you have any thoughts, especially questions for clarification?
|
|
From: Lukasz S. <luk...@gm...> - 2012-06-06 23:10:28
|
*Translation*
A term "f(t1,...,tn):ty" is translated as the sum of the translations of "t1", ..., "tn", plus a tuple f(e0,e1,...,en) (of relation "f"), where elements e1 up to en were introduced to represent the terms "t1"..."tn" and e0 is introduced to represent f(t1,...,tn), plus the translation of "ty", plus a tuple isa(e0,et) of relation "isa" (representing both "of type" and "subtype"), where et was introduced to represent "ty". So a list "Cons" would be a triple, not a pair as currently in examples/Parsing.toss. We might also have a "projection" operation that takes all tuples containing elements connected via "isa" to a given element (a given type) and contracts the hypergraph into a structure without those elements, thus generating a clean structure where, for example, the above "Cons" would be a pair (binary) connecting the elements in the structure directly. |
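The translation can be sketched directly from that description. This is a hypothetical illustration; the element-allocation scheme and term encoding are assumptions, not the Toss code:

```python
# Sketch: translate a type-carrying term into hypergraph tuples.  Each
# subterm gets a fresh element; each application f(t1,...,tn) yields a
# tuple f(e0,e1,...,en); each type annotation yields isa(e0, et).
from itertools import count

fresh = count()

def translate(term, tuples):
    """term is (functor, [args], type) or an atom string; returns its element."""
    if isinstance(term, str):                 # atom: just allocate an element
        e = next(fresh); tuples.append((term, e))
        return e
    f, args, ty = term
    arg_elems = [translate(a, tuples) for a in args]
    e0 = next(fresh)
    tuples.append((f, e0, *arg_elems))        # the relation-f tuple
    et = translate(ty, tuples)
    tuples.append(("isa", e0, et))            # "of type" == "subtype"
    return e0

tuples = []
# Cons(1, Nil) : List  -- so Cons becomes a *triple*, as the mail says
translate(("Cons", ["1", "Nil"], "List"), tuples)
cons = [t for t in tuples if t[0] == "Cons"][0]
print(len(cons) - 1)   # 3
```

The "Cons" tuple relates three elements (the application node plus its two arguments), matching the triple-not-pair remark; the projection operation would contract away the element reached via "isa".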
|
From: Lukasz S. <luk...@gm...> - 2012-06-06 22:49:55
|
On Wed, Jun 6, 2012 at 11:24 PM, Lukasz Kaiser <luk...@gm...> wrote:
> So, what do you think? Shall we try to go this way, more or less?

If it's not an overkill... I don't know, it doesn't seem important. |
|
From: Lukasz K. <luk...@gm...> - 2012-06-06 21:25:44
|
Hi,

I'll write about the first part first, and subtyping after that. But let me start with a very short remark about probably the simplest thing.

> The logic of constraints can take anything into account as long as it
> stays monotonic (i.e. adding more constraints does not turn
> unsatisfiable problems satisfiable). There is no problem with adding
> subterm-based disambiguation to constraints (of course
> resulting-term-based disambiguation can be only applied at the end...)

The problem I wanted to address before is the following: most (if not all) of our disambiguation problems come from the associativity and precedence of operators. The arbitrary resulting-term-based disambiguation function, which we have now, can of course handle it - but it looks like overkill and is far too inefficient. So I simply think that we should handle these basic things (precedence and associativity at least) already during parsing. Now: whether we do it by constraints or by adding a subterm-based mechanism, I don't know, but we should do it before generating a lot of useless resulting terms. Anyway, this looks like a quite simple technical issue; I hope you agree.

> I don't mean that the parsing-and-terms part should be made redundant
> wrt. hypergraph-rewriting, just that we should cast it into a
> common language (conceptually, not yet in the code) to enable more
> interactions later in the development. Parsing, unification, subtyping
> go beyond what is natural for a hypergraph rewrite rule. Rather than
> just terms feeding into graphs we can get a richer framework.
> [...]
> Use of Toss games is orthogonal to this Speagram issue. [...]
> We could add Toss real expression based soft constraints in the
> future. The general direction is towards "integration" (here:
> introduce "common interpretation") and "preserving results of
> computational effort" (here: attach the inferred types to parsed
> terms). To perform whole games to learn just a single bit (or a couple
> of bits) is against the latter directive...

I agree that parsing and unification, not to mention subtyping and constraints, are far beyond basic structure rewriting. But I also think that we should try to discuss from the start what we expect from this "integration", i.e. what interactions we imagine enabling later and what we expect from the common interpretation. Here are my ideas.

First of all, I ask myself: why do we want to have a common conceptual base for the structure rewriting part and for the terms, types and parsing part? I think the main reason, aside from conceptual beauty, is to allow future users to somehow comprehend and get to learn and like the whole system. There are a lot of concepts to learn in Toss, and it is very important, I think, to have a path from the basic ones towards the more difficult ones - and for that a common conceptual ground is crucial. So, when getting to learn Toss, someone - I assume - will first get to know structures and basic structure rewriting. And, starting from that, we will try to explain terms, types and parsing, which will be the more advanced interface to Toss. Do you more or less agree with this view of things?

Now, following the above, I think that a simple Toss parsing game - something with only a few examples and only basic types, like the one I'm trying to do recently - will be useful at this stage, i.e. for someone who has understood structure rewriting but has not started parsing and types yet. Playing this parsing game should give a feeling for how the parsing really works, what the basic types are, and when ambiguity or no-parse situations happen. If we manage to do it right, it could also be used as a definition (or a part of the definition) in the documentation. And also as an error reporting tool: if an error (e.g. an ambiguity) occurs during real parsing (i.e. the one done in code, in ParseArc, not the game), we could show the different ambiguous parses as different plays in the game interface of the parsing game. I am not sure if this will be a good error reporting help - but this is something that I'd expect from a successful integration and parsing-with-game interaction.

So, what do you think? Shall we try to go this way, more or less?

Lukasz |
|
From: Lukasz S. <luk...@gm...> - 2012-06-06 16:18:17
|
On Wed, Jun 6, 2012 at 5:24 PM, Lukasz Stafiniak <luk...@gm...> wrote:
> We should follow closely [...] the "Word
> Grammar" theory on the other (more Toss-ish) side.

As for "Word Grammar": not closely, "in spirit" rather than "to the letter". |
|
From: Lukasz S. <luk...@gm...> - 2012-06-06 15:59:15
|
On Wed, Jun 6, 2012 at 5:24 PM, Lukasz Stafiniak <luk...@gm...> wrote:
> *Subtyping*
>
> We should follow closely the HPSG
> formalism (e.g. ALE) on one (Speagram-ish) side, and the "Word
> Grammar" theory on the other (more Toss-ish) side.

The consequence is that the "of type" relation is the same as the "subtype" relation, i.e. "te : ty" is the same as "te <: ty", where "subty <: superty" denotes subtyping (my first mail in this thread might have suggested otherwise). |
|
From: Lukasz S. <luk...@gm...> - 2012-06-06 15:25:00
|
*Subtyping*
The mistake we made in the middle Speagram was to use structural subtyping. Structural subtyping only makes sense in programming languages, where the structure of types follows the structure of programs. In knowledge representation, nominal subtyping makes much more sense. We should use class-based inheritance (as in popular object-oriented languages). We should follow closely the HPSG formalism (e.g. ALE) on one (Speagram-ish) side, and the "Word Grammar" theory on the other (more Toss-ish) side.
*No argument names*
HPSG uses these nice boxes with capital-letter attributes inside. We shouldn't use attribute names; we should just stick to the positional system. A type is a term, the head of the term is the "class", and the arguments are the "attributes" or "direct members" of the class (a class also has indirect members -- the direct members of its superclasses). But we should have syntax to access positions in terms, and means to give names to positions and construct positions out of these names (so when someone "builds a class", positions of the arguments can be given names). Actually, since an old-fashioned attribute/member name is a position on the argument list of a "class" (a term head, i.e. a functor), I'm saying we should give names to *paths*, while allowing a path to match when the classes (functors) appearing in a term are subclasses of those in the path (this agrees with HPSG/ALE, which uses named attributes). |
|
From: Lukasz S. <luk...@gm...> - 2012-06-06 14:50:54
|
On Sun, Jun 3, 2012 at 2:57 PM, Lukasz Kaiser <luk...@gm...> wrote:
> I have absolutely no problem with that - in fact I even wanted to
> replace term rewriting with structure rewriting in code, but it seems
> to be more hassle than it is worth. But I *did* change old speagram
> semantics and now require all rewriting rules to be left-linear, with
> equality being a special function now. In this way term rewriting is very
> similar to structure rewriting - up to copying subterms on the right side.

I don't mean that the parsing-and-terms part should be made redundant wrt. hypergraph-rewriting, just that we should cast it into a common language (conceptually, not yet in the code) to enable more interactions later in the development. Parsing, unification and subtyping go beyond what is natural for a hypergraph rewrite rule. Rather than just terms feeding into graphs, we can get a richer framework.

> What you are writing here suggests one interesting idea to me:
> how about making an experiment and implementing type arc parsing
> (i.e. speagram-style parsing) as a Toss game? The only harder part
> would be type unification, and for basic types it is not that hard, right?
> Maybe I'll take some time and have a shot at this - it looks really very
> interesting to me. Especially when you think about one weak point of
> the current term system (i.e. old speagram): that disambiguation happens
> only after full parse.

Just to be clear to other readers: in the old Speagram, all disambiguation happened after the full parse, but in the "middle" Speagram, only resulting-term-based disambiguation (which you talk about) happened after the full parse, and type-based disambiguation happened online. (I'm having a look at ParseArc right now, which comes from the "old" Speagram, and there actually are "optimizations" that add type checking during parsing but discard the results, so type inference is performed multiple times for the same terms.)

> This both makes the disambiguation rules hard to
> understand, and makes the whole system far slower than it should be.
> If we have it as a game - and different parses correspond to different
> possible moves during the play - then maybe we can use payoffs or some
> other mechanism to exclude stupid parses before generating all of them?

Use of Toss games is orthogonal to this Speagram issue. In the "middle" Speagram (which I'll be porting to the Toss codebase, but probably rewriting, since the code got a bit "involved"), partial parses are excluded as soon as they make the constraints unsatisfiable. The logic of constraints can take anything into account as long as it stays monotonic (i.e. adding more constraints does not turn unsatisfiable problems satisfiable). There is no problem with adding subterm-based disambiguation to constraints (of course resulting-term-based disambiguation can only be applied at the end...) We could add Toss real-expression-based soft constraints in the future. The general direction is towards "integration" (here: introduce a "common interpretation") and "preserving results of computational effort" (here: attach the inferred types to parsed terms). To perform whole games to learn just a single bit (or a couple of bits) is against the latter directive... |
|
From: Lukasz K. <luk...@gm...> - 2012-06-03 12:58:29
|
Hi!
> (1) For terms and term rewriting systems, besides the presentation
> based on (extending) the textbook presentation of them, we should
> provide presentation-translation into structures and "Toss systems",
> even if at no point the translation is performed.
I have absolutely no problem with that - in fact I even wanted to
replace term rewriting with structure rewriting in code, but it seems
to be more hassle than it is worth. But I *did* change old speagram
semantics and now require all rewriting rules to be left-linear, with
equality being a special function now. In this way term rewriting is very
similar to structure rewriting - up to copying subterms on the right side.
> (2) Types should be parts of terms. I.e. in the translation to
> structures, when a node corresponds to an instance of a root of a
> term, the node is connected via a "type" edge to a (representation of
> a) term that is its type. The hierarchy ends with a "Type" term whose
> translation has the "type" edge connected to itself ("Type : Type").
> In the textbook presentation, we might require that the arity is >= 1
> and the first argument is always a type, or just extend the standard
> terms to type-carrying terms with a handy syntax like "f(t1 :
> ty_1,...,tn : ty_n) : ty".
>
> (2a) The corollary is that term rewriting rules will match against
> types, where the types come primarily from parsing and possibly from
> some type coercion syntax.
What you are writing here suggests one interesting idea to me:
how about making an experiment and implementing type arc parsing
(i.e. speagram-style parsing) as a Toss game? The only harder part
would be type unification, and for basic types it is not that hard, right?
Maybe I'll take some time and have a shot at this - it looks really very
interesting to me. Especially when you think about one weak point of
the current term system (i.e. old speagram): that disambiguation happens
only after full parse. This both makes the disambiguation rules hard to
understand, and makes the whole system far slower than it should be.
If we have it as a game - and different parses correspond to different
possible moves during the play - then maybe we can use payoffs or some
other mechanism to exclude stupid parses before generating all of them?
I'm not sure that is what you meant - and it might be much harder to do
more complex types and their unification as a toss game - but maybe it
really is worth a try? Write what you think! :)
Lukasz
|
|
From: Lukasz S. <luk...@gm...> - 2012-06-03 11:46:55
|
Hi,
Thanks for thinking about porting Speagram to Toss. I hope to write
more in this thread, but I'll start it with some propositions that you
might already have a problem with.
(1) For terms and term rewriting systems, besides the presentation
based on (extending) the textbook presentation of them, we should
provide presentation-translation into structures and "Toss systems",
even if at no point the translation is performed.
(2) Types should be parts of terms. I.e. in the translation to
structures, when a node corresponds to an instance of a root of a
term, the node is connected via a "type" edge to a (representation of
a) term that is its type. The hierarchy ends with a "Type" term whose
translation has the "type" edge connected to itself ("Type : Type").
In the textbook presentation, we might require that the arity is >= 1
and the first argument is always a type, or just extend the standard
terms to type-carrying terms with a handy syntax like "f(t1 :
ty_1,...,tn : ty_n) : ty".
(2a) The corollary is that term rewriting rules will match against
types, where the types come primarily from parsing and possibly from
some type coercion syntax.
I'll talk about subtyping in my next email.
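Point (2) - type-carrying terms, with the hierarchy closed off by "Type : Type" - can be sketched concretely. The names and data layout below are illustrative assumptions, not anything from the codebase:

```python
# Sketch: terms that carry their type as part of the term, with a
# self-typed root ("Type : Type") ending the hierarchy.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Term:
    head: str
    args: list = field(default_factory=list)   # each arg is itself a Term
    ty: Optional["Term"] = None                # filled in for every term

TYPE = Term("Type")
TYPE.ty = TYPE                                 # "Type : Type"

NAT  = Term("Natural number", ty=TYPE)
ZERO = Term("0", ty=NAT)

def show(t):
    return f"{t.head} : {t.ty.head}"

print(show(ZERO))   # 0 : Natural number
print(show(TYPE))   # Type : Type
```

With this representation, rewrite rules can pattern-match on the `ty` field as well as on heads and arguments, which is exactly the corollary (2a) above.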
|
|
From: Lukasz S. <luk...@gm...> - 2012-05-26 13:17:45
|
On Thu, May 24, 2012 at 12:38 PM, Lukasz Kaiser <luk...@gm...> wrote:
>> I've mentioned that Toss "is undecided" whether to be a system
>> modeling tool or a cognitive system. [...]
>
> it is crucial to model many agents, and not only their possible actions,
> but foremost their goals and preferences. Then, to run the system, you need to
> perform game simulations, not only solve equations - and in this way modeling
> gets this cognitive system flavour - because you need to play the game. Maybe
> in some time we will actually see, that there isn't really a big difference ;).

What I think distinguishes a cognitive system from a modeler tool is: (1) learning, (2) integration, (3) continuous operation (which could be called autonomous or online operation, or continuous initiative, i.e. not just a command-response model; continuous operation perforce guarantees transfer learning). |
|
From: Lukasz K. <luk...@gm...> - 2012-05-24 10:39:43
|
Hi,

> I've mentioned that Toss "is undecided" whether to be a system
> modeling tool or a cognitive system. [...]
> Well, that was just a disclaimer to posting a recent development in
> the system modeler world.
> http://blog.wolfram.com/2012/05/23/announcing-wolfram-systemmodeler/

I like the system modeler a lot - at least from what I see on their website :). But it still lacks the one thing that is essential in Toss - modeling players! In fact it - and all the competitors they mention - should be called deterministic system modelers. And I am convinced, as always, that this is not enough and that it is crucial to model many agents, and not only their possible actions but foremost their goals and preferences. Then, to run the system, you need to perform game simulations, not only solve equations - and in this way modeling gets this cognitive system flavour, because you need to play the game. Maybe in some time we will actually see that there isn't really a big difference ;).

Best! Lukasz |
|
From: Lukasz S. <luk...@gm...> - 2012-05-23 19:27:26
|
I've mentioned that Toss "is undecided" whether to be a system
modeling tool or a cognitive system. It was initially designed as the
first, but has recently drifted a bit towards the other, and I like
it, cognition is very exciting ("the ultimate thing" in some sense).
But it does not have a cognitive architecture integrating the
different modules.
Well, that was just a disclaimer to posting a recent development in
the system modeler world.
http://blog.wolfram.com/2012/05/23/announcing-wolfram-systemmodeler/
|
|
From: Lukasz K. <luk...@gm...> - 2012-05-12 18:54:26
|
> Great news! What about waiting with 0.8 release till next weekend?
> I'll finally start taking a look at recent commits next week.

I can wait, but one reason I wanted to do it is that I want to remove Term.ml and some of the symbolic stuff just after that - and I prefer to have a release to go back to in case we need it back one day. So maybe I should just do it, and your testing and help will be needed anyway, especially when I get to rewriting the diff solver for speed, as you have a much better feeling for where we are slow! So, what do you think, should I wait or go?

Lukasz |
|
From: Lukasz S. <luk...@gm...> - 2012-05-12 16:36:07
|
On Sat, May 12, 2012 at 5:32 PM, Lukasz Kaiser <luk...@gm...> wrote:
> Hi.
>
> I was recently interested in some bioinformatic problems
> and of course I started modelling them in Toss. This is why
> we now have a cell cycle example, a simple model from Tyson
> from 1991. On the way, I corrected a few bugs we had with
> continuous dynamics and added animations to the JS interface.
>
> One interesting thing about the implementation of ODEs in Toss
> is that it works symbolically: you can leave free parameters, but
> it is slower due to that of course. I never knew whether that was
> a good decision, but now I could finally test it a bit. [...]
> numerical part faster in Toss. At least I think I will focus more on
> that in the next release. But first: 0.8 - I hope to make it soon :).

Great news! What about waiting with the 0.8 release till next weekend? I'll finally start taking a look at recent commits next week.

Cheers. |
|
From: Lukasz K. <luk...@gm...> - 2012-05-12 15:33:47
|
Hi.

I was recently interested in some bioinformatic problems and of course I started modelling them in Toss. This is why we now have a cell cycle example, a simple model from Tyson from 1991. On the way, I corrected a few bugs we had with continuous dynamics and added animations to the JS interface.

One interesting thing about the implementation of ODEs in Toss is that it works symbolically: you can leave free parameters, though it is slower due to that, of course. I never knew whether that was a good decision, but now I could finally test it a bit. The example (in examples/Cell-Cycle-Tyson-1991) results in the following system of ODEs (using v0 ... v5 for variable names here), assuming we leave the two significant parameters, k4 and k6, symbolic.

v0' = 0.015 + -200. * v0 * v3
v1' = -0.6 * v2 + k6 * v4
v2' = -100. * v2 + k6 * v4 + 100. * v3
v3' = -100. * v3 + 100. * v2 + -200. * v0 * v3
v4' = -k6 * v4 + 0.018 * v5 + k4 * v5 * v4 * v4
v5' = -0.018 * v5 + 200. * v0 * v3 + -k4 * v5 * v4 * v4

These two parameters, k4 and k6, range, say, from 10 to 1000 (k4) and from 0 to 10 (k6), and normally, e.g. with k4=180, k6=1, the system oscillates with a peak of v5 after time t=30, more or less. One interesting question, which Francois Fages and others solve with BIOCHAM in contraintes.inria.fr/~fages/Papers/RBFS09tcs.pdf, is whether one can find k4 and k6 such that v5 at, say, t=30 is > 0.3. The idea of doing it symbolically is that, instead of searching through the parameter space, one computes a conjunction of assertions over the first-order theory of the real field and uses a solver for that. Since the Runge-Kutta scheme (we use the standard RK4 as described on http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods with a constant time interval, h=0.005 in each step) operates on polynomials, it is easy to generate the system of polynomial equations for a simulation of N steps. It is even linear in N and uses (N+1)*6+2 free variables: v0atK, v1atK, ..., v5atK for K=0,...,N to represent the values of each variable vi at time step K, and the two parameters k4, k6.

I made a basic interface in TermTest to generate these constraints in the smt2 format (used in SMT competitions), so if you first do "make Arena/TermTest.native" in Toss and then run "./TermTest.native -steps 10", you will get the formula for 10 steps. You can then add assertions about k4, k6, or the final values as you wish, and feed it to any QF_NRA solver. I generated the formula for 100 steps of the above Tyson model and added the intervals for k4, k6, and the initial values (0,0,1,0,0,0). The file in smt2 format is attached. Then I ran the best solver I know on it, and after 15 minutes it still did nothing. This does not look promising, because the system with only these constraints should be trivially satisfiable. But it also shows that this kind of formula is probably very hard for symbolic solvers. Moreover, numerical methods and especially evolutionary optimization techniques such as CMA-ES

http://en.wikipedia.org/wiki/CMA-ES
http://hal.inria.fr/docs/00/58/36/69/PDF/hansen2011impacts.pdf

give much better results here much faster. So maybe it is time to remove some of the symbolic stuff and work more on making the numerical part faster in Toss. At least I think I will focus more on that in the next release. But first: 0.8 - I hope to make it soon :).

Best! Lukasz |
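For comparison with the symbolic route, the quoted ODE system can be simulated numerically with a plain RK4 loop. This is an illustrative re-implementation of the scheme described in the mail, not the Toss solver; the concrete values k4=180, k6=1, h=0.005 and the initial values (0,0,1,0,0,0) are the ones quoted above:

```python
# Classical RK4 on the six-variable Tyson system from the mail,
# with the symbolic parameters fixed to k4=180, k6=1.
def rhs(v, k4=180.0, k6=1.0):
    v0, v1, v2, v3, v4, v5 = v
    return [
        0.015 - 200.0 * v0 * v3,
        -0.6 * v2 + k6 * v4,
        -100.0 * v2 + k6 * v4 + 100.0 * v3,
        -100.0 * v3 + 100.0 * v2 - 200.0 * v0 * v3,
        -k6 * v4 + 0.018 * v5 + k4 * v5 * v4 * v4,
        -0.018 * v5 + 200.0 * v0 * v3 - k4 * v5 * v4 * v4,
    ]

def rk4_step(v, h=0.005):
    k1 = rhs(v)
    k2 = rhs([x + h / 2 * k for x, k in zip(v, k1)])
    k3 = rhs([x + h / 2 * k for x, k in zip(v, k2)])
    k4_ = rhs([x + h * k for x, k in zip(v, k3)])
    return [x + h / 6 * (a + 2 * b + 2 * c + d)
            for x, a, b, c, d in zip(v, k1, k2, k3, k4_)]

v = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]     # the initial values from the mail
for _ in range(6000):                   # 6000 steps * h=0.005 -> t = 30
    v = rk4_step(v)
print(v[5])                             # v5 at t = 30
```

Each RK4 step evaluates the polynomial right-hand side four times, which is why unrolling N steps into polynomial equations (as the smt2 encoding does) stays linear in N.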
|
From: Lukasz S. <luk...@gm...> - 2012-04-09 21:58:09
|
Interesting question at MetaOptimize: http://metaoptimize.com/qa/questions/9837/extracting-structure-on-a-very-complexly-interacting-feature-space |
|
From: Lukasz K. <luk...@gm...> - 2012-03-15 00:37:32
|
Hi.

Our Toss client is done in HTML and JS, so it runs in a browser. But I wanted to be able to also start it by clicking, so I just committed a simple script to do that. A deeper problem with running in the browser is how to integrate the video recognition stuff and in general how to work with video at all. I thought that this may have to wait quite a bit, but apparently people are working on such things.

http://developer.pointcloud.io/browser/

There are not many details there yet, but from the examples it looks like a relatively high-level interface. If it really manages to construct a decent scene-like graph from camera images, we could try to integrate it with our learning stuff :).

Best! Lukasz |
|
From: Lukasz K. <luk...@gm...> - 2012-03-12 11:54:24
|
> I vote for "CURRENT".

It's done; run the longer tests some time to check.

Best! Lukasz |
|
From: Lukasz S. <luk...@gm...> - 2012-03-11 21:38:56
|
On Sun, Mar 11, 2012 at 10:24 PM, Lukasz Kaiser <luk...@gm...> wrote:
> My current idea is to make
> another printing function, which prints the state together
> with a MODEL followed by the current structure - and in
> such cases, to use the current one instead of re-making
> all moves. But maybe I should change "MODEL" to some
> other name, like "CURRENT"? Or maybe some other idea
> will be better? Write what you think!

I vote for "CURRENT".

Take care. |
|
From: Lukasz K. <luk...@gm...> - 2012-03-11 21:25:39
|
Hi.

Recently, we had some problems with the meaning of the structure after MODEL in a .toss file. The problem comes from the fact that some time ago it used to be the *current* structure - the last state of the play. But since we use the play history, this changed to the *starting* structure of the game. To make this clear, I changed the .toss file syntax in my last commit - it now expects "START" instead of "MODEL".

After the above, it should generally be clear - but there is a looming performance problem. Now, to reconstruct the current structure from a .toss state representation, we need to repeat all the moves. In general this is fast and not a problem if the file is used just for saving games. But we also use the format for client-server communication at present - and with 1s for suggest and after 40 moves, it might start to be a problem.

My current idea is to make another printing function, which prints the state together with a MODEL followed by the current structure - and in such cases, to use the current one instead of re-making all moves. But maybe I should change "MODEL" to some other name, like "CURRENT"? Or maybe some other idea will be better? Write what you think!

Lukasz |
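The caching idea in the last paragraph amounts to the following trade-off. This is a toy sketch - the .toss format and the real rewrite step are not reproduced; `apply_move` is a stand-in:

```python
# Sketch: a saved state holds the START structure plus the move history,
# and may optionally carry a cached CURRENT structure so that loading
# does not have to replay every move.
def apply_move(structure, move):
    # stand-in for a structure rewrite step; a structure is a frozenset
    return structure | {move}

def load(start, history, current=None):
    """Reconstruct the current structure, using the cache when present."""
    if current is not None:
        return current                      # O(1): trust the CURRENT cache
    s = start
    for m in history:                       # O(moves): replay from START
        s = apply_move(s, m)
    return s

start = frozenset({"init"})
history = ["m1", "m2", "m3"]
replayed = load(start, history)
cached = load(start, history, current=replayed)
print(replayed == cached)   # True
```

For saved games, replaying from START is cheap and keeps the file canonical; for the 1-second client-server exchanges, shipping the cached CURRENT structure avoids re-making 40+ moves on every request.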