Wow! I want to know more!

2008-02-21
2013-05-17
  • Nicolas Godbout

    Nicolas Godbout - 2008-02-21

    I am impressed by that package.  It turns out to be exactly what I need for a pet project: (yet another) Python-based computer algebra system.  In a couple of hours I was able to implement abstract set ownership determination, including the time to learn how to use PyKE.  In the long run, I may be able to harness the Plan scheme to compile custom simplification routines across different algebras.

    This brings up a lot of questions:
    PyKE reminds me of Prolog.  Was that the inspiration behind this package?
    What algorithms are used under the hood?  Is there a book chapter or some reference on which it is based?
    Does PyKE build and keep all possible facts derived from forward-chaining rules in its database, or does it attempt to chain facts to reach a particular goal?

    Keep up the good work.  This is the most compelling package I have seen yet.  Predicate-based systems have a magical appeal that is hard to resist.

    --ng

     
    • Bruce Frederiksen

      I am glad that you like pyke!

      Yes, pyke was inspired principally by prolog.  There is no book or reference that describes pyke, though; the additions over what prolog does (forward-chaining, multiple selectable rule bases, plans) are my own ideas.

      The forward-chaining is done exhaustively when a rule base is activated.  So, no, the forward-chaining is not goal driven.  The facts asserted by the "then" part of forward-chaining rules (the "assert" clauses) are added as new case-specific facts (meaning that they are deleted when a "reset" is done).  However, backward-chaining rules do not assert new facts (same as prolog).  The forward-chaining rules can be used to store intermediate facts that may be used multiple times by the backward-chaining rules, rather than re-running backward-chaining rules each time those facts are needed.  One difference between pyke's forward-chaining and other forward-chaining production systems is that fact retraction is not supported; thus, there is no need for the Rete algorithm.
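
      For example, the shape of that lifecycle from the calling python program is roughly the following.  This is only a sketch: it assumes some_engine is an already-constructed pyke engine, and the rule base name ('my_rules') and the fact arguments are placeholders.

          some_engine.activate('my_rules')   # runs its forward-chaining rules exhaustively

          # a case-specific fact; forward-chaining rules that depend on it fire again
          some_engine.assert_('family', 'son_of', ('bruce', 'thomas', 'norma'))

          # ... prove backward-chaining goals here ...

          some_engine.reset()                # deletes the case-specific facts (and the fact indexes)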

      It is difficult in the space of this post to fully describe the algorithms used under the hood.  Any system of this type necessarily requires a fair amount of smoke and mirrors!  I'll try to go over them briefly here.

      First, facts are indexed by the fixed arguments when a lookup is done.  So, for example, if you test "some_facts.fact_1(1, $x, 2, $y)", all of the 'fact_1' facts in the 'some_facts' fact base are indexed on their first and third arguments (using a python dictionary).  Then (1, 2) translates immediately to a list of facts with those argument values.  These indexes are maintained so that a subsequent test of some_facts.fact_1 specifying the first and third arguments reuses the same index.  If another test is done for "some_facts.fact_1(1, 2, 3, $y)", then another index is built indexing the first three arguments.  These indexes are used by both forward-chaining and backward-chaining rules.  All indexes are deleted when a "reset" is done.
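
      In plain python terms, the idea behind these indexes is roughly the following (this is only an illustration of the data structure, not pyke's actual code):

          # index for some_facts.fact_1 on argument positions (0, 2), built the first
          # time a lookup fixes those two positions
          fact_1_list = [(1, 'a', 2, 'x'), (1, 'b', 2, 'y'), (3, 'c', 4, 'z')]

          index_0_2 = {}
          for args in fact_1_list:
              index_0_2.setdefault((args[0], args[2]), []).append(args)

          # the lookup fact_1(1, $x, 2, $y) then becomes a single dictionary access;
          # each match binds $x and $y from positions 1 and 3
          matches = index_0_2.get((1, 2), [])   # [(1, 'a', 2, 'x'), (1, 'b', 2, 'y')]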

      There are "context" objects that keep track of the pattern variable bindings.  A new context object is created for each rule invocation to store the pattern variables used by that rule.  A shallow binding approach is used that makes pattern variable lookup fast, but requires that bindings be undone on backtracking (versus a deep binding approach like the Lisp 1.5 association lists).  The context objects have the mechanism to track which bindings need to be undone and to undo them.
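
      A stripped-down illustration of the shallow binding idea (not pyke's actual context class) might look like this:

          class toy_context(object):
              def __init__(self):
                  self.bindings = {}    # variable name -> value: one direct lookup to read
                  self.undo_list = []   # names bound so far, newest last

              def bind(self, name, value):
                  self.bindings[name] = value
                  self.undo_list.append(name)

              def mark(self):
                  return len(self.undo_list)

              def undo_to(self, mark):
                  # called on backtracking: unbind everything bound since 'mark'
                  while len(self.undo_list) > mark:
                      del self.bindings[self.undo_list.pop()]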

      The rules (both forward-chaining and backward-chaining) are compiled into nested "for" loops, one loop for each subgoal in the "foreach" clause (for forward-chaining rules) or the "when" clause (for backward-chaining rules).

      The body of the innermost "for" loop in forward-chaining functions is a list of "engine.assert_" calls.  The forward-chaining rules are linked to the fact lists before they are run, so that asserting a new fact will trigger all forward-chaining rules that depend on that fact.  An optimization is done here by recording which clause in the forward-chaining rule causes the dependency on the new fact.  The new fact is then passed into the forward-chaining function along with that clause number, to avoid looking up all of the other facts that have already been processed.  It is possible that a forward-chaining rule will be triggered more than once with the same pattern variable bindings; in that case, all of the asserts will be duplicates and will be ignored.  The use of nested "for" loops causes the fact assertions to be done for all possible combinations of fact values.  Another optimization is not to trigger a forward-chaining rule on a newly asserted fact if that rule has not yet had its initial run, since its initial run will pick up the new fact anyway.  Thus, the overall process for running all of the forward-chaining rules is simply to go down the list of rules, run each in turn, and let the consequences of new fact assertions work themselves out automatically.
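
      Purely as an illustration of that nested-loop shape (the real generated *_fc.py code works through pyke's knowledge bases, indexes and context objects rather than plain lists):

          son_of = [('bruce', 'thomas'), ('thomas', 'norman')]   # stand-in (son, father) facts
          grandson_of = set()

          def grandson_rule():
              for son, father in son_of:           # first subgoal of the "foreach" clause
                  for child, grandpa in son_of:    # second subgoal
                      if child == father:          # the shared pattern variable must match
                          grandson_of.add((son, grandpa))   # the "assert" clause of the "then" part

          grandson_rule()   # grandson_of is now set([('bruce', 'norman')])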

      Backward-chaining rules are compiled into python generator functions with a yield in the innermost "for" loop body.  Since the pattern variable bindings are maintained in the context objects, the yield only has to return the plan (which is also managed by the context object, so it's actually the context object that is yielded; see below).  Thus, yield is called for each combination of proofs of the subgoals, varying the last subgoal the fastest (like prolog does).
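
      Here is an illustrative sketch of that generator shape only; the real *_bc.py functions yield through the context object that holds the pattern variable bindings:

          def grandson_of(son_of, son):
              for s, father in son_of:              # first "when" subgoal
                  if s == son:
                      for child, grandpa in son_of: # second "when" subgoal
                          if child == father:
                              yield grandpa         # one proof; backtracking resumes here

          son_of = [('bruce', 'thomas'), ('thomas', 'norman')]
          for grandpa in grandson_of(son_of, 'bruce'):
              print grandpa                         # prints: norman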

      Finally, plans are compiled as functions with an additional "hidden" first parameter.  This parameter is an immutable dictionary containing all of the pattern variable bindings required by the plan function.  This includes the bindings to sub-functions (sub-plans) that this plan must call (which themselves have hidden parameters).  The standard python functools.partial capability is used to cook this hidden dictionary into the function, so that each plan function is actually a functools.partial with the pattern variable bindings as the first argument to the actual python function.  This "cooking" of the necessary pattern variable bindings is done by the context object yielded by the backward-chaining function (by calling context.create_plan()).  This avoids creating these pattern binding dictionaries and functools.partials unnecessarily for plans that would only be discarded later when a dead end is encountered in the inferencing.
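
      The functools.partial mechanism itself is easy to show in isolation; the plan function and binding names below are made up for illustration:

          import functools

          def transfer_plan(bindings, amount):
              # 'bindings' is the hidden first parameter: a mapping of the pattern
              # variable values (and any sub-plans) captured when the goal was proven
              print 'move', amount, 'from', bindings['from_acct'], 'to', bindings['to_acct']

          bindings = {'from_acct': 'savings', 'to_acct': 'checking'}
          plan = functools.partial(transfer_plan, bindings)   # roughly what context.create_plan() builds

          plan(100)   # the caller never sees the hidden bindings argument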

      I hope that this hasn't been too brief to follow.  You can look at the generated .py files to get a better idea of what's going on.  The *_fc.py files have the compiled forward-chaining functions, the *_bc.py files the backward-chaining functions, and the *_plans.py files the compiled plan functions.  The functions in all three of these files are named by the rule name given in the .krb file.

       
    • Nicolas Godbout

      Nicolas Godbout - 2008-02-22

      Thanks for the information.  It does give me a much better idea of what's going on.  I had looked at the generated files and saw a lot of dictionary activity but could not quickly figure it out.

      The first (simple) use I have for PyKE is type checking, or more precisely type matching.  For example, I want Sin(<integer>) or Sin(<rational>) to match Sin(<real>) without having to define all the variations.  Since this is core functionality, it has to be fast.  I don't want to hard-code types, since I want to implement abstract algebra capability as well as many different types with complex relationship predicates.  Can I conclude from what you are saying that if I set up forward-chaining rules, "proving" is nearly optimal, i.e. that it merely makes as many dictionary lookups as there are arguments?  Of course, finding the "best" match will require some backward-chaining, but I can live with that, or use PyKE itself as a cache by asserting results that were harder to compute.

      This reminds me of my first feature request: arbitrary placement of free variables when proving a goal.  We can do it in the krb file; is there a good reason not to provide this capability when proving a goal?  I gather from the documentation that even if I manually build the pattern object, it has to provide the first n bound arguments.  Or do I have to build a context object?  That section of the docs is not clear to me.

      Thanks a lot for your clarifications,

      --ng

       
    • Bruce Frederiksen

      Well a few things here.  The dictionary lookups relate to facts, not forward or backward chaining.  So whenever facts are examined by either forward-chaining or backward-chaining rules, dictionary lookups are used based on the fixed arguments (either literal values or bound pattern variables).  If you want to use rules to assert new facts (other than the ones that you've asserted yourself in your python program that uses pyke), then forward-chaining rules (but not backward-chaining rules) will do this.

      As pyke is still very new, there is not much experience with it to say "this way is faster" and "that way is slower".  So I can only say now what seems to make sense, more from an academic viewpoint. 

      To me it seems to make sense to look at the number of possible inputs (facts) compared to the number of possible outputs (conclusions). 

      If the number of possible inputs is much greater than the number of possible outputs, then it seems like forward-chaining would be more efficient.  Only the rules matching *actual* inputs are fired, leading to their conclusions.  If backward-chaining is used in this case, it would result in many dead ends exploring all *possible* inputs, most of which are not present in any specific situation.

      Conversely, if the number of possible outputs is much greater than the number of possible inputs, then it seems like backward-chaining would be more efficient.  In this case you are looking for one or a few actual outputs out of a much larger set of possible outputs.  Only the rules concluding these outputs are tried.  If forward-chaining were used in this case, it would result in all possible outputs being generated, even though only one or a few outputs are interesting.  A good example here is the family relations example.  There are many more outputs of who is related to whom, and how, than there are inputs.  It would be quicker to ask "how are Bruce and Jill related" using backward-chaining rules than to use forward-chaining rules to produce all possible outputs and then sift through them looking for just Bruce and Jill.

      Having both forward-chaining and backward-chaining rules available allows you to break the problem up: use forward-chaining for the parts of the problem that have many more inputs than outputs (and generate all possible outputs as facts), and use backward-chaining for the parts that have many more outputs than inputs, so that you can focus on just the interesting output without having to generate all of the uninteresting output as well.

      Another argument for forward-chaining is that it can establish its conclusions as stored facts, so that the use of these same conclusions several times within the broader proof does not result in these rules being rerun each time.  Using backward-chaining rules in these cases would result in the rules being rerun each time their conclusion is needed.

      This is how I currently think about it, and why it seemed to make sense to support both forward-chaining and backward-chaining rules in pyke: roughly that the forward-chaining rules start at the input and the backward-chaining rules start at the output, and they meet in the middle somewhere.  Of course, as more actual experience is gained using pyke, these assumptions will be tested and, no doubt, modified.  (That's where you come in! :-)

      Now to your feature request.

      Pyke already supports the ability to use pattern variables in arbitrary positions, or even within tuples.  If you look at the "Using Pyke" page, there is a "some_engine.prove" function in the "Other Functions" section.  And the very last section ("Creating Your Own Patterns") of the "Using Pyke" page shows how to make patterns and look up the values that were bound to your pattern variables.  Since creating your own patterns and contexts is a little cumbersome, I also included the "prove_1" and "prove_n" functions to simplify the common use cases.

       
    • Nicolas Godbout

      Nicolas Godbout - 2008-02-23

      I see your point regarding the preferred use of forward- or backward-chaining rules.

      While re-reading the last sections of the User guide, I realized that 'some_engine.lookup()' is exactly what I need for much of what I want to do.  The gist of my earlier comment was that I expect such an operation to boil down to strictly dictionary lookups if all arguments of lookup() are literals.

      One technical question: can I rely on PyKE FIRST trying to match facts when prove() is called, and THEN trying bc rules?  If that is the case, I would attach a plan as a marker to all bc rules and assert_() the results of prove() that come with a plan, so that later calls with the same arguments are quickly matched to a fact.  This inspires me to ask another question which has no immediate impact for me but might in some applications: are there consequences to asserting facts in between calls to the next() method of a generator returned by prove()?  That's a classic problem.  An eventual FAQ might contain a statement about those cases, which might reasonably be "Don't do it or else".

      I finally figured out how to manually build patterns with arbitrary placement.  A short example in the documentation would be very useful. Just seeing the following would show how to put the ingredients together.
      >>> my_context = contexts.simple_context()
      >>> son = contexts.variable('son')
      >>> generator = test.engine.prove('family', 'son_of', my_context,
      ...     (son,
      ...      pattern.pattern_literal('bruce'),
      ...      contexts.anonymous(),
      ...     ))
      >>> generator.next()
      >>> son.as_data(my_context)
      'm_thomas'
      >>> generator.next()
      >>> son.as_data(my_context)
      'david_a'
      >>> generator.next()
      Traceback (most recent call last):
        ...
      StopIteration

      Cheers,

      --ng

       
    • Bruce Frederiksen

      Pyke doesn't mix facts and rules in the same knowledge base.  Each knowledge base is either a fact base or a rule base.  "Prove" done on a fact base looks up facts, while "prove" done on a rule base invokes backward-chaining rules.  For example, in the family_relations example the "family" knowledge base is a fact base, while the "example" knowledge base is a rule base.  Thus "family.foobar(...)" looks for "foobar(...)" facts in "family", while "example.foobar(...)" looks for backward-chaining rules that prove "foobar(...)" in "example".  So "family.foobar(...)" and "example.foobar(...)" are logically two different things in pyke.  All of this answers your question of whether pyke first checks facts and then rules: no, it does one or the other, but not both.
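
      Reusing the style of your example above, it is only the knowledge base name in the first argument that selects between a fact lookup and running backward-chaining rules (the 'foobar' goal and the patterns tuple here are just placeholders):

      >>> gen1 = test.engine.prove('family', 'son_of', my_context, patterns)   # fact lookup
      >>> gen2 = test.engine.prove('example', 'foobar', my_context, patterns)  # runs bc rules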

      As to asserting facts between generator.next() calls, I believe this should work, but I haven't tried it.  I also believe that you should safely be able to call "prove" (recursively) from a python premise to run bc rules and assert the results as facts in a different fact base (declaring this python function in the "bc_extras" section of the .krb file).  But, again, I haven't tried this.  Let us know if you give it a try!

       
