From: Barry B. <bar...@st...> - 2009-04-23 12:19:02
Hi Murat,

The whole point of the well-founded semantics is to provide a certain
interpretation of negation. A stratified model for negation cannot be
computed when the rule set cannot be stratified, e.g.

  p(X) :- q(X), !r(X).
  r(X) :- p(X).

When applying a well-founded interpretation of negation to the
'difficult' rule set above, we are able to compute something. In fact,
we infer things that are definitely true, as well as things that are
definitely not true, but also things that we cannot tell with this
interpretation. In logic programming, things that we cannot be sure of
are assumed to be false. Hence, the ultimate output from the reasoner
contains only those statements that are definitely true (in the given
interpretation).

I have used a lot of words to say very little - I hope it makes sense!

barry

Murat Knecht wrote:
> Hi Barry,
>
> thanks for your comments. I'll have to think about your two answers at
> the bottom some more.
>
> My question regarding the "two-valued logics" came from the simple
> observation that no query will ever return "undefined" when using the
> well-founded semantics. I thought the point of the well-founded
> semantics was two-fold: first, it allows additional (negative)
> conclusions from unfounded sets and, second, a literal in a
> well-founded model is either true, false or undefined.
> I do believe that you build your model correctly, but the distinction
> between "false" and "unknown" is dropped after the KB build-up.
>
> All the best,
> Murat
>
>
> Barry Bishop schrieb:
>> Hi Murat,
>>
>> Thanks for your continued interest! Comments in line:
>>
>> Murat Knecht wrote:
>>
>>> Hi,
>>>
>>> I've been having a look at the implementation of the well-founded
>>> semantics in IRIS, having just read the paper from van Gelder et al.
>>> on the topic.
>>>
>>> As far as I understand it, you do not implement the approach from
>>> the paper, but rather resort to a two-valued logic.
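[As an aside on the stratification example at the top of this message:
whether a rule set can be stratified can be checked mechanically, by
looking for a dependency cycle that passes through negation. A minimal
Python sketch, purely illustrative - the rule encoding and the function
name are invented here, and this is not the IRIS implementation:]

```python
def is_stratifiable(rules):
    """A rule set cannot be stratified if some predicate depends on
    itself through at least one negated edge.

    Each rule is encoded as (head_pred, [(body_pred, negated), ...]);
    this encoding is invented for illustration.
    """
    # Dependency edges: the head depends on each body predicate.
    edges = [(pred, head, neg)
             for head, body in rules
             for pred, neg in body]
    preds = {p for frm, to, _ in edges for p in (frm, to)}
    for start in preds:
        # Search over (predicate, seen-a-negated-edge) states.
        frontier = [(start, False)]
        visited = set()
        while frontier:
            node, saw_neg = frontier.pop()
            if (node, saw_neg) in visited:
                continue
            visited.add((node, saw_neg))
            for frm, to, neg in edges:
                if frm == node:
                    flag = saw_neg or neg
                    if to == start and flag:
                        return False  # cycle through negation
                    frontier.append((to, flag))
    return True

# Barry's example: p(X) :- q(X), !r(X).   r(X) :- p(X).
rules = [("p", [("q", False), ("r", True)]),
         ("r", [("p", False)])]
print(is_stratifiable(rules))  # -> False: p and r form a cycle through negation
```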
>>> From this point of view the implementation is somewhat unfinished -
>>> no query will ever return "undefined". I believe the root of the
>>> problem lies in the interface of the evaluation strategies, which
>>> only allows a relation of (positive) terms to be returned. Is this
>>> correct, or am I missing something big?
>>>
>> The 3-valued logic is implicit in the alternating fixed-point
>> algorithm. See the attached paper.
>>
>> The input program is 'doubled': one half computes the definitely true
>> facts, while the other half computes the possibly false facts. Any
>> possible statement that does not appear in the minimal model of
>> either half is therefore 'unknown'.
>>
>> Once a stable model for each half is reached, nothing can be said
>> about statements whose truth value is 'unknown', and so only the
>> definitely true facts are considered.
>>
>>> The other question: you reorder the rules before building up the
>>> well-founded model. This is not a prerequisite, right? I am also not
>>> quite sure how this helps speed things up, since you always build
>>> the _complete_ model ...
>>>
>> Rule re-ordering is a general optimisation and not required by the
>> alternating fixed-point algorithm. It takes time to actually evaluate
>> a rule, and since the output of one rule can feed the input of
>> another, it makes sense to order the rules so that this can happen.
>>
>>> In the method WellFoundedEvaluationStrategy.calculateMinimalModel()
>>> you have the following comment:
>>>
>>> // Keep these facts and re-use. The positive side is monotonic
>>> // increasing, so we don't need to throw away and start again each
>>> // time. However, the negative side is monotonic decreasing, so we
>>> // do have to throw away each time.
>>>
>>> It does make sense (to me), but I don't see this optimization
>>> reflected in the code. For both the positive and negative rules,
>>> you do exactly the same.
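[The alternating fixed-point idea described above can be sketched for a
propositional program. This is a hedged, illustrative Python sketch -
not the IRIS Java code - and the (head, positive_body, negated_body)
rule encoding is invented:]

```python
def consequences(rules, facts, assumed_false):
    """Least fixed point: repeatedly fire rules whose positive body
    atoms are already derived and whose negated body atoms all lie
    in the assumed-false set."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in derived and pos <= derived and neg <= assumed_false:
                derived.add(head)
                changed = True
    return derived

def well_founded(rules, facts, atoms):
    """Alternating fixed point: one half derives the possibly true
    atoms (assuming everything outside the current true set is false),
    the other derives the definitely true atoms (assuming everything
    outside the possibly-true set is false). Anything possibly true
    but not definitely true is 'undefined'."""
    true = set(facts)
    while True:
        possibly_true = consequences(rules, facts, atoms - true)
        new_true = consequences(rules, facts, atoms - possibly_true)
        if new_true == true:
            return true, possibly_true - true, atoms - possibly_true
        true = new_true

# The unstratifiable example: q is a fact, p :- q, !r and r :- p.
rules = [("p", {"q"}, {"r"}), ("r", {"p"}, set())]
true, undefined, false = well_founded(rules, {"q"}, {"p", "q", "r"})
print(true, undefined, false)  # q is true; p and r stay 'undefined'
```

[Note how, if the reasoner reports only the `true` set at the end, the
distinction between `undefined` and `false` is no longer visible - which
matches both observations in the thread.]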
>>>
>> Yes, we do the same things each time, but the result (the minimal
>> model for each half of the program) is not the same. Each time round
>> the whole loop, we calculate more definitely true statements and
>> fewer possibly false statements.
>>
>> Since the positive side computes things that are definitely always
>> true, we can keep them and use them as a better starting point each
>> time we compute the positive-side minimal model.
>>
>> I hope this helps,
>> barry

-- 
Barry Bishop
Senior Scientific Programmer
Semantic Technology Institute (STI)
University of Innsbruck, Austria
-----------------------------------
E-Mail: bar...@st...
Tel: +43 512 507 6469
-----------------------------------
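[The positive-side reuse discussed in the quoted code comment can be
sketched by seeding each positive-side computation with the true facts
already found, rather than restarting from the base facts. Again an
illustrative Python sketch under invented names, not the actual
WellFoundedEvaluationStrategy code:]

```python
def consequences(rules, start, assumed_false):
    """Least fixed point, started from an arbitrary set of already
    derived atoms rather than only the base facts. Rules are
    (head, positive_body, negated_body) triples."""
    derived = set(start)
    changed = True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in derived and pos <= derived and neg <= assumed_false:
                derived.add(head)
                changed = True
    return derived

def well_founded_reusing(rules, facts, atoms):
    true = set(facts)
    while True:
        # Negative side: monotonically decreasing, so it must be
        # recomputed from the base facts every time round the loop.
        possibly_true = consequences(rules, facts, atoms - true)
        # Positive side: monotonically increasing, so seed it with the
        # true facts found so far instead of starting from scratch.
        new_true = consequences(rules, true, atoms - possibly_true)
        if new_true == true:
            return true, possibly_true - true, atoms - possibly_true
        true = new_true
```

[Because the true set only ever grows, seeding with it reaches the same
fixed point while skipping re-derivation of facts already known.]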