From: Jens L. <le...@in...> - 2012-11-29 15:36:00
Dear June,

Am 09.11.2012 16:36, schrieb June:
> Hi all,
>
> I am a little confused about the calculation of predictive accuracy
> here. I would be quite grateful if someone on this list could clear up
> my confusion.
>
> Suppose we are doing leave-one-out validation, and we want to learn
> the description for the concept Female. The knowledge base has
> Female(ANNE), and ANNE is the instance left out for testing. The
> result of DL-Learner is (not Male) in this case, which is quite ideal.
> But how do we test whether ANNE is correctly classified? Do we need to
> ask the knowledge base whether (not Male)(ANNE) is true? The knowledge
> base will always return false for these negated concepts.

In most DL-Learner algorithms, the reasoner and the learning problem are
configurable. You can find a list of components and options here:

http://dl-learner.svn.sourceforge.net/viewvc/dl-learner/trunk/interfaces/doc/configOptions.html
(also linked from http://dl-learner.org)

Please also have a look at this paper (and later ones by d'Amato,
Fanizzi and Esposito):

http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-426/swap2008_submission_14.pdf

In essence, there are different heuristics for measuring the accuracy of
concepts. Some of them specifically take the open world assumption of
OWL into account (which essentially results in ternary classification),
whereas others are standard binary classification measures (predictive
accuracy, F-measure). DL-Learner can be configured to use different
measures - the exact choice depends on your specific requirements.
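To make the binary-vs-ternary distinction concrete, here is a small toy
sketch (plain Python, not DL-Learner's actual code; the function names
are made up for illustration). Under the open world assumption, an
instance check can come back "unknown" rather than false, and the two
kinds of measure treat that outcome differently:

```python
# Toy illustration (not DL-Learner code): an open-world instance check
# returns True, False, or None ("unknown" - neither provable nor refutable).

def predictive_accuracy(results, labels):
    """Binary accuracy: fold 'unknown' (None) into the negatives,
    i.e. apply a closed world assumption before counting."""
    correct = sum(1 for r, l in zip(results, labels) if (r is True) == l)
    return correct / len(results)

def ternary_counts(results, labels):
    """Open-world view: keep 'unknown' as a third outcome instead of
    silently counting it as a negative answer."""
    counts = {"correct": 0, "wrong": 0, "unknown": 0}
    for r, l in zip(results, labels):
        if r is None:
            counts["unknown"] += 1
        elif r == l:
            counts["correct"] += 1
        else:
            counts["wrong"] += 1
    return counts

# June's scenario: ANNE is a positive example for Female, the learned
# concept is (not Male). Without a disjointness axiom, an open-world
# reasoner cannot prove (not Male)(ANNE) and answers "unknown".
results = [None]   # reasoner's answer for (not Male)(ANNE)
labels = [True]    # ANNE really is Female

print(predictive_accuracy(results, labels))  # 0.0 - unknown counted as a miss
print(ternary_counts(results, labels))       # {'correct': 0, 'wrong': 0, 'unknown': 1}
```

The point of the sketch: with a binary measure ANNE simply looks
misclassified, while a ternary measure makes visible that the result was
merely unprovable - which is exactly the situation you described.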
Another directly related issue is the choice of reasoner: you can either
use a standard OWL reasoner (which by default assumes an open world), or
you can use the incomplete "fast instance checker" in DL-Learner, which
makes a closed world assumption (in a nutshell, it computes basic
inferences on the background ontologies using a standard OWL reasoner,
but then applies simplified, fast instance check algorithms on top of
the obtained inferences).

By the way, DL-Learner already includes a mechanism for measuring
cross-validation accuracy:

http://dl-learner.svn.sourceforge.net/viewvc/dl-learner/trunk/examples/cross-validation/
(It can also be extended to support "leave one out".)

It's a somewhat lengthy answer, but I hope it helps.

Kind regards,

Jens

--
Dr. Jens Lehmann
Head of AKSW/MOLE group, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc