From: Jens L. <le...@in...> - 2012-09-20 14:35:44
Hello June,

On 20.09.2012 14:44, June wrote:
> Hi Jens,
>
> I am a little confused about how the accuracy is measured in your paper
> "Concept Learning in DLs Using Refinement Operators" (Machine Learning
> journal).
>
> In the API dllearner-1.0-beta-2, suppose 10 solutions can be found with
> 100% accuracy and F-measure for the definition of each concept, and
> there are 5 concepts in total. How did you calculate your accuracy
> measure in the experiment?
>
> Did you count the correct definitions manually among the total of 50
> solutions and then divide by 50? Or select at most one correct
> definition from the solutions for each concept and divide that count
> by 5? Or something else?

In the experiments in the paper [1], we used cross-validation [2]. For each
fold of the validation, we measured (predictive) accuracy [3], i.e. the
percentage of correctly classified examples (positives and negatives). For
this, we always picked only the single best solution generated by an
algorithm. The DL-Learner command-line interface usually displays several
solutions, but for the experiments only the first one is taken into account.

I hope this clarifies the issue.

Kind regards,

Jens

[1] http://jens-lehmann.org/files/2010/concept_learning_mlj.pdf
[2] http://en.wikipedia.org/wiki/Cross-validation_%28statistics%29
[3] http://en.wikipedia.org/wiki/Accuracy_and_precision#In_binary_classification

--
Dr. Jens Lehmann
Head of AKSW/MOLE group, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc
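[Editor's note: for archive readers, a minimal Java sketch of the measure described in the reply above: per fold, predictive accuracy is the fraction of correctly classified positive and negative test examples, and the reported figure is the mean over all folds. This does not use the actual DL-Learner API; the Fold class and its field names are illustrative assumptions.]

```java
import java.util.List;

/**
 * Sketch of predictive accuracy over cross-validation folds.
 * Only the single best solution per fold is assumed to have been
 * evaluated, as stated in the reply above.
 */
public class PredictiveAccuracySketch {

    /** One fold: classification counts on the fold's test examples (hypothetical structure). */
    static class Fold {
        final int truePositives;   // positives covered by the learned definition
        final int trueNegatives;   // negatives not covered
        final int falsePositives;  // negatives wrongly covered
        final int falseNegatives;  // positives wrongly not covered

        Fold(int tp, int tn, int fp, int fn) {
            truePositives = tp;
            trueNegatives = tn;
            falsePositives = fp;
            falseNegatives = fn;
        }

        /** accuracy = correctly classified examples / all examples */
        double accuracy() {
            int correct = truePositives + trueNegatives;
            int total = correct + falsePositives + falseNegatives;
            return (double) correct / total;
        }
    }

    /** Mean predictive accuracy over all folds, as reported in the experiments. */
    static double meanAccuracy(List<Fold> folds) {
        return folds.stream().mapToDouble(Fold::accuracy).average().orElse(0.0);
    }

    public static void main(String[] args) {
        // Example with three folds of 20 test examples each.
        List<Fold> folds = List.of(
                new Fold(10, 9, 1, 0),
                new Fold(8, 10, 0, 2),
                new Fold(10, 10, 0, 0));
        System.out.printf("mean predictive accuracy: %.3f%n", meanAccuracy(folds));
    }
}
```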