From: Diogo FC P. <djo...@gm...> - 2015-02-18 17:53:13
Hello,

First, congratulations on the release of this exciting project!

I have a knowledge base about cancer patients with 6,240,880 triples, loaded on an open-source Virtuoso server. Before loading, we materialized the inferences using Pellet; it took almost 3 days on a server with 30 GB of RAM. I have a DL-Learner configuration file (based on the Actors example) which queries the SPARQL endpoint, but when I set the recursion level to 4, DL-Learner ends up taking too much memory and throws an exception.

I have two questions:

1) Is it possible to configure what DL-Learner queries? There are data and object properties which could be removed from the analysis. Is there a way to filter them out?

2) Can I turn off the built-in inference provided by DL-Learner, considering that everything is already materialized in my KB?

Thanks!

--
diogo patrão
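P.S. In case it helps, here is roughly what my conf file looks like (heavily trimmed; the endpoint URL and instance URIs are placeholders, and the option names mentioned in the comments are only guesses based on the SPARQL examples shipped with DL-Learner, so please correct me if they have changed):

    // knowledge source: fragment extracted from the Virtuoso SPARQL endpoint
    sparql.type = "SPARQL endpoint fragment"
    sparql.url = "http://localhost:8890/sparql"
    sparql.recursionDepth = 4
    sparql.instances = { "http://example.org/patient/p1", "http://example.org/patient/p2" }
    // question 1: is there an option here (predList / objList?) to exclude
    // specific data and object properties from the extracted fragment?

    // question 2: can this reasoning step be turned off or made a pass-through,
    // given that all inferences are already materialized in the store?
    reasoner.type = "fast instance checker"
    reasoner.sources = { sparql }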