2) Which data sets to load.
Although I like the idea that every chapter should be free to enrich its
endpoints in any way it likes, perhaps we should try to normalize at
least what we load into our named graphs identified by
*xx*.dbpedia.org. Do we have a list of the data sets that were loaded by es
and it, for example?
We'd like to load similar ones for pt. Having this more or less normalized
would help with comparisons, on-the-fly data fusion, etc.
I am for documentation over normative guidelines. It is enough if
every chapter writes down what it loads.
We could create a table on the chapters page with extractors as rows and languages as columns, where we all tick what we load.
Do we have a volunteer? :)
3) Ontology localization.
We have all added our language labels to the ontology. It frustrates me
that we still have to see qNames like dbpedia-owl:birthDate in our Linked
Data HTML rendering. I would like to see "data de nascimento" in
pt.dbpedia.org and "Geburtsdatum" in de.dbpedia.org.
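The rendering change described above boils down to a label lookup with fallback: show the label in the requested language, fall back to English, and only show the qName when no label exists. A minimal sketch (the function name, data layout, and sample labels are illustrative, not part of the DBpedia VAD):

```python
# Illustrative label table: qName -> language tag -> rdfs:label value.
# The entries below are examples from the discussion, not a real export.
LABELS = {
    "dbpedia-owl:birthDate": {
        "en": "birth date",
        "pt": "data de nascimento",
        "de": "Geburtsdatum",
    },
}

def localized_label(qname, lang, fallback="en"):
    """Return the label in `lang`, else in `fallback`, else the raw qName."""
    labels = LABELS.get(qname, {})
    return labels.get(lang) or labels.get(fallback) or qname

print(localized_label("dbpedia-owl:birthDate", "pt"))  # data de nascimento
print(localized_label("dbpedia-owl:birthDate", "fr"))  # birth date (fallback)
print(localized_label("dbpedia-owl:deathDate", "pt"))  # dbpedia-owl:deathDate
```

In the actual VAD the labels would of course come from the ontology triples rather than a hard-coded dict, but the fallback order is the part worth agreeing on.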
Has anybody checked how much work it would be to implement this in the VAD?
It is possible to include a PHP compiler into Virtuoso, so the VAD
might be replaced by some PHP script. This project needs further
discussion.
The DBpedia VAD just needs to be fixed/reengineered once. This is
easier than rewriting it in PHP (because we would have to deal with
By the way, where is the source code from OpenLink? Is it online?
The code is already in our repo. For now I committed the "Greek version", which is not yet merged with OpenLink's latest changes. I'll probably work on it from next week.
I already worked with the code and it's not as scary as it looks :)
Auto language detection for labels could also work, and we could do many more things, but I agree that this needs further discussion.
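One common way to auto-detect the label language is to honor the HTTP Accept-Language header and try the user's languages in preference order. A sketch of the header parsing, assuming standard q-value syntax (the function name is illustrative):

```python
def parse_accept_language(header):
    """Return language tags from an Accept-Language header,
    sorted by q-value, highest preference first."""
    entries = []
    for part in header.split(","):
        pieces = part.strip().split(";")
        tag = pieces[0].strip()
        if not tag:
            continue
        q = 1.0  # per HTTP, a missing q parameter means q=1
        for param in pieces[1:]:
            param = param.strip()
            if param.startswith("q="):
                q = float(param[2:])
        entries.append((q, tag))
    return [tag for q, tag in sorted(entries, key=lambda e: -e[0])]

print(parse_accept_language("pt-BR,pt;q=0.9,en;q=0.8"))
# ['pt-BR', 'pt', 'en']
```

The renderer would then walk this list and use the first language for which the ontology has a label, falling back to the qName at the end.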