From: Chris D. <c_...@ya...> - 2011-08-26 14:31:29
|
Am I correct in observing that SMW 1.6 does not support SPARQL queries directly included in wiki pages, but only allows for ask queries which are then translated into SPARQL syntax? The reason for the question is that I'm one of the developers of the SparqlExtension, and would like to migrate to SMW 1.6. However, I've become somewhat addicted to/dependent on the more advanced features of SPARQL such as federated queries, type conversions, custom functions, etc. (see http://enipedia.tudelft.nl/ for what this enables). Is this something that will eventually be integrated in the core code, or am I better off for the moment setting up an extension that enables this more advanced functionality with SMW 1.6? Best regards, Chris |
From: Markus K. <ma...@se...> - 2011-08-26 16:56:15
|
On 26/08/11 15:31, Chris Davis wrote:
> Is this something that will be eventually integrated in the core code,
> or am I better off for the moment setting up an extension that enables
> this more advanced functionality with SMW 1.6?

We recently discussed that this would be nice to have in the core, but that some more work is needed to get it there. There are certainly various parties who would be interested in this feature, but someone needs to take the lead in implementing what remains to be done to get this into SMW. Below are some facts to summarise the status.

What SMW 1.6 provides is support for SPARQL to the extent needed to have a SPARQL-based backend for query answering. So the following functional components are in SMW:

* a SPARQL communication abstraction, similar to MW's DataBase
* a SPARQL result parser for the standard XML result format
* some basic mechanism for interpreting SPARQL elements (esp. URIs) as WikiPages; could be extended to wrap literals into corresponding SMW data items as well
* SMW-data to RDF translation
* #ask to SPARQL translation that works with the SMW-data to RDF translation, in the sense that the results over SPARQL/RDF should be the same as over SQL

What is missing to support #sparql-like queries to external SPARQL services from the wiki:

* some mechanism to deal with slow and dead SPARQL services to ensure that they do not delay page display; maybe some asynchronous result loading + caching;
* some mechanism to configure available SPARQL services, to avoid people having to enter the full URL when sending a query (this is also a possible security issue);
* extended SPARQL result interpretation to wrap all elements (URIs, literals) that occur in results into SMW data items that can be displayed to users;
* adaptor code that allows all existing query printers to act on SPARQL results thus received (probably implement a "virtual" store that only retrieves data that came from one SPARQL query);
* an #ask-like parser function to make this available to users.

Asynchronous retrieval and caching seem to be the hardest problems here. Caching could possibly be built into the existing SPARQL communication abstraction layer.

What is missing to have SMW itself provide a SPARQL service to the outside world in general:

* a parser for SPARQL queries (could be a library),
* code to translate SPARQL queries into SQL queries over SMW's database.

The second item is a lot of work. Alternatively, one can of course simply connect SMW to an RDF store and let this store provide the SPARQL endpoint. This seems more viable, but then SPARQL is not a standard functionality of SMW.

Regards, Markus |
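The "standard XML result format" in Markus' list is the W3C SPARQL Query Results XML Format, which can be consumed with nothing more than a stock XML library. A minimal sketch in Python (the example document and its values are invented for illustration; SMW's actual parser is PHP):

```python
import xml.etree.ElementTree as ET

# Namespace used by the W3C SPARQL Query Results XML Format.
SRX = "{http://www.w3.org/2005/sparql-results#}"

def parse_sparql_xml(xml_text):
    """Return a list of {variable: (kind, value)} rows from a result document."""
    root = ET.fromstring(xml_text)
    rows = []
    for result in root.iter(SRX + "result"):
        row = {}
        for binding in result.findall(SRX + "binding"):
            term = binding[0]                # <uri>, <literal> or <bnode>
            kind = term.tag[len(SRX):]       # strip the namespace prefix
            row[binding.get("name")] = (kind, term.text)
        rows.append(row)
    return rows

# Hypothetical result document, as an endpoint might return it.
sample = """<?xml version="1.0"?>
<sparql xmlns="http://www.w3.org/2005/sparql-results#">
  <head><variable name="s"/><variable name="label"/></head>
  <results>
    <result>
      <binding name="s"><uri>http://example.org/Amsterdam</uri></binding>
      <binding name="label"><literal>Amsterdam</literal></binding>
    </result>
  </results>
</sparql>"""

print(parse_sparql_xml(sample))
# [{'s': ('uri', 'http://example.org/Amsterdam'), 'label': ('literal', 'Amsterdam')}]
```

Each row maps a result variable to a (kind, value) pair, which is roughly the information a wiki would need before wrapping URIs and literals into SMW data items.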
From: Dan B. <dan...@gm...> - 2011-09-05 12:44:41
|
I can't comment in detail, but I think that having the ability to query specific SPARQL endpoints and map the results to property values would be a killer feature. There is so much data on the semantic web, being able to navigate it in my wiki and being able to augment my wiki's content with it would be fantastic. I'll try looking at the code, but I've a lot to learn. Cheers, Dan. |
From: Chris D. <c_...@ya...> - 2011-09-05 22:14:12
|
> I can't comment in detail, but I think that having the ability to
> query specific SPARQL endpoints and map the results to property values
> would be a killer feature.
> There is so much data on the semantic web, being able to navigate it
> in my wiki and being able to augment my wiki's content with it would
> be fantastic.

At a minimum, we're currently working on updating the SparqlExtension to work with SMW 1.6. With this extension we already allow querying external endpoints (we need to update the documentation), and it is possible to map results to property values with it, as you mention. As a caveat, we've tested this with Joseki and Sesame, although not yet with 4Store and Virtuoso.

Markus brings up a lot of good points, and below are a few responses, based on our experience with the SparqlExtension:

> * some mechanism to deal with slow and dead SPARQL services to ensure
> that they do not delay page display; maybe some asynchronous result
> loading + caching;

We haven't had a problem with dead SPARQL endpoints. For example, when the CIA World Factbook has been down, no data gets returned, and the page loads quickly. Slow services are another issue, though; we haven't tackled that yet, and asynchronous result loading is a great idea. To help things out, we currently use a Squid server to cache both incoming and outgoing query results, and we have code that clears the query results from the cache whenever the page containing those queries is purged.

> * some mechanism to configure available SPARQL services to avoid people
> having to enter the full URL when sending a query (this is also a
> possible security issue);

As for security, is the main concern about the update URL? I assume that the query URL should be ok to be public knowledge. For the rest of Markus' points (SMW data items, #ask parsers, existing query printers, etc.)
the bottom line is that we haven't set up the SparqlExtension to be tightly integrated with the SMW code yet, and this is an area for improvement.

> Alternatively, one can of course
> simply connect SMW to an RDF store and let this store provide the SPARQL
> endpoint. This seems more viable but then SPARQL is not a standard
> functionality of SMW.

Is the main concern about having something that just works out of the box with minimal installation hassle? Could there be an option for a kind of "expert functionality" in the interim where you can rely on the functionality of the RDF store while the rest of the features mentioned are being implemented? Regards, Chris |
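The Squid arrangement Chris describes — cache query results, and evict them when the page embedding the query is purged — can be sketched as a toy in-process version. All class and method names here are hypothetical (not SparqlExtension APIs), and the real setup caches at the HTTP proxy layer rather than in application code:

```python
import hashlib

class QueryResultCache:
    """Toy stand-in for the Squid layer: results are keyed by
    (endpoint, query) and indexed by the wiki page that embeds the
    query, so that purging the page evicts its results."""

    def __init__(self):
        self._results = {}
        self._by_page = {}   # page title -> set of cache keys

    @staticmethod
    def _key(endpoint, query):
        return hashlib.sha1((endpoint + "\n" + query).encode()).hexdigest()

    def get(self, endpoint, query):
        return self._results.get(self._key(endpoint, query))

    def put(self, page, endpoint, query, result):
        k = self._key(endpoint, query)
        self._results[k] = result
        self._by_page.setdefault(page, set()).add(k)

    def purge_page(self, page):
        # Mirrors "action=purge": drop every cached result this page embeds.
        for k in self._by_page.pop(page, set()):
            self._results.pop(k, None)

cache = QueryResultCache()
cache.put("Amsterdam", "http://example.org/sparql", "SELECT ...", ["row1"])
assert cache.get("http://example.org/sparql", "SELECT ...") == ["row1"]
cache.purge_page("Amsterdam")
assert cache.get("http://example.org/sparql", "SELECT ...") is None
```

A time-to-live per entry would additionally bound staleness for pages that are never purged, which is what the proxy-level cache provides for free.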
From: Markus K. <ma...@se...> - 2011-09-06 14:13:59
|
On 05/09/11 23:14, Chris Davis wrote: >> I can't comment in detail, but I think that having the ability to >> query specific SPARQL endpoints and map the results to property values >> would be a killer feature. > >> There is so much data on the semantic web, being able to navigate it >> in my wiki and being able to augment my wiki's content with it would >> be fantastic. > > At a minimum, we're currently working on updating the SparqlExtension to > function with SMW 1.6. With this extension we already allow querying > external endpoints (we need to update the documentation), and it's > possible with it to map results to property values as you mention. As a > caveat, we've tested this with Joseki and Sesame, although not yet with > 4Store and Virtuoso. Nice. > > Markus brings up a lot of good points, and below are a few responses to > these, based on our experience with the SparqlExtension: > >> * some mechanism to deal with slow and dead SPARQL services to ensure >> that they do not delay page display; maybe some asynchronous result >> loading + caching; > > We haven't had a problem with dead SPARQL endpoints. For example, when > the CIA World Factbook has been down, no data gets returned, and the > page loads quickly. Slow services are another issue though, and we > haven't tackled it yet, and asynchronous result loading is a great idea. > To help things out, we currently use a Squid server to cache both > incoming and outgoing query results, and we have code that clears the > query results in the cache whenever the page containing those queries is > purged. Good to know that. Maybe we just need to try how it works out in practice before bothering too much about caching. > >> * some mechanism to configure available SPARQL services to avoid people >> having to enter the full URL when sending a query (this is also a >> possible security issue); > > As for security, is the main concern about the update URL? 
I assume that
> the query URL should be ok to be public knowledge.

My concern was rather data injection/spamming. If we were to allow SPARQL queries to work like #ask, including the "further results" special page, then one should be certain that no malicious data source can use this to inject large amounts of annoying or even dangerous data into the site. Some of these concerns might already be covered by MW's mechanisms, but I am still not fully comfortable with allowing content from arbitrary sources to be pulled into the wiki, when it could possibly circumvent some of MW's defences against spam or script attacks.

> For the rest of Markus' points (SMW data items, #ask parsers, existing
> query printers, etc.) the bottom line is that we haven't set up the
> SparqlExtension to be tightly integrated with the SMW code yet, and this
> is an area for improvement.

Ok. What I was wondering about is how to accomplish the mapping. Technically, SPARQL communicates all data via full URIs or literals. The latter is easy enough (mapping to suitable datatypes), but the treatment of URIs (both input and output) is not so clear to me. Writing and displaying full URIs seems not such a nice way of presenting data. One can of course always query for the rdfs:label with all data to have some hope for getting a user-readable label, but this may not always work. And then it might be that some external URIs refer to an object in the wiki, suggesting some more elaborate mapping mechanisms.

>> Alternatively, one can of course
>> simply connect SMW to an RDF store and let this store provide the SPARQL
>> endpoint. This seems more viable but then SPARQL is not a standard
>> functionality of SMW.

> Is the main concern about having something that just works out of the
> box with minimal installation hassle?
> Could there be an option for a
> kind of "expert functionality" in the interim where you can rely on the
> functionality of the RDF store while the rest of the features mentioned
> are being implemented?

Yes, this seems to be feasible. As a middle way, one could also have reduced support for "simple" SPARQL queries in the default SMW implementation (accepting only queries that can be expressed in #ask and that are simple in some other respects), and allow this SPARQL support to be extended with an optional RDF store. Markus |
From: Samuel L. <sam...@ri...> - 2011-10-01 01:50:43
|
On 09/06/2011 04:13 PM, Markus Krötzsch wrote:
> What I was wondering about is how to accomplish mapping. Technically,
> SPARQL communicates all data via full URIs or literals. The latter is
> easy enough (mapping to suitable datatypes) but the treatment of URIs
> (both input and output) is not so clear to me. Writing and displaying
> full URIs seems not such a nice way of presenting data.
>
> One can of course always query for the rdfs:label with all data to have
> some hope for getting a user-readable label, but this may not always
> work. And then it might be that some external URIs refer to an object in
> the wiki, suggesting some more elaborate mapping mechanisms.

Yeah, it seems that this "URI to wiki page title" [1] mapping problem is a very common one - it was mentioned in at least three talks at SMWCon, with all three having a very similar solution! :) Thus, I guess the optimal thing would be to solve this once and for all in one re-usable component. This should IMO be something where you can prioritize which properties to use first, second, etc., with an indefinite list of configurable fallbacks ... like rdfs:label as the first one, then dc:title, and so on ... and as a last fallback the local/last part of the URI string itself can be used ... all as shown in slides 17-22 of the RDFIO talk [2]. Maximum configurability + sensible defaults could go a long way, I think. Though still, as you said, someone also needs to do the work :) (in getting a common component, in this case) // Samuel

[1] Since, as I understand it, all URIs should ultimately be mapped to a "wiki page", at least if persisted ... (?)
[2] http://www.slideshare.net/SamuelLampa/hooking-up-semantic-mediawiki-with-external-tools-via-sparql

-- Samuel Lampa --------------------------------------- Bioinformatician @ Uppsala University Blog: http://saml.rilspace.org --------------------------------------- |
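Samuel's prioritized-fallback idea (try rdfs:label, then dc:title, and so on, finally the local part of the URI itself) is small enough to sketch directly. A hypothetical Python illustration, assuming the candidate labels for a URI have already been fetched into a dict:

```python
def display_label(uri, properties, preference=("rdfs:label", "dc:title")):
    """Pick a readable label for a URI: try each preferred property in
    order, then fall back to the local (last) part of the URI itself."""
    for prop in preference:
        values = properties.get(prop)
        if values:
            return values[0]
    # Local part: whatever follows the final '#' or '/'.
    return uri.rsplit("#", 1)[-1].rsplit("/", 1)[-1]

# No rdfs:label available, so the dc:title fallback is used.
props = {"dc:title": ["Uppsala University"]}
print(display_label("http://example.org/org/UU", props))  # Uppsala University

# Nothing available at all: fall back to the URI's local part.
print(display_label("http://example.org/org/UU", {}))     # UU
```

The preference tuple would be wiki-configurable, giving the "maximum configurability + sensible defaults" combination; mapping a URI back to an existing wiki page would need an extra lookup step on top of this.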
From: Dan B. <dan...@gm...> - 2011-10-01 07:54:16
|
On 1 October 2011 02:50, Samuel Lampa <sam...@ri...> wrote:
> On 09/06/2011 04:13 PM, Markus Krötzsch wrote:
>> One can of course always query for the rdfs:label with all data to have
>> some hope for getting a user-readable label, but this may not always
>> work. And then it might be that some external URIs refer to an object in
>> the wiki, suggesting some more elaborate mapping mechanisms.
> Though still, as you said, someone needs to do the work also :)
> (in getting a common component, in this case)

Can the Rap RDF API be used to help here? Seems pretty powerful, but, with limited PHP experience, I haven't got it to work yet!

http://www4.wiwiss.fu-berlin.de/bizer/rdfapi/tests.html

Or are these features already implemented within SMW's RDF handler? Cheers, Dan. |
From: Samuel L. <sam...@ri...> - 2011-10-01 10:17:42
|
On 10/01/2011 09:54 AM, Dan Bolser wrote:
> Can the Rap RDF API be used to help here? Seems pretty powerful, but,
> with limited PHP experience, I haven't got it to work yet!
>
> http://www4.wiwiss.fu-berlin.de/bizer/rdfapi/tests.html
>
> Or are these features already implemented within SMW's RDF handler?

I guess the main problem with RAP is that it is more or less abandoned? v. 0.9.6 released 2008-02-29.

Also, I think this specific mapping problem is more unique to the Wiki/RDF combo ("how to represent RDF as Wiki pages"), rather than for RDF in general, so RAP might not be very much focused on that problem (though I haven't looked very closely at it either).

Cheers // Samuel

-- Samuel Lampa --------------------------------------- Bioinformatician @ Uppsala University Blog: http://saml.rilspace.org --------------------------------------- |
From: Markus K. <ma...@se...> - 2011-10-01 12:58:12
|
On 01/10/11 11:17, Samuel Lampa wrote:
> I guess the main problem with RAP is that it is more or less abandoned?
> v. 0.9.6 released 2008-02-29.

Yes, this is why we removed all RAP support from SMW a while ago. Another RDF library for PHP is ARC2, though it also may have an uncertain future.

> Also, I think this specific mapping problem is more unique to the
> Wiki/RDF combo ("how to represent RDF as Wiki pages"), rather than for
> RDF in general, so RAP might not be very much focused on that problem
> (though I haven't looked very closely at it either).

Exactly, it is just a general-purpose library that helps with processing RDF data. Regards, Markus |
From: Dan B. <dan...@gm...> - 2011-10-01 19:37:28
|
On 1 October 2011 13:58, Markus Krötzsch <ma...@se...> wrote:
>> I guess the main problem with RAP is that it is more or less abandoned?
>> v. 0.9.6 released 2008-02-29.
>
> Yes, this is why we removed all RAP support from SMW a while ago. Another
> RDF library for PHP is ARC2, though it also may have an uncertain future.
>
>> Also, I think this specific mapping problem is more unique to the
>> Wiki/RDF combo ("how to represent RDF as Wiki pages"), rather than for
>> RDF in general, so RAP might not be very much focused on that problem
>> (though I haven't looked very closely at it either).
>
> Exactly, it is just a general-purpose library that helps with processing RDF
> data.

Right, that's why it may be useful to implement the rules suggested. Because it's open source, it would be very easy to revive it (or at least the useful bits). Just a random idea, so please don't worry. Dan. |
From: Benedikt K. <ben...@ki...> - 2011-10-05 12:58:51
Attachments:
smime.p7s
|
Hi.

For Semantic Web Browser we use EasyRDF [1] for parsing RDF.

Best,
Benedikt

[1] http://www.aelius.com/njh/easyrdf/

--
AIFB, Karlsruhe Institute of Technology (KIT)
Phone: +49 721 608-47946
Email: ben...@ki...
Web: http://www.aifb.kit.edu/web/Hauptseite/en
|
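Whichever PHP library ends up doing the parsing, the wire format for SELECT answers is the W3C SPARQL XML results format that SMW's own result parser also handles. As a rough illustration of the kind of work such a library does, here is a minimal sketch in Python using only the standard library; the sample result document, its variable names, and the URI in it are invented for illustration.

```python
# Minimal sketch: extracting variable bindings from the W3C SPARQL XML
# results format using only the standard library. The SAMPLE document
# below is invented; a real one would come from a SPARQL endpoint.
import xml.etree.ElementTree as ET

SPARQL_NS = "{http://www.w3.org/2005/sparql-results#}"

SAMPLE = """<?xml version="1.0"?>
<sparql xmlns="http://www.w3.org/2005/sparql-results#">
  <head><variable name="s"/><variable name="label"/></head>
  <results>
    <result>
      <binding name="s"><uri>http://example.org/Page</uri></binding>
      <binding name="label"><literal>Example page</literal></binding>
    </result>
  </results>
</sparql>"""

def parse_select_results(xml_text):
    """Return a list of dicts mapping variable names to bound values."""
    root = ET.fromstring(xml_text)
    rows = []
    for result in root.iter(SPARQL_NS + "result"):
        row = {}
        for binding in result.iter(SPARQL_NS + "binding"):
            # The single child element is <uri>, <literal> or <bnode>.
            row[binding.get("name")] = binding[0].text
        rows.append(row)
    return rows

print(parse_select_results(SAMPLE))
# → [{'s': 'http://example.org/Page', 'label': 'Example page'}]
```

A full client would additionally keep track of whether each binding is a URI, literal, or blank node, since mapping results back to wiki pages (e.g. via rdfs:label) needs that distinction.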
From: Dan B. <dan...@gm...> - 2011-09-06 08:31:59
|
On 5 September 2011 23:14, Chris Davis <c_...@ya...> wrote:
>> I can't comment in detail, but I think that having the ability to
>> query specific SPARQL endpoints and map the results to property values
>> would be a killer feature.
>>
>> There is so much data on the semantic web, being able to navigate it
>> in my wiki and being able to augment my wiki's content with it would
>> be fantastic.
>
> At a minimum, we're currently working on updating the SparqlExtension
> to function with SMW 1.6. With this extension we already allow querying
> external endpoints (we need to update the documentation), and it's
> possible with it to map results to property values as you mention. As a
> caveat, we've tested this with Joseki and Sesame, although not yet with
> 4Store and Virtuoso.

Great!

Previously, I read the SparqlExtension documentation and came to the
conclusion that I would have to mirror the entire content of the
endpoint in a local triple store before being able to query it in my
wiki. This put me off using the SparqlExtension, because I only need a
tiny fraction of the data in the endpoint (probably less than 0.1%) in
my wiki.

It's good to know that this is no longer required (if I understand
correctly). Please do post links to the updated documentation when that
is done.

Cheers,
Dan.
|
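Querying an external endpoint without mirroring it boils down to a standard SPARQL protocol request: the query travels as a form-encoded `query` parameter and the result format is negotiated via the `Accept` header. The sketch below only builds such a request; the endpoint URL and query are illustrative examples, not SparqlExtension behaviour.

```python
# Sketch of a SPARQL protocol request against an external endpoint, so
# that only the handful of results needed are fetched rather than
# mirroring the whole dataset. Endpoint and query are examples only.
from urllib.parse import urlencode
from urllib.request import Request

ENDPOINT = "http://dbpedia.org/sparql"  # public endpoint, example only

QUERY = """\
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?label WHERE {
  <http://dbpedia.org/resource/Delft> rdfs:label ?label .
  FILTER (lang(?label) = "en")
} LIMIT 1
"""

# The SPARQL protocol sends the query as a "query" parameter; the
# desired result serialization goes in the Accept header.
request = Request(
    ENDPOINT + "?" + urlencode({"query": QUERY}),
    headers={"Accept": "application/sparql-results+xml"},
)

print(request.full_url.split("?")[0])
# → http://dbpedia.org/sparql
```

Sending the request (e.g. with `urllib.request.urlopen`) would then return a result document in the standard XML format, ready to be mapped onto property values.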
From: Chris D. <c_...@ya...> - 2011-09-07 09:16:03
|
> Previously, I read the SparqlExtension documentation and came to the
> conclusion that I would have to mirror the entire content of the
> endpoint in a local triple store before being able to query it in my
> wiki. This put me off using the SparqlExtension, because I only need a
> tiny fraction of the data in the endpoint (probably less than 0.1%) in
> my wiki.

The only thing that gets mirrored is the RDF data from your own wiki -
included in the extension is a script that dumps all the wiki RDF data
into the triple store. If you don't run this, the triple store will just
get progressively populated whenever you edit pages.

Chris
|
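On stores that support SPARQL 1.1 Update, the bulk dump step Chris describes can be expressed as a single `LOAD` operation posted to the store's update endpoint. This is a hypothetical sketch of that idea, not the extension's actual script: the dump URL, graph IRI, and host names are invented placeholders.

```python
# Hypothetical bulk load: ask the store to fetch the wiki's RDF export
# into a named graph via a SPARQL 1.1 Update "LOAD" operation. The URLs
# and graph IRI below are invented placeholders.
from urllib.parse import urlencode

DUMP_URL = "http://wiki.example.org/index.php/Special:ExportRDF"
GRAPH = "http://wiki.example.org/data"

update = f"LOAD <{DUMP_URL}> INTO GRAPH <{GRAPH}>"

# SPARQL 1.1 protocol: an update is POSTed form-encoded under "update".
body = urlencode({"update": update})

print(update)
# → LOAD <http://wiki.example.org/index.php/Special:ExportRDF> INTO GRAPH <http://wiki.example.org/data>
```

After such a one-time load, incremental per-edit updates (as described above) keep the store in sync without re-running the dump.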