sparql4j-devel Mailing List for sparql4j (Page 2)
Status: Pre-Alpha
Brought to you by: jsaarela
2005: Jan (11) | Oct (4) | Nov (11) | Dec (7)
2006: Jan (13)
From: Janne S. <jan...@pr...> - 2005-11-11 11:14:34
> You're right - email would be best for discussion of the document.

Let me switch to email from the Word doc for a while.

1. I would now agree that we should not override the existing semantics of the execute() method, i.e. let's represent boolean return values via a ResultSet with a single row.

2. In order to make work proceed step by step, should we define the development via milestones, where M1 would include boolean results as well as result sets, but not graphs? M2, once planned later, would then include graphs too. We have some use cases for their eventual inclusion, but in order to validate the practical aspects of the SPARQL protocol we could target M1 first.

Your remark on the timezone was a good one - what I don't know (yet) is whether the javax.xml.datatypes could be returned from the getTimestamp() methods (i.e. do these types inherit any of the Java basic data types) or if we need a new method signature?

Janne
--
Janne Saarela <janne.saarela at profium.com>
Profium, Lars Sonckin kaari 12, 02600 Espoo, Finland
Internet: http://www.profium.com
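The single-row ResultSet idea in point 1 above could be sketched as follows. This is only an illustration of the contract being proposed, not any agreed sparql4j API; the class name `AskResult` is hypothetical, and a real driver would implement the full java.sql.ResultSet interface instead.

```java
// Hypothetical sketch: exposing a SPARQL ASK (boolean) result through a
// ResultSet-like contract with exactly one row and one column, instead of
// overriding the semantics of Statement.execute(). Names are illustrative.
class AskResult {
    private final boolean value;
    private boolean beforeFirst = true;   // cursor starts before the single row

    AskResult(boolean value) {
        this.value = value;
    }

    // Mirrors ResultSet.next(): returns true exactly once, then the cursor
    // is exhausted.
    boolean next() {
        if (beforeFirst) {
            beforeFirst = false;
            return true;
        }
        return false;
    }

    // Mirrors ResultSet.getBoolean(int) for the single column.
    boolean getBoolean(int column) {
        if (column != 1) throw new IllegalArgumentException("one column only");
        return value;
    }
}
```

Client code would then treat an ASK exactly like a one-row query: `while (rs.next()) { ... rs.getBoolean(1) ... }`.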
From: Seaborne, A. <and...@hp...> - 2005-11-07 11:32:53
Comments added into the document.

-------- Original Message --------
> From: Janne Saarela <>
> Date: 4 November 2005 10:54
>
> I committed my 1st draft in the cvs module under the 'doc' directory.
> Please study the MS Word document and let me know if the use cases
> match with the idea you have for using the driver.
>
> The requirements section is currently derived from the use cases and I
> added some non-functional reqs to match my expectations.
>
> I didn't even get to design before getting your word on the preceding
> parts - I hope you find the time to study the doc soon enough.
>
> I/we may have to convert the document to something ASCII-based soon
> enough, because merging of concurrent work is just not going to work
> with MS Word.

You're right - email would be best for discussion of the document.

Andy
From: Janne S. <jan...@pr...> - 2005-11-04 10:54:57
I committed my 1st draft in the cvs module under the 'doc' directory. Please study the MS Word document and let me know if the use cases match with the idea you have for using the driver.

The requirements section is currently derived from the use cases, and I added some non-functional reqs to match my expectations.

I didn't even get to design before getting your word on the preceding parts - I hope you find the time to study the doc soon enough.

I/we may have to convert the document to something ASCII-based soon enough, because merging of concurrent work is just not going to work with MS Word.

Cheers,
Janne
--
Janne Saarela <janne.saarela at profium.com>
Profium, Lars Sonckin kaari 12, 02600 Espoo, Finland
Internet: http://www.profium.com
From: Seaborne, A. <and...@hp...> - 2005-11-02 13:31:53
-------- Original Message --------
> From: Janne Saarela <>
> Date: 1 November 2005 19:37
>
> Thanks for the ping acks to everybody.
>
> > what is the architecture you have in mind, building on top of ARQ and
> > Jena the first prototype? or consider other Java APIs like JRDF as
> > well as Kowari and OpenRDF/Sesame?

These are just my initial thoughts and I'm open to all other possibilities. I believe the value in sparql4j is to be a bridge from non-RDF applications to Semantic Web repositories. This would imply that the client side is "RDF free", or rather knows about RDF terms (literals, bNodes, URIs) but not RDF graphs. It's (at least initially) about making SELECT queries to a SPARQL server with presentation of results in a JDBC library fashion. The connection could be HTTP, SOAP or native (e.g. direct connect for efficiency). Sesame's binary result format is interesting. SPARQL XML Results should be an option for compatibility.

> Let's start with a design document that matches the high-level goal
> of creating the client-side driver code that can be used across all
> SPARQL protocol and SPARQL query language compliant query engines.
>
> I'm ready to make the 1st draft and I'll probably log it to cvs

Good idea - and thanks for kicking this off.

> - please have the skeleton cvs project imported in the meantime.

Done! I'm an Eclipse user, so checking in a self-contained project is good, if we are all Eclipse users or it does not get in the way.

Andy

> Once the design is settled, I will be asking you again what parts you
> feel like contributing to.
>
> Regards,
> Janne
From: Seaborne, A. <and...@hp...> - 2005-11-02 13:26:41
-------- Original Message --------
> From: Janne Saarela <>
> Date: 27 October 2005 20:10
>
> I've never done jdbc validation before, so I wonder if the Sun test suite
> would be useful for us?
>
> http://java.sun.com/products/jdbc/jdbctestsuite-1_3_1.html
>
> I could imagine we could use it to test the driver to some extent, maybe
> not SQL parsing/error management.

Tests are good - my only caveat is that we still have to deal with the difference between the SQL world view and the SPARQL world view, so we can't just take the JDBC tests as they are. (E.g. types of data items are not per column; SPARQL gives a variable number of columns per row. I'm assuming we null the holes - but it is a design decision to be worked through.)

Andy
From: Janne S. <jan...@pr...> - 2005-11-01 19:37:30
Thanks for the ping acks to everybody.

> what is the architecture you have in mind, building on top of ARQ and
> Jena the first prototype? or consider other Java APIs like JRDF as well
> as Kowari and OpenRDF/Sesame?

Let's start with a design document that matches the high-level goal of creating the client-side driver code that can be used across all SPARQL protocol and SPARQL query language compliant query engines.

I'm ready to make the 1st draft and I'll probably log it to cvs - please have the skeleton cvs project imported in the meantime.

Once the design is settled, I will be asking you again what parts you feel like contributing to.

Regards,
Janne
--
Janne Saarela <janne.saarela at profium.com>
Profium, Lars Sonckin kaari 12, 02600 Espoo, Finland
Internet: http://www.profium.com
From: Alberto R. <al...@as...> - 2005-10-28 08:11:23
On Oct 27, 2005, at 9:34 AM, Janne Saarela wrote:
> Hi all
>
> After many quiet months, Profium is now ready to allocate resources
> to push this work forward. Can I please have an acknowledgement
> from the current subscribers of this list what your "busyness"
> level is for the remainder of the year?
>
> I can say on Profium's behalf that all three resources, namely
> me, Samppa Saarela and Timo Westkämper, will be contributing
> to this work - resource allocation levels still to be determined.
>
> The acknowledgements will help us plan the schedule and distribution of
> work.

Janne/Andy,

At the moment we at Asemantics do not have any direct commitment on this work - we have some Java and RDF work which might be of relevance here - but I would be interested to contribute to the project, even personally.

What is the architecture you have in mind: building on top of ARQ and Jena for the first prototype? Or consider other Java APIs like JRDF as well as Kowari and OpenRDF/Sesame?

Alberto
From: Janne S. <jan...@pr...> - 2005-10-27 19:10:00
I've never done jdbc validation before so I wonder if the Sun test suite would be useful for us?

http://java.sun.com/products/jdbc/jdbctestsuite-1_3_1.html

I could imagine we could use it to test the driver to some extent, maybe not SQL parsing/error management.

Janne
--
Janne Saarela <janne.saarela at profium.com>
Profium, Lars Sonckin kaari 12, 02600 Espoo, Finland
Internet: http://www.profium.com
From: Seaborne, A. <and...@hp...> - 2005-10-27 09:26:44
-------- Original Message --------
> From: Janne Saarela <>
> Date: 27 October 2005 09:35
>
> Hi all
>
> After many quiet months, Profium is now ready to allocate resources to
> push this work forward. Can I please have an acknowledgement from the
> current subscribers of this list what your "busyness" level is for the
> remainder of the year?
>
> I can say on Profium's behalf that all three resources, namely
> me, Samppa Saarela and Timo Westkämper, will be contributing to this
> work - resource allocation levels still to be determined.
>
> The acknowledgements will help us plan the schedule and distribution of
> work.
>
> Regards,
> Janne

Janne,

Thanks for doing this.

My busyness level is as per-DAWG :-)

ARQ has a vaguely JDBC-like API for delivering results. There is a part that uses HTTP to send the query and processes the results (streaming) for an iterator view of the results (see the engineHTTP package). Not a JDBC driver per se, but it has been useful in understanding some issues like streaming.

Andy
From: Janne S. <jan...@pr...> - 2005-10-27 08:35:15
Hi all

After many quiet months, Profium is now ready to allocate resources to push this work forward. Can I please have an acknowledgement from the current subscribers of this list what your "busyness" level is for the remainder of the year?

I can say on Profium's behalf that all three resources, namely me, Samppa Saarela and Timo Westkämper, will be contributing to this work - resource allocation levels still to be determined.

The acknowledgements will help us plan the schedule and distribution of work.

Regards,
Janne
--
Janne Saarela <janne.saarela at profium.com>
Profium, Lars Sonckin kaari 12, 02600 Espoo, Finland
Internet: http://www.profium.com
From: Seaborne, A. <and...@hp...> - 2005-01-25 12:15:50
-------- Original Message --------
> From: Alberto Reggiori <>
> Date: 20 January 2005 10:36
>
> On Jan 20, 2005, at 10:04 AM, Janne Saarela wrote:
>
> > > > I would like to propose the datasource for this driver to be
> > > >
> > > > jdbc:sparql4j:[http|other]://www.serveraddress.com[:port]/path/more;param1=value1
>
> [...snip...]
>
> > > Hmm - actually there is a level missing. We need something like
> > > jdbc:sparql4j:sparql (yuk - 2 sparql's) to say:
> > > jdbc:sparql4j
> > > Get JDBC to route requests to our driver
> > > jdbc:sparql4j:sparql
> > > our driver is going to use the SPARQL protocol
> > > jdbc:sparql4j:sparql:http://server/path
> > > our driver is going to use the SPARQL protocol at the named service
> > > Maybe that's excessive. An alternative is to not have sparql4j be
> > > the subprotocol name and directly name the connection protocol:
>
> and we have to keep in mind that URIs generally have a <4k max size (or
> the network/firewalls might limit that) - and we want to be sure that
> those "connection" parameters are not too verbose (I would be in favour
> of k/v pairs vs. full-blown URIs if not necessary)

Agree about 4k - there are some 1k limits in old proxies, I gather (never met one myself). The connection URL and the service request URL are different. In my quick-hack driver, I found the javax.sql.DataSource interface useful - the service description can be passed in there. To stick to java.sql, there would be a nasty encoding into the JDBC URL, which is unpacked by the driver (it never leaves the machine - no 4k limit), c.f. the JDBC-ODBC bridge where the driver takes params.

> > I'm wondering what happens if the sparql protocol enables short names
> > in the 'lang' attribute value; will the datasource effectively have
> > to be something like
> >
> > jdbc:sparql:http://server/queryprocessor?lang=sparql
> >
> > This would mean any service description part of the protocol would
> > need to be available before an application is built.
>
> what do we mean by "service description"? I hope not the WS XML world
> meaning of it - or even worse OWL-S :)
>
> Default is SPARQL (no lang param) - otherwise specified

That seems to be the way the F2F was going, but we have to await draft text to be sure. It's all very fluid.

> > A datasource such as
> >
> > jdbc:sparql:http://server/queryprocessor
> >
> > would effectively have to fetch a service description before launching
> > the first query in order to determine the 'lang' value. Luckily that
> > value should be cacheable, not having to be fetched before every single
> > query.
>
> also me worried about how to process that "service description" - I am
> digging into the JDBC 3.0 spec chapter 7 (Database Metadata) and other
> implementations around - which has some methods to retrieve general
> information about the data source - not sure how flexible it is
> though...

Good point. java.sql does rather prescribe the DB details, but something might be workable.

> Alberto

Andy
From: Seaborne, A. <and...@hp...> - 2005-01-25 12:08:12
-------- Original Message --------
> From: Janne Saarela <>
> Date: 20 January 2005 09:05
>
> > > I would like to propose the datasource for this driver to be
> > >
> > > jdbc:sparql4j:[http|other]://www.serveraddress.com[:port]/path/more;param1=value1
> >
> > Checking, I found that the syntax is formally:
> >
> > jdbc:<subprotocol>:<subname>
> >
> > and very few restrictions on the subname (it can include : and /).
> >
> > We can have our own subsubprotocol so:
> >
> > jdbc:sparql4j:<subsubprotocol>
> >
> > and we don't need to have a fixed remainder; it can be dependent on
> > the subsubprotocol. For http, your suggestion of
> >
> > jdbc:sparql4j:http://www.serveraddress.com[:port]/path/more;param1=value1
> >
> > looks right. To be clear, we have a URL for the service/model that we
> > are going to query (the URL is NOT the query to go to the SPARQL
> > server). The parameter is a parameter for the driver, not the SPARQL
> > protocol.
> >
> > Hmm - actually there is a level missing. We need something like:
> >
> > jdbc:sparql4j:sparql
> >
> > (yuk - 2 sparql's) to say:
> >
> > jdbc:sparql4j
> > Get JDBC to route requests to our driver
> >
> > jdbc:sparql4j:sparql
> > our driver is going to use the SPARQL protocol
> >
> > jdbc:sparql4j:sparql:http://server/path
> > our driver is going to use the SPARQL protocol at the named service
> >
> > Maybe that's excessive. An alternative is to not have sparql4j be
> > the subprotocol name and directly name the connection protocol:
>
> I'm wondering what happens if the sparql protocol enables short names in
> the 'lang' attribute value; will the datasource effectively have to be
> something like
>
> jdbc:sparql:http://server/queryprocessor?lang=sparql

There is a difference between the URL for the JDBC connection and the URL for the service request. The ":sparql:", and any service description, should be enough to tell the driver how to build the service request. The terminology caught me out for a while, but in JDBC the jdbc: URL names a connection, and a connection can have many requests with different query strings. Hacking up a driver made it clearer for me :-)

Andy

> This would mean any service description part of the protocol would need
> to be available before an application is built.
>
> A datasource such as
>
> jdbc:sparql:http://server/queryprocessor
>
> would effectively have to fetch a service description before launching
> the first query in order to determine the 'lang' value. Luckily that
> value should be cacheable, not having to be fetched before every single
> query.
>
> Janne
From: Alberto R. <al...@as...> - 2005-01-20 09:33:28
On Jan 20, 2005, at 10:04 AM, Janne Saarela wrote:
>>> I would like to propose the datasource for this driver to be
>>>
>>> jdbc:sparql4j:[http|other]://www.serveraddress.com[:port]/path/more;param1=value1
>>
>> [...snip...]
>>
>> Hmm - actually there is a level missing. We need something like:
>> jdbc:sparql4j:sparql
>> (yuk - 2 sparql's) to say:
>> jdbc:sparql4j
>> Get JDBC to route requests to our driver
>> jdbc:sparql4j:sparql
>> our driver is going to use the SPARQL protocol
>> jdbc:sparql4j:sparql:http://server/path
>> our driver is going to use the SPARQL protocol at the named service
>> Maybe that's excessive. An alternative is to not have sparql4j be
>> the subprotocol name and directly name the connection protocol:

and we have to keep in mind that URIs generally have a <4k max size (or the network/firewalls might limit that) - and we want to be sure that those "connection" parameters are not too verbose (I would be in favour of k/v pairs vs. full-blown URIs if not necessary)

> I'm wondering what happens if the sparql protocol enables short names
> in the 'lang' attribute value; will the datasource effectively have to
> be something like
>
> jdbc:sparql:http://server/queryprocessor?lang=sparql
>
> This would mean any service description part of the protocol would
> need to be available before an application is built.

what do we mean by "service description"? I hope not the WS XML world meaning of it - or even worse OWL-S :)

Default is SPARQL (no lang param) - otherwise specified.

> A datasource such as
>
> jdbc:sparql:http://server/queryprocessor
>
> would effectively have to fetch a service description before launching
> the first query in order to determine the 'lang' value. Luckily that
> value should be cacheable, not having to be fetched before every single
> query.

also me worried about how to process that "service description" - I am digging into the JDBC 3.0 spec chapter 7 (Database Metadata) and other implementations around - which has some methods to retrieve general information about the data source - not sure how flexible it is though...

Alberto
From: Janne S. <jan...@pr...> - 2005-01-20 09:04:54
>> I would like to propose the datasource for this driver to be
>>
>> jdbc:sparql4j:[http|other]://www.serveraddress.com[:port]/path/more;param1=value1
>
> Checking, I found that the syntax is formally:
>
> jdbc:<subprotocol>:<subname>
>
> and very few restrictions on the subname (it can include : and /).
>
> We can have our own subsubprotocol so:
>
> jdbc:sparql4j:<subsubprotocol>
>
> and we don't need to have a fixed remainder; it can be dependent on the
> subsubprotocol. For http, your suggestion of
>
> jdbc:sparql4j:http://www.serveraddress.com[:port]/path/more;param1=value1
>
> looks right. To be clear, we have a URL for the service/model that we
> are going to query (the URL is NOT the query to go to the SPARQL
> server). The parameter is a parameter for the driver, not the SPARQL
> protocol.
>
> Hmm - actually there is a level missing. We need something like:
>
> jdbc:sparql4j:sparql
>
> (yuk - 2 sparql's) to say:
>
> jdbc:sparql4j
> Get JDBC to route requests to our driver
>
> jdbc:sparql4j:sparql
> our driver is going to use the SPARQL protocol
>
> jdbc:sparql4j:sparql:http://server/path
> our driver is going to use the SPARQL protocol at the named service
>
> Maybe that's excessive. An alternative is to not have sparql4j be the
> subprotocol name and directly name the connection protocol:

I'm wondering what happens if the sparql protocol enables short names in the 'lang' attribute value; will the datasource effectively have to be something like

jdbc:sparql:http://server/queryprocessor?lang=sparql

This would mean any service description part of the protocol would need to be available before an application is built.

A datasource such as

jdbc:sparql:http://server/queryprocessor

would effectively have to fetch a service description before launching the first query in order to determine the 'lang' value. Luckily that value should be cacheable, not having to be fetched before every single query.

Janne
--
Janne Saarela <janne.saarela at profium.com>
Profium, Lars Sonckin kaari 12, 02600 Espoo, Finland
Internet: http://www.profium.com
From: Alberto R. <al...@as...> - 2005-01-15 16:38:38
hello

On Jan 15, 2005, at 10:18 AM, Janne Saarela wrote:

> I very much agree with your goals for this project.

me too - let's start simple: SELECT over HTTP GET - but the design should also accommodate future extensions eventually - Andy's plug-in idea is a good one as a starting hook for other things.

> The non-RDF applications will find it easy to access SPARQL-enabled
> repositories. The easiness comes via using the familiar programming
> concepts relating to accessing relational databases using JDBC. In
> addition, the easiness comes via the use of one single jdbc driver
> instead of having to download a separate one for each repository.

I agree - we should start by providing an SQL-like tabular interface to RDF result sets, rather than graphs.

>> This, together with the scoping of JDBC
>>
>> """ javadoc java.sql (1.5.0)
>
> I would be in favour of targeting the 1.4.2 JDK to start with. This is due
> to our product which ships with 1.4.2 support as we speak, with 1.5
> support coming in the future.
>
> I should check what changes there are in java.sql from 1.4.2 to 1.5.
> Would you remember by heart?

I can not help here too much - I guess Andy knows more about these issues - or we can check the Sun specs. The last spec is JDBC 3.0.

>> means I see SPARQL4J as a JDBC driver that is mainly about issuing SPARQL
>> SELECT queries, rather than CONSTRUCT or DESCRIBE. A release with just
>> SPARQL SELECT, using the plain XML result format, would be very useful to
>> application writers - one, conventional, interface to RDF published data.
>> Toolkit independent.
>
> SELECT we start with - let's see how CONSTRUCT and DESCRIBE can be
> tweaked in the long run.

exactly - the design should allow us to accommodate extensions for more graph-like queries.

Speaking about the Perl DBI world, we have added some extra methods in addition to traditional SQL/relational operations (I would call it RDBC, for RDF DataBaseConnectivity):

fetchrow_XML() -> fetch the next XML chunk using the DAWG-xml format
fetchall_XML() -> fetch all the XML result-set in one go using DAWG-xml
fetchsubgraph_serialize() -> return a serialization (RDF/XML, N-Triples or other) of the next subgraph resulting from the query (either SELECT, DESCRIBE, CONSTRUCT)
fetchallgraph_serialize() -> return a serialization (RDF/XML, N-Triples or other) of the whole subgraph resulting from the query (merge of all subgraphs of the previous method)
fetchsubgraph() -> return the next subgraph resulting from the query (e.g. GraphModel)
fetchallgraph() -> return the whole subgraph resulting from the query

Then the Perl API explicitly has a method called func() to call an extension function/method - I guess we will be able to tweak something similar into the JDBC, perhaps by extending/sub-classing.

>> We could be merely inspired by JDBC but actually produce a new
>> interface that is more SPARQL-suitable. I'd like to avoid this for now
>> and try to implement "pure" JDBC.
>
> My goal is the very same - let's not get into extensions right away.

incremental - let's expose canonical (old) RDQL (SELECT only) functionality through JDBC - then in the meantime start with the rest of the features.

>> I'm assuming that the connection to the database (the RDF store, the
>> knowledge base) is HTTP. Now I would like to be able to take the
>> SPARQL4J codebase and plug in an adapter, instead of HTTP, to get a JDBC
>> driver for ARQ [0] directly for local use. We need a plug-in layer for that,
>> which is a connection layer with SPARQL-centric mechanisms, and then have
>> common code for the presentation of results as JDBC methods.
>
> Ok, I see. Internally we can create a factory that gives the protocol
> part implementation for the other parts of the driver. From the user
> perspective this protocol should perhaps be visible on the datasource
> string? What do you think? Let's start a separate thread on the
> datasource.

it might be that the JDBC database metadata (catalog) methods will help there to bootstrap and negotiate the protocol part - or carefully define an initial list of possible protocols in addition to HTTP - for sure a local/hijacked one will be needed.

>> Update - out of scope: Until there is a standard (de facto or de jure)
>> language, servers won't implement a common way to do RDF update - there are
>> several major decisions, like handling bNodes, to be settled first.
>
> Agreed - out of scope for now.

+1

>> A quick review of the JDBC interfaces shows a few tricky parts:
>>
>> 1/ NULLs. SQL has NULLs; SPARQL has unbound variables. That in itself
>> is not important - NULL would be just a way of saying "not bound". But
>> the getXXX methods must return a value of the given type, and getInt returns
>> 0 on NULL, which isn't a very distinguishing value. But in RDF, there isn't
>> always a value in the result set for a given row - NULLs become more common
>> and the "return zero" solution is a bit weak.
>>
>> Solutions?: Default values that are more unusual values (e.g. MIN_VALUE+2)
>> or ones that can be set by the app (requires an extension to JDBC).
>
> I would be happy with a default value. Let's see what the eventual
> other users think.

again - we might need to have a better look at the JDBC metadata/catalog layer to see what it offers to negotiate this, if possible.

>> 2/ Metadata. Each solution to a SPARQL query can have different types
>> of values for the same property. Returning anything meaningful as
>> "metadata" will need design work. Solution?: For now, return very
>> little and see if applications use the metadata information much.
>
> Let's make a, if not dummy, very basic implementation of the
> resultset metadata and listen to use cases that would require a more
> elaborate implementation.

agree - let's try to get a basic conjunctive query to work (even no optional if too hard) - then move on to the next step.

>> 3/ Error conditions: The JDBC interface assumes a conventional connection to
>> a local database. SPARQL is a web protocol and many error conditions matter
>> - the difference between soft errors like can't-contact and hard errors like
>> invalid query makes more of a difference to a web app.
>
> I think the different errors could be modelled as a hierarchy of
> Exceptions subclassed from SQLException. This would enable client code
> to determine different flavours of errors without having to do string
> parsing of a SQLException to see what happened.

agree also here - but if we have the HTTP protocol and XML results, we might have status/errors also encoded into the data - or do we want to avoid that for JDBC? It would be interesting to look at how, for example, XQuery or other XML-DB people have done similar things over JDBC.

Alberto
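The exception-hierarchy idea discussed in point 3 above could look like the sketch below. The class names are hypothetical, not part of any agreed sparql4j design; the point is only that subclassing SQLException lets client code use instanceof instead of parsing message strings.

```java
import java.sql.SQLException;

// Hypothetical sketch of the proposed error hierarchy: subclassing
// SQLException so client code can distinguish soft (network) errors from
// hard (query) errors without string parsing.
class SparqlException extends SQLException {
    SparqlException(String msg) { super(msg); }
}

// Soft error: the endpoint could not be contacted; a retry may succeed.
class SparqlConnectException extends SparqlException {
    SparqlConnectException(String msg) { super(msg); }
}

// Hard error: the server rejected the query itself; retrying won't help.
class SparqlQuerySyntaxException extends SparqlException {
    SparqlQuerySyntaxException(String msg) { super(msg); }
}
```

Client code could then catch SparqlConnectException separately and retry, while treating SparqlQuerySyntaxException as fatal - all while remaining a plain SQLException to code that doesn't care.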
From: Seaborne, A. <and...@hp...> - 2005-01-15 15:27:25
Janne Saarela wrote:
> I would like to propose the datasource for this driver to be
>
> jdbc:sparql4j:[http|other]://www.serveraddress.com[:port]/path/more;param1=value1

Checking, I found that the syntax is formally:

jdbc:<subprotocol>:<subname>

and very few restrictions on the subname (it can include : and /).

We can have our own subsubprotocol so:

jdbc:sparql4j:<subsubprotocol>

and we don't need to have a fixed remainder; it can be dependent on the subsubprotocol. For http, your suggestion of

jdbc:sparql4j:http://www.serveraddress.com[:port]/path/more;param1=value1

looks right. To be clear, we have a URL for the service/model that we are going to query (the URL is NOT the query to go to the SPARQL server). The parameter is a parameter for the driver, not the SPARQL protocol.

Hmm - actually there is a level missing. We need something like:

jdbc:sparql4j:sparql

(yuk - 2 sparql's) to say:

jdbc:sparql4j
Get JDBC to route requests to our driver

jdbc:sparql4j:sparql
our driver is going to use the SPARQL protocol

jdbc:sparql4j:sparql:http://server/path
our driver is going to use the SPARQL protocol at the named service

Maybe that's excessive. An alternative is to not have sparql4j be the subprotocol name and directly name the connection protocol:

jdbc:sparql

to mean our driver using the SPARQL protocol. Our driver code using a different connection protocol would have a different name at the JDBC protocol level. This exposes the base connection technology to the application, which seems both good and bad.

Good - it allows the app to choose.
Bad - the app writer has another decision to make.

So, taking the Jena/ARQ direct connect, it would have its own name and its own java.sql.Driver implementation (inheriting code from sparql4j). Implementing javax.sql.DataSource is also possible but, for now, I don't see sufficient value over reusing the java.sql.DriverManager.

Given all that thinking aloud, I think the combination of:

Class.forName("net.sf.sparql4j.Driver") ;

and a JDBC URL of

jdbc:sparql:http://server/path[;param=value]*

is best. The change from Janne's suggestion is that the connection protocol is named, not the driver implementation, and it allows other drivers to handle the SPARQL protocol. Once we have decided the SPARQL JDBC format, a member submission or working group note could document it if a paragraph in the protocol document can't be added. Registering "sparql" as a JDBC subprotocol could be done as well - later.

There isn't so much online documentation on writing drivers as there is on choosing and using them (strange, that!). The MySQL JDBC driver is GPL, but PostgreSQL's has a BSD-like licence, so if you're looking at source code, that one is safer: http://jdbc.postgresql.org/index.html

Andy

PS We have the option of using the org.sparql Java package space, as I have access to that.

> This would enable addition of other protocols apart from HTTP.
> HTTP itself could express the full URI (in encoded form) with
> additional parameters at the very end, as allowed by a datasource string.
>
> Janne
From: Seaborne, A. <and...@hp...> - 2005-01-15 14:43:26
|
Janne Saarela wrote: > I very much agree with your goals for this project. > Non-RDF applications will find it easy to access SPARQL-enabled repositories. > The easiness comes via using the familiar programming concepts relating to accessing > relational databases using JDBC. In addition, the easiness comes via the use of > one single JDBC driver instead of having to download a separate one for each repository. > > >>This, together with the scoping of JDBC >> >>""" javadoc java.sql (1.5.0) > > > I would be in favor of targeting the 1.4.2 JDK to start with. So would I - I was just identifying exactly which javadoc I was quoting from. J2EE is still 1.4 based and installations don't rush to change. 1.5 has already had an update :-) > This is due to our product which ships with 1.4.2 support as we speak, with > 1.5 support coming in the future. > > I should check what changes there are in java.sql from 1.4.2 to 1.5. > Would you remember by heart? The JDBC 3.0 API was made part of J2SE 1.4. > > >>means I see SPARQL4J as a JDBC driver that is mainly about issuing SPARQL >>SELECT queries, rather than CONSTRUCT or DESCRIBE. A release with just >>SPARQL SELECT, using the plain XML result format, would be very useful to >>application writers - one, conventional, interface to RDF published data. >>Toolkit independent. > > > SELECT we start with - let's see how CONSTRUCT and DESCRIBE can be tweaked in the long run. > > >>We could be merely inspired by JDBC but actually produce a new >>interface that is more SPARQL suitable. I'd like to avoid this for now >>and try to implement "pure" JDBC. > > > My goal is the very same - let's not get into extensions right away. > > >>I'm assuming that the connection to the database (the RDF store, the >>knowledge base) is HTTP. Now I would like to be able to take the >>SPARQL4J codebase and plug in an adapter, instead of HTTP, to get a JDBC >>driver for ARQ [0] directly for local use. 
>>We need a plug-in layer for that which is a connection layer with SPARQL-centric mechanisms and then have >>common code for the presentation of results as JDBC methods. > > > Ok, I see. Internally we can create a factory that gives the protocol part > implementation for the other parts of the driver. From the user > perspective this protocol should perhaps be visible on the datasource string? > What do you think? Let's start a separate thread on the datasource. > > >>Update - out of scope: Until there is a standard (de facto or de jure) >>language, servers won't implement a common way to do RDF update - there are >>several major decisions, like handling bNodes, to be settled first. > > > Agreed - out of scope for now. > > >>A quick review of the JDBC interfaces shows a few tricky parts: >> >>1/ NULLs. SQL has NULLs; SPARQL has unbound variables. That in itself >>is not important - NULL would be just a way of saying "not bound". But >>the getXXX methods must return a value of the given type and getInt returns >>0 on NULL, which isn't a very distinguishing value. But in RDF, there isn't >>always a value in the result set for a given row - NULLs become more common >>and the "return zero" solution is a bit weak. >> >>Solutions?: Default values that are more unusual values (e.g. MIN_VALUE+2) >>or ones that can be set by the app (requires an extension to JDBC). > > > I would be happy with a default value. Let's see what the eventual other users think. > > >>2/ Metadata. Each solution to a SPARQL query can have different types >>of values for the same property. Returning anything meaningful as >>"metadata" will need design work. Solution?: For now, return very >>little and see if applications use the metadata information much. > > > Let's make a very basic, if not dummy, implementation of the resultset > metadata and listen to use cases that would require a more elaborate > implementation. 
> > >>3/ Error conditions: The JDBC interface assumes a conventional connection to >>a local database. SPARQL is a web protocol and many error conditions matter >>- the difference between soft errors like can't contact and hard errors like >>invalid query makes more of a difference to a web app. > > > I think the different errors could be modelled as a hierarchy of > Exceptions subclassed from SQLException. This would enable client > code to determine different flavours of errors without having to do > string parsing from a SQLException to see what happened. Yes, but this means that the client handles our JDBC driver differently - the new exceptions we introduce are an extension that the client may have to handle. Andy > > Janne > -- > Janne Saarela <janne.saarela at profium.com> > Profium, Lars Sonckin kaari 12, 02600 Espoo, Finland > Internet: http://www.profium.com |
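For illustration, the exception hierarchy proposed above might look like the following - the class names are hypothetical, not actual sparql4j classes:

```java
import java.sql.SQLException;

// Sketch of the error-hierarchy idea: soft errors (e.g. can't contact
// the server, possibly worth retrying) and hard errors (e.g. an invalid
// query) as distinct SQLException subclasses, so client code can catch
// by type rather than parse message strings. Names are hypothetical.
class SparqlConnectException extends SQLException {       // soft error
    SparqlConnectException(String msg) { super(msg); }
}
class SparqlQuerySyntaxException extends SQLException {   // hard error
    SparqlQuerySyntaxException(String msg) { super(msg); }
}
```

As noted above, a client written against plain JDBC would still just catch SQLException; only clients aware of the extension would catch the subtypes.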
From: Janne S. <jan...@pr...> - 2005-01-15 09:22:54
|
I would like to propose the datasource for this driver to be jdbc:sparql4j:[http|other]://www.serveraddress.com[:port]/path/more;param1=value1 This would enable addition of other protocols apart from HTTP. HTTP itself could express the full URI (in encoded form) with additional parameters at the very end as allowed by a datasource string. Janne -- Janne Saarela <janne.saarela at profium.com> Profium, Lars Sonckin kaari 12, 02600 Espoo, Finland Internet: http://www.profium.com |
From: Janne S. <jan...@pr...> - 2005-01-15 09:18:24
|
I very much agree with your goals for this project. Non-RDF applications will find it easy to access SPARQL-enabled repositories. The easiness comes via using the familiar programming concepts relating to accessing relational databases using JDBC. In addition, the easiness comes via the use of one single JDBC driver instead of having to download a separate one for each repository. > This, together with the scoping of JDBC > > """ javadoc java.sql (1.5.0) I would be in favor of targeting the 1.4.2 JDK to start with. This is due to our product which ships with 1.4.2 support as we speak, with 1.5 support coming in the future. I should check what changes there are in java.sql from 1.4.2 to 1.5. Would you remember by heart? > means I see SPARQL4J as a JDBC driver that is mainly about issuing SPARQL > SELECT queries, rather than CONSTRUCT or DESCRIBE. A release with just > SPARQL SELECT, using the plain XML result format, would be very useful to > application writers - one, conventional, interface to RDF published data. > Toolkit independent. SELECT we start with - let's see how CONSTRUCT and DESCRIBE can be tweaked in the long run. > We could be merely inspired by JDBC but actually produce a new > interface that is more SPARQL suitable. I'd like to avoid this for now > and try to implement "pure" JDBC. My goal is the very same - let's not get into extensions right away. > I'm assuming that the connection to the database (the RDF store, the > knowledge base) is HTTP. Now I would like to be able to take the > SPARQL4J codebase and plug in an adapter, instead of HTTP, to get a JDBC > driver for ARQ [0] directly for local use. We need a plug-in layer for that > which is a connection layer with SPARQL-centric mechanisms and then have > common code for the presentation of results as JDBC methods. Ok, I see. Internally we can create a factory that gives the protocol part implementation for the other parts of the driver. 
From the user perspective this protocol should perhaps be visible on the datasource string? What do you think? Let's start a separate thread on the datasource. > Update - out of scope: Until there is a standard (de facto or de jure) > language, servers won't implement a common way to do RDF update - there are > several major decisions, like handling bNodes, to be settled first. Agreed - out of scope for now. > A quick review of the JDBC interfaces shows a few tricky parts: > > 1/ NULLs. SQL has NULLs; SPARQL has unbound variables. That in itself > is not important - NULL would be just a way of saying "not bound". But > the getXXX methods must return a value of the given type and getInt returns > 0 on NULL, which isn't a very distinguishing value. But in RDF, there isn't > always a value in the result set for a given row - NULLs become more common > and the "return zero" solution is a bit weak. > > Solutions?: Default values that are more unusual values (e.g. MIN_VALUE+2) > or ones that can be set by the app (requires an extension to JDBC). I would be happy with a default value. Let's see what the eventual other users think. > 2/ Metadata. Each solution to a SPARQL query can have different types > of values for the same property. Returning anything meaningful as > "metadata" will need design work. Solution?: For now, return very > little and see if applications use the metadata information much. Let's make a very basic, if not dummy, implementation of the resultset metadata and listen to use cases that would require a more elaborate implementation. > 3/ Error conditions: The JDBC interface assumes a conventional connection to > a local database. SPARQL is a web protocol and many error conditions matter > - the difference between soft errors like can't contact and hard errors like > invalid query makes more of a difference to a web app. I think the different errors could be modelled as a hierarchy of Exceptions subclassed from SQLException. 
This would enable client code to determine different flavours of errors without having to do string parsing from a SQLException to see what happened. Janne -- Janne Saarela <janne.saarela at profium.com> Profium, Lars Sonckin kaari 12, 02600 Espoo, Finland Internet: http://www.profium.com |
From: Seaborne, A. <and...@hp...> - 2005-01-11 22:04:55
|
I thought I'd put together some thoughts about SPARQL4J to kick things off. My main interest in the project is providing a universal JDBC driver for applications to be able to access RDF information in conventional applications. "Conventional" here means making semantic web information available to applications in a way that is natural to today's application architectures. This is a big step in making the Semantic Web happen - the Semantic Web isn't about a completely new computing infrastructure. "Universal" here means there is only one driver, not one per implementation of SPARQL. Given there is a standard access protocol, there is no need to require different software to access different implementations. "Access information" means getting at the information; it does not mean doing RDF processing on the RDF data. Technologies for use and presentation of data need not be RDF applications. This, together with the scoping of JDBC """ javadoc java.sql (1.5.0) Although the JDBC API is mainly geared to passing SQL statements to a database, it provides for reading and writing data from any data source with a tabular format. """ means I see SPARQL4J as a JDBC driver that is mainly about issuing SPARQL SELECT queries, rather than CONSTRUCT or DESCRIBE. A release with just SPARQL SELECT, using the plain XML result format, would be very useful to application writers - one, conventional, interface to RDF published data. Toolkit independent. We could be merely inspired by JDBC but actually produce a new interface that is more SPARQL suitable. I'd like to avoid this for now and try to implement "pure" JDBC. I'm assuming that the connection to the database (the RDF store, the knowledge base) is HTTP. Now I would like to be able to take the SPARQL4J codebase and plug in an adapter, instead of HTTP, to get a JDBC driver for ARQ [0] directly for local use. 
We need a plug-in layer for that which is a connection layer with SPARQL-centric mechanisms and then have common code for the presentation of results as JDBC methods. I am keen on "release early, release often"; simple one query, one GET driver for first release. Performance matters - but so does functionality. I'd rather get something workable out, then worry about performance (we do have a fixed interface in JDBC so that makes it easier - the interface can't change!). Some performance matters may be more about advice: precompiled queries can be done by caching previous query plans in the server without protocol support; streaming requires certain server characteristics in the production of the XML. Update - out of scope: Until there is a standard (de facto or de jure) language, servers won't implement a common way to do RDF update - there are several major decisions, like handling bNodes, to be settled first. A quick review of the JDBC interfaces shows a few tricky parts: 1/ NULLs. SQL has NULLs; SPARQL has unbound variables. That in itself is not important - NULL would be just a way of saying "not bound". But the getXXX methods must return a value of the given type and getInt returns 0 on NULL, which isn't a very distinguishing value. But in RDF, there isn't always a value in the result set for a given row - NULLs become more common and the "return zero" solution is a bit weak. Solutions?: Default values that are more unusual values (e.g. MIN_VALUE+2) or ones that can be set by the app (requires an extension to JDBC). 2/ Metadata. Each solution to a SPARQL query can have different types of values for the same property. Returning anything meaningful as "metadata" will need design work. Solution?: For now, return very little and see if applications use the metadata information much. 3/ Error conditions: The JDBC interface assumes a conventional connection to a local database. 
SPARQL is a web protocol and many error conditions matter - the difference between soft errors like can't contact and hard errors like invalid query makes more of a difference to a web app. Andy [0] ARQ is my Jena implementation of SPARQL-query. Joseki3 is the protocol side. Demo: http://sparql.org/query.html |
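A sketch of the sentinel idea in 1/ - the names below are hypothetical, not sparql4j API; note that plain JDBC also offers ResultSet.wasNull() to distinguish a genuine 0 from NULL immediately after a getInt() call:

```java
// Sketch of the "unusual default value" idea for unbound variables:
// map JDBC NULL (an unbound SPARQL variable) to a sentinel that a
// real integer value is unlikely to collide with. Names are
// illustrative only, not actual sparql4j API.
class UnboundValues {
    // The MIN_VALUE+2 suggestion from the message above.
    static final int UNBOUND_INT = Integer.MIN_VALUE + 2;

    // What a getInt()-style accessor could return for an unbound column.
    static int intOrUnbound(Integer raw) {
        return raw == null ? UNBOUND_INT : raw;
    }

    static boolean isUnbound(int v) {
        return v == UNBOUND_INT;
    }
}
```

The sentinel only helps callers that don't check ResultSet.wasNull() themselves; an app-settable default (the second solution above) would need a JDBC extension.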
From: Janne S. <jan...@pr...> - 2005-01-10 09:42:40
|
As the mailing list for developers of sparql4j has just been created, I feel sending this welcome note and at the same time testing the list would be necessary. Alas, welcome to this interesting project! Janne Saarela -- Janne Saarela <janne.saarela at profium.com> Profium, Lars Sonckin kaari 12, 02600 Espoo, Finland Internet: http://www.profium.com |