From: Max - M. <ma...@mi...> - 2015-02-11 17:53:12
Hi all,

Here are some points in the current API (and in what is being refined/refactored in the new 1.9 version) that I have wondered about for some time, specifically around graph events.

Currently, calling IGraph.Assert(Triple t) triggers one Graph.TripleAsserted and one Graph.Changed event, while calling IGraph.Assert(IEnumerable&lt;Triple&gt; ts) on 10 triples triggers 10 Graph.TripleAsserted and 10 Graph.Changed events.

In the second case, wouldn't it be more efficient and clearer (and simpler for a user) to trigger the 10 Graph.TripleAsserted events but only one overall Graph.Changed event for the batch of additions?

Going that way, it might also be useful to add a direct Update method to the IGraph interface so that both removals and additions can be notified with a single Graph.Changed event, wouldn't it?

I know I could always derive from a specific IGraph implementation, but shouldn't this be addressed directly in the base library (if deemed suitable)?

Max.
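The batching behaviour suggested above could look roughly like the following. This is a minimal C# sketch, not the actual dotNetRDF API: the Triple placeholder and the event signatures are illustrative only.

```csharp
// Hypothetical sketch, NOT the actual dotNetRDF API: a graph that raises one
// TripleAsserted event per triple but only a single Changed event for a
// whole batch of additions.
using System;
using System.Collections.Generic;

public class Triple { /* placeholder for the real Triple class */ }

public class BatchingGraph
{
    public event EventHandler<Triple> TripleAsserted;
    public event EventHandler Changed;

    private readonly List<Triple> _triples = new List<Triple>();

    public void Assert(Triple t)
    {
        _triples.Add(t);
        TripleAsserted?.Invoke(this, t);
        Changed?.Invoke(this, EventArgs.Empty);       // one triple, one Changed
    }

    public void Assert(IEnumerable<Triple> ts)
    {
        bool any = false;
        foreach (var t in ts)
        {
            _triples.Add(t);
            TripleAsserted?.Invoke(this, t);          // still one event per triple
            any = true;
        }
        if (any) Changed?.Invoke(this, EventArgs.Empty);  // single Changed for the batch
    }
}
```

A direct Update(IEnumerable&lt;Triple&gt; additions, IEnumerable&lt;Triple&gt; removals) method could follow the same pattern, raising the per-triple events and then one Changed at the end.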
From: Rob V. <rv...@do...> - 2015-02-11 12:08:42
See my other email. I would consider only supporting in-memory stores to start with and worry about the extra complexities of arbitrary storage later. There are existing implementations of those interfaces that already proxy requests to arbitrary storage. This doesn't solve the issue of stopping people using the remote storage directly.

Rob

From: Max - Micrologiciel <ma...@mi...>
Date: Wednesday, 4 February 2015 17:29
To: Rob Vesse <rv...@do...>
Subject: Re: About the SPIN Processor

> By the way, I forgot to add that (of course) as long as the dotNetRDF SPIN
> library is in use over a store, no direct access to the store should be
> permitted. So the library should also help to define a public SPARQL endpoint
> (with full SPARQL Service Description support) if required.
>
> Is it enough to implement the ISparqlQueryProcessor/ISparqlUpdateProcessor
> interfaces and bind them through the SparqlServer configuration, or are there
> other considerations to take into account?
>
> Thanks,
> Max.
From: Rob V. <rv...@do...> - 2015-02-11 12:07:09
Max

Comments inline:

From: Max - Micrologiciel <ma...@mi...>
Date: Wednesday, 4 February 2015 16:26
To: Rob Vesse <rv...@do...>
Subject: Re: About the SPIN Processor

> Hi Rob,
>
> Thanks for your answer.
>
> Maybe I've been giving this too much hard thought lately, but I tried to get
> back on track rapidly while I have the time ;)
> I may have been somewhat confused (thus confusing) while explaining my
> concerns. Let me try to make those clearer now that I've had some time to
> rethink some of the points I raised.
>
> Mainly, the framework will have to compensate for the underlying storage
> engine's lack of support for the SPIN requirements. Hopefully those will be
> addressed someday, so that in time the implementation should be relevant only
> for the dotNetRDF in-memory storage; however this is not the case now and it
> won't be for a (long?) time.

Most likely so, and of course there is always the risk that SPIN gets replaced with some newer approach in the future.

> Meanwhile we should be able to provide local support by work-around
> strategies.

Ideally yes

> Related to transaction support, I have been considering the following
> strategy. About isolation, I believe we can provide isolation between
> transactions by using "temporary" graphs in the underlying storage:
> * during SPARQL evaluation, FROM/FROM NAMED/GRAPH clauses and/or triple
> patterns can be rewritten to use those graphs (according to how those graphs
> are built and what they contain).
FROM is difficult to rewrite because where there are multiple FROM clauses the default graph is the merge of those graphs, so any duplicates you could get by querying those graphs separately need suppressing. Restricting use of FROM/FROM NAMED may be an option.

> * to avoid bloating the underlying storage with whole graph copies and
> performing a full graph diff (with all possible concurrency issues) at commit
> time, it might be better to store only local additions and removals (those
> would need to be retrieved separately anyway, to filter the rules and
> constraints to run in the SPIN workflow)

This is what the existing transactional update support does

> For instance, the graph pattern
>> GRAPH <g> { ?s ?p ?o . }
> could be rewritten into
>> { { GRAPH <g> { ?s ?p ?o . }
>>     MINUS { GRAPH <mytrans:removals:g> { ?s ?p ?o . } }
>>   } UNION {
>>     GRAPH <mytrans:additions:g> { ?s ?p ?o . }
>> } }
>
> To provide for concurrency, I am thinking of maintaining a special graph in
> the storage that would work as a transaction log to keep track of
> transactions, graph changes and commits:
> * I think keeping this log directly in the storage would allow for consistent
> time-stamping and possible distributed usage between processes/threads.

Possibly, though this has the extra cost of round trips to the remote storage. Distributed usage is likely ill advised and I would go for a simpler design rather than try adding distributed support immediately

> * I feel all transaction changes (with transaction log updates) can also be
> performed with the "diff graphs" at commit time within a single SPARQL update
> command, thus performing the commit atomically
> * the "SpinWrappedStorage" implementation would then become responsible for
> ensuring effective local concurrency within the framework.
>
> This means we could provide a transactional layer upon any storage provider.
> Does that sound reasonable?
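Rob's point above about FROM being hard to rewrite can be made concrete. The following is an illustrative sketch only, with hypothetical graph names:

```sparql
# Illustrative sketch: simulating the default graph formed by
#   SELECT * FROM <g1> FROM <g2> WHERE { ?s ?p ?o }
# by querying the graphs as named graphs instead. The default graph is the
# merge of <g1> and <g2>, so duplicates produced by the UNION must be
# suppressed, e.g. with DISTINCT in a sub-select. (A true RDF merge also
# renames blank nodes shared between graphs, which this union does not
# capture -- one reason restricting FROM/FROM NAMED may be simpler.)
SELECT ?s ?p ?o
WHERE {
  {
    SELECT DISTINCT ?s ?p ?o
    WHERE {
      { GRAPH <g1> { ?s ?p ?o } }
      UNION
      { GRAPH <g2> { ?s ?p ?o } }
    }
  }
}
```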
> To try and simplify this, I was thinking about using the .NET
> System.Transactions library. Would you have any advice or experience with it,
> or do you know any alternative that would help?

No, never used it personally; it appears to be primarily tied to SQL Server and requires MSDTC, which would make it non-portable

> As for the SPARQL caveats and extensions, I feel this really concerns the core
> of the SPIN evaluation engine, since the goal is to rely as much as possible
> on the underlying storage's performance (fully aware here that relying only on
> SPARQL rewriting will also rule out any extension mechanism without direct
> implementation by the underlying storage).
>
> For instance, consider the getFather example here:
> http://composing-the-semantic-web.blogspot.fr/2009/01/understanding-spin-functions.html
>
> Say we want to execute this SPIN query:
>> SELECT ?child ?father
>> WHERE {
>>   ?child a kennedys:Person .
>>   BIND ( :getFather(?child) as ?father )
>> }
>
> Given the definition of the getFather function, we could efficiently rewrite
> the query into the equivalent form:
>> SELECT ?child ?father
>> WHERE {
>>   ?child a kennedys:Person .
>>   OPTIONAL {
>>     SELECT ?child ?father WHERE {
>>       ?child kennedys:parent ?father .
>>       ?father kennedys:gender kennedys:male .
>>     }
>>   }
>> }

Yep that makes sense

> This would only require a SPARQL rewrite and would avoid local evaluation of
> the query (this could represent a non-negligible performance gain in IO and
> processing for complex functions). However, the SPIN recommendation does not
> constrain the SELECT modifiers usable in a function's spin:body query. The
> definition could then declare a LIMIT clause, making the SPARQL substitution
> process unusable, since evaluation would require a correlated sub-query,
> which SPARQL cannot express.

What do you mean by SPARQL substitution here?
dotNetRDF will avoid index joins where they may alter the results of the query, so I don't think that is specifically an issue. I think doing any kind of static substitution during rewrite is probably ill advised and you should likely just rely on the underlying query engine to restrict appropriately based on the join. If people have put solution modifiers into their functions then this may not give the correct results.

For the in-memory implementation you could register SPIN functions as SPARQL extension functions and evaluate them with substitution if necessary.

> Rewriting the function calls by turning the query "inside-out" might be
> possible, but I've still not evaluated what multiple different function calls
> would translate into, nor how the output query would scale on the server.
>
> So would it be safe/wise to rule out those cases right now? IMHO, that would
> hinder the interest and usefulness of the library.

I wouldn't rule those out but I think there might be better ways to approach this as you've suggested later in your email

> If not, this would require handling evaluation of SPIN extensions
> (notwithstanding the source of the extension, i.e. either SPIN definitions,
> SPINx or any 3rd-party extension mechanism) locally through the dotNetRDF
> engine whenever a SPARQL substitution cannot be effective. But that raises
> the problem of finding an acceptable strategy to alleviate IO and local
> computation as much as possible, which is where I find myself lacking.
>
> So far (supposing the getFather function could not substitute well into
> SPARQL) we could either:
> 1) naive: precompute, in a temporary graph in the storage, the result for
> each potential binding of ?child, and directly query a left join
>> => would require much IO to get the Multiset and send back the results as a
>> graph
>> => would most probably generate too much unused computation compared to
>> filtered-out results.
> 2) define a "NestedJoin" algorithm that could pull a Multiset of the
> variables used by the function, pre-bound by the LHS Multiset (using a VALUES
> clause for instance?), for local evaluation
>> => would not work against a remote storage if the function arguments involve
>> blank nodes (unless we can enforce some kind of skolemization or filtering?)

This sounds like the index joins we already do in many cases; however, as already noted, these can't be applied in certain situations

> 3) split the query into a remote query to pull pre-joined/pre-filtered data
> (if possible) as a Multiset or Graph from the expanded query patterns, and
> evaluate locally the remaining algebra and computations?
>> => as I am not fluent in the algebra evaluation algorithms and API, I feel
>> this approach would be too much for me alone :P
>> => this also includes being able to identify which SPARQL extensions are
>> natively supported by the underlying storage (even if this is more related
>> to configuration and possible SPARQL Service Description support).
>
> There may be some more ideas to dig into, but I've reached the limits of my
> imagination for now, so feel free to add anything if you see any other way to
> handle this ;)

In some sense it might simplify things to just do an in-memory implementation for the time being and consider supporting arbitrary storage layers as a later extension, i.e. an iterative development approach.

Rob

> Thanks for your advice.
>
> Cheers,
> Max.
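Option (2) above, pre-binding the function's variables from the LHS multiset with a VALUES clause, might look like the following sketch, reusing the getFather example (the prefix URI and the individual names are hypothetical):

```sparql
# Sketch of option (2): the local engine has already evaluated the LHS
# pattern (?child a kennedys:Person) and injects the resulting bindings into
# the function-body query sent to the remote store via a VALUES clause, so
# the server only evaluates the function body for those bindings.
PREFIX kennedys: <http://example.org/kennedys#>

SELECT ?child ?father
WHERE {
  VALUES ?child { kennedys:john kennedys:robert }   # pre-bound from the LHS multiset
  ?child kennedys:parent ?father .
  ?father kennedys:gender kennedys:male .
}
```

As Max notes, this breaks down when the LHS bindings contain blank nodes, since blank node labels cannot be transmitted to a remote store as constants without some form of skolemization.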
From: Rob V. <rv...@do...> - 2015-02-03 21:39:21
Max

Thanks for the updates, comments are inline:

From: Max - Micrologiciel <ma...@mi...>
Date: Thursday, 29 January 2015 03:58
To: Rob Vesse <rv...@do...>
Subject: About the SPIN Processor

> Hi Rob,
>
> First of all, let me wish you a happy and successful year for 2015.

Thanks and same to you too

> I'm still working on the inclusion of the SPIN layer into dotNetRDF.
> Since last year's first draft, much of my work has been more experimental (so
> not really committable) than formal, and most often bound to checking whether
> and how the different issues I encountered could be handled.
>
> So before going further (I've been delaying this too much already...) I
> wanted to get your advice on the issues I encountered.
>
> Here is a summary of where I stand for now.
>
> About SPIN, my first conclusions came to this:
> * since SPIN user-defined functions and properties rely mainly on SPARQL, it
> should be possible to handle those through SPARQL rewriting.

Yes, I think that would be a reasonable approach; the current API may make this harder than it needs to be. Hopefully the 1.9 changes will make this much easier in the longer term

> * since SPIN allows data-integrity features
> (constructors/rules/constraints...), this requires capturing each SPARQL
> Update command to perform the SPIN pipeline afterwards.
> * since those data-integrity features may signal violations, the command
> results must be cancellable somehow. This implies that there must be some
> transactional support in the processor.

Yes; however, the SPARQL specs already require that updates within a request (of which there may be many) are applied atomically, so any SPARQL processor will already need to support transactions in some sense

> Based on the current state of the art, we are faced with the following
> issues:
> * pipelining the SPIN integrity chain requires handling multiple SPARQL
> updates/queries in a single transactional context.
>> * HTTP being stateless, there is no way (yet? see
>> http://people.apache.org/~sallen/sparql11-transaction/) to span transactions
>> over multiple requests

Yes, this is an issue; some 3rd-party stores define their own protocols for transactions, e.g. Stardog.

If you have a 3rd-party store that doesn't support any kind of transactions then the solution may be simply to say that we can't support that.

>> * subsequently, supporting transactions locally requires handling proper
>> isolation between clients but also possible transaction concurrency
>> problems.

Yep, right now dotNetRDF's in-memory implementation uses MRSW (Multiple Reader or Single Writer) concurrency, so we avoid concurrency issues by only allowing a single write transaction to be in progress and blocking all reads while transactions are in progress

>> * It also requires simulating the transactional environment on the
>> underlying server to alleviate as much as possible the memory consumption by
>> dotNetRDF or the storage server.

Yes, ideally the server should manage the transactions, but of course if you are trying to layer SPIN over a server that doesn't support SPIN then some state necessarily has to be maintained by the client.

This perhaps begs the question of how general the SPIN implementation should be and whether it should be limited to a subset of suitable stores.

> * SPIN to SPARQL rewriting also raises some problems due to:
>> * how sub-queries are processed according to the recommendation
>> * some difficulties finding an equivalent evaluation strategy for some forms
>> of property paths.

Can you elaborate on what you mean by this?

Is the sub-query stuff related to the use of SPIN functions and templates which potentially require substituting some constants into the sub-query prior to execution?

> Going a bit further, I tried experimenting with a simple SWP layer on top of
> the stack, with some success until I discovered my prototypes were biased by
> a Sesame bug in optional sub-query handling. Anyway, I got directly
> confronted with how to handle the natively provided SWP functions which
> cannot be converted into SPARQL. The problem also arises at the basic SPIN
> level if you consider extensions like SPINx, so it may be best and simpler to
> handle the case here?

I would start by getting the core working and worry about how to add the extra layers later. Presumably some of the non-SPARQL things could be implemented by using the existing extension function API

> Also, I see you are getting on well with the 1.9 rewrite, and it introduces
> many API changes that could also make the implementation easier.

Yes, although much more slowly than I would have liked, since I have very little time to work on this these days. The changes are going to be quite invasive, as you've probably noticed, but this is necessary to address a lot of the shortcomings in the current API and to make it easier to improve the query engine going forward.

I keep hoping to be able to start putting out some limited alpha releases of the new API at some point this year, but then I said that in 2014 and never got far enough to do that. The new query engine still has some big pieces missing (a query parser and results IO support for a start) before it could be meaningfully used. Maybe it'll be ready later this year if I can find the time to get it into a sufficiently usable state.

Rob

> Since you have a much more global view of dotNetRDF and of the RDF/SPARQL
> ecosystem than me, your advice would be welcome. If you're available, I'd
> rather discuss this with you so we can decide how efforts and contributions
> may be best directed.
>
> Please tell me what you think about this.
>
> Thanks for your consideration,
> Max.
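Rob's suggestion of implementing the non-SPARQL pieces via an extension function API could be sketched as follows. This is a hypothetical registry, not dotNetRDF's actual extension mechanism; all names here are illustrative:

```csharp
// Hypothetical sketch, NOT dotNetRDF's real extension function API: a
// registry mapping a SPIN function URI to a local evaluator, so the
// in-memory engine can fall back to per-solution evaluation when a function
// cannot be rewritten into plain SPARQL.
using System;
using System.Collections.Generic;

public delegate object SpinFunctionEvaluator(IReadOnlyList<object> args);

public static class SpinFunctionRegistry
{
    private static readonly Dictionary<string, SpinFunctionEvaluator> Evaluators
        = new Dictionary<string, SpinFunctionEvaluator>();

    public static void Register(string functionUri, SpinFunctionEvaluator evaluator)
        => Evaluators[functionUri] = evaluator;

    // Called by the (hypothetical) engine when it meets a function URI it
    // does not natively understand; returns false if no evaluator is known.
    public static bool TryEvaluate(string functionUri,
                                   IReadOnlyList<object> args,
                                   out object result)
    {
        result = null;
        if (!Evaluators.TryGetValue(functionUri, out var eval)) return false;
        result = eval(args);   // substitution-style evaluation per solution
        return true;
    }
}
```

SPINx or 3rd-party extensions would then plug in by registering an evaluator for their function URI, leaving the rewriting path untouched for functions that do translate cleanly into SPARQL.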
From: Rob V. <rv...@do...> - 2015-01-07 12:54:54
Yes, if you want to extract things into other libraries then please go ahead; if you need to make the current Query API mutable then please do this as needed. Longer term I want things to be more modular anyway, and a fluent Query API is an obvious candidate for a separate module.

In the 1.9 branch I have rewritten much of the Query API to address many of the current limitations, which includes making it a mutable, interface-based API.

Rob

On 06/01/2015 21:00, "tom...@gm..." <tom...@gm...> wrote:

> Hi Rob
>
> I've been many times in need of the unfinished features of the query builder
> API. Especially CONSTRUCT, but SPARQL Update won't be hard to add either.
> I'd like to finally finish them, but I was wondering whether it would somehow
> be possible to extract it from the core library so that it can be released
> independently. I'd like that because dotNetRDF releases quite infrequently.
>
> The problem is that I use the query API, of which all classes are internal.
> Do you see a good way around that?
>
> Thanks,
> Tom
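For illustration, a fluent CONSTRUCT builder of the kind Tom describes might look like this. This is a hypothetical sketch, not the existing dotNetRDF query builder API:

```csharp
// Hypothetical sketch, NOT the actual dotNetRDF QueryBuilder: a minimal
// fluent builder that accumulates triple patterns and emits a CONSTRUCT
// query string.
using System.Collections.Generic;
using System.Text;

public class ConstructBuilder
{
    private readonly List<string> _template = new List<string>();
    private readonly List<string> _where = new List<string>();

    public ConstructBuilder Template(string triplePattern)
    {
        _template.Add(triplePattern);   // pattern for the CONSTRUCT template
        return this;
    }

    public ConstructBuilder Where(string triplePattern)
    {
        _where.Add(triplePattern);      // pattern for the WHERE clause
        return this;
    }

    public string Build()
    {
        var sb = new StringBuilder();
        sb.Append("CONSTRUCT { ").Append(string.Join(" . ", _template)).Append(" } ");
        sb.Append("WHERE { ").Append(string.Join(" . ", _where)).Append(" }");
        return sb.ToString();
    }
}

// Usage (prefixes assumed declared elsewhere):
// var q = new ConstructBuilder()
//     .Template("?s rdfs:label ?label")
//     .Where("?s foaf:name ?label")
//     .Build();
```

Extracting such a builder into its own module is straightforward precisely because it only needs the public query surface, which is Tom's point about the internal classes being the real obstacle.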
From: <dot...@li...> - 2015-01-07 12:38:37
Send dotNetRDF-commits mailing list submissions to dot...@li...

To subscribe or unsubscribe via the World Wide Web, visit
https://lists.sourceforge.net/lists/listinfo/dotnetrdf-commits
or, via email, send a message with subject or body 'help' to dot...@li...

You can reach the person managing the list at dot...@li...

When replying, please edit your Subject line so it is more specific than "Re: Contents of dotNetRDF-commits digest..."

Today's Topics:

1. commit/dotnetrdf: rvesse: Final preparation for 1.0.7.3471 release (Bitbucket)
2. commit/dotnetrdf: 3 new changesets (Bitbucket)
3. commit/dotnetrdf: 11 new changesets (Bitbucket)
4. commit/dotnetrdf: 5 new changesets (Bitbucket)
5. commit/dotnetrdf: 3 new changesets (Bitbucket)
6. commit/dotnetrdf: rvesse: Merged in pkahle/dotnetrdf-pkahle/trix-empty-graph (pull request #30) (Bitbucket)
7. commit/dotnetrdf: 4 new changesets (Bitbucket)
8. commit/dotnetrdf: 4 new changesets (Bitbucket)

----------------------------------------------------------------------

Message: 1
Date: Fri, 21 Nov 2014 12:39:18 -0000
From: Bitbucket <com...@bi...>
Subject: [dotNetRDF Commits] commit/dotnetrdf: rvesse: Final preparation for 1.0.7.3471 release
To: dot...@li...

1 new commit in dotnetrdf:

https://bitbucket.org/dotnetrdf/dotnetrdf/commits/86baaa47f44c/
Changeset: 86baaa47f44c | User: rvesse | Date: 2014-11-21 12:31:16+00:00
Summary: Final preparation for 1.0.7.3471 release
Affected #: 38 files

Repository URL: https://bitbucket.org/dotnetrdf/dotnetrdf/

This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email.

------------------------------

Message: 2
Date: Fri, 21 Nov 2014 13:21:20 -0000
From: Bitbucket <com...@bi...>
Subject: [dotNetRDF Commits] commit/dotnetrdf: 3 new changesets
To: dot...@li...

3 new commits in dotnetrdf:

https://bitbucket.org/dotnetrdf/dotnetrdf/commits/661f427e4b21/
Changeset: 661f427e4b21 | User: rvesse | Date: 2014-11-21 13:19:35+00:00
Summary: Added tag 1.0.7, 107 for changeset 86baaa47f44c
Affected #: 1 file

https://bitbucket.org/dotnetrdf/dotnetrdf/commits/c5747e069c3b/
Changeset: c5747e069c3b | User: rvesse | Date: 2014-11-21 13:19:55+00:00
Summary: Prep for next dev cycle
Affected #: 1 file

https://bitbucket.org/dotnetrdf/dotnetrdf/commits/7500abe41ffd/
Changeset: 7500abe41ffd | User: rvesse | Date: 2014-11-21 13:20:58+00:00
Summary: Bump versions for next dev cycle
Affected #: 29 files

Repository URL: https://bitbucket.org/dotnetrdf/dotnetrdf/

------------------------------

Message: 3
Date: Thu, 04 Dec 2014 16:26:13 -0000
From: Bitbucket <com...@bi...>
Subject: [dotNetRDF Commits] commit/dotnetrdf: 11 new changesets
To: dot...@li...

11 new commits in dotnetrdf:

https://bitbucket.org/dotnetrdf/dotnetrdf/commits/3b7bdd44f8ed/
Changeset: 3b7bdd44f8ed | Branch: CORE-433 | User: rvesse | Date: 2014-12-04 15:28:36+00:00
Summary: Failing test case for CORE-433
Affected #: 3 files

https://bitbucket.org/dotnetrdf/dotnetrdf/commits/ee1f4f8dd9df/
Changeset: ee1f4f8dd9df | Branch: CORE-433 | User: rvesse | Date: 2014-12-04 15:34:00+00:00
Summary: Allow SparqlCsvParser to accept quoted variable names in the header row (CORE-433)
Affected #: 2 files

https://bitbucket.org/dotnetrdf/dotnetrdf/commits/ecb51325a95e/
Changeset: ecb51325a95e | Branch: CORE-433 | User: rvesse | Date: 2014-12-04 15:35:09+00:00
Summary: Note CORE-433 fix in Change Log
Affected #: 1 file

https://bitbucket.org/dotnetrdf/dotnetrdf/commits/8d36d4703ebb/
Changeset: 8d36d4703ebb | Branch: CORE-433 | User: rvesse | Date: 2014-12-04 15:37:18+00:00
Summary: Close CORE-433 branch
Affected #: 0 files

https://bitbucket.org/dotnetrdf/dotnetrdf/commits/718966c6c3a1/
Changeset: 718966c6c3a1 | User: rvesse | Date: 2014-12-04 15:40:19+00:00
Summary: Merge CORE-433 fix to default
Affected #: 5 files

https://bitbucket.org/dotnetrdf/dotnetrdf/commits/e8e3670c11c8/
Changeset: e8e3670c11c8 | Branch: CORE-432 | User: rvesse | Date: 2014-12-04 15:56:12+00:00
Summary: Failing test cases for CORE-432 - relative URIs in SPARQL results not producing an informative exception
Affected #: 4 files

https://bitbucket.org/dotnetrdf/dotnetrdf/commits/6a787f0c6796/
Changeset: 6a787f0c6796 | Branch: CORE-432 | User: rvesse | Date: 2014-12-04 16:02:10+00:00
Summary: Fix SparqlJsonParser to gracefully handle relative URIs with a more informative error (CORE-432)
Affected #: 2 files

https://bitbucket.org/dotnetrdf/dotnetrdf/commits/1746d1eb4c7a/
Changeset: 1746d1eb4c7a | Branch: CORE-432 | User: rvesse | Date: 2014-12-04 16:09:28+00:00
Summary: Failing tests cases that demonstrate CORE-432 also applies to SparqlXmlParser
Affected #: 2 files
https://bitbucket.org/dotnetrdf/dotnetrdf/commits/d7ec1fcd2731/ Changeset: d7ec1fcd2731 Branch: CORE-432 User: rvesse Date: 2014-12-04 16:14:02+00:00 Summary: Fix SparqlXmlParser to more gracefully report bad URIs with an informative error (CORE-433) Affected #: 1 file https://bitbucket.org/dotnetrdf/dotnetrdf/commits/0dbc4df67837/ Changeset: 0dbc4df67837 Branch: CORE-432 User: rvesse Date: 2014-12-04 16:22:07+00:00 Summary: Further tests cases for bad URIs in SPARQL Results (CORE-432) Affected #: 6 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/58c633781ba0/ Changeset: 58c633781ba0 User: rvesse Date: 2014-12-04 16:23:50+00:00 Summary: Merge fixes for CORE-432 Affected #: 11 files Repository URL: https://bitbucket.org/dotnetrdf/dotnetrdf/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. ------------------------------ Message: 4 Date: Thu, 04 Dec 2014 16:53:43 -0000 From: Bitbucket <com...@bi...> Subject: [dotNetRDF Commits] commit/dotnetrdf: 5 new changesets To: dot...@li... 
Message-ID: <201...@ap...> Content-Type: text/plain; charset="utf-8" 5 new commits in dotnetrdf: https://bitbucket.org/dotnetrdf/dotnetrdf/commits/8513e3c3de9a/ Changeset: 8513e3c3de9a Branch: CORE-434 User: rvesse Date: 2014-12-04 16:38:54+00:00 Summary: Fix omission of default and named graph parameters when POSTing to a remote SPARQL endpoint (CORE-434) Affected #: 3 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/fd6c665f0895/ Changeset: fd6c665f0895 Branch: CORE-434 User: rvesse Date: 2014-12-04 16:39:30+00:00 Summary: Close CORE-434 branch Affected #: 0 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/17f0988cfb07/ Changeset: 17f0988cfb07 Branch: CORE-434 User: rvesse Date: 2014-12-04 16:51:17+00:00 Summary: Remove lossy SPARQL CSV format from round tripping tests since they can never reliably pass Affected #: 1 file https://bitbucket.org/dotnetrdf/dotnetrdf/commits/1129825423ba/ Changeset: 1129825423ba Branch: CORE-434 User: rvesse Date: 2014-12-04 16:52:10+00:00 Summary: Close CORE-434 branch Affected #: 0 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/285262ca5e79/ Changeset: 285262ca5e79 User: rvesse Date: 2014-12-04 16:53:19+00:00 Summary: Merge in CORE-434 fix Affected #: 4 files Repository URL: https://bitbucket.org/dotnetrdf/dotnetrdf/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. ------------------------------ Message: 5 Date: Mon, 15 Dec 2014 11:57:00 -0000 From: Bitbucket <com...@bi...> Subject: [dotNetRDF Commits] commit/dotnetrdf: 3 new changesets To: dot...@li... Message-ID: <201...@ap...> Content-Type: text/plain; charset="utf-8" 3 new commits in dotnetrdf: https://bitbucket.org/dotnetrdf/dotnetrdf/commits/6e144bd67f6a/ Changeset: 6e144bd67f6a Branch: trix-empty-graph User: pkahle Date: 2014-12-12 04:42:04+00:00 Summary: Branching to fix a problem. 
Affected #: 0 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/5f127ed1c60a/ Changeset: 5f127ed1c60a Branch: trix-empty-graph User: pkahle Date: 2014-12-12 04:48:25+00:00 Summary: Fix for TriXParser with empty graphs. Affected #: 5 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/66c4eb846b71/ Changeset: 66c4eb846b71 User: rvesse Date: 2014-12-15 11:56:53+00:00 Summary: Merged in pkahle/dotnetrdf-pkahle/trix-empty-graph (pull request #30) Fix for TriXParser with empty graphs Affected #: 5 files Repository URL: https://bitbucket.org/dotnetrdf/dotnetrdf/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. ------------------------------ Message: 6 Date: Mon, 15 Dec 2014 11:57:01 -0000 From: Bitbucket <com...@bi...> Subject: [dotNetRDF Commits] commit/dotnetrdf: rvesse: Merged in pkahle/dotnetrdf-pkahle/trix-empty-graph (pull request #30) To: dot...@li... Message-ID: <201...@ap...> Content-Type: text/plain; charset="utf-8" 1 new commit in dotnetrdf: https://bitbucket.org/dotnetrdf/dotnetrdf/commits/66c4eb846b71/ Changeset: 66c4eb846b71 User: rvesse Date: 2014-12-15 11:56:53+00:00 Summary: Merged in pkahle/dotnetrdf-pkahle/trix-empty-graph (pull request #30) Fix for TriXParser with empty graphs Affected #: 5 files Repository URL: https://bitbucket.org/dotnetrdf/dotnetrdf/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. ------------------------------ Message: 7 Date: Mon, 15 Dec 2014 14:56:43 -0000 From: Bitbucket <com...@bi...> Subject: [dotNetRDF Commits] commit/dotnetrdf: 4 new changesets To: dot...@li... 
Message-ID: <201...@ap...> Content-Type: text/plain; charset="utf-8" 4 new commits in dotnetrdf: https://bitbucket.org/dotnetrdf/dotnetrdf/commits/49bf9285e4e2/ Changeset: 49bf9285e4e2 Branch: TOOLS-436 User: rvesse Date: 2014-12-15 12:49:31+00:00 Summary: Fix hang when doing a Replace All with a regular expression that continually matches (TOOLS-436) Affected #: 12 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/cfbfd646de96/ Changeset: cfbfd646de96 Branch: TOOLS-436 User: rvesse Date: 2014-12-15 12:51:04+00:00 Summary: Note TOOLS-436 fix in Change Log Affected #: 1 file https://bitbucket.org/dotnetrdf/dotnetrdf/commits/eed6cdc759ab/ Changeset: eed6cdc759ab Branch: TOOLS-436 User: rvesse Date: 2014-12-15 12:51:24+00:00 Summary: Note TOOLS-436 fix in Change Log Affected #: 1 file https://bitbucket.org/dotnetrdf/dotnetrdf/commits/553bca3cbede/ Changeset: 553bca3cbede Branch: TOOLS-436 User: rvesse Date: 2014-12-15 13:33:44+00:00 Summary: Some fixes to Find and Replace to honour capture groups in replace string when using regex (TOOLS-435) Affected #: 12 files Repository URL: https://bitbucket.org/dotnetrdf/dotnetrdf/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. ------------------------------ Message: 8 Date: Wed, 07 Jan 2015 12:38:26 -0000 From: Bitbucket <com...@bi...> Subject: [dotNetRDF Commits] commit/dotnetrdf: 4 new changesets To: dot...@li... 
Message-ID: <201...@ap...> Content-Type: text/plain; charset="utf-8" 4 new commits in dotnetrdf: https://bitbucket.org/dotnetrdf/dotnetrdf/commits/f57179f7b69d/ Changeset: f57179f7b69d User: rvesse Date: 2015-01-07 11:32:43+00:00 Summary: Merge TOOLS-435/TOOLS-436 fixes Affected #: 13 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/911f1d8dc846/ Changeset: 911f1d8dc846 Branch: CORE-439 User: rvesse Date: 2015-01-07 12:14:58+00:00 Summary: Add failing tests cases for CORE-439 Affected #: 17 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/0dfcd30a46ea/ Changeset: 0dfcd30a46ea Branch: CORE-439 User: rvesse Date: 2015-01-07 12:36:37+00:00 Summary: Fix for lazy evaluation causing very slow evaluation in some cases (CORE-439) Affected #: 4 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/1d9213ddc251/ Changeset: 1d9213ddc251 User: rvesse Date: 2015-01-07 12:37:49+00:00 Summary: Merge in CORE-439 fixes Affected #: 19 files Repository URL: https://bitbucket.org/dotnetrdf/dotnetrdf/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. ------------------------------ ------------------------------------------------------------------------------ Dive into the World of Parallel Programming! The Go Parallel Website, sponsored by Intel and developed in partnership with Slashdot Media, is your hub for all things parallel software development, from weekly thought leadership blogs to news, videos, case studies, tutorials and more. Take a look and join the conversation now. http://goparallel.sourceforge.net ------------------------------ _______________________________________________ dotNetRDF-commits mailing list dot...@li... https://lists.sourceforge.net/lists/listinfo/dotnetrdf-commits End of dotNetRDF-commits Digest, Vol 25, Issue 1 ************************************************ |
From: <tom...@gm...> - 2015-01-06 21:00:51
|
Hi Rob

I have often been in need of the unfinished features of the query builder API, especially CONSTRUCT, though SPARQL Update wouldn't be hard to add either. I'd finally like to finish them, but I was wondering whether it would be possible to extract the builder from the core library so that it can be released independently. I'd like that because dotNetRDF releases quite infrequently.

The problem is that I use the query API, all of whose classes are internal. Do you see a good way around that?

Thanks,
Tom |
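One conventional way around internal classes, sketched here without verification against the dotNetRDF codebase, is an InternalsVisibleTo attribute in the core library granting a friend assembly access; the assembly name used below is invented for illustration, not an existing package:

```csharp
// Hedged sketch only: expose the core library's internal query classes to a
// hypothetical, separately released builder assembly. This line would go in
// the core project's AssemblyInfo.cs; "dotNetRDF.Query.Builder" is an
// assumed name chosen for this example.
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("dotNetRDF.Query.Builder")]
```

Note that if the core assembly is strong-named, the friend assembly must be strong-named as well and the attribute string must include its full public key.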
From: Rob V. <rv...@do...> - 2014-12-15 12:00:26
|
Peter

The existing PCL target already only supports Silverlight 5, so I don't think that is an issue. I'm not expecting you to actively maintain and manage the builds; past experience has been that once someone like yourself does the initial legwork to make them build, things generally tick along nicely.

Thanks,

Rob

From: Peter Kahle <pk...@ka...>
Reply-To: dotNetRDF Developer Discussion and Feature Request <dot...@li...>
Date: Thursday, 11 December 2014 20:04
To: "dot...@li..." <dot...@li...>
Subject: Re: [dotNetRDF-Develop] TriXWriter/Reader and Empty Graphs

> Thanks Rob. I’ll have a pull request to fix this immediate bug when I get the
> chance.
>
> I’ll try to throw together a test project to test the portable package against
> WinRT and/or Windows Phone 8.1 RT, as well as the minor change to enable WinRT
> on Windows Phone as a target. I have to admit I don’t really understand all
> the portable target stuff, but I guess I’ll have to learn it sometime. I’ll
> also dig in and make a change to the nuspec file to make it work on Universal
> apps (as well as the existing types), and I guess I’ll have to do the same for
> VDS.Common (which I wasn’t using, apparently). A quick check tells me that the
> only profile that supports universal apps supports only Silverlight 5, not 4.
> If that’s going to be a problem, I might be biting off more than I can chew,
> at least in the short term.
>
> If you’re looking for someone to actively manage the portable builds, that’s
> probably not me, as my interest and time waxes and wanes. But I’ve been on
> this list for a couple of years now, so I’ll try to monitor it and help out
> when I can.
>
> Peter
>
> Sent from Windows Mail
>
> From: Rob Vesse <mailto:rv...@do...>
> Sent: Thursday, December 11, 2014 05:10
> To: dot...@li...
>
> Peter
>
> Question 1
>
> It is not the query processor but the ISparqlDataset that is used to wrap the
> ITripleStore instance for SPARQL execution which adds the empty default graph.
> The SPARQL specification says queries and updates operate over a dataset which > always has at least an empty default graph and possibly some named graphs so > it makes logic easier elsewhere to always ensure an empty default graph is > present > > Question 2 > > Yes though if it is the default graph then that could probably be omitted. > > In general many RDF/SPARQL systems won't necessarily support empty graphs > explicitly (particularly quad oriented systems) so it would not be > unreasonable to omit empty graphs entirely. > > Question 3 > > Yes it should, please submit a fix if you have time > > Question 4 > > Not necessarily, going back to Question 2 not all systems support empty graphs > explicitly. Longer term I intend for dotNetRDF to become one of them so I > would just ignore empty graphs in the TriX input, the DOM based parsing mode > used on non-portable platforms likely already does just this. > > If everything is working on more platforms than we currently support it would > be great to have a pull request that expands our supported .Net > builds/portable profile. Personally I don't have the time/energy to keep > track of all the multitude of profiles that MS keep introducing (which IMO is > the single worst thing they ever did to .Net) so if you have some time to do > so that'd be much appreciated. > > Rob > > From: Peter Kahle <pk...@ka...> > Reply-To: dotNetRDF Developer Discussion and Feature Request > <dot...@li...> > Date: Wednesday, 10 December 2014 03:32 > To: "dot...@li..." > <dot...@li...> > Subject: [dotNetRDF-Develop] TriXWriter/Reader and Empty Graphs > >> Hi Folks, >> >> In playing around testing against WinRT/WP 8.1 (CORE-429 ), I’ve found what I >> think is a bug somewhere in the portable version of the TriXWriter/Parser >> stack (or possibly in the SPARQL engine), but I don’t really know where to go >> poking to fix it. >> >> I’m querying against a TripleStore with two named graphs. 
This works fine, >> but when calling “new LeviathanQueryProcessor(myTripleStore)”, a new default >> graph (with a null Uri) is added to the store. >> >> Then, when I go to save the store using TriXWriter, it writes “<graph />” as >> the third graph element. This seems to be correct behavior to me, or at least >> potentially correct. >> >> Unfortunately, when I go to load the file, I see: >> >> VDS.RDF.Parsing.RdfParseException was unhandled by user code >> HResult=-2146233088 >> Message=[Source XML] >> >> >> Unexpected </TriX> encountered, either <triple> elements or a </graph> was >> expected >> Source=dotNetRDF >> EndLine=-1 >> EndPosition=-1 >> HasPositionInformation=false >> StartLine=-1 >> StartPosition=-1 >> StackTrace: >> at VDS.RDF.Parsing.TriXParser.TryParseGraph(XmlReader reader, >> IRdfHandler handler) >> at VDS.RDF.Parsing.TriXParser.TryParseGraphset(XmlReader reader, >> IRdfHandler handler) >> at VDS.RDF.Parsing.TriXParser.Load(IRdfHandler handler, TextReader >> input) >> at VDS.RDF.Parsing.TriXParser.Load(ITripleStore store, TextReader >> input) >> at TripleNote.Store.<LoadStore>d__14.MoveNext() >> --- End of stack trace from previous location where exception was thrown >> --- >> at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task >> task) >> at >> System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotifi >> cation(Task task) >> at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult() >> at TripleNote.Store.<test>d__0.MoveNext() >> InnerException: >> >> I’m guessing this is where the problem lies, and that the correct behavior is >> to check if we’ve got a blank element in the TriXParser.TryParseGraph. >> >> The questions are: >> 1. Should LeviathanQueryProcessor add a default graph to the TripleStore? >> 2. >> 3. Should an empty graph be written by the TriXWriter? >> 4. >> 5. Should an empty graph be accepted by the TriXParser? I think yes. >> 6. >> 7. 
If so, should it result in an empty graph in the TripleStore? This would >> require a change to IRdfHandler to handle this case. >> >> Incidentally, while I’m not testing it in any organized manner, all of this >> is working in WinRT and WP8.1 WinRT apps (a universal app, actually), >> including reading, writing, querying, and asserting triples. >> >> Peter >> Sent from Windows Mail >> >> ----------------------------------------------------------------------------- >> - Download BIRT iHub F-Type - The Free Enterprise-Grade BIRT Server from >> Actuate! Instantly Supercharge Your Business Reports and Dashboards with >> Interactivity, Sharing, Native Excel Exports, App Integration & more Get >> technology previously reserved for billion-dollar corporations, FREE >> http://pubads.g.doubleclick.net/gampad/clk?id=164703151&iu=/4140/ostg.clktrk_ >> ______________________________________________ dotNetRDF-develop mailing list >> dot...@li... >> https://lists.sourceforge.net/lists/listinfo/dotnetrdf-develop >> <https://lists.sourceforge.net/lists/listinfo/dotnetrdf-develop> > ------------------------------------------------------------------------------ > Download BIRT iHub F-Type - The Free Enterprise-Grade BIRT Server from > Actuate! Instantly Supercharge Your Business Reports and Dashboards with > Interactivity, Sharing, Native Excel Exports, App Integration & more Get > technology previously reserved for billion-dollar corporations, FREE > http://pubads.g.doubleclick.net/gampad/clk?id=164703151&iu=/4140/ostg.clktrk__ > _____________________________________________ dotNetRDF-develop mailing list > dot...@li... > https://lists.sourceforge.net/lists/listinfo/dotnetrdf-develop |
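Rob's answer to Question 1 above can be made concrete with a short sketch. InMemoryDataset and LeviathanQueryProcessor are existing dotNetRDF types, but treat the exact usage shown here as an illustrative assumption against the 1.0.x API rather than verified code:

```csharp
using VDS.RDF;
using VDS.RDF.Query;
using VDS.RDF.Query.Datasets;

// A store containing only named graphs...
TripleStore store = new TripleStore();

// ...is wrapped in an ISparqlDataset for SPARQL execution. It is this
// wrapper, not the query processor, that ensures a default graph is
// present: per the SPARQL specification a dataset always has at least
// an empty default graph.
ISparqlDataset dataset = new InMemoryDataset(store);
LeviathanQueryProcessor processor = new LeviathanQueryProcessor(dataset);
```

This is why the extra default graph appears in the store after constructing the processor, even though the application only ever asserted named graphs.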
From: Peter K. <pk...@ka...> - 2014-12-11 20:04:17
|
Thanks Rob. I’ll have a pull request to fix this immediate bug when I get the chance. I’ll try to throw together a test project to test the portable package against WinRT and/or Windows Phone 8.1 RT, as well as the minor change to enable WinRT on Windows Phone as a target. I have to admit I don’t really understand all the portable target stuff, but I guess I’ll have to learn it sometime. I’ll also dig in and make a change to the nuspec file to make it work on Universal apps (as well as the existing types), and I guess I’ll have to do the same for VDS.Common (which I wasn’t using, apparently). A quick check tells me that the only profile that supports universal apps supports only Silverlight 5, not 4. If that’s going to be a problem, I might be biting off more than I can chew, at least in the short term. If you’re looking for someone to actively manage the portable builds, that’s probably not me, as my interest and time waxes and wanes. But I’ve been on this list for a couple of years now, so I’ll try to monitor it and help out when I can. Peter Sent from Windows Mail From: Rob Vesse<mailto:rv...@do...> Sent: Thursday, December 11, 2014 05:10 To: dot...@li...<mailto:dot...@li...> Peter Question 1 It is not the query processor but the ISparqlDataset that is used to wrap the ITripleStore instance for SPARQL execution which adds the empty default graph. The SPARQL specification says queries and updates operate over a dataset which always has at least an empty default graph and possibly some named graphs so it makes logic easier elsewhere to always ensure an empty default graph is present Question 2 Yes though if it is the default graph then that could probably be omitted. In general many RDF/SPARQL systems won't necessarily support empty graphs explicitly (particularly quad oriented systems) so it would not be unreasonable to omit empty graphs entirely. 
Question 3 Yes it should, please submit a fix if you have time Question 4 Not necessarily, going back to Question 2 not all systems support empty graphs explicitly. Longer term I intend for dotNetRDF to become one of them so I would just ignore empty graphs in the TriX input, the DOM based parsing mode used on non-portable platforms likely already does just this. If everything is working on more platforms than we currently support it would be great to have a pull request that expands our supported .Net builds/portable profile. Personally I don't have the time/energy to keep track of all the multitude of profiles that MS keep introducing (which IMO is the single worst thing they ever did to .Net) so if you have some time to do so that'd be much appreciated. Rob From: Peter Kahle <pk...@ka...<mailto:pk...@ka...>> Reply-To: dotNetRDF Developer Discussion and Feature Request <dot...@li...<mailto:dot...@li...>> Date: Wednesday, 10 December 2014 03:32 To: "dot...@li...<mailto:dot...@li...>" <dot...@li...<mailto:dot...@li...>> Subject: [dotNetRDF-Develop] TriXWriter/Reader and Empty Graphs Hi Folks, In playing around testing against WinRT/WP 8.1 (CORE-429 ), I’ve found what I think is a bug somewhere in the portable version of the TriXWriter/Parser stack (or possibly in the SPARQL engine), but I don’t really know where to go poking to fix it. I’m querying against a TripleStore with two named graphs. This works fine, but when calling “new LeviathanQueryProcessor(myTripleStore)”, a new default graph (with a null Uri) is added to the store. Then, when I go to save the store using TriXWriter, it writes “<graph />” as the third graph element. This seems to be correct behavior to me, or at least potentially correct. 
Unfortunately, when I go to load the file, I see: VDS.RDF.Parsing.RdfParseException was unhandled by user code HResult=-2146233088 Message=[Source XML] Unexpected </TriX> encountered, either <triple> elements or a </graph> was expected Source=dotNetRDF EndLine=-1 EndPosition=-1 HasPositionInformation=false StartLine=-1 StartPosition=-1 StackTrace: at VDS.RDF.Parsing.TriXParser.TryParseGraph(XmlReader reader, IRdfHandler handler) at VDS.RDF.Parsing.TriXParser.TryParseGraphset(XmlReader reader, IRdfHandler handler) at VDS.RDF.Parsing.TriXParser.Load(IRdfHandler handler, TextReader input) at VDS.RDF.Parsing.TriXParser.Load(ITripleStore store, TextReader input) at TripleNote.Store.<LoadStore>d__14.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult() at TripleNote.Store.<test>d__0.MoveNext() InnerException: I’m guessing this is where the problem lies, and that the correct behavior is to check if we’ve got a blank element in the TriXParser.TryParseGraph. The questions are: 1. Should LeviathanQueryProcessor add a default graph to the TripleStore? 2. Should an empty graph be written by the TriXWriter? 3. Should an empty graph be accepted by the TriXParser? I think yes. 4. If so, should it result in an empty graph in the TripleStore? This would require a change to IRdfHandler to handle this case. Incidentally, while I’m not testing it in any organized manner, all of this is working in WinRT and WP8.1 WinRT apps (a universal app, actually), including reading, writing, querying, and asserting triples. Peter Sent from Windows Mail ------------------------------------------------------------------------------ Download BIRT iHub F-Type - The Free Enterprise-Grade BIRT Server from Actuate! 
Instantly Supercharge Your Business Reports and Dashboards with Interactivity, Sharing, Native Excel Exports, App Integration & more Get technology previously reserved for billion-dollar corporations, FREE http://pubads.g.doubleclick.net/gampad/clk?id=164703151&iu=/4140/ostg.clktrk_______________________________________________ dotNetRDF-develop mailing list dot...@li...<mailto:dot...@li...> https://lists.sourceforge.net/lists/listinfo/dotnetrdf-develop |
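For reference, a minimal TriX document illustrating the situation Peter describes; the graph and resource URIs below are invented, and the namespace is the standard TriX one:

```xml
<TriX xmlns="http://www.w3.org/2004/03/trix-1/">
  <graph>
    <uri>http://example.org/graph1</uri>
    <triple>
      <uri>http://example.org/subject</uri>
      <uri>http://example.org/predicate</uri>
      <uri>http://example.org/object</uri>
    </triple>
  </graph>
  <!-- The empty element the writer emits for the store's empty default
       graph; this is what the portable TriXParser rejects -->
  <graph />
</TriX>
```

The fix under discussion is for TryParseGraph to accept such an empty (or self-closing) graph element rather than raising "Unexpected </TriX> encountered".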
From: Rob V. <rv...@do...> - 2014-12-11 10:10:41
|
Peter Question 1 It is not the query processor but the ISparqlDataset that is used to wrap the ITripleStore instance for SPARQL execution which adds the empty default graph. The SPARQL specification says queries and updates operate over a dataset which always has at least an empty default graph and possibly some named graphs so it makes logic easier elsewhere to always ensure an empty default graph is present Question 2 Yes though if it is the default graph then that could probably be omitted. In general many RDF/SPARQL systems won't necessarily support empty graphs explicitly (particularly quad oriented systems) so it would not be unreasonable to omit empty graphs entirely. Question 3 Yes it should, please submit a fix if you have time Question 4 Not necessarily, going back to Question 2 not all systems support empty graphs explicitly. Longer term I intend for dotNetRDF to become one of them so I would just ignore empty graphs in the TriX input, the DOM based parsing mode used on non-portable platforms likely already does just this. If everything is working on more platforms than we currently support it would be great to have a pull request that expands our supported .Net builds/portable profile. Personally I don't have the time/energy to keep track of all the multitude of profiles that MS keep introducing (which IMO is the single worst thing they ever did to .Net) so if you have some time to do so that'd be much appreciated. Rob From: Peter Kahle <pk...@ka...> Reply-To: dotNetRDF Developer Discussion and Feature Request <dot...@li...> Date: Wednesday, 10 December 2014 03:32 To: "dot...@li..." <dot...@li...> Subject: [dotNetRDF-Develop] TriXWriter/Reader and Empty Graphs > Hi Folks, > > In playing around testing against WinRT/WP 8.1 (CORE-429 ), I’ve found what I > think is a bug somewhere in the portable version of the TriXWriter/Parser > stack (or possibly in the SPARQL engine), but I don’t really know where to go > poking to fix it. 
> > I’m querying against a TripleStore with two named graphs. This works fine, but > when calling “new LeviathanQueryProcessor(myTripleStore)”, a new default graph > (with a null Uri) is added to the store. > > Then, when I go to save the store using TriXWriter, it writes “<graph />” as > the third graph element. This seems to be correct behavior to me, or at least > potentially correct. > > Unfortunately, when I go to load the file, I see: > > VDS.RDF.Parsing.RdfParseException was unhandled by user code > HResult=-2146233088 > Message=[Source XML] > > > Unexpected </TriX> encountered, either <triple> elements or a </graph> was > expected > Source=dotNetRDF > EndLine=-1 > EndPosition=-1 > HasPositionInformation=false > StartLine=-1 > StartPosition=-1 > StackTrace: > at VDS.RDF.Parsing.TriXParser.TryParseGraph(XmlReader reader, > IRdfHandler handler) > at VDS.RDF.Parsing.TriXParser.TryParseGraphset(XmlReader reader, > IRdfHandler handler) > at VDS.RDF.Parsing.TriXParser.Load(IRdfHandler handler, TextReader > input) > at VDS.RDF.Parsing.TriXParser.Load(ITripleStore store, TextReader > input) > at TripleNote.Store.<LoadStore>d__14.MoveNext() > --- End of stack trace from previous location where exception was thrown > --- > at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task > task) > at > System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotific > ation(Task task) > at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult() > at TripleNote.Store.<test>d__0.MoveNext() > InnerException: > > I’m guessing this is where the problem lies, and that the correct behavior is > to check if we’ve got a blank element in the TriXParser.TryParseGraph. > > The questions are: > 1. Should LeviathanQueryProcessor add a default graph to the TripleStore? > 2. > 3. Should an empty graph be written by the TriXWriter? > 4. > 5. Should an empty graph be accepted by the TriXParser? I think yes. > 6. > 7. 
If so, should it result in an empty graph in the TripleStore? This would > require a change to IRdfHandler to handle this case. > > Incidentally, while I’m not testing it in any organized manner, all of this is > working in WinRT and WP8.1 WinRT apps (a universal app, actually), including > reading, writing, querying, and asserting triples. > > Peter > Sent from Windows Mail > > ------------------------------------------------------------------------------ > Download BIRT iHub F-Type - The Free Enterprise-Grade BIRT Server from > Actuate! Instantly Supercharge Your Business Reports and Dashboards with > Interactivity, Sharing, Native Excel Exports, App Integration & more Get > technology previously reserved for billion-dollar corporations, FREE > http://pubads.g.doubleclick.net/gampad/clk?id=164703151&iu=/4140/ostg.clktrk__ > _____________________________________________ dotNetRDF-develop mailing list > dot...@li... > https://lists.sourceforge.net/lists/listinfo/dotnetrdf-develop |
From: Peter K. <pk...@ka...> - 2014-12-10 04:06:02
|
Hi Folks,

In playing around testing against WinRT/WP 8.1 (CORE-429), I’ve found what I think is a bug somewhere in the portable version of the TriXWriter/Parser stack (or possibly in the SPARQL engine), but I don’t really know where to go poking to fix it. I’m querying against a TripleStore with two named graphs. This works fine, but when calling “new LeviathanQueryProcessor(myTripleStore)”, a new default graph (with a null Uri) is added to the store. Then, when I go to save the store using TriXWriter, it writes “<graph />” as the third graph element. This seems to be correct behavior to me, or at least potentially correct. Unfortunately, when I go to load the file, I see:

VDS.RDF.Parsing.RdfParseException was unhandled by user code
  HResult=-2146233088
  Message=[Source XML] Unexpected </TriX> encountered, either <triple> elements or a </graph> was expected
  Source=dotNetRDF
  EndLine=-1
  EndPosition=-1
  HasPositionInformation=false
  StartLine=-1
  StartPosition=-1
  StackTrace:
    at VDS.RDF.Parsing.TriXParser.TryParseGraph(XmlReader reader, IRdfHandler handler)
    at VDS.RDF.Parsing.TriXParser.TryParseGraphset(XmlReader reader, IRdfHandler handler)
    at VDS.RDF.Parsing.TriXParser.Load(IRdfHandler handler, TextReader input)
    at VDS.RDF.Parsing.TriXParser.Load(ITripleStore store, TextReader input)
    at TripleNote.Store.<LoadStore>d__14.MoveNext()
    --- End of stack trace from previous location where exception was thrown ---
    at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
    at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
    at TripleNote.Store.<test>d__0.MoveNext()
  InnerException:

I’m guessing this is where the problem lies, and that the correct behavior is to check if we’ve got a blank element in TriXParser.TryParseGraph. The questions are:

1. Should LeviathanQueryProcessor add a default graph to the TripleStore?
2. Should an empty graph be written by the TriXWriter?
3. Should an empty graph be accepted by the TriXParser? I think yes.
4. If so, should it result in an empty graph in the TripleStore? This would require a change to IRdfHandler to handle this case.

Incidentally, while I’m not testing it in any organized manner, all of this is working in WinRT and WP8.1 WinRT apps (a universal app, actually), including reading, writing, querying, and asserting triples.

Peter

Sent from Windows Mail |
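To make the failure mode concrete, here is a minimal sketch of what the written TriX document might look like — the graph and triple URIs are hypothetical examples, not taken from the report. The self-closing `<graph />` written for the null-URI default graph is where the reported "Unexpected </TriX>" parse error occurs:

```xml
<?xml version="1.0" encoding="utf-8"?>
<TriX xmlns="http://www.w3.org/2004/03/trix/trix-1/">
  <graph>
    <uri>http://example.org/graph1</uri>
    <triple>
      <uri>http://example.org/subject</uri>
      <uri>http://example.org/predicate</uri>
      <plainLiteral>object</plainLiteral>
    </triple>
  </graph>
  <graph>
    <uri>http://example.org/graph2</uri>
    <!-- triples of the second named graph -->
  </graph>
  <!-- third graph element, written for the default graph that
       LeviathanQueryProcessor adds; on re-parse this empty element
       is where the parser reports the error -->
  <graph />
</TriX>
```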
From: Rob V. <rv...@do...> - 2014-12-04 12:35:56
|
Wadea,

It is helpful if you provide a complete example of what you have tried - query, data and code - as otherwise we can only guess at what may/may not be wrong.

FROM and FROM NAMED by themselves only set the graphs that different parts of the query will operate over. You still need the rest of the query to be valid with respect to those graphs; in particular, as Tom already noted, you may need to use GRAPH clauses for some/all of your query.

Finally, it is also important to understand that FROM and FROM NAMED refer to the dataset you are querying; they do not automatically pull in external graphs (from files or the web), so any graphs you mention with FROM/FROM NAMED must actually be in the dataset you are querying. From your question it sounds like you are hoping dotNetRDF will do this for you, which is not the default behaviour; however, you can use the DiskDemandTripleStore/WebDemandTripleStore if you wish to create a dataset that will automatically resolve graphs that aren't already in-memory.

Rob

From: Tomasz Pluskiewicz <tom...@gm...>
Reply-To: dotNetRDF Developer Discussion and Feature Request <dot...@li...>
Date: Thursday, 4 December 2014 10:26
To: dotNetRDF Developer Discussion and Feature Request <dot...@li...>
Subject: Re: [dotNetRDF-Develop] FROM and FROM NAMED

> Hi Wadea
>
> I understand that you should use GRAPH patterns to determine what graphs to query.
>
> For example
>
> SELECT *
> WHERE
> {
>   GRAPH ?g
>   {
>     // patterns
>   }
> }
>
> This way you can then narrow down the ?g by using it in further filters or graph patterns. Does this help?
>
> Regards,
> Tom
>
> December 4 2014 10:39 AM, "Wadea Hijjawi" <csw...@gm...> wrote:
>>
>> Dear Sir/Madam
>> I have developed a SPARQL endpoint using the dotNetRDF library. It works perfectly when choosing an external file (Turtle file), but when I try to execute a query that already has a "FROM" or "FROM NAMED" clause it returns an empty result.
>>
>> Is there an example (sample code) for using this library when we want to depend on the query itself to determine the default and named graphs?
>>
>> thank you a lot
>>
>> --
>> Wadea Asad Hijjawi
>> ASP.NET <http://asp.net/> & Oracle DB Developer
>> Civil Service Bureau
>> 06-5604181 Ext:289
>
> _______________________________________________
> dotNetRDF-develop mailing list
> dot...@li...
> https://lists.sourceforge.net/lists/listinfo/dotnetrdf-develop |
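To illustrate Rob's two points together, here is a sketch of a query that both declares its dataset and uses a GRAPH clause to match inside it. The graph URI is a hypothetical example; it must name a graph already present in the dataset being queried:

```sparql
# FROM NAMED only selects graphs already in the dataset;
# it does not fetch anything from disk or the web.
SELECT ?s ?p ?o ?g
FROM NAMED <http://example.org/graph1>
WHERE
{
  # Without a GRAPH clause the triple pattern would match only the
  # (here empty) default graph, giving an empty result.
  GRAPH ?g { ?s ?p ?o }
}
```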
From: Tomasz P. <tom...@gm...> - 2014-12-04 10:26:41
|
Hi Wadea

I understand that you should use GRAPH patterns to determine what graphs to query.

For example

SELECT *
WHERE
{
  GRAPH ?g
  {
    // patterns
  }
}

This way you can then narrow down the ?g by using it in further filters or graph patterns. Does this help?

Regards,
Tom

December 4 2014 10:39 AM, "Wadea Hijjawi" <csw...@gm...> wrote:

Dear Sir/Madam
I have developed a SPARQL endpoint using the dotNetRDF library. It works perfectly when choosing an external file (Turtle file), but when I try to execute a query that already has a "FROM" or "FROM NAMED" clause it returns an empty result.

Is there an example (sample code) for using this library when we want to depend on the query itself to determine the default and named graphs?

thank you a lot

--
Wadea Asad Hijjawi
ASP.NET <http://asp.net/> & Oracle DB Developer
Civil Service Bureau
06-5604181 Ext:289 |
From: Wadea H. <csw...@gm...> - 2014-12-04 09:14:52
|
Dear Sir/Madam,

I have developed a SPARQL endpoint using the dotNetRDF library. It works perfectly when choosing an external file (Turtle file), but when I try to execute a query that already has a "FROM" or "FROM NAMED" clause it returns an empty result.

Is there an example (sample code) for using this library when we want to depend on the query itself to determine the default and named graphs?

thank you a lot

--
Wadea Asad Hijjawi
ASP.NET <http://asp.net/> & Oracle DB Developer
Civil Service Bureau
06-5604181 Ext:289 |
From: Iván P. <iva...@gm...> - 2014-12-02 14:55:51
|
Willdo, thanks Rob! Iván El 2/12/2014 15:50, "Rob Vesse" <rv...@do...> escribió: > Ivan > > As the error message says you have Full HTTP Debugging enabled and this > prevents the calling code from using the HTTP response normally, disable > HTTP debugging and this error will go away > > Rob > > From: Iván Palomares <iva...@gm...> > Reply-To: dotNetRDF Developer Discussion and Feature Request < > dot...@li...> > Date: Tuesday, 2 December 2014 14:45 > To: dotNetRDF Developer Discussion and Feature Request < > dot...@li...> > Subject: Re: [dotNetRDF-Develop] connecting to a BrighstarDB store from > dotnetRDF > > Many thanks for this! it seems that was the main source of the problem. > It has been partially solved, there is still a minor issue but I can > figure out the reason. Now a lot of the triplestore data is shown in > console but after having finished printing in console, the following > exception arises: > > An unhandled exception of type 'VDS.RDF.RdfException' occurred in > dotNetRDF.dll > > Additional information: Full HTTP Debugging is enabled and the HTTP > response stream has been consumed and written to the standard error stream, > the stream is no longer available for calling code to consume > > I'm not pretty if what I have to do to prevent this from appearing is > doing something like try-catch statements in Java. Seemingly the code > executed completely and did what I wanted, except for this "distubring" > exception, am I wrong? > > many thanks, > Iván > > 2014-12-02 11:24 GMT+00:00 Rob Vesse <rv...@do...>: > >> Ivan >> >> Kal appears to have confirmed what I said in my email, Brightstar is >> responding with unexpected data because you are using the incorrect >> endpoint URI >> >> Note that ListGraphs() is internally just a query to the remote database >> so as you suggested and Kal has confirmed the problem is in your connection >> settings. 
>> >> Correcting the endpoint URL should hopefully resolve your issue >> >> Rob >> >> From: Kal Ahmed <ka...@ne...> >> Reply-To: dotNetRDF Developer Discussion and Feature Request < >> dot...@li...> >> Date: Tuesday, 2 December 2014 08:18 >> To: dotNetRDF Developer Discussion and Feature Request < >> dot...@li...> >> Subject: Re: [dotNetRDF-Develop] connecting to a BrighstarDB store from >> dotnetRDF >> >> Hi, >> >> I believe this is a problem with the URL you are using for the SPARQL >> endpoint. The correct SPARQL endpoint for your store is >> http://<IP>:8090/brightstar/<STORE>/sparql as described here: >> http://brightstardb.readthedocs.org/en/latest/SPARQL_Endpoint/ >> >> Cheers >> >> Kal >> >> On Mon, Dec 1, 2014 at 4:28 PM, Iván Palomares <iva...@gm...> >> wrote: >> >>> Hi, >>> Thanks to all who are trying to help with this question. In order to try >>> to find out the source of the problem, I replaced the code where the SparQL >>> query is executed and shown, by something simple in which I attempt to >>> obtain the graphs in the triplestore ans show its URIs. >>> >>> * //Create a connection to BrightstarDB SPARQL Query endpoint* >>> * SparqlConnector connect = new SparqlConnector(new >>> Uri("http://<IP>:8090/brightstar/<STORE>"));* >>> * PersistentTripleStore store = new >>> PersistentTripleStore(connect);* >>> >>> >>> * IEnumerable<Uri> graphs = connect.ListGraphs();* >>> * foreach (Uri uri in graphs)* >>> * {* >>> * Console.WriteLine(uri.ToString());* >>> * }* >>> >>> Now I obtain a new exception in the ListGraphs() line, a >>> RdfStorageException being unhandled: >>> >>> An unhandled exception of type 'VDS.RDF.Storage.RdfStorageException' >>> occurred in dotNetRDF.dll >>> >>> Additional information: An unexpected error occurred while listing >>> Graphs from the Store. 
See inner exception for further details
>>>
>>> This makes me think that perhaps the source of the problem is not in the previous query itself, but rather somewhere before, in the connection settings? Maybe the classes SparqlConnector and the PersistentTripleStore as a wrapper are not the appropriate choice to connect to a BrightstarDB triplestore?
>>>
>>> Also, to give you more info, my triplestore has been directly imported into BrightstarDB from a .owl file that was previously generated from an ontology in Protégé. The importation process in BrightstarDB was done with a management tool called Polaris; it seemingly was all done correctly, according to the output I received when I imported it.
>>>
>>> Kind regards,
>>> Iván
>>>
>>> 2014-12-01 12:02 GMT+00:00 Iván Palomares <iva...@gm...>:
>>>
>>>> Hi,
>>>> I'm currently trying to make a first connection from a .NET project with dotNetRDF, to a BrightstarDB remote endpoint in which I have a triplestore.
>>>> The code I use is quite simple, just try to connect and execute a "trivial" SPARQL query, as follows (<IP>, <PORT> and <STORE_NAME> stand for the existing address and triplestore name I want to access):
>>>>
>>>> namespace ConsoleApplication3
>>>> {
>>>>     class ReadFromBrightstarDB
>>>>     {
>>>>         static void Main(string[] args)
>>>>         {
>>>>             //Create a connection to BrightstarDB SPARQL Query endpoint
>>>>             SparqlConnector connect = new SparqlConnector(new Uri("http://<IP>:<PORT>/brightstar/<STORE_NAME>"));
>>>>             PersistentTripleStore store = new PersistentTripleStore(connect);
>>>>
>>>>             Object results = store.ExecuteQuery("SELECT * WHERE {?s ?p ?o}");
>>>>             if (results is SparqlResultSet)
>>>>             {
>>>>                 //Print out the results
>>>>                 SparqlResultSet rset = (SparqlResultSet)results;
>>>>                 foreach (SparqlResult result in rset)
>>>>                 {
>>>>                     Console.WriteLine(result.ToString());
>>>>                 }
>>>>             }
>>>>         }
>>>>     }
>>>> }
>>>>
>>>> When executing, I obtain an exception entitled "RDFParseException was unhandled", with the following description:
>>>>
>>>> An unhandled exception of type 'VDS.RDF.Parsing.RdfParseException' occurred in dotNetRDF.dll
>>>> Additional information: Unable to Parse a SPARQL Result Set from the provided XML since the Document Element is not a <sparql> element!
>>>>
>>>> Any help would be appreciated, I'm quite new to .NET in general and BrightstarDB as well as to SPARQL, so maybe there is a very silly issue here but I couldn't find it yet.
>>>> Thanks!
>>>> Iván
>>
>> --
>> Kal Ahmed
>> Director, Networked Planet Limited
>> e: kal...@ne...
>> w: www.networkedplanet.com
>
> _______________________________________________
> dotNetRDF-develop mailing list
> dot...@li...
> https://lists.sourceforge.net/lists/listinfo/dotnetrdf-develop
|
From: Rob V. <rv...@do...> - 2014-12-02 14:49:59
|
Ivan As the error message says you have Full HTTP Debugging enabled and this prevents the calling code from using the HTTP response normally, disable HTTP debugging and this error will go away Rob From: Iván Palomares <iva...@gm...> Reply-To: dotNetRDF Developer Discussion and Feature Request <dot...@li...> Date: Tuesday, 2 December 2014 14:45 To: dotNetRDF Developer Discussion and Feature Request <dot...@li...> Subject: Re: [dotNetRDF-Develop] connecting to a BrighstarDB store from dotnetRDF > Many thanks for this! it seems that was the main source of the problem. > It has been partially solved, there is still a minor issue but I can figure > out the reason. Now a lot of the triplestore data is shown in console but > after having finished printing in console, the following exception arises: > > An unhandled exception of type 'VDS.RDF.RdfException' occurred in > dotNetRDF.dll > > Additional information: Full HTTP Debugging is enabled and the HTTP response > stream has been consumed and written to the standard error stream, the stream > is no longer available for calling code to consume > > I'm not pretty if what I have to do to prevent this from appearing is doing > something like try-catch statements in Java. Seemingly the code executed > completely and did what I wanted, except for this "distubring" exception, am I > wrong? > > many thanks, > Iván > > 2014-12-02 11:24 GMT+00:00 Rob Vesse <rv...@do...>: >> Ivan >> >> Kal appears to have confirmed what I said in my email, Brightstar is >> responding with unexpected data because you are using the incorrect endpoint >> URI >> >> Note that ListGraphs() is internally just a query to the remote database so >> as you suggested and Kal has confirmed the problem is in your connection >> settings. 
>> >> Correcting the endpoint URL should hopefully resolve your issue >> >> Rob >> >> From: Kal Ahmed <ka...@ne...> >> Reply-To: dotNetRDF Developer Discussion and Feature Request >> <dot...@li...> >> Date: Tuesday, 2 December 2014 08:18 >> To: dotNetRDF Developer Discussion and Feature Request >> <dot...@li...> >> Subject: Re: [dotNetRDF-Develop] connecting to a BrighstarDB store from >> dotnetRDF >> >>> Hi, >>> >>> I believe this is a problem with the URL you are using for the SPARQL >>> endpoint. The correct SPARQL endpoint for your store is >>> http://<IP>:8090/brightstar/<STORE>/sparql as described here: >>> http://brightstardb.readthedocs.org/en/latest/SPARQL_Endpoint/ >>> >>> Cheers >>> >>> Kal >>> >>> On Mon, Dec 1, 2014 at 4:28 PM, Iván Palomares <iva...@gm...> wrote: >>>> Hi, >>>> Thanks to all who are trying to help with this question. In order to try to >>>> find out the source of the problem, I replaced the code where the SparQL >>>> query is executed and shown, by something simple in which I attempt to >>>> obtain the graphs in the triplestore ans show its URIs. >>>> >>>> //Create a connection to BrightstarDB SPARQL Query endpoint >>>> SparqlConnector connect = new SparqlConnector(new >>>> Uri("http://<IP>:8090/brightstar/<STORE>")); >>>> PersistentTripleStore store = new >>>> PersistentTripleStore(connect); >>>> >>>> >>>> IEnumerable<Uri> graphs = connect.ListGraphs(); >>>> foreach (Uri uri in graphs) >>>> { >>>> Console.WriteLine(uri.ToString()); >>>> } >>>> >>>> Now I obtain a new exception in the ListGraphs() line, a >>>> RdfStorageException being unhandled: >>>> >>>> An unhandled exception of type 'VDS.RDF.Storage.RdfStorageException' >>>> occurred in dotNetRDF.dll >>>> >>>> Additional information: An unexpected error occurred while listing Graphs >>>> from the Store. 
See inner exception for further details
>>>>
>>>> This makes me think that perhaps the source of the problem is not in the previous query itself, but rather somewhere before, in the connection settings? Maybe the classes SparqlConnector and the PersistentTripleStore as a wrapper are not the appropriate choice to connect to a BrightstarDB triplestore?
>>>>
>>>> Also, to give you more info, my triplestore has been directly imported into BrightstarDB from a .owl file that was previously generated from an ontology in Protégé. The importation process in BrightstarDB was done with a management tool called Polaris; it seemingly was all done correctly, according to the output I received when I imported it.
>>>>
>>>> Kind regards,
>>>> Iván
>>>>
>>>> 2014-12-01 12:02 GMT+00:00 Iván Palomares <iva...@gm...>:
>>>>> Hi,
>>>>> I'm currently trying to make a first connection from a .NET project with dotNetRDF, to a BrightstarDB remote endpoint in which I have a triplestore.
>>>>> The code I use is quite simple, just try to connect and execute a "trivial" SPARQL query, as follows (<IP>, <PORT> and <STORE_NAME> stand for the existing address and triplestore name I want to access):
>>>>>
>>>>> namespace ConsoleApplication3
>>>>> {
>>>>>     class ReadFromBrightstarDB
>>>>>     {
>>>>>         static void Main(string[] args)
>>>>>         {
>>>>>             //Create a connection to BrightstarDB SPARQL Query endpoint
>>>>>             SparqlConnector connect = new SparqlConnector(new Uri("http://<IP>:<PORT>/brightstar/<STORE_NAME>"));
>>>>>             PersistentTripleStore store = new PersistentTripleStore(connect);
>>>>>
>>>>>             Object results = store.ExecuteQuery("SELECT * WHERE {?s ?p ?o}");
>>>>>             if (results is SparqlResultSet)
>>>>>             {
>>>>>                 //Print out the results
>>>>>                 SparqlResultSet rset = (SparqlResultSet)results;
>>>>>                 foreach (SparqlResult result in rset)
>>>>>                 {
>>>>>                     Console.WriteLine(result.ToString());
>>>>>                 }
>>>>>             }
>>>>>         }
>>>>>     }
>>>>> }
>>>>>
>>>>> When executing, I obtain an exception entitled "RDFParseException was unhandled", with the following description:
>>>>>
>>>>> An unhandled exception of type 'VDS.RDF.Parsing.RdfParseException' occurred in dotNetRDF.dll
>>>>> Additional information: Unable to Parse a SPARQL Result Set from the provided XML since the Document Element is not a <sparql> element!
>>>>>
>>>>> Any help would be appreciated, I'm quite new to .NET in general and BrightstarDB as well as to SPARQL, so maybe there is a very silly issue here but I couldn't find it yet.
>>>>> Thanks!
>>>>> Iván
>>>
>>> --
>>> Kal Ahmed
>>> Director, Networked Planet Limited
>>> e: kal...@ne...
>>> w: www.networkedplanet.com
>
> _______________________________________________
> dotNetRDF-develop mailing list
> dot...@li...
> https://lists.sourceforge.net/lists/listinfo/dotnetrdf-develop |
From: Iván P. <iva...@gm...> - 2014-12-02 14:45:40
|
Many thanks for this! It seems that was the main source of the problem.

It has been partially solved; there is still a minor issue but I can't figure out the reason. Now a lot of the triplestore data is shown in the console, but after it has finished printing, the following exception arises:

An unhandled exception of type 'VDS.RDF.RdfException' occurred in dotNetRDF.dll

Additional information: Full HTTP Debugging is enabled and the HTTP response stream has been consumed and written to the standard error stream, the stream is no longer available for calling code to consume

I'm not sure whether what I have to do to prevent this from appearing is something like try-catch statements in Java. Seemingly the code executed completely and did what I wanted, except for this "disturbing" exception, am I wrong?

many thanks,
Iván

2014-12-02 11:24 GMT+00:00 Rob Vesse <rv...@do...>:

> Ivan
>
> Kal appears to have confirmed what I said in my email, Brightstar is responding with unexpected data because you are using the incorrect endpoint URI
>
> Note that ListGraphs() is internally just a query to the remote database so as you suggested and Kal has confirmed the problem is in your connection settings.
>
> Correcting the endpoint URL should hopefully resolve your issue
>
> Rob
>
> From: Kal Ahmed <ka...@ne...>
> Reply-To: dotNetRDF Developer Discussion and Feature Request <dot...@li...>
> Date: Tuesday, 2 December 2014 08:18
> To: dotNetRDF Developer Discussion and Feature Request <dot...@li...>
> Subject: Re: [dotNetRDF-Develop] connecting to a BrighstarDB store from dotnetRDF
>
> Hi,
>
> I believe this is a problem with the URL you are using for the SPARQL endpoint. The correct SPARQL endpoint for your store is http://<IP>:8090/brightstar/<STORE>/sparql as described here: http://brightstardb.readthedocs.org/en/latest/SPARQL_Endpoint/
>
> Cheers
>
> Kal
>
> On Mon, Dec 1, 2014 at 4:28 PM, Iván Palomares <iva...@gm...> wrote:
>
>> Hi,
>> Thanks to all who are trying to help with this question. In order to try to find out the source of the problem, I replaced the code where the SPARQL query is executed and shown, by something simple in which I attempt to obtain the graphs in the triplestore and show their URIs.
>>
>> //Create a connection to BrightstarDB SPARQL Query endpoint
>> SparqlConnector connect = new SparqlConnector(new Uri("http://<IP>:8090/brightstar/<STORE>"));
>> PersistentTripleStore store = new PersistentTripleStore(connect);
>>
>> IEnumerable<Uri> graphs = connect.ListGraphs();
>> foreach (Uri uri in graphs)
>> {
>>     Console.WriteLine(uri.ToString());
>> }
>>
>> Now I obtain a new exception in the ListGraphs() line, a RdfStorageException being unhandled:
>>
>> An unhandled exception of type 'VDS.RDF.Storage.RdfStorageException' occurred in dotNetRDF.dll
>>
>> Additional information: An unexpected error occurred while listing Graphs from the Store. See inner exception for further details
>>
>> This makes me think that perhaps the source of the problem is not in the previous query itself, but rather somewhere before, in the connection settings? Maybe the classes SparqlConnector and the PersistentTripleStore as a wrapper are not the appropriate choice to connect to a BrightstarDB triplestore?
>>
>> Also, to give you more info, my triplestore has been directly imported into BrightstarDB from a .owl file that was previously generated from an ontology in Protégé. The importation process in BrightstarDB was done with a management tool called Polaris; it seemingly was all done correctly, according to the output I received when I imported it.
>>
>> Kind regards,
>> Iván
>>
>> 2014-12-01 12:02 GMT+00:00 Iván Palomares <iva...@gm...>:
>>
>>> Hi,
>>> I'm currently trying to make a first connection from a .NET project with dotNetRDF, to a BrightstarDB remote endpoint in which I have a triplestore.
>>> The code I use is quite simple, just try to connect and execute a "trivial" SPARQL query, as follows (<IP>, <PORT> and <STORE_NAME> stand for the existing address and triplestore name I want to access):
>>>
>>> namespace ConsoleApplication3
>>> {
>>>     class ReadFromBrightstarDB
>>>     {
>>>         static void Main(string[] args)
>>>         {
>>>             //Create a connection to BrightstarDB SPARQL Query endpoint
>>>             SparqlConnector connect = new SparqlConnector(new Uri("http://<IP>:<PORT>/brightstar/<STORE_NAME>"));
>>>             PersistentTripleStore store = new PersistentTripleStore(connect);
>>>
>>>             Object results = store.ExecuteQuery("SELECT * WHERE {?s ?p ?o}");
>>>             if (results is SparqlResultSet)
>>>             {
>>>                 //Print out the results
>>>                 SparqlResultSet rset = (SparqlResultSet)results;
>>>                 foreach (SparqlResult result in rset)
>>>                 {
>>>                     Console.WriteLine(result.ToString());
>>>                 }
>>>             }
>>>         }
>>>     }
>>> }
>>>
>>> When executing, I obtain an exception entitled "RDFParseException was unhandled", with the following description:
>>>
>>> An unhandled exception of type 'VDS.RDF.Parsing.RdfParseException' occurred in dotNetRDF.dll
>>> Additional information: Unable to Parse a SPARQL Result Set from the provided XML since the Document Element is not a <sparql> element!
>>>
>>> Any help would be appreciated, I'm quite new to .NET in general and BrightstarDB as well as to SPARQL, so maybe there is a very silly issue here but I couldn't find it yet.
>>> Thanks!
>>> Iván
>
> --
> Kal Ahmed
> Director, Networked Planet Limited
> e: kal...@ne...
> w: www.networkedplanet.com
>
> _______________________________________________
> dotNetRDF-develop mailing list
> dot...@li...
> https://lists.sourceforge.net/lists/listinfo/dotnetrdf-develop
|
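Pulling the thread's two fixes together, here is a sketch of the corrected connection code: the /sparql suffix on the BrightstarDB endpoint URI that Kal pointed out, and HTTP debugging left disabled per Rob's note. The <IP> and <STORE> placeholders are as in the thread, and the commented-out `Options` property names are assumed 1.x-era static settings, so treat this as an illustrative sketch rather than a definitive implementation:

```csharp
using System;
using VDS.RDF;
using VDS.RDF.Query;
using VDS.RDF.Storage;

class ReadFromBrightstarDB
{
    static void Main()
    {
        // Keep HTTP debugging off so the response stream remains
        // available to calling code (assumed 1.x static settings):
        // Options.HttpDebugging = false;
        // Options.HttpFullDebugging = false;

        // Note the trailing /sparql segment on the BrightstarDB endpoint
        SparqlConnector connect = new SparqlConnector(
            new Uri("http://<IP>:8090/brightstar/<STORE>/sparql"));
        PersistentTripleStore store = new PersistentTripleStore(connect);

        Object results = store.ExecuteQuery("SELECT * WHERE { ?s ?p ?o }");
        if (results is SparqlResultSet)
        {
            // Print out the results
            SparqlResultSet rset = (SparqlResultSet)results;
            foreach (SparqlResult result in rset)
            {
                Console.WriteLine(result.ToString());
            }
        }
    }
}
```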
From: Rob V. <rv...@do...> - 2014-12-02 11:25:26
|
Ivan,

Kal appears to have confirmed what I said in my email: Brightstar is responding with unexpected data because you are using the incorrect endpoint URI.

Note that ListGraphs() is internally just a query to the remote database, so as you suggested and Kal has confirmed, the problem is in your connection settings.

Correcting the endpoint URL should hopefully resolve your issue.

Rob

From: Kal Ahmed <ka...@ne...>
Reply-To: dotNetRDF Developer Discussion and Feature Request <dot...@li...>
Date: Tuesday, 2 December 2014 08:18
To: dotNetRDF Developer Discussion and Feature Request <dot...@li...>
Subject: Re: [dotNetRDF-Develop] connecting to a BrighstarDB store from dotnetRDF

> Hi,
>
> I believe this is a problem with the URL you are using for the SPARQL endpoint. The correct SPARQL endpoint for your store is http://<IP>:8090/brightstar/<STORE>/sparql as described here: http://brightstardb.readthedocs.org/en/latest/SPARQL_Endpoint/
>
> Cheers
>
> Kal
>
> On Mon, Dec 1, 2014 at 4:28 PM, Iván Palomares <iva...@gm...> wrote:
>> Hi,
>> Thanks to all who are trying to help with this question. In order to try to find out the source of the problem, I replaced the code where the SPARQL query is executed and shown, by something simple in which I attempt to obtain the graphs in the triplestore and show their URIs.
>> >> //Create a connection to BrightstarDB SPARQL Query endpoint >> SparqlConnector connect = new SparqlConnector(new >> Uri("http://<IP>:8090/brightstar/<STORE>")); >> PersistentTripleStore store = new PersistentTripleStore(connect); >> >> >> IEnumerable<Uri> graphs = connect.ListGraphs(); >> foreach (Uri uri in graphs) >> { >> Console.WriteLine(uri.ToString()); >> } >> >> Now I obtain a new exception in the ListGraphs() line, a RdfStorageException >> being unhandled: >> >> An unhandled exception of type 'VDS.RDF.Storage.RdfStorageException' occurred >> in dotNetRDF.dll >> >> Additional information: An unexpected error occurred while listing Graphs >> from the Store. See inner exception for further details >> >> This makes me thing that perhaps the source of the problem is not in the >> previous query itself, but rather somewhere before, in the connection >> settings? Maybe the classes SaprqlConnector and the PersistentTripleStore as >> a wrapper, are not the appropriate choice to connect to a BrightstarDB >> triplestore? >> >> Also, to give you more info, my triplestore has been directly imported into >> BrightstarDB from a .owl file that was previously generated from an ontology >> in Protegé. The importation process in BrightstarDB was done with a >> management tool called Polaris it seemingly was all done correctly, according >> to the output I received when I imported it. >> >> Kind regards, >> Iván >> >> 2014-12-01 12:02 GMT+00:00 Iván Palomares <iva...@gm...>: >>> Hi, >>> I'm currently trying to make a first connection from a .NET project with >>> dotnetRDF, to a BrightstarDB remote endpoint in which I have a triplestore. 
>>> The code I use is quite simple, just try to connect and execute a "trival" >>> SPARQL query, as follows (<IP>, <PORT> and <STORE_NAME> stand for the >>> existing address >>> and triplestore name I want to access): >>> >>> namespace ConsoleApplication3 >>> { >>> class ReadFromBrightstarDB >>> { >>> static void Main(string[] args) >>> { >>> //Create a connection to BrightstarDB SPARQL Query endpoint >>> SparqlConnector connect = new SparqlConnector(new >>> Uri("http://<IP>:<PORT>/brightstar/<STORE_NAME>")); >>> PersistentTripleStore store = new PersistentTripleStore(connect); >>> >>> Object results = store.ExecuteQuery("SELECT * WHERE {?s ?p ?o}"); >>> if(results is SparqlResultSet) >>> { >>> //Print out the results >>> SparqlResultSet rset = (SparqlResultSet)results; >>> foreach (SparqlResult result in rset) >>> { >>> Console.WriteLine(result.ToString()); >>> } >>> } >>> >>> } >>> } >>> } >>> When executing, I obtain an exception entitled "RDFParseException was >>> unhandled", with the following description: >>> >>> An unhandled exception of type 'VDS.RDF.Parsing.RdfParseException' occurred >>> in dotNetRDF.dll >>> Additional information: Unable to Parse a SPARQL Result Set from the >>> provided XML since the Document Element is not a <sparql> element! >>> >>> Any help would be appreciate, I'm quite new to .NET in general and >>> BrightStarDB as well as to SparQL, so maybe there is a very silly issue here >>> but I couldn't find it yet. >>> Thanks! >>> Iván >> >> >> ----------------------------------------------------------------------------->> - >> Download BIRT iHub F-Type - The Free Enterprise-Grade BIRT Server >> from Actuate! 
Instantly Supercharge Your Business Reports and Dashboards >> with Interactivity, Sharing, Native Excel Exports, App Integration & more >> Get technology previously reserved for billion-dollar corporations, FREE >> http://pubads.g.doubleclick.net/gampad/clk?id=157005751&iu=/4140/ostg.clktrk >> _______________________________________________ >> dotNetRDF-develop mailing list >> dot...@li... >> https://lists.sourceforge.net/lists/listinfo/dotnetrdf-develop >> > > > > -- > Kal Ahmed > Director, Networked Planet Limited > e: kal...@ne... > w: www.networkedplanet.com <http://www.networkedplanet.com> > ------------------------------------------------------------------------------ > Download BIRT iHub F-Type - The Free Enterprise-Grade BIRT Server from > Actuate! Instantly Supercharge Your Business Reports and Dashboards with > Interactivity, Sharing, Native Excel Exports, App Integration & more Get > technology previously reserved for billion-dollar corporations, FREE > http://pubads.g.doubleclick.net/gampad/clk?id=157005751&iu=/4140/ostg.clktrk__ > _____________________________________________ dotNetRDF-develop mailing list > dot...@li... > https://lists.sourceforge.net/lists/listinfo/dotnetrdf-develop |
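A minimal sketch of the fix Rob and Kal describe: point SparqlConnector at the store's /sparql query service rather than the store root. The <IP> and <STORE> placeholders are the same stand-ins used throughout the thread; the using directives reflect the dotNetRDF 1.0.x namespaces and should be checked against your version.

```csharp
using System;
using VDS.RDF;
using VDS.RDF.Query;
using VDS.RDF.Storage;

class ConnectToBrightstar
{
    static void Main()
    {
        // The SPARQL query service lives under .../brightstar/<STORE>/sparql,
        // not at the store root. Querying the root returns a non-SPARQL
        // response, which the XML results parser then rejects with the
        // "Document Element is not a <sparql> element" error seen earlier.
        SparqlConnector connect = new SparqlConnector(
            new Uri("http://<IP>:8090/brightstar/<STORE>/sparql"));
        PersistentTripleStore store = new PersistentTripleStore(connect);

        Object results = store.ExecuteQuery("SELECT * WHERE { ?s ?p ?o } LIMIT 10");
        if (results is SparqlResultSet)
        {
            // Print out the results
            SparqlResultSet rset = (SparqlResultSet)results;
            foreach (SparqlResult result in rset)
            {
                Console.WriteLine(result.ToString());
            }
        }
    }
}
```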
From: Khalil A. <ka...@ne...> - 2014-12-02 08:48:23
|
Hi, I believe this is a problem with the URL you are using for the SPARQL endpoint. The correct SPARQL endpoint for your store is http://<IP>:8090/brightstar/<STORE>/sparql as described here: http://brightstardb.readthedocs.org/en/latest/SPARQL_Endpoint/ Cheers Kal On Mon, Dec 1, 2014 at 4:28 PM, Iván Palomares <iva...@gm...> wrote: > Hi, > Thanks to all who are trying to help with this question. In order to try > to find out the source of the problem, I replaced the code where the SPARQL > query is executed and shown, by something simple in which I attempt to > obtain the graphs in the triplestore and show its URIs. > > //Create a connection to BrightstarDB SPARQL Query endpoint > SparqlConnector connect = new SparqlConnector(new > Uri("http://<IP>:8090/brightstar/<STORE>")); > PersistentTripleStore store = new > PersistentTripleStore(connect); > > IEnumerable<Uri> graphs = connect.ListGraphs(); > foreach (Uri uri in graphs) > { > Console.WriteLine(uri.ToString()); > } > > Now I obtain a new exception in the ListGraphs() line, a > RdfStorageException being unhandled: > > An unhandled exception of type 'VDS.RDF.Storage.RdfStorageException' > occurred in dotNetRDF.dll > > Additional information: An unexpected error occurred while listing Graphs > from the Store. See inner exception for further details > > This makes me think that perhaps the source of the problem is not in the > previous query itself, but rather somewhere before, in the connection > settings? Maybe the classes SparqlConnector and PersistentTripleStore > as a wrapper are not the appropriate choice to connect to a BrightstarDB > triplestore? > > Also, to give you more info, my triplestore has been directly imported > into BrightstarDB from a .owl file that was previously generated from an > ontology in Protegé. The import process in BrightstarDB was done with > a management tool called Polaris; it seemingly was all done correctly, > according to the output I received when I imported it. > > Kind regards, > Iván > > 2014-12-01 12:02 GMT+00:00 Iván Palomares <iva...@gm...>: >> Hi, >> I'm currently trying to make a first connection from a .NET project with >> dotnetRDF, to a BrightstarDB remote endpoint in which I have a triplestore. >> The code I use is quite simple: just connect and execute a >> "trivial" SPARQL query, as follows (<IP>, <PORT> and <STORE_NAME> stand for >> the existing address and triplestore name I want to access): >> >> namespace ConsoleApplication3 >> { >> class ReadFromBrightstarDB >> { >> static void Main(string[] args) >> { >> //Create a connection to BrightstarDB SPARQL Query endpoint >> SparqlConnector connect = new SparqlConnector(new Uri("http://<IP>:<PORT>/brightstar/<STORE_NAME>")); >> PersistentTripleStore store = new PersistentTripleStore(connect); >> >> Object results = store.ExecuteQuery("SELECT * WHERE {?s ?p ?o}"); >> if(results is SparqlResultSet) >> { >> //Print out the results >> SparqlResultSet rset = (SparqlResultSet)results; >> foreach (SparqlResult result in rset) >> { >> Console.WriteLine(result.ToString()); >> } >> } >> >> } >> } >> } >> When executing, I obtain an exception entitled "RdfParseException was >> unhandled", with the following description: >> >> An unhandled exception of type 'VDS.RDF.Parsing.RdfParseException' >> occurred in dotNetRDF.dll >> Additional information: Unable to Parse a SPARQL Result Set from the >> provided XML since the Document Element is not a <sparql> element! >> >> Any help would be appreciated; I'm quite new to .NET in general and >> BrightstarDB as well as to SPARQL, so maybe there is a very silly issue >> here but I couldn't find it yet. >> Thanks! >> Iván >> -- Kal Ahmed Director, Networked Planet Limited e: kal...@ne... w: www.networkedplanet.com |
From: Iván P. <iva...@gm...> - 2014-12-01 16:29:02
|
Hi, Thanks to all who are trying to help with this question. In order to try to find out the source of the problem, I replaced the code where the SPARQL query is executed and shown, by something simple in which I attempt to obtain the graphs in the triplestore and show its URIs. //Create a connection to BrightstarDB SPARQL Query endpoint SparqlConnector connect = new SparqlConnector(new Uri("http://<IP>:8090/brightstar/<STORE>")); PersistentTripleStore store = new PersistentTripleStore(connect); IEnumerable<Uri> graphs = connect.ListGraphs(); foreach (Uri uri in graphs) { Console.WriteLine(uri.ToString()); } Now I obtain a new exception in the ListGraphs() line, a RdfStorageException being unhandled: An unhandled exception of type 'VDS.RDF.Storage.RdfStorageException' occurred in dotNetRDF.dll Additional information: An unexpected error occurred while listing Graphs from the Store. See inner exception for further details This makes me think that perhaps the source of the problem is not in the previous query itself, but rather somewhere before, in the connection settings? Maybe the classes SparqlConnector and PersistentTripleStore as a wrapper are not the appropriate choice to connect to a BrightstarDB triplestore? Also, to give you more info, my triplestore has been directly imported into BrightstarDB from a .owl file that was previously generated from an ontology in Protegé. The import process in BrightstarDB was done with a management tool called Polaris; it seemingly was all done correctly, according to the output I received when I imported it. Kind regards, Iván 2014-12-01 12:02 GMT+00:00 Iván Palomares <iva...@gm...>: > Hi, > I'm currently trying to make a first connection from a .NET project with > dotnetRDF, to a BrightstarDB remote endpoint in which I have a triplestore. > The code I use is quite simple: just connect and execute a "trivial" > SPARQL query, as follows (<IP>, <PORT> and <STORE_NAME> stand for the > existing address and triplestore name I want to access): > > namespace ConsoleApplication3 > { > class ReadFromBrightstarDB > { > static void Main(string[] args) > { > //Create a connection to BrightstarDB SPARQL Query endpoint > SparqlConnector connect = new SparqlConnector(new Uri("http://<IP>:<PORT>/brightstar/<STORE_NAME>")); > PersistentTripleStore store = new PersistentTripleStore(connect); > > Object results = store.ExecuteQuery("SELECT * WHERE {?s ?p ?o}"); > if(results is SparqlResultSet) > { > //Print out the results > SparqlResultSet rset = (SparqlResultSet)results; > foreach (SparqlResult result in rset) > { > Console.WriteLine(result.ToString()); > } > } > > } > } > } > When executing, I obtain an exception entitled "RdfParseException was > unhandled", with the following description: > > An unhandled exception of type 'VDS.RDF.Parsing.RdfParseException' > occurred in dotNetRDF.dll > Additional information: Unable to Parse a SPARQL Result Set from the > provided XML since the Document Element is not a <sparql> element! > > Any help would be appreciated; I'm quite new to .NET in general and > BrightstarDB as well as to SPARQL, so maybe there is a very silly issue > here but I couldn't find it yet. > Thanks! > Iván > |
From: Rob V. <rv...@do...> - 2014-12-01 12:32:27
|
Ivan, Please note this is a subscription-based list. I have moderated your email through and explicitly CC'd you on this reply this time, but please subscribe at https://lists.sourceforge.net/lists/listinfo/dotnetrdf-develop if you would like to send further emails. You have not done anything obviously wrong. The error you get implies that the store in question is not responding with a valid content type for the query. You can enable the HTTP debugging feature - https://bitbucket.org/dotnetrdf/dotnetrdf/wiki/HowTo/Debug%20HTTP%20Communication.wiki#!debugging-http-communication - to see the HTTP headers being received (they are printed to the Debug console), which would be helpful to see what content type Brightstar is responding with. Rob From: Iván Palomares <iva...@gm...> Reply-To: dotNetRDF Developer Discussion and Feature Request <dot...@li...> Date: Monday, 1 December 2014 12:02 To: <dot...@li...> Subject: [dotNetRDF-Develop] connecting to a BrighstarDB store from dotnetRDF > Hi, > I'm currently trying to make a first connection from a .NET project with > dotnetRDF, to a BrightstarDB remote endpoint in which I have a triplestore. > The code I use is quite simple: just connect and execute a "trivial" > SPARQL query, as follows (<IP>, <PORT> and <STORE_NAME> stand for the existing > address and triplestore name I want to access): > > namespace ConsoleApplication3 > { > class ReadFromBrightstarDB > { > static void Main(string[] args) > { > //Create a connection to BrightstarDB SPARQL Query endpoint > SparqlConnector connect = new SparqlConnector(new > Uri("http://<IP>:<PORT>/brightstar/<STORE_NAME>")); > PersistentTripleStore store = new PersistentTripleStore(connect); > > Object results = store.ExecuteQuery("SELECT * WHERE {?s ?p ?o}"); > if(results is SparqlResultSet) > { > //Print out the results > SparqlResultSet rset = (SparqlResultSet)results; > foreach (SparqlResult result in rset) > { > Console.WriteLine(result.ToString()); > } > } > > } > } > } > When executing, I obtain an exception entitled "RdfParseException was > unhandled", with the following description: > > An unhandled exception of type 'VDS.RDF.Parsing.RdfParseException' occurred in > dotNetRDF.dll > Additional information: Unable to Parse a SPARQL Result Set from the provided > XML since the Document Element is not a <sparql> element! > > Any help would be appreciated; I'm quite new to .NET in general and > BrightstarDB as well as to SPARQL, so maybe there is a very silly issue here > but I couldn't find it yet. > Thanks! > Iván |
From: Iván P. <iva...@gm...> - 2014-12-01 12:02:59
|
Hi, I'm currently trying to make a first connection from a .NET project with dotnetRDF, to a BrightstarDB remote endpoint in which I have a triplestore. The code I use is quite simple: just connect and execute a "trivial" SPARQL query, as follows (<IP>, <PORT> and <STORE_NAME> stand for the existing address and triplestore name I want to access): namespace ConsoleApplication3 { class ReadFromBrightstarDB { static void Main(string[] args) { //Create a connection to BrightstarDB SPARQL Query endpoint SparqlConnector connect = new SparqlConnector(new Uri("http://<IP>:<PORT>/brightstar/<STORE_NAME>")); PersistentTripleStore store = new PersistentTripleStore(connect); Object results = store.ExecuteQuery("SELECT * WHERE {?s ?p ?o}"); if(results is SparqlResultSet) { //Print out the results SparqlResultSet rset = (SparqlResultSet)results; foreach (SparqlResult result in rset) { Console.WriteLine(result.ToString()); } } } } } When executing, I obtain an exception entitled "RdfParseException was unhandled", with the following description: An unhandled exception of type 'VDS.RDF.Parsing.RdfParseException' occurred in dotNetRDF.dll Additional information: Unable to Parse a SPARQL Result Set from the provided XML since the Document Element is not a <sparql> element! Any help would be appreciated; I'm quite new to .NET in general and BrightstarDB as well as to SPARQL, so maybe there is a very silly issue here but I couldn't find it yet. Thanks! Iván |
From: <dot...@li...> - 2014-11-21 12:10:51
|
Send dotNetRDF-commits mailing list submissions to dot...@li... To subscribe or unsubscribe via the World Wide Web, visit https://lists.sourceforge.net/lists/listinfo/dotnetrdf-commits or, via email, send a message with subject or body 'help' to dot...@li... You can reach the person managing the list at dot...@li... When replying, please edit your Subject line so it is more specific than "Re: Contents of dotNetRDF-commits digest..." Today's Topics: 1. commit/dotnetrdf: 11 new changesets (Bitbucket) 2. commit/dotnetrdf: rvesse: ifdef out tests that are not applicable on PCL (Bitbucket) 3. commit/dotnetrdf: rvesse: Delete junk from portion of file TeamCity is complaining about (Bitbucket) 4. commit/dotnetrdf: 2 new changesets (Bitbucket) 5. commit/dotnetrdf: 2 new changesets (Bitbucket) 6. commit/dotnetrdf: rvesse: Merge from CORE-425 (Bitbucket) 7. commit/dotnetrdf: 3 new changesets (Bitbucket) 8. commit/dotnetrdf: kal_ahmed: Fix for CORE-431 (Bitbucket) 9. commit/dotnetrdf: 4 new changesets (Bitbucket) ---------------------------------------------------------------------- Message: 1 Date: Thu, 06 Nov 2014 15:43:53 -0000 From: Bitbucket <com...@bi...> Subject: [dotNetRDF Commits] commit/dotnetrdf: 11 new changesets To: dot...@li... 
Message-ID: <201...@ap...> Content-Type: text/plain; charset="utf-8" 11 new commits in dotnetrdf: https://bitbucket.org/dotnetrdf/dotnetrdf/commits/e6fadb73547a/ Changeset: e6fadb73547a Branch: CORE-428 User: rvesse Date: 2014-11-06 14:23:00+00:00 Summary: Test case and fix for corner case SPARQL parsing bug (CORE-428) Affected #: 2 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/c9e092095e6a/ Changeset: c9e092095e6a Branch: CORE-428 User: rvesse Date: 2014-11-06 14:32:13+00:00 Summary: Additional test cases for CORE-428 Affected #: 3 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/9ad62101628d/ Changeset: 9ad62101628d User: rvesse Date: 2014-11-06 14:33:45+00:00 Summary: Merge in CORE-428 fix Affected #: 4 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/7fe5f846956c/ Changeset: 7fe5f846956c Branch: CORE-427 User: rvesse Date: 2014-11-06 14:53:08+00:00 Summary: Failing unit tests for CORE-427 Affected #: 3 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/f6b7d992e418/ Changeset: f6b7d992e418 Branch: CORE-427 User: rvesse Date: 2014-11-06 14:54:19+00:00 Summary: Add missing ToString() methods to fix CORE-427 Affected #: 0 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/3b61c11ae0f0/ Changeset: 3b61c11ae0f0 User: rvesse Date: 2014-11-06 14:56:05+00:00 Summary: Merge in CORE-427 fixes Affected #: 3 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/3a30a61be89e/ Changeset: 3a30a61be89e User: rvesse Date: 2014-11-06 14:59:56+00:00 Summary: Upgrade VersionBumper to fix CORE-426 Affected #: 1 file https://bitbucket.org/dotnetrdf/dotnetrdf/commits/7d75084d78cc/ Changeset: 7d75084d78cc Branch: CORE-425 User: rvesse Date: 2014-11-06 15:18:40+00:00 Summary: Bring CORE-425 branch up to date with default Affected #: 40 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/c8f2bacac8a5/ Changeset: c8f2bacac8a5 Branch: CORE-425 User: rvesse Date: 2014-11-06 15:23:45+00:00 Summary: Note CORE-426 build packaging improvements in 
ChangeLog Affected #: 2 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/9666c6da585d/ Changeset: 9666c6da585d Branch: CORE-425 User: rvesse Date: 2014-11-06 15:40:49+00:00 Summary: Upgrade dependencies to latest versions Affected #: 36 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/f95509ce48e0/ Changeset: f95509ce48e0 User: rvesse Date: 2014-11-06 15:42:23+00:00 Summary: Merge dependency upgrades and extra unit tests for CORE-425 Affected #: 39 files Repository URL: https://bitbucket.org/dotnetrdf/dotnetrdf/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. ------------------------------ Message: 2 Date: Thu, 06 Nov 2014 16:03:11 -0000 From: Bitbucket <com...@bi...> Subject: [dotNetRDF Commits] commit/dotnetrdf: rvesse: ifdef out tests that are not applicable on PCL To: dot...@li... Message-ID: <201...@ap...> Content-Type: text/plain; charset="utf-8" 1 new commit in dotnetrdf: https://bitbucket.org/dotnetrdf/dotnetrdf/commits/d0dc613fda24/ Changeset: d0dc613fda24 User: rvesse Date: 2014-11-06 15:58:34+00:00 Summary: ifdef out tests that are not applicable on PCL Affected #: 11 files Repository URL: https://bitbucket.org/dotnetrdf/dotnetrdf/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. ------------------------------ Message: 3 Date: Thu, 06 Nov 2014 16:16:52 -0000 From: Bitbucket <com...@bi...> Subject: [dotNetRDF Commits] commit/dotnetrdf: rvesse: Delete junk from portion of file TeamCity is complaining about To: dot...@li... 
Message-ID: <201...@ap...> Content-Type: text/plain; charset="utf-8" 1 new commit in dotnetrdf: https://bitbucket.org/dotnetrdf/dotnetrdf/commits/9a20bdd11170/ Changeset: 9a20bdd11170 User: rvesse Date: 2014-11-06 16:16:22+00:00 Summary: Delete junk from portion of file TeamCity is complaining about Affected #: 1 file Repository URL: https://bitbucket.org/dotnetrdf/dotnetrdf/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. ------------------------------ Message: 4 Date: Thu, 06 Nov 2014 16:54:19 -0000 From: Bitbucket <com...@bi...> Subject: [dotNetRDF Commits] commit/dotnetrdf: 2 new changesets To: dot...@li... Message-ID: <201...@ap...> Content-Type: text/plain; charset="utf-8" 2 new commits in dotnetrdf: https://bitbucket.org/dotnetrdf/dotnetrdf/commits/b8e07402c340/ Changeset: b8e07402c340 User: rvesse Date: 2014-11-06 16:48:03+00:00 Summary: Revert HtmlAgilityPack upgrade as the 1.4.9 NuGet package is corrupt and will break builds Affected #: 14 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/6c899ecd2099/ Changeset: 6c899ecd2099 User: rvesse Date: 2014-11-06 16:53:51+00:00 Summary: Merge heads Affected #: 12 files Repository URL: https://bitbucket.org/dotnetrdf/dotnetrdf/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. ------------------------------ Message: 5 Date: Thu, 06 Nov 2014 19:24:05 -0000 From: Bitbucket <com...@bi...> Subject: [dotNetRDF Commits] commit/dotnetrdf: 2 new changesets To: dot...@li... 
Message-ID: <201...@ap...> Content-Type: text/plain; charset="utf-8" 2 new commits in dotnetrdf: https://bitbucket.org/dotnetrdf/dotnetrdf/commits/36914de86d0c/ Changeset: 36914de86d0c Branch: CORE-425 User: rvesse Date: 2014-11-06 16:56:47+00:00 Summary: Merge latest state of default to CORE-425 Affected #: 26 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/5ff4a18bea0d/ Changeset: 5ff4a18bea0d Branch: CORE-425 User: rvesse Date: 2014-11-06 19:23:07+00:00 Summary: Upgrade to latest NuGet Affected #: 1 file Repository URL: https://bitbucket.org/dotnetrdf/dotnetrdf/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. ------------------------------ Message: 6 Date: Thu, 06 Nov 2014 19:24:48 -0000 From: Bitbucket <com...@bi...> Subject: [dotNetRDF Commits] commit/dotnetrdf: rvesse: Merge from CORE-425 To: dot...@li... Message-ID: <201...@ap...> Content-Type: text/plain; charset="utf-8" 1 new commit in dotnetrdf: https://bitbucket.org/dotnetrdf/dotnetrdf/commits/b5cac0076ead/ Changeset: b5cac0076ead User: rvesse Date: 2014-11-06 19:24:28+00:00 Summary: Merge from CORE-425 Affected #: 1 file Repository URL: https://bitbucket.org/dotnetrdf/dotnetrdf/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. ------------------------------ Message: 7 Date: Fri, 07 Nov 2014 14:57:05 -0000 From: Bitbucket <com...@bi...> Subject: [dotNetRDF Commits] commit/dotnetrdf: 3 new changesets To: dot...@li... 
Message-ID: <201...@ap...> Content-Type: text/plain; charset="utf-8" 3 new commits in dotnetrdf: https://bitbucket.org/dotnetrdf/dotnetrdf/commits/bb54307f9f78/ Changeset: bb54307f9f78 User: rvesse Date: 2014-11-07 14:34:13+00:00 Summary: Clean up test cases to make those that rely on parsing remote resources from DBPedia be disabled by default since DBPedia access can be sporadic and cause intermittent test failures Affected #: 18 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/79cd40326978/ Changeset: 79cd40326978 User: rvesse Date: 2014-11-07 14:48:51+00:00 Summary: Remove MSBuild-integrated NuGet package restore Affected #: 59 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/c6e97164fc07/ Changeset: c6e97164fc07 User: rvesse Date: 2014-11-07 14:56:18+00:00 Summary: Update NAnt build to support restoring NuGet packages Affected #: 2 files Repository URL: https://bitbucket.org/dotnetrdf/dotnetrdf/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. ------------------------------ Message: 8 Date: Sat, 15 Nov 2014 10:23:36 -0000 From: Bitbucket <com...@bi...> Subject: [dotNetRDF Commits] commit/dotnetrdf: kal_ahmed: Fix for CORE-431 To: dot...@li... Message-ID: <201...@ap...> Content-Type: text/plain; charset="utf-8" 1 new commit in dotnetrdf: https://bitbucket.org/dotnetrdf/dotnetrdf/commits/349abb233c44/ Changeset: 349abb233c44 User: kal_ahmed Date: 2014-11-15 10:22:28+00:00 Summary: Fix for CORE-431 Affected #: 1 file Repository URL: https://bitbucket.org/dotnetrdf/dotnetrdf/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. ------------------------------ Message: 9 Date: Fri, 21 Nov 2014 12:10:38 -0000 From: Bitbucket <com...@bi...> Subject: [dotNetRDF Commits] commit/dotnetrdf: 4 new changesets To: dot...@li... 
Message-ID: <201...@ap...> Content-Type: text/plain; charset="utf-8" 4 new commits in dotnetrdf: https://bitbucket.org/dotnetrdf/dotnetrdf/commits/8cc72f307139/ Changeset: 8cc72f307139 User: rvesse Date: 2014-11-21 11:16:35+00:00 Summary: Note CORE-431 fix in Change Log Affected #: 1 file https://bitbucket.org/dotnetrdf/dotnetrdf/commits/57445f109301/ Changeset: 57445f109301 Branch: CORE-425 User: rvesse Date: 2014-11-21 11:39:16+00:00 Summary: Bring up to date with default Affected #: 80 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/65e4e36167eb/ Changeset: 65e4e36167eb Branch: CORE-423 User: rvesse Date: 2014-11-21 11:42:55+00:00 Summary: Bring branch up to date with default (CORE-423) Affected #: 136 files https://bitbucket.org/dotnetrdf/dotnetrdf/commits/904bc78b900e/ Changeset: 904bc78b900e User: rvesse Date: 2014-11-21 12:09:27+00:00 Summary: Upgrade to HtmlAgilityPack 1.4.9 Affected #: 8 files Repository URL: https://bitbucket.org/dotnetrdf/dotnetrdf/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. ------------------------------ ------------------------------------------------------------------------------ Download BIRT iHub F-Type - The Free Enterprise-Grade BIRT Server from Actuate! Instantly Supercharge Your Business Reports and Dashboards with Interactivity, Sharing, Native Excel Exports, App Integration & more Get technology previously reserved for billion-dollar corporations, FREE http://pubads.g.doubleclick.net/gampad/clk?id=157005751&iu=/4140/ostg.clktrk ------------------------------ _______________________________________________ dotNetRDF-commits mailing list dot...@li... https://lists.sourceforge.net/lists/listinfo/dotnetrdf-commits End of dotNetRDF-commits Digest, Vol 24, Issue 1 ************************************************ |
From: <tr...@do...> - 2014-11-21 11:13:17
|
<p>The following issue has been updated by Rob Vesse:</p> <table border="0"> <tr> <td width="90px" valign="top"><b>Title:</b></td> <td>CORE-416 test query demonstrates an intermittent parallelisation bug</td> </tr> <tr> <td><b>Project:</b></td> <td>Core Library (dotNetRDF.dll)</td> </tr> <tr> <td colspan="2"><b>Changes:</b></td> </tr> <tr> <td colspan="2"> <ul> <li>Milestone changed from "1.0.7" to "1.0.8" </li> </ul> </td> </tr> </table> <p> More information on this issue can be found at <a href="http://www.dotnetrdf.org/tracker/Issues/IssueDetail.aspx?id=425" target="_blank">http://www.dotnetrdf.org/tracker/Issues/IssueDetail.aspx?id=425</a></p> <p style="text-align:center;font-size:8pt;padding:5px;"> If you no longer wish to receive notifications, please visit <a href="http://www.dotnetrdf.org/tracker/Account/UserProfile.aspx" target="_blank">your profile</a> and change your notifications options. </p> |