This list is closed, nobody may subscribe to it.
From: Bryan T. <br...@sy...> - 2013-07-30 10:14:02
We do use the Sesame factory patterns for parsers and writers. A backport of the JSON code might be fastest, as those files are likely to be self-contained. 2.7 introduces some changes in transaction support/scope that we need to look at.

If someone wants to attempt the 2.7 migration, I can point at what needs to be done. The main issues are: any API semantics changes (and 2.7 may have some); capturing the delta in any classes where we maintain a fork (including the test suites); picking up any new test suites for compliance; and capturing any missing functionality as openrdf continues to track the final SPARQL 1.1 recommendation.

Bryan

-------- Original message --------
From: Peter Ansell <ans...@gm...>
Date: 07/29/2013 11:21 PM (GMT-05:00)
To: Jeremy J Carroll <jj...@sy...>
Cc: Bryan Thompson <br...@sy...>, Big...@li...
Subject: Re: [Bigdata-developers] json and ask: trac 704

I am not sure what the exact relationship is either, but I did not think that bigdata supported Sesame 2.7 yet, based on recent release announcements referencing 2.6.10. It may turn out that the conneg failures are due to missing Sesame parser/writer implementations, and the NanoSparqlServer may be dropping back to SPARQL Results XML as a failsafe. If that is the case, you could either backport the modules (not sure how much effort that would take, but it shouldn't be too difficult if necessary), or wait until bigdata has a dependency on Sesame 2.7 to start relying on SPARQL Results JSON for boolean results.

Cheers,
Peter
From: Jeremy J C. <jj...@sy...> - 2013-07-30 03:33:06
The problem exhibits with the NanoSparqlServer. I do not understand the architecture enough to know how much Sesame code that uses.

Jeremy J Carroll
Principal Architect
Syapse, Inc.

On Jul 29, 2013, at 8:06 PM, Peter Ansell <ans...@gm...> wrote:

> Hi Bryan, Jeremy,
>
> Sesame-2.6 did not support SPARQL Results JSON boolean results, if that is related to this issue. That feature was added in Sesame-2.7.0 [1]
>
> Cheers,
>
> Peter
>
> [1] https://openrdf.atlassian.net/browse/SES-535
From: Peter A. <ans...@gm...> - 2013-07-30 03:21:17
I am not sure what the exact relationship is either, but I did not think that bigdata supported Sesame 2.7 yet, based on recent release announcements referencing 2.6.10. It may turn out that the conneg failures are due to missing Sesame parser/writer implementations, and the NanoSparqlServer may be dropping back to SPARQL Results XML as a failsafe. If that is the case, you could either backport the modules (not sure how much effort that would take, but it shouldn't be too difficult if necessary), or wait until bigdata has a dependency on Sesame 2.7 to start relying on SPARQL Results JSON for boolean results.

Cheers,
Peter

On 30 July 2013 13:09, Jeremy J Carroll <jj...@sy...> wrote:

> The problem exhibits with the NanoSparqlServer. I do not understand the architecture enough to know how much Sesame code that uses.
>
> Jeremy J Carroll
> Principal Architect
> Syapse, Inc.
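Peter's fallback theory can be sketched abstractly: if the server's writer registry has no entry for the media type in the Accept header, it serves its default serialization instead. The following is a toy model of that dispatch, not bigdata's or Sesame's actual code; the writer functions and registry dicts are illustrative assumptions.

```python
import json

XML = "application/sparql-results+xml"
JSON = "application/sparql-results+json"

def write_xml(result: bool) -> str:
    # SPARQL Results XML serialization of a boolean (ASK) result.
    return ('<sparql xmlns="http://www.w3.org/2005/sparql-results#">'
            f'<boolean>{str(result).lower()}</boolean></sparql>')

def write_json(result: bool) -> str:
    # SPARQL 1.1 Results JSON serialization of a boolean (ASK) result.
    return json.dumps({"head": {}, "boolean": result})

def negotiate(writers, accept):
    # Serve the requested media type if a writer is registered for it,
    # otherwise fall back to the XML default -- the symptom in trac 704.
    media = accept if accept in writers else XML
    return media, writers[media]

sesame_2_6 = {XML: write_xml}                    # no JSON boolean writer
sesame_2_7 = {XML: write_xml, JSON: write_json}  # SES-535 / SES-1417 add one

print(negotiate(sesame_2_6, JSON)[0])  # falls back to XML
print(negotiate(sesame_2_7, JSON)[0])  # honours the Accept header
```

With the 2.6-style registry the JSON request silently degrades to XML, which matches the behaviour Jeremy reported; registering the 2.7 writer makes the same request succeed.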
From: Peter A. <ans...@gm...> - 2013-07-30 03:09:40
Sorry, the issue I linked to was just for parsing boolean results; there was a separate issue that added the writers, which was also completed for 2.7.0: https://openrdf.atlassian.net/browse/SES-1417

On 30 July 2013 13:06, Peter Ansell <ans...@gm...> wrote:

> Hi Bryan, Jeremy,
>
> Sesame-2.6 did not support SPARQL Results JSON boolean results, if that is related to this issue. That feature was added in Sesame-2.7.0 [1]
>
> [1] https://openrdf.atlassian.net/browse/SES-535
From: Peter A. <ans...@gm...> - 2013-07-30 03:06:15
Hi Bryan, Jeremy,

Sesame-2.6 did not support SPARQL Results JSON boolean results, if that is related to this issue. That feature was added in Sesame-2.7.0 [1]

Cheers,

Peter

[1] https://openrdf.atlassian.net/browse/SES-535

On 30 July 2013 10:16, Bryan Thompson <br...@sy...> wrote:

> Sure. Problem is likely in the conneg code of the webapp package. Post a ticket.
>
> Would be GREAT to get curl examples for the REST API.
>
> Am back in the US and will have more bandwidth soon for things.
>
> B
From: Bryan T. <br...@sy...> - 2013-07-30 00:20:33
FYI: the ESTCARD method of the REST API is more efficient than ASK for a simple triple pattern.

Bryan
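Since the thread asks for REST API examples: ESTCARD is the NanoSparqlServer's fast range count, requested as a bare query parameter with optional s/p/o term bindings. The parameter names below are my reading of the bigdata wiki documentation and should be treated as assumptions to verify there; this helper only constructs the request URL.

```python
from urllib.parse import quote

def estcard_url(endpoint: str, s: str = None, p: str = None, o: str = None) -> str:
    """Build a fast range count (ESTCARD) request URL.

    The ESTCARD flag and the s/p/o binding parameter names are assumptions
    taken from the NanoSparqlServer REST API wiki -- verify before relying
    on them.
    """
    parts = ["ESTCARD"]  # bare flag parameter, no value
    for name, term in (("s", s), ("p", p), ("o", o)):
        if term is not None:
            # Terms are full SPARQL terms, e.g. "<http://...>"; escape them.
            parts.append(f"{name}={quote(term, safe='')}")
    return f"{endpoint}?{'&'.join(parts)}"

print(estcard_url("http://localhost:2333/sparql", s="<http://example.org/a>"))
```

The endpoint and port come from Jeremy's curl example elsewhere in this thread; an unconstrained call (`estcard_url(endpoint)`) would estimate the size of the whole store.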
From: Bryan T. <br...@sy...> - 2013-07-30 00:17:47
Ah, I see the ticket exists.

B
From: Bryan T. <br...@sy...> - 2013-07-30 00:17:01
Sure. Problem is likely in the conneg code of the webapp package. Post a ticket.

Would be GREAT to get curl examples for the REST API.

Am back in the US and will have more bandwidth soon for things.

B
From: Jeremy J C. <jj...@sy...> - 2013-07-30 00:13:49
The JSON response does not seem to work with SPARQL ASK; see

https://sourceforge.net/apps/trac/bigdata/ticket/704

i.e.

curl -H "Accept: application/sparql-results+json" --data-urlencode query='ask {}' http://localhost:2333/sparql

incorrectly returns XML.

I am going to go for a workaround right now, but hope to assign time at the end of the week to work on this issue, which is a problem for me; my uninformed guess is that the fix is trivial.

My understanding of a call I had with Mike is that an approach we could take is:
1) I fix the issue to my satisfaction on a Syapse copy of the bigdata code base
2) I deploy that internally as my hot fix
3) I create a patch, add it to the trac ticket, and reassign the ticket to Mike or Bryan

Jeremy J Carroll
Principal Architect
Syapse, Inc.
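For reference, the SPARQL 1.1 Query Results JSON serialization of an ASK result is a small document with a top-level `boolean` member, e.g. `{"head": {}, "boolean": true}`. A minimal stdlib check that a response body really is the requested JSON form (the sample body is the shape the spec mandates, not captured bigdata output):

```python
import json

def parse_ask_json(body: str) -> bool:
    """Parse a SPARQL 1.1 Results JSON ASK response and return its boolean.

    Raises ValueError if the document is not a boolean (ASK) result --
    e.g. if it is a SELECT result, or if the server actually sent XML
    (json.loads will then fail with its own error).
    """
    doc = json.loads(body)
    if "boolean" not in doc:
        raise ValueError("not a boolean (ASK) result document")
    return bool(doc["boolean"])

# Shape mandated by the SPARQL 1.1 Query Results JSON spec for `ask {}`:
sample = '{"head": {}, "boolean": true}'
print(parse_ask_json(sample))  # -> True
```

Running this against the curl output above would fail immediately at `json.loads`, which is a quick way to confirm the XML fallback described in the ticket.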
From: Mike P. <mi...@sy...> - 2013-07-24 11:53:26
The 1.2 maintenance branch. I am pretty sure there are no commits in there that need to be migrated.

From: Bryan Thompson <br...@sy...>
Date: Wednesday, July 24, 2013 7:50 AM
Subject: Re: [Bigdata-developers] interaction between #699 and #684

> Ok. Is your working version the 1.2.x maintenance branch or the READ_CACHE branch (HA)?
>
> B
From: Bryan T. <br...@sy...> - 2013-07-24 11:51:04
Ok. Is your working version the 1.2.x maintenance branch or the READ_CACHE branch (HA)?

B

From: Mike Personick <mi...@sy...>
Date: Wednesday, July 24, 2013 1:47 PM
Subject: Re: [Bigdata-developers] interaction between #699 and #684

> I have the changes applied to my local version but not committed, as I was working on some other property path issues last week. Not ready to commit right now.
From: Mike P. <mi...@sy...> - 2013-07-24 11:48:56
I have the changes applied to my local version but not committed, as I was working on some other property path issues last week. Not ready to commit right now.

From: Bryan Thompson <br...@sy...>
Date: Wednesday, July 24, 2013 2:56 AM
Subject: Re: [Bigdata-developers] interaction between #699 and #684

> Morning all. What is the status on this patch and the related issues around UNION optimizations as they pertain to property paths? Also, are the changes applied in the 1.2.x maintenance branch, and do they need to be migrated to the READ_CACHE branch?
>
> Bryan
From: Bryan T. <br...@sy...> - 2013-07-24 07:18:00
Morning all. What is the status on this patch and the related issues around UNION optimizations as they pertain to property paths? Also, are the changes applied in the 1.2.x maintenance branch, and do they need to be migrated to the READ_CACHE branch?

Bryan
From: Bryan T. <br...@sy...> - 2013-07-24 06:52:17
Did you have curl commands that we could use with these examples? I would like to integrate both the curl and the HTTP request examples into the REST API documentation on the wiki.

Thanks,
Bryan

From: Eugen F <feu...@ya...>
Date: Thursday, June 13, 2013 10:19 AM
To: big...@li...
Subject: [Bigdata-developers] Http request sample
From: Rose B. <ros...@gm...> - 2013-07-21 06:28:02
I wanted to benchmark bigdata. In bigdata's getting started guide (http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted) it says to learn Sesame first. I learnt Sesame using the HTTP API (employing Tomcat), but I am unable to understand how to load RDF quads/provenance/triples into bigdata using the Sesame HTTP API. I am new to Sesame and bigdata, so can someone please help me with this?
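For reference, the Sesame HTTP protocol loads data by POSTing serialized RDF to a repository's /statements resource, with the Content-Type header naming the RDF format. A sketch of constructing such a request with Python's standard library; the host, port, and repository id are placeholders, and this illustrates the protocol shape rather than a tested call against bigdata:

```python
from urllib.request import Request

def build_load_request(base: str, repo: str, turtle: str) -> Request:
    """Build a POST of Turtle data to a Sesame repository's /statements resource.

    `base` is something like "http://localhost:8080/openrdf-sesame" and
    `repo` is the repository id -- both are placeholder assumptions here.
    Sending the request would be urllib.request.urlopen(req).
    """
    url = f"{base}/repositories/{repo}/statements"
    return Request(
        url,
        data=turtle.encode("utf-8"),
        headers={"Content-Type": "text/turtle;charset=utf-8"},
        method="POST",
    )

req = build_load_request(
    "http://localhost:8080/openrdf-sesame", "mydb",
    "<http://example.org/s> <http://example.org/p> <http://example.org/o> .",
)
print(req.full_url, req.get_method())
```

Quads would use an appropriate quad serialization (e.g. TriG) in the Content-Type, or the `context` query parameter to target a named graph; see the Sesame HTTP protocol documentation for the exact parameter names.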
From: Jeremy J C. <jj...@sy...> - 2013-07-15 17:27:07
The patch is now attached to the trac item at

https://sourceforge.net/apps/trac/bigdata/attachment/ticket/684/trac684B.patch

$ patch -p1 -i /tmp/trac684B.patch

Jeremy J Carroll
Principal Architect
Syapse, Inc.
From: Jeremy J C. <jj...@sy...> - 2013-07-15 17:20:22
|
This is now done. Sorry it took me a while to get my infrastructure in place. The tests are in bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/optimizers/TestASTStaticJoinOptimizer.java and are named xtest_union_trac684_A test_union_trac684_B xtest_union_trac684_C The first and last currently fail, and I renamed them as not to suggest that things had got worse. To run the tests you will need to rename them back again (I am used to just adding an @Ignore annotation … for this usage - what is the preference in this project?) I am now working on producing a new patch file of my solution to these failing tests. I am sorry we had miscommunicated, since I was waiting for feedback from you on a previous patch …. :( Jeremy J Carroll Principal Architect Syapse, Inc. On Jul 12, 2013, at 7:46 AM, Mike Personick <mi...@sy...> wrote: > Can you check in a test case for > it so that I can run it? |
|
From: Mike P. <mi...@sy...> - 2013-07-12 14:47:27
|
Jeremy, They are related. The static optimizer needs to get the ordering for property paths right, right now it does not consider them. Do you mind if I take over ticket 684? Can you check in a test case for it so that I can run it? Mike On 7/11/13 2:31 PM, "Jeremy J Carroll" <jj...@sy...> wrote: > > >Hi Mike > >Bryan pointed out that you are working on >http://sourceforge.net/apps/trac/bigdata/ticket/699 > >and he wondered about the interaction with > >http://sourceforge.net/apps/trac/bigdata/ticket/684 > >since both are dealing with optimizing queries involving property paths. > >My judgment is that the interaction is likely to be little - since the >684 code deals with unions, whereas the 699 fix is to do with arbitrary >length paths > >They may interact in cases like > > >[A] > ?sub ( rdfs:subPropertyOf | my:subPropertyOf ) * ?super > >=== > >On the topic of 684, Bryan suggested that I should: >- omit the LUBM tests since they do not involve UNION >- execute the Berlin tests > >If these tests are satisfactory then to merge in with the RELEASE_1_2_0 >branch > >Maybe one of us should write a test case for [A] as well. > >I have not yet understood how/whether the code does dynamic as well as >static optimization. >On #699 Bryan seems to be suggesting detecting the fully bound case of >ALP and treating it differently from other ALPs; I guess that can be done >before query execution as part of planning in most cases. > >=== > >My timeline is that I am not working on #684 today, but will find some >time tomorrow to understand how to set up Berlin Benchmark > > > >Jeremy J Carroll >Principal Architect >Syapse, Inc. > > > > >-------------------------------------------------------------------------- >---- >See everything from the browser to the database with AppDynamics >Get end-to-end visibility with application monitoring from AppDynamics >Isolate bottlenecks and diagnose root cause in seconds. >Start your free trial of AppDynamics Pro today! 
>http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktr >k >_______________________________________________ >Bigdata-developers mailing list >Big...@li... >https://lists.sourceforge.net/lists/listinfo/bigdata-developers |
|
From: Jeremy J C. <jj...@sy...> - 2013-07-11 21:01:11
|
Hi Mike Bryan pointed out that you are working on http://sourceforge.net/apps/trac/bigdata/ticket/699 and he wondered about the interaction with http://sourceforge.net/apps/trac/bigdata/ticket/684 since both are dealing with optimizing queries involving property paths. My judgment is that the interaction is likely to be little - since the 684 code deals with unions, whereas the 699 fix is to do with arbitrary length paths They may interact in cases like [A] ?sub ( rdfs:subPropertyOf | my:subPropertyOf ) * ?super === On the topic of 684, Bryan suggested that I should: - omit the LUBM tests since they do not involve UNION - execute the Berlin tests If these tests are satisfactory then to merge in with the RELEASE_1_2_0 branch Maybe one of us should write a test case for [A] as well. I have not yet understood how/whether the code does dynamic as well as static optimization. On #699 Bryan seems to be suggesting detecting the fully bound case of ALP and treating it differently from other ALPs; I guess that can be done before query execution as part of planning in most cases. === My timeline is that I am not working on #684 today, but will find some time tomorrow to understand how to set up Berlin Benchmark Jeremy J Carroll Principal Architect Syapse, Inc. |
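As a concrete way to exercise the union property path in [A] end to end, the query can be submitted to a NanoSparqlServer instance over HTTP. A sketch, assuming a server at http://localhost:8080/bigdata/sparql and a made-up my: namespace (both are placeholders, not values from this thread):

```shell
# Placeholder endpoint and namespace; adjust to your deployment.
ENDPOINT="http://localhost:8080/bigdata/sparql"

# -G sends the request as a GET; --data-urlencode URL-encodes the
# SPARQL query into the query string.
curl -G "$ENDPOINT" \
     --data-urlencode 'query=
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX my:   <http://example.org/ns#>
SELECT ?sub ?super
WHERE { ?sub ( rdfs:subPropertyOf | my:subPropertyOf )* ?super }'
```

A query of this shape combines the UNION-style alternation from #684 with the arbitrary-length path handling from #699, which is the interaction case discussed above.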
|
From: Bryan T. <br...@sy...> - 2013-07-09 01:33:07
|
Looks good to me. Bryan On 7/8/13 8:41 PM, "Jeremy J Carroll" <jj...@sy...> wrote: > >I found some aspects of the REST API somewhat confusing, and felt that >having got to grips with one part, I should document my learning: > >https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=NanoSparqlS >erver#URI_Valued_Parameters > >Please feel free to reject and delete if incorrect or not appropriate. > >Jeremy J Carroll >Principal Architect >Syapse, Inc. > > > > >-------------------------------------------------------------------------- >---- >See everything from the browser to the database with AppDynamics >Get end-to-end visibility with application monitoring from AppDynamics >Isolate bottlenecks and diagnose root cause in seconds. >Start your free trial of AppDynamics Pro today! >http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktr >k >_______________________________________________ >Bigdata-developers mailing list >Big...@li... >https://lists.sourceforge.net/lists/listinfo/bigdata-developers |
|
From: Jeremy J C. <jj...@sy...> - 2013-07-09 00:41:54
|
I found some aspects of the REST API somewhat confusing, and felt that having got to grips with one part, I should document my learning: https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=NanoSparqlServer#URI_Valued_Parameters Please feel free to reject and delete if incorrect or not appropriate. Jeremy J Carroll Principal Architect Syapse, Inc. |
|
From: Eugen F <feu...@ya...> - 2013-06-13 08:19:46
|
UPDATE (DELETE statements selected by a QUERY plus INSERT statements from Request Body using PUT)
CONSTRUCT {<http://example.org/book1> ?predicate ?object .}
WHERE {<http://example.org/book1> ?predicate ?object}
PUT http://127.0.0.1.:8081/sparql/?query=CONSTRUCT+%7b++%0d%0a++++++++++++++++++++++++++++++++++++%3chttp%3a%2f%2fexample.org%2fbook1%3e+%3fpredicate+%3fobject+.++%0d%0a++++++++++++++++++++++++++++++++%7d++%0d%0a++++++++++++++++++++++++++++++++WHERE+%7b%3chttp%3a%2f%2fexample.org%2fbook1%3e+%3fpredicate+%3fobject%7d&context-uri=http%3a%2f%2fexample.com%2f HTTP/1.1
Accept: */*
Content-Type: text/plain
Host: 127.0.0.1.:8081
Content-Length: 215
Expect: 100-continue
<http://example.org/book1> <http://example.org/title> "A new book" .
<http://example.org/book1> <http://example.org/title> "A new book1" .
<http://example.org/book1> <http://example.org/title> "A new book2" .
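The PUT request above can also be issued with curl, which may be easier to paste into the wiki documentation. This is a sketch: it reuses the URL-encoded query and context-uri parameters from the raw request verbatim and carries the three replacement triples in the body, assuming a server listening at 127.0.0.1:8081:

```shell
# curl equivalent of the PUT request above: the query and context-uri
# parameters are the same URL-encoded values, and the request body
# carries the replacement triples (N-Triples, Content-Type text/plain).
curl -X PUT \
     -H 'Content-Type: text/plain' \
     --data-binary '<http://example.org/book1> <http://example.org/title> "A new book" .
<http://example.org/book1> <http://example.org/title> "A new book1" .
<http://example.org/book1> <http://example.org/title> "A new book2" .' \
     'http://127.0.0.1:8081/sparql/?query=CONSTRUCT+%7b++%0d%0a++++++++++++++++++++++++++++++++++++%3chttp%3a%2f%2fexample.org%2fbook1%3e+%3fpredicate+%3fobject+.++%0d%0a++++++++++++++++++++++++++++++++%7d++%0d%0a++++++++++++++++++++++++++++++++WHERE+%7b%3chttp%3a%2f%2fexample.org%2fbook1%3e+%3fpredicate+%3fobject%7d&context-uri=http%3a%2f%2fexample.com%2f'
```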
-------------------------------------------------------------------------------------------------------------------------------
UPDATE (POST with Multi-Part Request Body)
POST http://127.0.0.1.:8081/sparql/?updatePost&context-uri=http%3a%2f%2fexample.com%2f HTTP/1.1
Accept: */*
Content-Type: multipart/form-data; boundary=a7d93dd3-39bc-48a2-91fc-e41f0fcdc642
Host: 127.0.0.1.:8081
Content-Length: 697
Expect: 100-continue
--a7d93dd3-39bc-48a2-91fc-e41f0fcdc642
Content-Disposition:form-data; name="remove"
Content-Type: text/plain
<http://example.org/book1> <http://example.org/title> "A new book" .
<http://example.org/book1> <http://example.org/title> "A new book1" .
<http://example.org/book1> <http://example.org/title> "A new book2" .
--a7d93dd3-39bc-48a2-91fc-e41f0fcdc642
Content-Disposition:form-data; name="add"
Content-Type: text/plain
<http://example.org/book1> <http://example.org/title> "A new book" .
<http://example.org/book1> <http://example.org/title> "A new book1" .
<http://example.org/book1> <http://example.org/title> "A new book2" .
--a7d93dd3-39bc-48a2-91fc-e41f0fcdc642-- |
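The multipart request can likewise be reproduced with curl, which generates its own boundary string. A sketch, assuming the statements to remove and to add live in local files remove.nt and add.nt (hypothetical file names):

```shell
# curl equivalent of the multipart UPDATE above. The form syntax
# 'name=<file' sends the file's contents as the value of that field,
# matching the "remove" and "add" parts in the captured request.
curl -X POST \
     -F 'remove=<remove.nt;type=text/plain' \
     -F 'add=<add.nt;type=text/plain' \
     'http://127.0.0.1:8081/sparql/?updatePost&context-uri=http%3a%2f%2fexample.com%2f'
```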
|
From: Bryan T. <br...@sy...> - 2013-05-31 18:35:52
|
This is a minor release of bigdata(R). Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation. Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput. See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7]. Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script. You can download the WAR from: http://sourceforge.net/projects/bigdata/ You can checkout this release from: https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_2_3 New features: - SPARQL 1.1 Update Extensions (SPARQL UPDATE for named solution sets). See https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=SPARQL_Update for more information. 
- SPARQL 1.1 Property Paths. - Remote Java client for Multi-Tenancy extensions NanoSparqlServer - Sesame 2.6.10 dependency - Plus numerous other bug fixes and performance enhancements. Feature summary: - Single machine data storage to ~50B triples/quads (RWStore); - Clustered data storage is essentially unlimited; - Simple embedded and/or webapp deployment (NanoSparqlServer); - Triples, quads, or triples with provenance (SIDs); - Fast RDFS+ inference and truth maintenance; - Fast 100% native SPARQL 1.1 evaluation; - Integrated "analytic" query package; - 100% Java memory manager leverages the JVM native heap (no GC); Road map [3]: - High availability for the journal and the cluster. - Runtime Query Optimizer for Analytic Query mode; and - Simplified deployment, configuration, and administration for clusters. Change log: Note: Versions with (*) MAY require data migration. For details, see [9]. 1.2.3: - http://sourceforge.net/apps/trac/bigdata/ticket/168 (Maven Build) - http://sourceforge.net/apps/trac/bigdata/ticket/196 (Journal leaks memory). - http://sourceforge.net/apps/trac/bigdata/ticket/235 (Occasional deadlock in CI runs in com.bigdata.io.writecache.TestAll) - http://sourceforge.net/apps/trac/bigdata/ticket/312 (CI (mock) quorums deadlock) - http://sourceforge.net/apps/trac/bigdata/ticket/405 (Optimize hash join for subgroups with no incoming bound vars.) - http://sourceforge.net/apps/trac/bigdata/ticket/412 (StaticAnalysis#getDefinitelyBound() ignores exogenous variables.) 
- http://sourceforge.net/apps/trac/bigdata/ticket/485 (RDFS Plus Profile) - http://sourceforge.net/apps/trac/bigdata/ticket/495 (SPARQL 1.1 Property Paths) - http://sourceforge.net/apps/trac/bigdata/ticket/519 (Negative parser tests) - http://sourceforge.net/apps/trac/bigdata/ticket/531 (SPARQL UPDATE for SOLUTION SETS) - http://sourceforge.net/apps/trac/bigdata/ticket/535 (Optimize JOIN VARS for Sub-Selects) - http://sourceforge.net/apps/trac/bigdata/ticket/555 (Support PSOutputStream/InputStream at IRawStore) - http://sourceforge.net/apps/trac/bigdata/ticket/559 (Use RDFFormat.NQUADS as the format identifier for the NQuads parser) - http://sourceforge.net/apps/trac/bigdata/ticket/570 (MemoryManager Journal does not implement all methods). - http://sourceforge.net/apps/trac/bigdata/ticket/575 (NSS Admin API) - http://sourceforge.net/apps/trac/bigdata/ticket/577 (DESCRIBE with OFFSET/LIMIT needs to use sub-select) - http://sourceforge.net/apps/trac/bigdata/ticket/578 (Concise Bounded Description (CBD)) - http://sourceforge.net/apps/trac/bigdata/ticket/579 (CONSTRUCT should use distinct SPO filter) - http://sourceforge.net/apps/trac/bigdata/ticket/583 (VoID in ServiceDescription) - http://sourceforge.net/apps/trac/bigdata/ticket/586 (RWStore immedateFree() not removing Checkpoint addresses from the historical index cache.) 
- http://sourceforge.net/apps/trac/bigdata/ticket/590 (nxparser fails with uppercase language tag) - http://sourceforge.net/apps/trac/bigdata/ticket/592 (Optimize RWStore allocator sizes) - http://sourceforge.net/apps/trac/bigdata/ticket/593 (Ugrade to Sesame 2.6.10) - http://sourceforge.net/apps/trac/bigdata/ticket/594 (WAR was deployed using TRIPLES rather than QUADS by default) - http://sourceforge.net/apps/trac/bigdata/ticket/596 (Change web.xml parameter names to be consistent with Jini/River) - http://sourceforge.net/apps/trac/bigdata/ticket/597 (SPARQL UPDATE LISTENER) - http://sourceforge.net/apps/trac/bigdata/ticket/598 (B+Tree branching factor and HTree addressBits are confused in their NodeSerializer implementations) - http://sourceforge.net/apps/trac/bigdata/ticket/599 (BlobIV for blank node : NotMaterializedException) - http://sourceforge.net/apps/trac/bigdata/ticket/600 (BlobIV collision counter hits false limit.) - http://sourceforge.net/apps/trac/bigdata/ticket/601 (Log uncaught exceptions) - http://sourceforge.net/apps/trac/bigdata/ticket/602 (RWStore does not discard logged deletes on reset()) - http://sourceforge.net/apps/trac/bigdata/ticket/607 (History service / index) - http://sourceforge.net/apps/trac/bigdata/ticket/608 (LOG BlockingBuffer not progressing at INFO or lower level) - http://sourceforge.net/apps/trac/bigdata/ticket/609 (bigdata-ganglia is required dependency for Journal) - http://sourceforge.net/apps/trac/bigdata/ticket/611 (The code that processes SPARQL Update has a typo) - http://sourceforge.net/apps/trac/bigdata/ticket/612 (Bigdata scale-up depends on zookeper) - http://sourceforge.net/apps/trac/bigdata/ticket/613 (SPARQL UPDATE response inlines large DELETE or INSERT triple graphs) - http://sourceforge.net/apps/trac/bigdata/ticket/614 (static join optimizer does not get ordering right when multiple tails share vars with ancestry) - http://sourceforge.net/apps/trac/bigdata/ticket/615 (AST2BOpUtility wraps UNION with an 
unnecessary hash join) - http://sourceforge.net/apps/trac/bigdata/ticket/616 (Row store read/update not isolated on Journal) - http://sourceforge.net/apps/trac/bigdata/ticket/617 (Concurrent KB create fails with "No axioms defined?") - http://sourceforge.net/apps/trac/bigdata/ticket/618 (DirectBufferPool.poolCapacity maximum of 2GB) - http://sourceforge.net/apps/trac/bigdata/ticket/619 (RemoteRepository class should use application/x-www-form-urlencoded for large POST requests) - http://sourceforge.net/apps/trac/bigdata/ticket/620 (UpdateServlet fails to parse MIMEType when doing conneg.) - http://sourceforge.net/apps/trac/bigdata/ticket/626 (Expose performance counters for read-only indices) - http://sourceforge.net/apps/trac/bigdata/ticket/627 (Environment variable override for NSS properties file) - http://sourceforge.net/apps/trac/bigdata/ticket/628 (Create a bigdata-client jar for the NSS REST API) - http://sourceforge.net/apps/trac/bigdata/ticket/631 (ClassCastException in SIDs mode query) - http://sourceforge.net/apps/trac/bigdata/ticket/632 (NotMaterializedException when a SERVICE call needs variables that are provided as query input bindings) - http://sourceforge.net/apps/trac/bigdata/ticket/633 (ClassCastException when binding non-uri values to a variable that occurs in predicate position) - http://sourceforge.net/apps/trac/bigdata/ticket/638 (Change DEFAULT_MIN_RELEASE_AGE to 1ms) - http://sourceforge.net/apps/trac/bigdata/ticket/640 (Conditionally rollback() BigdataSailConnection if dirty) - http://sourceforge.net/apps/trac/bigdata/ticket/642 (Property paths do not work inside of exists/not exists filters) - http://sourceforge.net/apps/trac/bigdata/ticket/643 (Add web.xml parameters to lock down public NSS end points) - http://sourceforge.net/apps/trac/bigdata/ticket/644 (Bigdata2Sesame2BindingSetIterator can fail to notice asynchronous close()) - http://sourceforge.net/apps/trac/bigdata/ticket/650 (Can not POST RDF to a graph using REST API) - 
http://sourceforge.net/apps/trac/bigdata/ticket/654 (Rare AssertionError in WriteCache.clearAddrMap()) - http://sourceforge.net/apps/trac/bigdata/ticket/655 (SPARQL REGEX operator does not perform case-folding correctly for Unicode data) - http://sourceforge.net/apps/trac/bigdata/ticket/656 (InFactory bug when IN args consist of a single literal) - http://sourceforge.net/apps/trac/bigdata/ticket/647 (SIDs mode creates unnecessary hash join for GRAPH group patterns) - http://sourceforge.net/apps/trac/bigdata/ticket/667 (Provide NanoSparqlServer initialization hook) - http://sourceforge.net/apps/trac/bigdata/ticket/669 (Doubly nested subqueries yield no results with LIMIT) - http://sourceforge.net/apps/trac/bigdata/ticket/675 (Flush indices in parallel during checkpoint to reduce IO latency) - http://sourceforge.net/apps/trac/bigdata/ticket/682 (AtomicRowFilter UnsupportedOperationException) 1.2.2: - http://sourceforge.net/apps/trac/bigdata/ticket/586 (RWStore immedateFree() not removing Checkpoint addresses from the historical index cache.) 
- http://sourceforge.net/apps/trac/bigdata/ticket/602 (RWStore does not discard logged deletes on reset()) - http://sourceforge.net/apps/trac/bigdata/ticket/603 (Prepare critical maintenance release as branch of 1.2.1) 1.2.1: - http://sourceforge.net/apps/trac/bigdata/ticket/533 (Review materialization for inline IVs) - http://sourceforge.net/apps/trac/bigdata/ticket/539 (NotMaterializedException with REGEX and Vocab) - http://sourceforge.net/apps/trac/bigdata/ticket/540 (SPARQL UPDATE using NSS via index.html) - http://sourceforge.net/apps/trac/bigdata/ticket/541 (MemoryManaged backed Journal mode) - http://sourceforge.net/apps/trac/bigdata/ticket/546 (Index cache for Journal) - http://sourceforge.net/apps/trac/bigdata/ticket/549 (BTree can not be cast to Name2Addr (MemStore recycler)) - http://sourceforge.net/apps/trac/bigdata/ticket/550 (NPE in Leaf.getKey() : root cause was user error) - http://sourceforge.net/apps/trac/bigdata/ticket/558 (SPARQL INSERT not working in same request after INSERT DATA) - http://sourceforge.net/apps/trac/bigdata/ticket/562 (Sub-select in INSERT cause NPE in UpdateExprBuilder) - http://sourceforge.net/apps/trac/bigdata/ticket/563 (DISTINCT ORDER BY) - http://sourceforge.net/apps/trac/bigdata/ticket/567 (Failure to set cached value on IV results in incorrect behavior for complex UPDATE operation) - http://sourceforge.net/apps/trac/bigdata/ticket/568 (DELETE WHERE fails with Java AssertionError) - http://sourceforge.net/apps/trac/bigdata/ticket/569 (LOAD-CREATE-LOAD using virgin journal fails with "Graph exists" exception) - http://sourceforge.net/apps/trac/bigdata/ticket/571 (DELETE/INSERT WHERE handling of blank nodes) - http://sourceforge.net/apps/trac/bigdata/ticket/573 (NullPointerException when attempting to INSERT DATA containing a blank node) 1.2.0: (*) - http://sourceforge.net/apps/trac/bigdata/ticket/92 (Monitoring webapp) - http://sourceforge.net/apps/trac/bigdata/ticket/267 (Support evaluation of 3rd party operators) - 
http://sourceforge.net/apps/trac/bigdata/ticket/337 (Compact and efficient movement of binding sets between nodes.) - http://sourceforge.net/apps/trac/bigdata/ticket/433 (Cluster leaks threads under read-only index operations: DGC thread leak) - http://sourceforge.net/apps/trac/bigdata/ticket/437 (Thread-local cache combined with unbounded thread pools causes effective memory leak: termCache memory leak & thread-local buffers) - http://sourceforge.net/apps/trac/bigdata/ticket/438 (KeyBeforePartitionException on cluster) - http://sourceforge.net/apps/trac/bigdata/ticket/439 (Class loader problem) - http://sourceforge.net/apps/trac/bigdata/ticket/441 (Ganglia integration) - http://sourceforge.net/apps/trac/bigdata/ticket/443 (Logger for RWStore transaction service and recycler) - http://sourceforge.net/apps/trac/bigdata/ticket/444 (SPARQL query can fail to notice when IRunningQuery.isDone() on cluster) - http://sourceforge.net/apps/trac/bigdata/ticket/445 (RWStore does not track tx release correctly) - http://sourceforge.net/apps/trac/bigdata/ticket/446 (HTTP Repostory broken with bigdata 1.1.0) - http://sourceforge.net/apps/trac/bigdata/ticket/448 (SPARQL 1.1 UPDATE) - http://sourceforge.net/apps/trac/bigdata/ticket/449 (SPARQL 1.1 Federation extension) - http://sourceforge.net/apps/trac/bigdata/ticket/451 (Serialization error in SIDs mode on cluster) - http://sourceforge.net/apps/trac/bigdata/ticket/454 (Global Row Store Read on Cluster uses Tx) - http://sourceforge.net/apps/trac/bigdata/ticket/456 (IExtension implementations do point lookups on lexicon) - http://sourceforge.net/apps/trac/bigdata/ticket/457 ("No such index" on cluster under concurrent query workload) - http://sourceforge.net/apps/trac/bigdata/ticket/458 (Java level deadlock in DS) - http://sourceforge.net/apps/trac/bigdata/ticket/460 (Uncaught interrupt resolving RDF terms) - http://sourceforge.net/apps/trac/bigdata/ticket/461 (KeyAfterPartitionException / KeyBeforePartitionException on cluster) - 
http://sourceforge.net/apps/trac/bigdata/ticket/463 (NoSuchVocabularyItem with LUBMVocabulary for DerivedNumericsExtension) - http://sourceforge.net/apps/trac/bigdata/ticket/464 (Query statistics do not update correctly on cluster) - http://sourceforge.net/apps/trac/bigdata/ticket/465 (Too many GRS reads on cluster) - http://sourceforge.net/apps/trac/bigdata/ticket/469 (Sail does not flush assertion buffers before query) - http://sourceforge.net/apps/trac/bigdata/ticket/472 (acceptTaskService pool size on cluster) - http://sourceforge.net/apps/trac/bigdata/ticket/475 (Optimize serialization for query messages on cluster) - http://sourceforge.net/apps/trac/bigdata/ticket/476 (Test suite for writeCheckpoint() and recycling for BTree/HTree) - http://sourceforge.net/apps/trac/bigdata/ticket/478 (Cluster does not map input solution(s) across shards) - http://sourceforge.net/apps/trac/bigdata/ticket/480 (Error releasing deferred frees using 1.0.6 against a 1.0.4 journal) - http://sourceforge.net/apps/trac/bigdata/ticket/481 (PhysicalAddressResolutionException against 1.0.6) - http://sourceforge.net/apps/trac/bigdata/ticket/482 (RWStore reset() should be thread-safe for concurrent readers) - http://sourceforge.net/apps/trac/bigdata/ticket/484 (Java API for NanoSparqlServer REST API) - http://sourceforge.net/apps/trac/bigdata/ticket/491 (AbstractTripleStore.destroy() does not clear the locator cache) - http://sourceforge.net/apps/trac/bigdata/ticket/492 (Empty chunk in ThickChunkMessage (cluster)) - http://sourceforge.net/apps/trac/bigdata/ticket/493 (Virtual Graphs) - http://sourceforge.net/apps/trac/bigdata/ticket/496 (Sesame 2.6.3) - http://sourceforge.net/apps/trac/bigdata/ticket/497 (Implement STRBEFORE, STRAFTER, and REPLACE) - http://sourceforge.net/apps/trac/bigdata/ticket/498 (Bring bigdata RDF/XML parser up to openrdf 2.6.3.) 
- http://sourceforge.net/apps/trac/bigdata/ticket/500 (SPARQL 1.1 Service Description) - http://www.openrdf.org/issues/browse/SES-884 (Aggregation with an solution set as input should produce an empty solution as output) - http://www.openrdf.org/issues/browse/SES-862 (Incorrect error handling for SPARQL aggregation; fix in 2.6.1) - http://www.openrdf.org/issues/browse/SES-873 (Order the same Blank Nodes together in ORDER BY) - http://sourceforge.net/apps/trac/bigdata/ticket/501 (SPARQL 1.1 BINDINGS are ignored) - http://sourceforge.net/apps/trac/bigdata/ticket/503 (Bigdata2Sesame2BindingSetIterator throws QueryEvaluationException were it should throw NoSuchElementException) - http://sourceforge.net/apps/trac/bigdata/ticket/504 (UNION with Empty Group Pattern) - http://sourceforge.net/apps/trac/bigdata/ticket/505 (Exception when using SPARQL sort & statement identifiers) - http://sourceforge.net/apps/trac/bigdata/ticket/506 (Load, closure and query performance in 1.1.x versus 1.0.x) - http://sourceforge.net/apps/trac/bigdata/ticket/508 (LIMIT causes hash join utility to log errors) - http://sourceforge.net/apps/trac/bigdata/ticket/513 (Expose the LexiconConfiguration to Function BOPs) - http://sourceforge.net/apps/trac/bigdata/ticket/515 (Query with two "FILTER NOT EXISTS" expressions returns no results) - http://sourceforge.net/apps/trac/bigdata/ticket/516 (REGEXBOp should cache the Pattern when it is a constant) - http://sourceforge.net/apps/trac/bigdata/ticket/517 (Java 7 Compiler Compatibility) - http://sourceforge.net/apps/trac/bigdata/ticket/518 (Review function bop subclass hierarchy, optimize datatype bop, etc.) 
- http://sourceforge.net/apps/trac/bigdata/ticket/520 (CONSTRUCT WHERE shortcut) - http://sourceforge.net/apps/trac/bigdata/ticket/521 (Incremental materialization of Tuple and Graph query results) - http://sourceforge.net/apps/trac/bigdata/ticket/525 (Modify the IChangeLog interface to support multiple agents) - http://sourceforge.net/apps/trac/bigdata/ticket/527 (Expose timestamp of LexiconRelation to function bops) - http://sourceforge.net/apps/trac/bigdata/ticket/532 (ClassCastException during hash join (can not be cast to TermId)) - http://sourceforge.net/apps/trac/bigdata/ticket/533 (Review materialization for inline IVs) - http://sourceforge.net/apps/trac/bigdata/ticket/534 (BSBM BI Q5 error using MERGE JOIN) 1.1.0 (*) - http://sourceforge.net/apps/trac/bigdata/ticket/23 (Lexicon joins) - http://sourceforge.net/apps/trac/bigdata/ticket/109 (Store large literals as "blobs") - http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) - http://sourceforge.net/apps/trac/bigdata/ticket/203 (Implement an persistence capable hash table to support analytic query) - http://sourceforge.net/apps/trac/bigdata/ticket/209 (AccessPath should visit binding sets rather than elements for high level query.) - http://sourceforge.net/apps/trac/bigdata/ticket/227 (SliceOp appears to be necessary when operator plan should suffice without) - http://sourceforge.net/apps/trac/bigdata/ticket/232 (Bottom-up evaluation semantics). - http://sourceforge.net/apps/trac/bigdata/ticket/246 (Derived xsd numeric data types must be inlined as extension types.) - http://sourceforge.net/apps/trac/bigdata/ticket/254 (Revisit pruning of intermediate variable bindings during query execution) - http://sourceforge.net/apps/trac/bigdata/ticket/261 (Lift conditions out of subqueries.) 
- http://sourceforge.net/apps/trac/bigdata/ticket/300 (Native ORDER BY) - http://sourceforge.net/apps/trac/bigdata/ticket/324 (Inline predeclared URIs and namespaces in 2-3 bytes) - http://sourceforge.net/apps/trac/bigdata/ticket/330 (NanoSparqlServer does not locate "html" resources when run from jar) - http://sourceforge.net/apps/trac/bigdata/ticket/334 (Support inlining of unicode data in the statement indices.) - http://sourceforge.net/apps/trac/bigdata/ticket/364 (Scalable default graph evaluation) - http://sourceforge.net/apps/trac/bigdata/ticket/368 (Prune variable bindings during query evaluation) - http://sourceforge.net/apps/trac/bigdata/ticket/370 (Direct translation of openrdf AST to bigdata AST) - http://sourceforge.net/apps/trac/bigdata/ticket/373 (Fix StrBOp and other IValueExpressions) - http://sourceforge.net/apps/trac/bigdata/ticket/377 (Optimize OPTIONALs with multiple statement patterns.) - http://sourceforge.net/apps/trac/bigdata/ticket/380 (Native SPARQL evaluation on cluster) - http://sourceforge.net/apps/trac/bigdata/ticket/387 (Cluster does not compute closure) - http://sourceforge.net/apps/trac/bigdata/ticket/395 (HTree hash join performance) - http://sourceforge.net/apps/trac/bigdata/ticket/401 (inline xsd:unsigned datatypes) - http://sourceforge.net/apps/trac/bigdata/ticket/408 (xsd:string cast fails for non-numeric data) - http://sourceforge.net/apps/trac/bigdata/ticket/421 (New query hints model.) - http://sourceforge.net/apps/trac/bigdata/ticket/431 (Use of read-only tx per query defeats cache on cluster) 1.0.3 - http://sourceforge.net/apps/trac/bigdata/ticket/217 (BTreeCounters does not track bytes released) - http://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface) - http://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled.) 
- http://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune the CommitRecordIndex) - http://sourceforge.net/apps/trac/bigdata/ticket/375 (Persistent memory leaks (RWStore/DISK)) - http://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException) - http://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal) - http://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer) - http://sourceforge.net/apps/trac/bigdata/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API) - http://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment) - http://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API) - http://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail) - http://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer) - http://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out) - http://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out) - http://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit) - http://sourceforge.net/apps/trac/bigdata/ticket/435 (Address is 0L) - http://sourceforge.net/apps/trac/bigdata/ticket/436 (TestMROWTransactions failure in CI) 1.0.2 - http://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.) - http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) - http://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.) 
- http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) - http://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.) - http://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.) - http://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.) - http://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.) - http://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.) - http://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0) 1.0.1 (*) - http://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store). - http://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out). - http://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes). - http://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used). - http://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance). - http://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)). - http://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database). - http://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries). - http://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value). 
- http://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".)
- http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
- http://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.)

For more information about bigdata(R), please see the following links:

[1] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page
[2] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted
[3] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap
[4] http://www.bigdata.com/bigdata/docs/api/
[5] http://sourceforge.net/projects/bigdata/
[6] http://www.bigdata.com/blog
[7] http://www.systap.com/bigdata.htm
[8] http://sourceforge.net/projects/bigdata/files/bigdata/
[9] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration

About bigdata:

Bigdata(R) is a horizontally-scaled, general-purpose storage and computing fabric for ordered data (B+Trees), designed to operate on either a single server or a cluster of commodity hardware. Bigdata(R) uses dynamically partitioned key-range shards in order to remove any realistic scaling limits - in principle, bigdata(R) may be deployed on 10s, 100s, or even thousands of machines, and new capacity may be added incrementally without requiring a full reload of all data. The bigdata(R) RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), and datum-level provenance.
|
|
From: Bryan T. <br...@sy...> - 2013-04-03 15:17:15
|
If anyone else has been having difficulties with the SVN certificates (specifically the inability to permanently accept a new certificate once it has been verified), it turns out that the root cause is a permissions problem in .subversion/auth/svn.ssl.server/. These files need to be writable by the user in order to accept the new certificate. See [1] for both a general solution (removing everything in that directory) and the more focused solution (chmod 644 on the file(s) that are not writable by the user).
Bryan
[1] http://kthoms.wordpress.com/2011/03/17/fixing-subversion-problem-error-validating-server-certificate/ |
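The two fixes described above might be scripted like this (a sketch; the directory path is the standard per-user Subversion auth cache, and the `rm` variant is commented out since it discards all previously accepted certificates):

```shell
# Cached server-certificate files must be user-writable before svn can
# overwrite them with a newly accepted certificate.
AUTH_DIR="$HOME/.subversion/auth/svn.ssl.server"
if [ -d "$AUTH_DIR" ]; then
    # the focused fix: make the existing cache files writable by the user
    chmod 644 "$AUTH_DIR"/* 2>/dev/null
    # the blunt alternative from [1]: clear the cache and re-accept on next use
    # rm -f "$AUTH_DIR"/*
fi
```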
|
From: Bryan T. <br...@sy...> - 2013-03-02 04:16:00
|
A GraphQuery returns triples (CONSTRUCT/DESCRIBE).
A TupleQuery returns solutions (SELECT).
Use prepareTupleQuery() here.
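The read path from the snippet below, reworked along those lines, might look like this (a sketch against the openrdf repository API; `repository` stands for an already-initialized BigdataSailRepository):

```java
import org.openrdf.query.BindingSet;
import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQuery;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.RepositoryConnection;

RepositoryConnection cxn = repository.getConnection();
try {
    // SELECT produces solutions, so prepare a TupleQuery, not a GraphQuery.
    TupleQuery tq = cxn.prepareTupleQuery(
            QueryLanguage.SPARQL,
            "SELECT * WHERE { GRAPH ?g { ?s ?p ?o } }");
    TupleQueryResult result = tq.evaluate();
    try {
        while (result.hasNext()) {
            BindingSet solution = result.next(); // binds ?g, ?s, ?p, ?o
        }
    } finally {
        result.close(); // always release the result iteration
    }
} finally {
    cxn.close();
}
```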
Bryan
From: Laurent Pellegrino <lau...@gm...>
Date: Friday, March 1, 2013 9:15 AM
To: Bryan Thompson <br...@sy...>
Subject: Re: [Bigdata-developers] Jena Adapter?
Hello Bryan,
I tested with the new artifacts. The repository initialization is now working properly. However, when I try to execute some SPARQL queries I get the following ClassCastException that seems to come from bigdata code:
java.lang.ClassCastException: com.bigdata.rdf.sail.BigdataSailTupleQuery cannot be cast to com.bigdata.rdf.sail.BigdataSailGraphQuery
at com.bigdata.rdf.sail.BigdataSailRepositoryConnection.prepareGraphQuery(BigdataSailRepositoryConnection.java:75)
at com.bigdata.rdf.sail.BigdataSailRepositoryConnection.prepareGraphQuery(BigdataSailRepositoryConnection.java:31)
at org.openrdf.repository.base.RepositoryConnectionBase.prepareGraphQuery(RepositoryConnectionBase.java:134)
at fr.inria.eventcloud.stash.rdfstores.benchmark.RDFStoreBenchmark.bigdataReadOperation(RDFStoreBenchmark.java:224)
at fr.inria.eventcloud.stash.rdfstores.benchmark.RDFStoreBenchmark.access$2(RDFStoreBenchmark.java:219)
at fr.inria.eventcloud.stash.rdfstores.benchmark.RDFStoreBenchmark$2.run(RDFStoreBenchmark.java:192)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
The piece of code I use for reads is the following:
RepositoryConnection cxn = repository.getConnection();
GraphQuery gq = cxn.prepareGraphQuery(
        QueryLanguage.SPARQL,
        "SELECT * WHERE { GRAPH ?g { ?s ?p ?o } }");
GraphQueryResult result = gq.evaluate();
cxn.close();
Is my SPARQL request illegal?
Kind Regards,
Laurent
On Fri, Mar 1, 2013 at 1:09 PM, Bryan Thompson <br...@sy...> wrote:
Laurent,
This exception would be from using the wrong Sesame JARs. I think we probably failed to update the POM when we updated the JARs.
java.lang.NoSuchFieldError: NQUADS
at com.bigdata.rdf.rio.nquads.NQuadsParserFactory.getRDFFormat(NQuadsParserFactory.java:47)
I have just committed a fix for this. It is building in CI now. A new snapshot JAR should be posted shortly.
Truth maintenance is only supported for triples, not quads. There are several discussions on the forum concerning what would be required to support truth maintenance and inference over quads. Briefly, the problem is specifying which named graph(s) are combined and where the entailments are written. There are also several suggestions of alternative mechanisms involving triples that may be better suited to some applications, especially the triples plus statement level metadata database mode.
Thanks,
Bryan
From: Laurent Pellegrino <lau...@gm...>
Date: Friday, March 1, 2013 6:41 AM
To: Bryan Thompson <br...@sy...>
Subject: Re: [Bigdata-developers] Jena Adapter?
Hello,
Thank you for the answers. I tried to create a Bigdata repository with support for quads with the following piece of code:
Properties properties = new Properties();
File journal = File.createTempFile("bigdata", ".jnl");
properties.setProperty(BigdataSail.Options.FILE, journal.getAbsolutePath());
properties.setProperty(BigdataSail.Options.QUADS_MODE, "true");
properties.setProperty(BigdataSail.Options.TRUTH_MAINTENANCE, "false");
BigdataSail sail = new BigdataSail(properties);
Repository repo = new BigdataSailRepository(sail);
repo.initialize();
repo.shutDown();
but the repository initialization does not seem to work. I get the following error:
[main] ERROR rio.RDFParserRegistry - Failed to instantiate service
java.lang.NoSuchFieldError: NQUADS
at com.bigdata.rdf.rio.nquads.NQuadsParserFactory.getRDFFormat(NQuadsParserFactory.java:47)
at org.openrdf.rio.RDFParserRegistry.getKey(RDFParserRegistry.java:38)
at org.openrdf.rio.RDFParserRegistry.getKey(RDFParserRegistry.java:15)
at info.aduna.lang.service.ServiceRegistry.add(ServiceRegistry.java:74)
at info.aduna.lang.service.ServiceRegistry.<init>(ServiceRegistry.java:44)
at info.aduna.lang.service.FileFormatServiceRegistry.<init>(FileFormatServiceRegistry.java:20)
at org.openrdf.rio.RDFParserRegistry.<init>(RDFParserRegistry.java:33)
at org.openrdf.rio.RDFParserRegistry.getInstance(RDFParserRegistry.java:26)
at com.bigdata.rdf.ServiceProviderHook.forceLoad(ServiceProviderHook.java:109)
at com.bigdata.rdf.ServiceProviderHook.<clinit>(ServiceProviderHook.java:84)
at com.bigdata.rdf.store.AbstractTripleStore.<init>(AbstractTripleStore.java:1266)
at com.bigdata.rdf.store.AbstractLocalTripleStore.<init>(AbstractLocalTripleStore.java:57)
at com.bigdata.rdf.store.LocalTripleStore.<init>(LocalTripleStore.java:161)
at com.bigdata.rdf.sail.BigdataSail.createLTS(BigdataSail.java:726)
at com.bigdata.rdf.sail.BigdataSail.createLTS(BigdataSail.java:653)
at com.bigdata.rdf.sail.BigdataSail.<init>(BigdataSail.java:630)
at fr.inria.eventcloud.stash.rdfstores.benchmark.BigdataRepository.createBigDataRepository(BigdataRepository.java:53)
at fr.inria.eventcloud.stash.rdfstores.benchmark.RDFStoreBenchmark.testBigdataSequential(RDFStoreBenchmark.java:45)
at fr.inria.eventcloud.stash.rdfstores.benchmark.RDFStoreBenchmark.main(RDFStoreBenchmark.java:77)
I have set truth maintenance to false because I get the following message if I leave it at its default value when quads mode is set to true:
java.lang.UnsupportedOperationException: com.bigdata.rdf.sail.truthMaintenance is not supported with quads (com.bigdata.rdf.store.AbstractTripleStore.quads)
My test was performed by using bigdata version 1.2.2-SNAPSHOT from your maven repository.
I cannot figure out what is wrong in the previous piece of code. Could you help me?
Kind Regards,
Laurent
On Thu, Feb 28, 2013 at 1:12 PM, Bryan Thompson <br...@sy...> wrote:
Laurent,
Unfortunately not. This has been discussed several times, but never implemented. However, the advent of SPARQL UPDATE makes it much easier to write portable applications. If you like, feel free to post questions about how to achieve certain things under bigdata that you currently accomplish with Jena.
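To illustrate the portability point, a write expressed as SPARQL UPDATE might look like the sketch below (assuming the openrdf `prepareUpdate()` method is available on the connection; the graph name and triple are made up for illustration):

```java
import org.openrdf.query.QueryLanguage;
import org.openrdf.query.Update;
import org.openrdf.repository.RepositoryConnection;

RepositoryConnection cxn = repo.getConnection();
try {
    // INSERT DATA is plain SPARQL 1.1 UPDATE, so the same string would
    // run unchanged against any conforming store (Jena, Sesame, bigdata).
    Update update = cxn.prepareUpdate(
            QueryLanguage.SPARQL,
            "INSERT DATA { GRAPH <http://example.org/g> {" +
            "  <http://example.org/s> <http://example.org/p> \"o\" } }");
    update.execute();
} finally {
    cxn.close();
}
```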
Thanks,
Bryan
From: Laurent Pellegrino <lau...@gm...>
Date: Thursday, February 28, 2013 4:27 AM
To: "big...@li..." <big...@li...>
Subject: [Bigdata-developers] Jena Adapter?
Hello all,
I am investigating the use of Bigdata in an application that currently uses Jena TDB. I would like to test Bigdata without rewriting everything, which is why I am wondering whether a Jena adapter exists on top of Bigdata.
Kind Regards,
Laurent
|