This list is closed; nobody may subscribe to it.
Messages by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2010 | | | | | | | 139 | 94 | 232 | 143 | 138 | 55 |
| 2011 | 127 | 90 | 101 | 74 | 148 | 241 | 169 | 121 | 157 | 199 | 281 | 75 |
| 2012 | 107 | 122 | 184 | 73 | 14 | 49 | 26 | 103 | 133 | 61 | 51 | 55 |
| 2013 | 59 | 72 | 99 | 62 | 92 | 19 | 31 | 138 | 47 | 83 | 95 | 111 |
| 2014 | 125 | 60 | 119 | 136 | 270 | 83 | 88 | 30 | 47 | 27 | 23 | |
| 2015 | | | | | | | | | 3 | | | |
| 2016 | | | 4 | 1 | | | | | | | | |
From: Blaise de C. <bde...@gm...> - 2016-03-22 07:29:02

Thank you so much, you saved me: it works well. We are experimenting with Blazegraph for a big open data project; it will go live at the end of the year. What is the roadmap for 2.1? Thank you again for your help!

Blaise
From: Michael S. <ms...@me...> - 2016-03-21 17:49:05

Dear Blaise,

Thanks for your interest in using geospatial. Unless I am overlooking something, the data and query look good. So I guess what's missing is that you need to enable geospatial when creating the namespace:

    com.bigdata.rdf.store.AbstractTripleStore.geoSpatial=true

Please note that the geospatial feature in 2.0.1 was intended to be a preview: while it should work well performance-wise, we are just finishing up a much more powerful version of the geospatial feature for 2.1, which comes with more powerful configuration mechanisms, a built-in datatype for lat+lon only, and more. This will, unfortunately, include breaking changes, yet it should be pretty straightforward to migrate existing geospatial knowledge bases to 2.1.

Best,
Michael
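For readers hitting the same symptom: the property above has to be supplied at namespace creation time. A minimal sketch of what that can look like via the multi-tenancy REST API follows; the host, port, servlet path, namespace name "geo", and the text/plain properties payload are all assumptions to verify against the wiki for your install:

```bash
# Create a namespace with the geospatial index enabled. The geoSpatial
# property is quoted from the message above; everything else (endpoint
# path, namespace name) is an illustrative assumption.
curl -X POST 'http://localhost:9999/blazegraph/namespace' \
     -H 'Content-Type: text/plain' \
     --data-binary 'com.bigdata.rdf.sail.namespace=geo
com.bigdata.rdf.store.AbstractTripleStore.geoSpatial=true'
```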
From: Blaise de C. <bde...@gm...> - 2016-03-21 16:30:08

Hi again!

I'm unable to use the geospatial service in Blazegraph 2.0.1 :(

I created a simple namespace and loaded the statements from this test file:

https://raw.githubusercontent.com/blazegraph/database/master/bigdata-rdf-test/src/test/java/com/bigdata/rdf/sparql/ast/eval/geo-realworld-cities.nt

Then I executed the associated query:

https://raw.githubusercontent.com/blazegraph/database/master/bigdata-rdf-test/src/test/java/com/bigdata/rdf/sparql/ast/eval/geo-realworld-rectangle01.rq

It gives me zero results, without any error... Did I make a mistake?

Thank you,
Blaise
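The linked .rq file is the authoritative version of the query. For context, a Blazegraph 2.0.x geospatial rectangle search takes roughly the SERVICE form sketched below; the geo:* parameter names, the "lat#lon" corner literals, the predicate URI, and the endpoint path are reproduced from memory of the wiki documentation and should be treated as approximations:

```bash
# Run a rectangle query against the namespace's SPARQL endpoint. When the
# namespace was created without geoSpatial=true, this kind of query returns
# zero results without raising an error, which matches the symptom reported
# in this thread.
curl 'http://localhost:9999/blazegraph/namespace/geo/sparql' \
     -H 'Accept: application/sparql-results+json' \
     --data-urlencode 'query=
PREFIX geo: <http://www.bigdata.com/rdf/geospatial#>
SELECT ?s WHERE {
  SERVICE geo:search {
    ?s geo:search "inRectangle" .
    ?s geo:predicate <http://example.org/hasLocation> .
    ?s geo:spatialRectangleSouthWest "40.0#-5.0" .
    ?s geo:spatialRectangleNorthEast "50.0#10.0" .
  }
}'
```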
From: Blaise de C. <bde...@gm...> - 2016-03-21 15:43:28

Hi there,

I'm running some tests on a powerful server:

- 2x Intel Xeon E5-2650v3 (20c/40t, 2.3/3.0 GHz)
- 256 GB RAM (DDR4 ECC 2133 MHz)
- 2x 480 GB SSD

I'm using the DataLoader to load 22M triples, and these are the stats:

    Load: 22571730 stmts added in 371.618 secs, rate= 60739, commitLatency=0ms, failSet=0,goodSet=1}
    Total elapsed=380457ms

60K stmts/s seems not so good to me for this configuration... What do you think?

Thank you,
Blaise
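As a sanity check on the log line: 22,571,730 statements in 371.618 s is indeed about 60,739 statements/s, so the reported rate is internally consistent (the 380,457 ms total elapsed adds commit and startup overhead). Whether it can be improved depends mostly on store configuration; disabling inference and the free-text index is the usual first step for bulk loads. The sketch below is hypothetical: the DataLoader class name also appears in this archive's log4j configuration, but the command line and every property name should be verified against the wiki:

```bash
# Illustrative bulk-load tuning; verify each property name against the docs.
cat > fastload.properties <<'EOF'
com.bigdata.rdf.store.AbstractTripleStore.textIndex=false
com.bigdata.rdf.store.AbstractTripleStore.axiomsClass=com.bigdata.rdf.axioms.NoAxioms
com.bigdata.rdf.sail.truthMaintenance=false
EOF

# Hypothetical invocation; running the class without arguments prints its usage.
java -cp bigdata.jar com.bigdata.rdf.store.DataLoader fastload.properties /data/triples/
```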
From: Alex M. <ale...@gm...> - 2015-09-11 22:52:56

Hi Michael,

Thanks very much.

Alex
tilogeo.com
From: Michael S. <ms...@me...> - 2015-09-11 22:51:38

Dear Alex,

Thanks for your interest in Blazegraph.

There are different ways to communicate with the Blazegraph server. The way you tried is through Java using the Sesame API. This is a generic RDF API and a convenient way to integrate with Java applications. Sesame offers different options to do so: one is the native BigdataSail (which is used in the sample code you are using); another is a more generic SPARQL repository (which wraps requests against Blazegraph's built-in SPARQL endpoint).

And yes, you can just as well communicate with Blazegraph's standards-compliant SPARQL endpoint (and some custom APIs) directly via the command line/curl. Beyond the SPARQL endpoint, Blazegraph offers some convenient REST API extensions. See for instance https://wiki.blazegraph.com/wiki/index.php/REST_API#INSERT for a description of how to insert data from a file. Similar methods are available for deletion, SPARQL UPDATE, and querying, of course.

Just a side note: this is a development-focused mailing list (commit traffic); it would be great if you could send future questions regarding the use of Blazegraph to https://lists.sourceforge.net/lists/listinfo/bigdata-developers instead.

Best,
Michael
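A sketch of the curl-based INSERT that the REST API page describes, reusing the file name and servlet path from Alex's message below (the endpoint path may differ per install). The key differences from his attempt: the target is the SPARQL endpoint rather than /bigdata/update, and the file is sent as the request body with an explicit RDF content type:

```bash
# REST API "INSERT with body": POST the RDF/XML file to the SPARQL endpoint.
# Servlet path taken from the original message; adjust it to your install.
curl -X POST 'http://localhost:8080/bigdata/sparql' \
     -H 'Content-Type: application/rdf+xml' \
     --data-binary @zu_en-gb_dict.rdf
```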
From: Alex M. <ale...@gm...> - 2015-09-11 17:33:27

Hi,

I'm evaluating Blazegraph for a client along with other RDF triple stores.

I note that there is sample code for loading data. Is a Java application the only method to load data?

http://sourceforge.net/p/bigdata/git/ci/master/tree/bigdata-sails/src/samples/com/bigdata/samples/SampleCode.java

Is there a method to use curl to load data? I was trying something like the following:

    curl -D -H 'Content-Type: application/rdf+xml' --upload-file zu_en-gb_dict.rdf -X POST 'http://localhost:8080/bigdata/update'

Is there any application equivalent to s-put for Jena?

Generally speaking, are most interactions with Blazegraph through Java code?

Regards,
Alex
www.tilogeo.com
From: <tho...@us...> - 2014-11-20 20:05:27

Revision: 8725
http://sourceforge.net/p/bigdata/code/8725
Author: thompsonbry
Date: 2014-11-20 20:05:23 +0000 (Thu, 20 Nov 2014)

Log Message: fixup to the 1.4.0 release pom for the openrdf dependency version (2.6.10 => 2.7.13).

Modified Paths: tags/BIGDATA_RELEASE_1_4_0/pom.xml

    --- tags/BIGDATA_RELEASE_1_4_0/pom.xml	2014-11-19 19:40:20 UTC (rev 8724)
    +++ tags/BIGDATA_RELEASE_1_4_0/pom.xml	2014-11-20 20:05:23 UTC (rev 8725)
    @@ -52,7 +52,7 @@
         <modelVersion>4.0.0</modelVersion>
         <groupId>com.bigdata</groupId>
         <artifactId>bigdata</artifactId>
    -    <version>1.3.4-SNAPSHOT</version>
    +    <version>1.4.0</version>
         <packaging>pom</packaging>
         <name>bigdata(R)</name>
         <description>Bigdata(R) Maven Build</description>
    @@ -76,7 +76,7 @@
         <!-- -->
         <icu.version>4.8</icu.version>
         <zookeeper.version>3.4.5</zookeeper.version>
    -    <sesame.version>2.6.10</sesame.version>
    +    <sesame.version>2.7.13</sesame.version>
         <slf4j.version>1.6.1</slf4j.version>
         <jetty.version>9.1.4.v20140401</jetty.version>
         <servlet.version>3.1.0</servlet.version>
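For anyone checking which Sesame/openrdf version a given checkout of this pom resolves to, Maven's help plugin can print the property. A hypothetical invocation (the -q/-DforceStdout combination requires a reasonably recent maven-help-plugin):

```bash
# Print the effective value of the sesame.version property from the pom.
mvn -q help:evaluate -Dexpression=sesame.version -DforceStdout
```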
From: <tho...@us...> - 2014-11-19 19:40:23

Revision: 8724
http://sourceforge.net/p/bigdata/code/8724
Author: thompsonbry
Date: 2014-11-19 19:40:20 +0000 (Wed, 19 Nov 2014)

Log Message: fix to pom (openrdf 2.6.10 => 2.7.13)

Modified Paths:
- branches/BIGDATA_RELEASE_1_4_0/build.properties
- branches/BIGDATA_RELEASE_1_4_0/pom.xml

    --- branches/BIGDATA_RELEASE_1_4_0/build.properties	2014-11-18 16:35:18 UTC (rev 8723)
    +++ branches/BIGDATA_RELEASE_1_4_0/build.properties	2014-11-19 19:40:20 UTC (rev 8724)
    @@ -50,8 +50,6 @@
     sesame.version=2.7.13
     slf4j.version=1.6.1
     jetty.version=9.1.4.v20140401
    -#jetty.version=7.2.2.v20101205
    -#servlet.version=2.5
     servlet.version=3.1.0
     lucene.version=3.0.0
     apache.commons_codec.version=1.4
    @@ -62,7 +60,6 @@
     apache.httpclient_cache.version=4.1.3
     apache.httpcore.version=4.1.4
     apache.httpmime.version=4.1.3
    -#nxparser.version=1.2.3
     colt.version=1.2.0
     highscalelib.version=1.1.2
     log4j.version=1.2.17
    --- branches/BIGDATA_RELEASE_1_4_0/pom.xml	2014-11-18 16:35:18 UTC (rev 8723)
    +++ branches/BIGDATA_RELEASE_1_4_0/pom.xml	2014-11-19 19:40:20 UTC (rev 8724)
    @@ -76,7 +76,7 @@
         <!-- -->
         <icu.version>4.8</icu.version>
         <zookeeper.version>3.4.5</zookeeper.version>
    -    <sesame.version>2.6.10</sesame.version>
    +    <sesame.version>2.7.13</sesame.version>
         <slf4j.version>1.6.1</slf4j.version>
         <jetty.version>9.1.4.v20140401</jetty.version>
         <servlet.version>3.1.0</servlet.version>
From: <tho...@us...> - 2014-11-18 16:35:22

Revision: 8723
http://sourceforge.net/p/bigdata/code/8723
Author: thompsonbry
Date: 2014-11-18 16:35:18 +0000 (Tue, 18 Nov 2014)

Log Message: returning to snapshot builds for the 1.4.x maintenance and development branch.

Modified Paths:
- branches/BIGDATA_RELEASE_1_4_0/build.properties
- branches/BIGDATA_RELEASE_1_4_0/pom.xml

    --- branches/BIGDATA_RELEASE_1_4_0/build.properties	2014-11-18 16:23:07 UTC (rev 8722)
    +++ branches/BIGDATA_RELEASE_1_4_0/build.properties	2014-11-18 16:35:18 UTC (rev 8723)
    @@ -98,14 +98,14 @@
     # Set true to do a snapshot build. This changes the value of ${version} to
     # include the date.
    -snapshot=false
    +snapshot=true
     # Javadoc build may be disabled using this property. The javadoc target will
     # not be executed unless this property is defined (its value does not matter).
     # Note: The javadoc goes quite if you have enough memory, but can take forever
     # and then runs out of memory if the JVM is starved for RAM. The heap for the
     # javadoc JVM is explicitly set in the javadoc target in the build.xml file.
    -javadoc=
    +#javadoc=
     # packaging property set (rpm, deb).
     package.release=1
    --- branches/BIGDATA_RELEASE_1_4_0/pom.xml	2014-11-18 16:23:07 UTC (rev 8722)
    +++ branches/BIGDATA_RELEASE_1_4_0/pom.xml	2014-11-18 16:35:18 UTC (rev 8723)
    @@ -52,7 +52,7 @@
         <modelVersion>4.0.0</modelVersion>
         <groupId>com.bigdata</groupId>
         <artifactId>bigdata</artifactId>
    -    <version>1.3.4-SNAPSHOT</version>
    +    <version>1.4.0-SNAPSHOT</version>
         <packaging>pom</packaging>
         <name>bigdata(R)</name>
         <description>Bigdata(R) Maven Build</description>
From: <tho...@us...> - 2014-11-18 16:23:08

Revision: 8722
http://sourceforge.net/p/bigdata/code/8722
Author: thompsonbry
Date: 2014-11-18 16:23:07 +0000 (Tue, 18 Nov 2014)

Log Message: 1.4.0 release tag.

Added Paths: tags/BIGDATA_RELEASE_1_4_0/
From: <tho...@us...> - 2014-11-18 16:21:47

Revision: 8721
http://sourceforge.net/p/bigdata/code/8721
Author: thompsonbry
Date: 2014-11-18 16:21:45 +0000 (Tue, 18 Nov 2014)

Log Message: Removed nxparser dependency entirely from pom.xml

Modified Paths: branches/BIGDATA_RELEASE_1_4_0/pom.xml

    --- branches/BIGDATA_RELEASE_1_4_0/pom.xml	2014-11-18 15:51:12 UTC (rev 8720)
    +++ branches/BIGDATA_RELEASE_1_4_0/pom.xml	2014-11-18 16:21:45 UTC (rev 8721)
    @@ -89,7 +89,6 @@
         <apache.httpclient_cache.version>4.1.3</apache.httpclient_cache.version>
         <apache.httpcore.version>4.1.4</apache.httpcore.version>
         <apache.httpmime.version>4.1.3</apache.httpmime.version>
    -    <!--<nxparser.version>1.2.3</nxparser.version>-->
         <colt.version>1.2.0</colt.version>
         <highscalelib.version>1.1.2</highscalelib.version>
         <log4j.version>1.2.17</log4j.version>
    @@ -178,14 +177,6 @@
         <id>bigdata.releases</id>
         <url>http://www.systap.com/maven/releases/</url>
       </repository>
    -  <repository>
    -    <id>nxparser-repo</id>
    -    <url>http://nxparser.googlecode.com/svn/repository</url>
    -  </repository>
    -  <repository>
    -    <id>nxparser-snapshots</id>
    -    <url>http://nxparser.googlecode.com/svn/snapshots</url>
    -  </repository>
       <!--
       <repository>
         <id>jetty.releases</id>
    @@ -208,11 +199,6 @@
           <version>${highscalelib.version}</version>
         </dependency>
         <dependency>
    -      <groupId>org.semanticweb.yars</groupId>
    -      <artifactId>nxparser</artifactId>
    -      <version>${nxparser.version}</version>
    -    </dependency>
    -    <dependency>
           <groupId>colt</groupId>
           <artifactId>colt</artifactId>
           <version>${colt.version}</version>
From: <tho...@us...> - 2014-11-18 15:51:15

Revision: 8720
http://sourceforge.net/p/bigdata/code/8720
Author: thompsonbry
Date: 2014-11-18 15:51:12 +0000 (Tue, 18 Nov 2014)

Log Message: Bumping version for 1.4.0 release.

Modified Paths: branches/BIGDATA_RELEASE_1_4_0/build.properties

    --- branches/BIGDATA_RELEASE_1_4_0/build.properties	2014-11-18 15:27:18 UTC (rev 8719)
    +++ branches/BIGDATA_RELEASE_1_4_0/build.properties	2014-11-18 15:51:12 UTC (rev 8720)
    @@ -93,19 +93,19 @@
     release.dir=ant-release
     # The build version (note: 0.82b -> 0.82.0); 0.83.2 is followed by 1.0.0
    -build.ver=1.3.4
    +build.ver=1.4.0
     build.ver.osgi=1.0
     # Set true to do a snapshot build. This changes the value of ${version} to
     # include the date.
    -snapshot=true
    +snapshot=false
     # Javadoc build may be disabled using this property. The javadoc target will
     # not be executed unless this property is defined (its value does not matter).
     # Note: The javadoc goes quite if you have enough memory, but can take forever
     # and then runs out of memory if the JVM is starved for RAM. The heap for the
     # javadoc JVM is explicitly set in the javadoc target in the build.xml file.
    -#javadoc=
    +javadoc=
     # packaging property set (rpm, deb).
     package.release=1
From: <tho...@us...> - 2014-11-18 15:27:27

Revision: 8719
http://sourceforge.net/p/bigdata/code/8719
Author: thompsonbry
Date: 2014-11-18 15:27:18 +0000 (Tue, 18 Nov 2014)

Log Message: updated release notes

Modified Paths: branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt

    --- branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt	2014-11-18 15:07:48 UTC (rev 8718)
    +++ branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt	2014-11-18 15:27:18 UTC (rev 8719)
    @@ -20,8 +20,10 @@
     New features in 1.4.x:
    -- Openrdf 2.7 support.
    -- Numerous bug fixes and performance enhancements.
    +- Openrdf 2.7 support (#714).
    +- Workbench error handling improvements (#911)
    +- Various RDR specific bug fixes for the workbench and server (#1038, #1058, #1061)
    +- Numerous other bug fixes and performance enhancements.
     Feature summary:
From: <tho...@us...> - 2014-11-18 15:07:51

Revision: 8718
http://sourceforge.net/p/bigdata/code/8718
Author: thompsonbry
Date: 2014-11-18 15:07:48 +0000 (Tue, 18 Nov 2014)

Log Message: restoring testOrderByQueriesAreInterruptable

Modified Paths:
- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataConnectionTest.java
- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/repository/RepositoryConnectionTest.java

    --- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataConnectionTest.java	2014-11-18 15:01:18 UTC (rev 8717)
    +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataConnectionTest.java	2014-11-18 15:07:48 UTC (rev 8718)
    @@ -35,9 +35,18 @@ import java.io.File; import java.io.IOException; +import java.lang.management.GarbageCollectorMXBean; +import java.lang.management.ManagementFactory; +import java.lang.reflect.Method; +import java.util.List; import java.util.Properties; import org.apache.log4j.Logger; +import org.openrdf.model.vocabulary.RDFS; +import org.openrdf.query.QueryInterruptedException; +import org.openrdf.query.QueryLanguage; +import org.openrdf.query.TupleQuery; +import org.openrdf.query.TupleQueryResult; import org.openrdf.repository.Repository; import org.openrdf.repository.RepositoryConnectionTest;
    @@ -669,153 +678,153 @@ // } // } // -// /** -// * {@inheritDoc} -// * <p> -// * This test was failing historically for two reasons. First, it would -// * sometimes encounter a full GC pause that would suspend the JVM for longer -// * than the query timeout. This would fail the test. Second, the query -// * engine code used to only check for a deadline when a query operator would -// * start or stop. This meant that a compute bound operator would not be -// * interrupted if there was no other concurrent operators for that query -// * that were starting and stoping. This was fixed in #722. -// * -// * @see <a href="https://sourceforge.net/apps/trac/bigdata/ticket/772"> -// * Query timeout only checked at operator start/stop. </a> -// */ -// @Override -// public void testOrderByQueriesAreInterruptable() throws Exception { -// -// /* -// * Note: Test failures arise from length GC pauses. Such GC pauses -// * suspend the application for longer than the query should run and -// * cause it to miss its deadline. In order to verify that the deadline -// * is being applied correctly, we can only rely on those test trials -// * where the GC pause was LT the target query time. Other trials need to -// * be thrown out. We do this using a Sun specific management API. The -// * test will throw a ClassNotFoundException for other JVMs. -// */ -// final Class cls1 = Class -// .forName("com.sun.management.GarbageCollectorMXBean"); -// -// final Class cls2 = Class.forName("com.sun.management.GcInfo"); -// -// final Method method1 = cls1.getMethod("getLastGcInfo", new Class[] {}); -// -// final Method method2 = cls2.getMethod("getDuration", new Class[] {}); -// -// /* -// * Load data.
    -// */ -// testCon.setAutoCommit(false); -// for (int index = 0; index < 512; index++) { -// testCon.add(RDFS.CLASS, RDFS.COMMENT, testCon.getValueFactory() -// .createBNode()); -// } -// testCon.setAutoCommit(true); -// testCon.commit(); -// -// final long MAX_QUERY_TIME = 2000; -// final long MAX_TIME_MILLIS = 5000; -// final int NTRIALS = 20; -// int nok = 0, ngcfail = 0; -// -// for (int i = 0; i < NTRIALS; i++) { -// -// if (log.isInfoEnabled()) -// log.info("RUN-TEST-PASS #" + i); -// -// final TupleQuery query = testCon -// .prepareTupleQuery( -// QueryLanguage.SPARQL, -// "SELECT * WHERE { ?s ?p ?o . ?s1 ?p1 ?o1 . ?s2 ?p2 ?o2 . ?s3 ?p3 ?o3 . } ORDER BY ?s1 ?p1 ?o1 LIMIT 1000"); -// -// query.setMaxQueryTime((int) (MAX_QUERY_TIME / 1000)); -// -// final long startTime = System.currentTimeMillis(); -// -// final TupleQueryResult result = query.evaluate(); -// -// if (log.isInfoEnabled()) -// log.info("Query evaluation has begin"); -// -// try { -// -// result.hasNext(); -// fail("Query should have been interrupted on pass# " + i); -// -// } catch (QueryInterruptedException e) { -// -// // Expected -// final long duration = System.currentTimeMillis() - startTime; -// -// if (log.isInfoEnabled()) -// log.info("Actual query duration: " + duration -// + "ms on pass#" + i); -// -// final boolean ok = duration < MAX_TIME_MILLIS; -// -// if (ok) { -// -// nok++; -// -// } else { -// -// boolean failedByGCPause = false; -// -// final List<GarbageCollectorMXBean> mbeans = ManagementFactory -// .getGarbageCollectorMXBeans(); -// -// for (GarbageCollectorMXBean m : mbeans) { -// /* -// * Note: This relies on a sun specific interface. -// * -// * Note: This test is not strickly diagnostic. We should -// * really be comparing the full GC time since we started -// * to evaluate the query. However, in practice this is a -// * pretty good proxy for that (as observed with -// * -verbose:gc when you run this test). -// */ -// if (cls1.isAssignableFrom(m.getClass())) { -// // Information from the last GC. -// final Object lastGcInfo = method1.invoke(m, -// new Object[] {}); -// // Duration of that last GC. -// final long lastDuration = (Long) method2.invoke( -// lastGcInfo, new Object[] {}); -// if (lastDuration >= MAX_QUERY_TIME) { -// log.warn("Large GC pause caused artifical liveness problem: duration=" -// + duration + "ms"); -// failedByGCPause = true; -// break; -// } -// } -// } -// -// if (!failedByGCPause) -// fail("Query not interrupted quickly enough, should have been ~2s, but was " -// + (duration / 1000) + "s on pass#" + i); -// -// ngcfail++; -// -// } -// } -// } -// -// /* -// * Fail the test if we do not get enough good trials. -// */ -// final String msg = "NTRIALS=" + NTRIALS + ", nok=" + nok + ", ngcfail=" -// + ngcfail; -// -// log.warn(msg); -// -// if (nok < 5) { -// -// fail(msg); -// -// } -// -// } -// + /** + * {@inheritDoc} + * <p> + * This test was failing historically for two reasons. First, it would + * sometimes encounter a full GC pause that would suspend the JVM for longer + * than the query timeout. This would fail the test. Second, the query + * engine code used to only check for a deadline when a query operator would + * start or stop. This meant that a compute bound operator would not be + * interrupted if there was no other concurrent operators for that query + * that were starting and stoping. This was fixed in #722. + * + * @see <a href="https://sourceforge.net/apps/trac/bigdata/ticket/772"> + * Query timeout only checked at operator start/stop.
    + </a> + */ + @Override + public void testOrderByQueriesAreInterruptable() throws Exception { + + /* + * Note: Test failures arise from length GC pauses. Such GC pauses + * suspend the application for longer than the query should run and + * cause it to miss its deadline. In order to verify that the deadline + * is being applied correctly, we can only rely on those test trials + * where the GC pause was LT the target query time. Other trials need to + * be thrown out. We do this using a Sun specific management API. The + * test will throw a ClassNotFoundException for other JVMs. + */ + final Class cls1 = Class + .forName("com.sun.management.GarbageCollectorMXBean"); + + final Class cls2 = Class.forName("com.sun.management.GcInfo"); + + final Method method1 = cls1.getMethod("getLastGcInfo", new Class[] {}); + + final Method method2 = cls2.getMethod("getDuration", new Class[] {}); + + /* + * Load data. + */ + testCon.setAutoCommit(false); + for (int index = 0; index < 512; index++) { + testCon.add(RDFS.CLASS, RDFS.COMMENT, testCon.getValueFactory() + .createBNode()); + } + testCon.setAutoCommit(true); + testCon.commit(); + + final long MAX_QUERY_TIME = 2000; + final long MAX_TIME_MILLIS = 5000; + final int NTRIALS = 20; + int nok = 0, ngcfail = 0; + + for (int i = 0; i < NTRIALS; i++) { + + if (log.isInfoEnabled()) + log.info("RUN-TEST-PASS #" + i); + + final TupleQuery query = testCon + .prepareTupleQuery( + QueryLanguage.SPARQL, + "SELECT * WHERE { ?s ?p ?o . ?s1 ?p1 ?o1 . ?s2 ?p2 ?o2 . ?s3 ?p3 ?o3 . } ORDER BY ?s1 ?p1 ?o1 LIMIT 1000"); + + query.setMaxQueryTime((int) (MAX_QUERY_TIME / 1000)); + + final long startTime = System.currentTimeMillis(); + + final TupleQueryResult result = query.evaluate(); + + if (log.isInfoEnabled()) + log.info("Query evaluation has begin"); + + try { + + result.hasNext(); + fail("Query should have been interrupted on pass# " + i); + + } catch (QueryInterruptedException e) { + + // Expected + final long duration = System.currentTimeMillis() - startTime; + + if (log.isInfoEnabled()) + log.info("Actual query duration: " + duration + + "ms on pass#" + i); + + final boolean ok = duration < MAX_TIME_MILLIS; + + if (ok) { + + nok++; + + } else { + + boolean failedByGCPause = false; + + final List<GarbageCollectorMXBean> mbeans = ManagementFactory + .getGarbageCollectorMXBeans(); + + for (GarbageCollectorMXBean m : mbeans) { + /* + * Note: This relies on a sun specific interface. + * + * Note: This test is not strickly diagnostic. We should + * really be comparing the full GC time since we started + * to evaluate the query. However, in practice this is a + * pretty good proxy for that (as observed with + * -verbose:gc when you run this test). + */ + if (cls1.isAssignableFrom(m.getClass())) { + // Information from the last GC. + final Object lastGcInfo = method1.invoke(m, + new Object[] {}); + // Duration of that last GC. + final long lastDuration = (Long) method2.invoke( + lastGcInfo, new Object[] {}); + if (lastDuration >= MAX_QUERY_TIME) { + log.warn("Large GC pause caused artifical liveness problem: duration=" + + duration + "ms"); + failedByGCPause = true; + break; + } + } + } + + if (!failedByGCPause) + fail("Query not interrupted quickly enough, should have been ~2s, but was " + + (duration / 1000) + "s on pass#" + i); + + ngcfail++; + + } + } + } + + /* + * Fail the test if we do not get enough good trials.
    + */ + final String msg = "NTRIALS=" + NTRIALS + ", nok=" + nok + ", ngcfail=" + + ngcfail; + + log.warn(msg); + + if (nok < 5) { + + fail(msg); + + } + + } + }
    --- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/repository/RepositoryConnectionTest.java	2014-11-18 15:01:18 UTC (rev 8717)
    +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/repository/RepositoryConnectionTest.java	2014-11-18 15:07:48 UTC (rev 8718)
    @@ -1709,6 +1709,10 @@ } } + /* + * Note: This test gets overridden in a GC aware manner to ensure that it + * passes in CI. See BigdataConnectionTest. + */ @Test public void testOrderByQueriesAreInterruptable() throws Exception
From: <tho...@us...> - 2014-11-18 15:01:33

Revision: 8717
http://sourceforge.net/p/bigdata/code/8717
Author: thompsonbry
Date: 2014-11-18 15:01:18 +0000 (Tue, 18 Nov 2014)

Log Message: RDR test suites.

Modified Paths:
- branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt
- branches/BIGDATA_RELEASE_1_4_0/bigdata/src/resources/logging/log4j.properties
- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/ServiceProviderHook.java
- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-04.ttlx
- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/WorkbenchServlet.java
- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/AbstractTestNanoSparqlClient.java
- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServerWithProxyIndexManager.java

Added Paths:
- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestRDROperations.java
- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/rdr_01.ttlx

    --- branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt	2014-11-18 12:03:24 UTC (rev 8716)
    +++ branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt	2014-11-18 15:01:18 UTC (rev 8717)
    @@ -53,14 +53,17 @@ - http://trac.bigdata.com/ticket/714 (Migrate to openrdf 2.7) - http://trac.bigdata.com/ticket/745 (BackgroundTupleResult overrides final method close) +- http://trac.bigdata.com/ticket/751 (explicit bindings get ignored in subselect (duplicate of #714)) - http://trac.bigdata.com/ticket/813 (Documentation on BigData Reasoning) - http://trac.bigdata.com/ticket/911 (workbench does not display errors well) - http://trac.bigdata.com/ticket/1035 (DISTINCT PREDICATEs query is slow) - http://trac.bigdata.com/ticket/1037 (SELECT COUNT(...) (DISTINCT|REDUCED) {single-triple-pattern} is slow) +- http://trac.bigdata.com/ticket/1038 (RDR RDF parsers are not always discovered) - http://trac.bigdata.com/ticket/1044 (ORDER_BY ordering not preserved by projection operator) - http://trac.bigdata.com/ticket/1047 (NQuadsParser hangs when loading latest dbpedia dump.) - http://trac.bigdata.com/ticket/1052 (ASTComplexOptionalOptimizer did not account for Values clauses) - http://trac.bigdata.com/ticket/1054 (BigdataGraphFactory create method cannot be invoked from the gremlin command line due to a Boolean vs boolean type mismatch.)
    +- http://trac.bigdata.com/ticket/1058 (update RDR documentation on wiki) - http://trac.bigdata.com/ticket/1061 (Server does not generate RDR aware JSON for RDF/SPARQL RESULTS) 1.3.4:
    --- branches/BIGDATA_RELEASE_1_4_0/bigdata/src/resources/logging/log4j.properties	2014-11-18 12:03:24 UTC (rev 8716)
    +++ branches/BIGDATA_RELEASE_1_4_0/bigdata/src/resources/logging/log4j.properties	2014-11-18 15:01:18 UTC (rev 8717)
    @@ -10,13 +10,15 @@ log4j.logger.com.bigdata.btree=WARN log4j.logger.com.bigdata.counters.History=ERROR log4j.logger.com.bigdata.counters.XMLUtility$MyHandler=ERROR -log4j.logger.com.bigdata.counters.query.CounterSetQuery=INFO -log4j.logger.com.bigdata.journal.CompactTask=INFO +#log4j.logger.com.bigdata.counters.query.CounterSetQuery=INFO +#log4j.logger.com.bigdata.journal.CompactTask=INFO log4j.logger.com.bigdata.relation.accesspath.BlockingBuffer=ERROR log4j.logger.com.bigdata.rdf.load=INFO log4j.logger.com.bigdata.rdf.store.DataLoader=INFO log4j.logger.com.bigdata.resources.AsynchronousOverflowTask=INFO +#log4j.logger.com.bigdata.rdf.ServiceProviderHook=INFO + #log4j.logger.com.bigdata.rdf.sparql=ALL #log4j.logger.com.bigdata.rdf.sail.sparql.BigdataExprBuilder=INFO #log4j.logger.com.bigdata.rdf.sail.TestProvenanceQuery=ALL @@ -78,3 +80,36 @@ log4j.appender.queryRunStateLog.BufferedIO=false log4j.appender.queryRunStateLog.layout=org.apache.log4j.PatternLayout log4j.appender.queryRunStateLog.layout.ConversionPattern=%m + +## +# Solutions trace (tab delimited file). Uncomment the next line to enable. +#log4j.logger.com.bigdata.bop.engine.SolutionsLog=INFO,solutionsLog +log4j.additivity.com.bigdata.bop.engine.SolutionsLog=false +log4j.appender.solutionsLog=org.apache.log4j.ConsoleAppender +#log4j.appender.solutionsLog=org.apache.log4j.FileAppender +log4j.appender.solutionsLog.Threshold=ALL +#log4j.appender.solutionsLog.File=solutions.csv +#log4j.appender.solutionsLog.Append=true +# I find that it is nicer to have this unbuffered since you can see what +# is going on and to make sure that I have complete rule evaluation logs +# on shutdown. +#log4j.appender.solutionsLog.BufferedIO=false +log4j.appender.solutionsLog.layout=org.apache.log4j.PatternLayout +log4j.appender.solutionsLog.layout.ConversionPattern=SOLUTION:\t%m + +## +# SPARQL query trace (plain text file). Uncomment 2nd line to enable. +log4j.logger.com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper=WARN +#log4j.logger.com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper=INFO,sparqlLog +log4j.additivity.com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper=false +log4j.appender.sparqlLog=org.apache.log4j.ConsoleAppender +#log4j.appender.sparqlLog=org.apache.log4j.FileAppender +log4j.appender.sparqlLog.Threshold=ALL +#log4j.appender.sparqlLog.File=sparql.txt +#log4j.appender.sparqlLog.Append=true +# I find that it is nicer to have this unbuffered since you can see what +# is going on and to make sure that I have complete rule evaluation logs +# on shutdown.
    +#log4j.appender.sparqlLog.BufferedIO=false +log4j.appender.sparqlLog.layout=org.apache.log4j.PatternLayout +log4j.appender.sparqlLog.layout.ConversionPattern=#----------%d-----------tx=%X{tx}\n%m\n
    --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/ServiceProviderHook.java	2014-11-18 12:03:24 UTC (rev 8716)
    +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/ServiceProviderHook.java	2014-11-18 15:01:18 UTC (rev 8717)
    @@ -235,7 +235,7 @@ final TupleQueryResultWriterRegistry r = TupleQueryResultWriterRegistry.getInstance(); - // add our custom RDR-enabled JSON writer + // add our custom RDR-enabled JSON writer for SPARQL result sets. r.add(new BigdataSPARQLResultsJSONWriterFactory()); } @@ -244,7 +244,7 @@ final TupleQueryResultParserRegistry r = TupleQueryResultParserRegistry.getInstance(); - // add our custom RDR-enabled JSON parser + // add our custom RDR-enabled JSON parser for SPARQL result sets. r.add(new BigdataSPARQLResultsJSONParserFactory()); }
    --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-04.ttlx	2014-11-18 12:03:24 UTC (rev 8716)
    +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-04.ttlx	2014-11-18 15:01:18 UTC (rev 8717)
    @@ -1,5 +1,4 @@ @prefix : <http://example.com/> . -@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . :a1 :b :c . :a2 :b :c .
    --- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/WorkbenchServlet.java	2014-11-18 12:03:24 UTC (rev 8716)
    +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/WorkbenchServlet.java	2014-11-18 15:01:18 UTC (rev 8717)
    @@ -87,8 +87,9 @@ private void doConvert(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { - final String baseURI = req.getRequestURL().toString(); + final String baseURI = req.getRequestURL().toString(); + // The content type of the request. final String contentType = req.getContentType(); if (log.isInfoEnabled()) @@ -155,6 +156,10 @@ */ rdfParser.parse(req.getInputStream(), baseURI); + /* + * Send back the graph using CONNEG to decide the MIME Type of the + * response.
    + */ sendGraph(req, resp, g); } catch (Throwable t) {
    --- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/AbstractTestNanoSparqlClient.java	2014-11-18 12:03:24 UTC (rev 8716)
    +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/AbstractTestNanoSparqlClient.java	2014-11-18 15:01:18 UTC (rev 8717)
    @@ -26,7 +26,6 @@ import java.io.File; import java.io.FileReader; import java.io.IOException; -import java.io.InputStream; import java.io.InputStreamReader; import java.io.LineNumberReader; import java.io.Reader; @@ -392,32 +391,61 @@ } - protected String getStreamContents(final InputStream inputStream) - throws IOException { +// protected String getStreamContents(final InputStream inputStream) +// throws IOException { +// +// final Reader rdr = new InputStreamReader(inputStream); +// +// final StringBuffer sb = new StringBuffer(); +// +// final char[] buf = new char[512]; +// +// while (true) { +// +// final int rdlen = rdr.read(buf); +// +// if (rdlen == -1) +// break; +// +// sb.append(buf, 0, rdlen); +// +// } +// +// return sb.toString(); +// +// } - final Reader rdr = new InputStreamReader(inputStream); - - final StringBuffer sb = new StringBuffer(); - - final char[] buf = new char[512]; - - while (true) { - - final int rdlen = rdr.read(buf); - - if (rdlen == -1) - break; - - sb.append(buf, 0, rdlen); - - } - - return sb.toString(); + /** + * Counts the #of results in a SPARQL result set. + * + * @param result + * The connection from which to read the results. + * + * @return The #of results. + * + * @throws Exception + * If anything goes wrong. + */ + protected long countResults(final TupleQueryResult result) throws Exception { + long count = 0; + + while(result.hasNext()) { + + result.next(); + + count++; + + } + + result.close(); + + return count; + } /** - * Counts the #of results in a SPARQL result set. + * Counts the #of results in a GRAPH result set. * * @param result * The connection from which to read the results. @@ -427,7 +455,7 @@ * @throws Exception * If anything goes wrong. */ - protected long countResults(final TupleQueryResult result) throws Exception { + protected long countResults(final GraphQueryResult result) throws Exception { long count = 0; @@ -914,6 +942,16 @@ } + /** + * Read a graph from a file. + * + * @param file + * The file. + * @return The contents as a {@link Graph}. + * @throws RDFParseException + * @throws RDFHandlerException + * @throws IOException + */ protected static Graph readGraphFromFile(final File file) throws RDFParseException, RDFHandlerException, IOException { final RDFFormat format = RDFFormat.forFileName(file.getName());
    --- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServerWithProxyIndexManager.java	2014-11-18 12:03:24 UTC (rev 8716)
    +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServerWithProxyIndexManager.java	2014-11-18 15:01:18 UTC (rev 8717)
    @@ -270,6 +270,7 @@ break; case sids: // TODO SIDS mode UPDATE test suite.
    + suite.addTestSuite(TestRDROperations.class); break; case quads: // QUADS mode UPDATE test suite.
    Added: branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestRDROperations.java
    --- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestRDROperations.java	(rev 0)
    +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestRDROperations.java	2014-11-18 15:01:18 UTC (rev 8717)
    @@ -0,0 +1,219 @@ +/** +Copyright (C) SYSTAP, LLC 2006-2007. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +package com.bigdata.rdf.sail.webapp; + +import java.io.File; +import java.io.FileInputStream; +import java.io.InputStream; + +import junit.framework.Test; + +import com.bigdata.journal.IIndexManager; +import com.bigdata.rdf.ServiceProviderHook; +import com.bigdata.rdf.sail.webapp.client.IPreparedBooleanQuery; +import com.bigdata.rdf.sail.webapp.client.IPreparedGraphQuery; +import com.bigdata.rdf.sail.webapp.client.IPreparedTupleQuery; +import com.bigdata.rdf.sail.webapp.client.RemoteRepository.AddOp; + +/** + * Test of RDR specific data interchange and query. + * + * @author bryan + * + * @param <S> + */ +public class TestRDROperations<S extends IIndexManager> extends + AbstractTestNanoSparqlClient<S> { + + public TestRDROperations() { + + } + + public TestRDROperations(final String name) { + + super(name); + + } + + public static Test suite() { + + return ProxySuiteHelper.suiteWhenStandalone(TestRDROperations.class, + "test.*", TestMode.sids); + + } + + public void test_POST_INSERT_withBody_TURTLE_RDR() throws Exception { + + final long ntriples = 3L; + + InputStream is = null; + try { + is = new FileInputStream(new File(packagePath + "rdr_01.ttlx")); + final AddOp add = new AddOp(is, ServiceProviderHook.TURTLE_RDR); + assertEquals(ntriples, m_repo.add(add)); + } finally { + if (is != null) { + is.close(); + } + } + + /* + * Verify normal ground triple is present. + */ + { + + final String queryStr = "ASK {<x:a1> <x:b1> <x:c1>}"; + + final IPreparedBooleanQuery query = m_repo.prepareBooleanQuery(queryStr); + + assertTrue(query.evaluate()); + + } + + // false positive test (not found). + { + + final String queryStr = "ASK {<x:a1> <x:b1> <x:c2>}"; + + final IPreparedBooleanQuery query = m_repo.prepareBooleanQuery(queryStr); + + assertFalse(query.evaluate()); + + } + + /* + * Verify RDR ground triple is present. + */ + { + + final String queryStr = "ASK {<x:a> <x:b> <x:c>}"; + + final IPreparedBooleanQuery query = m_repo.prepareBooleanQuery(queryStr); + + assertTrue(query.evaluate()); + + } + + // RDR false positive test (not found).
    + { + + final String queryStr = "ASK {<x:a> <x:b> <x:c2>}"; + + final IPreparedBooleanQuery query = m_repo.prepareBooleanQuery(queryStr); + + assertFalse(query.evaluate()); + + } + + + /* + * Verify RDR triple is present. + */ + { + + final String queryStr = "ASK {<<<x:a> <x:b> <x:c>>> <x:d> <x:e>}"; + + final IPreparedBooleanQuery query = m_repo.prepareBooleanQuery(queryStr); + + assertTrue(query.evaluate()); + + } + + // false positive test for RDR triple NOT present. + { + + final String queryStr = "ASK {<<<x:a> <x:b> <x:c>>> <x:d> <x:e2>}"; + + final IPreparedBooleanQuery query = m_repo.prepareBooleanQuery(queryStr); + + assertFalse(query.evaluate()); + + } + + /* + * Verify the expected #of statements in the store using a SPARQL result + * set. + */ + { + + final String queryStr = "SELECT * where {?s ?p ?o}"; + + final IPreparedTupleQuery query = m_repo.prepareTupleQuery(queryStr); + + assertEquals(ntriples, countResults(query.evaluate())); + + } + + /* + * Verify the RDR data can be recovered using a CONSTRUCT query. + */ + { + + final String queryStr = "CONSTRUCT where {?s ?p ?o}"; + + final IPreparedGraphQuery query = m_repo.prepareGraphQuery(queryStr); + + assertEquals(ntriples, countResults(query.evaluate())); + + } + + /* + * Verify the RDR data can be recovered using a DESCRIBE query. + */ + { + + final String queryStr = "DESCRIBE * {?s ?p ?o}"; + + final IPreparedGraphQuery query = m_repo.prepareGraphQuery(queryStr); + + assertEquals(ntriples, countResults(query.evaluate())); + + } + + } + + /** + * FIXME We need to verify export for this case. It relies on access to a + * Bigdata specific ValueFactoryImpl to handle the RDR mode statements. + */ + public void test_EXPORT_TURTLE_RDR() throws Exception { + + + final long ntriples = 3L; + + InputStream is = null; + try { + is = new FileInputStream(new File(packagePath + "rdr_01.ttlx")); + final AddOp add = new AddOp(is, ServiceProviderHook.TURTLE_RDR); + assertEquals(ntriples, m_repo.add(add)); + } finally { + if (is != null) { + is.close(); + } + } + + fail("write export test for TURTLE-RDR"); + + } + +} \ No newline at end of file
    Added: branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/rdr_01.ttlx
    --- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/rdr_01.ttlx	(rev 0)
    +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/rdr_01.ttlx	2014-11-18 15:01:18 UTC (rev 8717)
    @@ -0,0 +1,3 @@ +<x:a1> <x:b1> <x:c1> . + +<<<x:a> <x:b> <x:c>>> <x:d> <x:e> .
From: <tho...@us...> - 2014-11-18 12:03:34

Revision: 8716
http://sourceforge.net/p/bigdata/code/8716
Author: thompsonbry
Date: 2014-11-18 12:03:24 +0000 (Tue, 18 Nov 2014)

Log Message: Bug fixes for test suites (mainly RDR related) in preparation for 1.4.0 release.

Modified Paths:
- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/TestReificationDoneRightEval.java
- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-02.ttl
- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestProvenanceQuery.java
- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestSids.java
- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataConnectionTest.java
- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/repository/RepositoryConnectionTest.java

Added Paths:
- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-01.ttlx
- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-02.ttlx
- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-02a.ttlx
- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-04.ttlx
- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/provenance01.ttlx

Removed Paths:
- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-04.ttl
- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/provenance01.ttl

    --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/TestReificationDoneRightEval.java	2014-11-17 22:11:08 UTC (rev 8715)
    +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/TestReificationDoneRightEval.java	2014-11-18 12:03:24 UTC (rev 8716)
    @@ -82,7 +82,6 @@ * Reification Done Right</a> * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id: TestTCK.java 6261 2012-04-09 10:28:48Z thompsonbry $ */ public class TestReificationDoneRightEval extends AbstractDataDrivenSPARQLTestCase { @@ -165,6 +164,20 @@ h.runTest(); } + + /** + * Version of the above where the data are read from a file rather than being + * built up by hand. + */ + public void test_reificationDoneRight_00_loadDataFromFile() throws Exception { + + new TestHelper("reif/rdr-00-loadFromFile", // testURI, + "reif/rdr-02.rq",// queryFileURL + "reif/rdr-02.ttlx",// dataFileURL + "reif/rdr-02.srx"// resultFileURL + ).runTest(); + + } /** * Bootstrap test. The data are explicitly entered into the KB by hand. This @@ -221,18 +234,32 @@ * matching lexical items.) */ - final TestHelper h = new TestHelper("reif/rdr-00a", // testURI, + new TestHelper("reif/rdr-00a", // testURI, "reif/rdr-02a.rq",// queryFileURL "reif/empty.ttl",// dataFileURL "reif/rdr-02a.srx"// resultFileURL - ); + ).runTest(); - h.runTest(); + } + + /** + * Version of the above where the data are read from a file rather than being + * built up by hand.
    + */ + public void test_reificationDoneRight_00a_loadFromFile() throws Exception { + new TestHelper("reif/rdr-00a-loadFromFile", // testURI, + "reif/rdr-02a.rq",// queryFileURL + "reif/rdr-02.ttlx",// dataFileURL + "reif/rdr-02a.srx"// resultFileURL + ).runTest(); + } /** - * Simple query involving alice, bob, and an information extractor. + * Simple query involving alice, bob, and an information extractor. For this + * version of the test the data are modeled in the source file using RDF + * reification. * * <pre> * select ?src where { @@ -252,6 +279,29 @@ } + /** + * Simple query involving alice, bob, and an information extractor. For this + * version of the test the data are modeled in the source file using the RDR + * syntax. + * + * <pre> + * select ?src where { + * ?x foaf:name "Alice" . + * ?y foaf:name "Bob" . + * <<?x foaf:knows ?y>> dc:source ?src . + * } + * </pre> + */ + public void test_reificationDoneRight_01_usingRDRData() throws Exception { + + new TestHelper("reif/rdr-01-usingRDRData", // testURI, + "reif/rdr-01.rq",// queryFileURL + "reif/rdr-01.ttlx",// dataFileURL + "reif/rdr-01.srx"// resultFileURL + ).runTest(); + + } + /** * Same data, but the query uses the BIND() syntax and pulls out some more * information. @@ -277,6 +327,30 @@ } /** + * Same data, but the query uses the BIND() syntax and pulls out some more + * information and RDR syntax for the data. + * + * <pre> + * select ?who ?src ?conf where { + * ?x foaf:name "Alice" . + * ?y foaf:name ?who . + * BIND( <<?x foaf:knows ?y>> as ?sid ) . + * ?sid dc:source ?src . + * ?sid rv:confidence ?src . + * } + * </pre> + */ + public void test_reificationDoneRight_01a_usingRDRData() throws Exception { + + new TestHelper("reif/rdr-01a-usingRDRData", // testURI, + "reif/rdr-01a.rq",// queryFileURL + "reif/rdr-01.ttlx",// dataFileURL + "reif/rdr-01a.srx"// resultFileURL + ).runTest(); + + } + + /** + * Simple query ("who bought sybase"). * * <pre> @@ -296,6 +370,25 @@ } /** + * Simple query ("who bought sybase") using RDR syntax for the data. + * + * <pre> + * SELECT ?src ?who { + * <<?who :bought :sybase>> dc:source ?src + * } + * </pre> + */ + public void test_reificationDoneRight_02_usingRDRData() throws Exception { + + new TestHelper("reif/rdr-02", // testURI, + "reif/rdr-02.rq",// queryFileURL + "reif/rdr-02.ttlx",// dataFileURL + "reif/rdr-02.srx"// resultFileURL + ).runTest(); + + } + + /** + * Same data, but the query uses the BIND() syntax and pulls out some more + * information. * @@ -318,7 +411,29 @@ } /** + * Same data, but the query uses the BIND() syntax and pulls out some more + * information and RDR syntax for the data. + * + * <pre>
    + * SELECT ?src ?who ?created { + * BIND( <<?who :bought :sybase>> as ?sid ) . + * ?sid dc:source ?src .
    + * OPTIONAL {?sid dc:created ?created} + * } + * </pre> + */ + public void test_reificationDoneRight_02a_usingRDRData() throws Exception { + + new TestHelper("reif/rdr-02a", // testURI, + "reif/rdr-02a.rq",// queryFileURL + "reif/rdr-02a.ttlx",// dataFileURL + "reif/rdr-02a.srx"// resultFileURL + ).runTest(); + + } + + /** + * <pre> * prefix : <http://example.com/> * SELECT ?a { * BIND( <<?a :b :c>> AS ?ignored ) @@ -397,7 +512,7 @@ new TestHelper("reif/rdr-04", // testURI, "reif/rdr-04.rq",// queryFileURL - "reif/rdr-04.ttl",// dataFileURL + "reif/rdr-04.ttlx",// dataFileURL "reif/rdr-04.srx"// resultFileURL ).runTest();
    Added: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-01.ttlx
    --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-01.ttlx	(rev 0)
    +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-01.ttlx	2014-11-18 12:03:24 UTC (rev 8716)
    @@ -0,0 +1,77 @@ +@prefix foaf: <http://xmlns.com/foaf/0.1/> . +@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . +@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . +@prefix dc: <http://purl.org/dc/terms/> . +@prefix re: <http://reasoner.example.com/engines#> . +@prefix rr: <http://reasoner.example.com/rules#> . +@prefix rv: <http://reasoner.example.com/vocabulary#> . +@prefix xsd: <http://www.w3.org/2001/XMLSchema#> . +@prefix bd: <http://bigdata.com/RDF#> . + +bd:alice + rdf:type foaf:Person ; + foaf:name "Alice" ; + foaf:mbox <mailto:alice@work> ; + foaf:knows bd:bob. + +# The terse syntax: +<<bd:alice foaf:mbox <mailto:alice@work>>> + dc:source <http://hr.example.com/employees#bob> ; + dc:created "2012-02-05T12:34:00Z"^^xsd:dateTime . +# The expanded syntax. +#_:s1 rdf:subject bd:alice . +#_:s1 rdf:predicate foaf:mbox . +#_:s1 rdf:object <mailto:alice@work> . +#_:s1 rdf:type rdf:Statement . +#_:s1 dc:source <http://hr.example.com/employees#bob> ; +# dc:created "2012-02-05T12:34:00Z"^^xsd:dateTime . + +# Terse +<<bd:alice foaf:knows bd:bob>> + dc:source re:engine_1; + rv:rule rr:rule524 ; + rv:confidence 0.9835 . +# Expanded +#_:s2 rdf:subject bd:alice . +#_:s2 rdf:predicate foaf:knows . +#_:s2 rdf:object bd:bob . +#_:s2 rdf:type rdf:Statement . +#_:s2 +# dc:source re:engine_1; +# rv:rule rr:rule524 ; +# rv:confidence 0.9835 . + +bd:bob + rdf:type foaf:Person ; + foaf:name "Bob" ; + foaf:knows bd:alice ; + foaf:mbox <mailto:bob@work> ; + foaf:mbox <mailto:bob@home> . + +# Terse +<<bd:bob foaf:mbox <mailto:bob@home>>> + dc:creator <http://hr.example.com/infra/crawlers#we1> ; + dc:created "2012-02-05T12:34:00Z"^^xsd:dateTime ; + dc:source <http://whatever.nu/profile/bob1975> . +# Expanded +#_:s3 rdf:subject bd:bob . +#_:s3 rdf:predicate foaf:mbox . +#_:s3 rdf:object <mailto:bob@home> . +#_:s3 rdf:type rdf:Statement . +#_:s3 +# dc:creator <http://hr.example.com/infra/crawlers#we1> ; +# dc:created "2011-04-05T12:00:00Z"^^xsd:dateTime ; +# dc:source <http://whatever.nu/profile/bob1975> . + +# Terse +<<bd:bob foaf:mbox <mailto:bob@home>>> + dc:source <http://hr.example.com/employees/bob> ; + dc:created "2012-02-05T12:34:00Z"^^xsd:dateTime . +# Expanded +#_:s4 rdf:subject bd:bob . +#_:s4 rdf:predicate foaf:mbox . +#_:s4 rdf:object <mailto:bob@home> . +#_:s4 rdf:type rdf:Statement . +#_:s4 +# dc:source <http://hr.example.com/employees/bob> ; +# dc:created "2012-02-05T12:34:00Z"^^xsd:dateTime .
    --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-02.ttl	2014-11-17 22:11:08 UTC (rev 8715)
    +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-02.ttl	2014-11-18 12:03:24 UTC (rev 8716)
    @@ -1,6 +1,3 @@ -# <<:SAP :bought :sybase>> dc:source news:us-sybase ; -# dc:created "2011-04-05T12:00:00Z"^^xsd:dateTime . - @prefix : <http://example.com/> . @prefix news: <http://example.com/news/> . @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . @@ -8,6 +5,9 @@ @prefix dc: <http://purl.org/dc/terms/> . @prefix xsd: <http://www.w3.org/2001/XMLSchema#> . +# <<:SAP :bought :sybase>> dc:source news:us-sybase ; +# dc:created "2011-04-05T12:00:00Z"^^xsd:dateTime . + :SAP :bought :sybase . _:s1 rdf:subject :SAP . _:s1 rdf:predicate :bought . _:s1 rdf:object :sybase . _:s1 rdf:type rdf:Statement . _:s1 dc:source news:us-sybase . _:s1 dc:created "2011-04-05T12:00:00Z"^^xsd:dateTime .
    Added: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-02.ttlx
    --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-02.ttlx	(rev 0)
    +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-02.ttlx	2014-11-18 12:03:24 UTC (rev 8716)
    @@ -0,0 +1,17 @@ +@prefix : <http://example.com/> . +@prefix news: <http://example.com/news/> . +@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . +@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . +@prefix dc: <http://purl.org/dc/terms/> . +@prefix xsd: <http://www.w3.org/2001/XMLSchema#> . + +<<:SAP :bought :sybase>> dc:source news:us-sybase ; + dc:created "2011-04-05T12:00:00Z"^^xsd:dateTime . + +#:SAP :bought :sybase . +#_:s1 rdf:subject :SAP . +#_:s1 rdf:predicate :bought . +#_:s1 rdf:object :sybase . +#_:s1 rdf:type rdf:Statement . +#_:s1 dc:source news:us-sybase . +#_:s1 dc:created "2011-04-05T12:00:00Z"^^xsd:dateTime .
    Added: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-02a.ttlx
    --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-02a.ttlx	(rev 0)
    +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-02a.ttlx	2014-11-18 12:03:24 UTC (rev 8716)
    @@ -0,0 +1,17 @@ +@prefix : <http://example.com/> . +@prefix news: <http://example.com/news/> . +@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . +@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . +@prefix dc: <http://purl.org/dc/terms/> . +@prefix xsd: <http://www.w3.org/2001/XMLSchema#> . + +<<:SAP :bought :sybase>> dc:source news:us-sybase ; + dc:created "2011-04-05T12:00:00Z"^^xsd:dateTime . + +#:SAP :bought :sybase . +#_:s1 rdf:subject :SAP . +#_:s1 rdf:predicate :bought . +#_:s1 rdf:object :sybase . +#_:s1 rdf:type rdf:Statement . +#_:s1 dc:source news:us-sybase . +#_:s1 dc:created "2011-04-05T12:00:00Z"^^xsd:dateTime .
Deleted: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-04.ttl =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-04.ttl 2014-11-17 22:11:08 UTC (rev 8715) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-04.ttl 2014-11-18 12:03:24 UTC (rev 8716) @@ -1,7 +0,0 @@ -@prefix : <http://example.com/> . -@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . - -:a1 :b :c . -:a2 :b :c . -<<:a1 :b :c>> :d :e1 . -<<:a2 :b :c>> :d :e2 . Added: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-04.ttlx =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-04.ttlx (rev 0) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/reif/rdr-04.ttlx 2014-11-18 12:03:24 UTC (rev 8716) @@ -0,0 +1,7 @@ +@prefix : <http://example.com/> . +@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . + +:a1 :b :c . +:a2 :b :c . +<<:a1 :b :c>> :d :e1 . +<<:a2 :b :c>> :d :e2 . Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestProvenanceQuery.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestProvenanceQuery.java 2014-11-17 22:11:08 UTC (rev 8715) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestProvenanceQuery.java 2014-11-18 12:03:24 UTC (rev 8716) @@ -42,14 +42,13 @@ import org.openrdf.query.QueryLanguage; import org.openrdf.query.TupleQuery; import org.openrdf.query.TupleQueryResult; -import org.openrdf.query.algebra.evaluation.QueryBindingSet; -import org.openrdf.query.impl.DatasetImpl; import org.openrdf.rio.RDFFormat; import org.openrdf.rio.RDFWriter; import org.openrdf.rio.RDFWriterFactory; import org.openrdf.rio.RDFWriterRegistry; import org.openrdf.sail.SailConnection; +import com.bigdata.rdf.ServiceProviderHook; import com.bigdata.rdf.model.BigdataStatementImpl; import com.bigdata.rdf.store.BigdataStatementIterator; import com.bigdata.rdf.store.DataLoader; @@ -99,18 +98,17 @@ final DataLoader dataLoader = sail.database.getDataLoader(); dataLoader.loadData( - "bigdata-sails/src/test/com/bigdata/rdf/sail/provenance01.ttl", - ""/*baseURL*/, RDFFormat.TURTLE); + "bigdata-sails/src/test/com/bigdata/rdf/sail/provenance01.ttlx", + ""/*baseURL*/, ServiceProviderHook.TURTLE_RDR); } /* - * Serialize as RDF/XML using a vendor specific extension to represent - * the statement identifiers and statements about statements. + * Serialize as RDF/XML. * * Note: This is just for debugging. */ - { + if (log.isInfoEnabled()) { final BigdataStatementIterator itr = sail.database.getStatements(null, null, null); final String rdfXml; @@ -152,8 +150,7 @@ } // write the rdf/xml - if (log.isInfoEnabled()) - log.info(rdfXml); + log.info(rdfXml); } @@ -203,10 +200,10 @@ * Note: a [null] DataSet will cause context to be ignored when the * query is processed. */ - final DatasetImpl dataSet = null; //new DatasetImpl(); - - final BindingSet bindingSet = new QueryBindingSet(); - +// final DatasetImpl dataSet = null; //new DatasetImpl(); +// +// final BindingSet bindingSet = new QueryBindingSet(); +// // final CloseableIteration<? 
extends BindingSet, QueryEvaluationException> itr = conn // .evaluate(tupleExpr, dataSet, bindingSet, true/* includeInferred */); Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestSids.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestSids.java 2014-11-17 22:11:08 UTC (rev 8715) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestSids.java 2014-11-18 12:03:24 UTC (rev 8716) @@ -37,8 +37,8 @@ import org.openrdf.query.TupleQuery; import org.openrdf.query.TupleQueryResult; import org.openrdf.query.impl.BindingImpl; -import org.openrdf.rio.RDFFormat; +import com.bigdata.rdf.ServiceProviderHook; import com.bigdata.rdf.axioms.NoAxioms; import com.bigdata.rdf.model.BigdataBNode; import com.bigdata.rdf.model.BigdataStatement; @@ -98,7 +98,7 @@ cxn.setAutoCommit(false); - cxn.add(getClass().getResourceAsStream("sids.ttl"), "", RDFFormat.TURTLE); + cxn.add(getClass().getResourceAsStream("sids.ttl"), "", ServiceProviderHook.TURTLE_RDR); /* * Note: The either flush() or commit() is required to flush the @@ -118,7 +118,7 @@ // final String s = "http://localhost/host1"; // final String s = "http://localhost/switch1"; - String query = + final String query = "PREFIX myns: <http://mynamespace.com#> " + "SELECT distinct ?s ?p ?o " + " { " + @@ -138,7 +138,7 @@ log.info("no bindings"); } else { while (result.hasNext()) { - BindingSet bs = result.next(); + final BindingSet bs = result.next(); // log.info(bs.getBinding("s").getValue() + " " + bs.getBinding("p").getValue() + " " + bs.getBinding("o").getValue() + " ."); log.info((s == null ? bs.getBinding("s").getValue() : s) + " " + bs.getBinding("p").getValue() + " " + bs.getBinding("o").getValue() + " ."); } @@ -149,7 +149,7 @@ final TupleQueryResult result = tupleQuery.evaluate(); - Collection<BindingSet> solution = new LinkedList<BindingSet>(); + final Collection<BindingSet> solution = new LinkedList<BindingSet>(); solution.add(createBindingSet(new Binding[] { new BindingImpl("s", new URIImpl("http://localhost/host1")), new BindingImpl("p", new URIImpl("http://mynamespace.com#connectedTo")), Deleted: branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/provenance01.ttl =================================================================== (Binary files differ) Added: branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/provenance01.ttlx =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/provenance01.ttlx (rev 0) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/provenance01.ttlx 2014-11-18 12:03:24 UTC (rev 8716) @@ -0,0 +1,14 @@ +@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . +@prefix bd: <http://bigdata.com/RDF#> . +@prefix dc: <http://purl.org/dc/terms/> . +@prefix foo: <http://www.foo.org/> . + +foo:x rdf:type foo:A . +<<foo:x rdf:type foo:A>> dc:creator "bryan" . + +foo:y rdf:type foo:B . +<<foo:y rdf:type foo:B>> dc:creator "bryan" ; + dc:creator "mike" . + +foo:z rdf:type foo:C . +<<foo:z rdf:type foo:C>> dc:creator "mike" . 
\ No newline at end of file Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataConnectionTest.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataConnectionTest.java 2014-11-17 22:11:08 UTC (rev 8715) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataConnectionTest.java 2014-11-18 12:03:24 UTC (rev 8716) @@ -33,33 +33,11 @@ */ package com.bigdata.rdf.sail.tck; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertNull; -import static org.junit.Assert.assertThat; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; - import java.io.File; import java.io.IOException; -import java.lang.management.GarbageCollectorMXBean; -import java.lang.management.ManagementFactory; -import java.lang.reflect.Method; -import java.util.List; import java.util.Properties; import org.apache.log4j.Logger; -import org.openrdf.model.Statement; -import org.openrdf.model.Value; -import org.openrdf.model.vocabulary.RDFS; -import org.openrdf.query.BindingSet; -import org.openrdf.query.GraphQuery; -import org.openrdf.query.GraphQueryResult; -import org.openrdf.query.QueryInterruptedException; -import org.openrdf.query.QueryLanguage; -import org.openrdf.query.TupleQuery; -import org.openrdf.query.TupleQueryResult; import org.openrdf.repository.Repository; import org.openrdf.repository.RepositoryConnectionTest; @@ -72,7 +50,6 @@ import com.bigdata.rdf.sail.BigdataSail; import com.bigdata.rdf.sail.BigdataSail.Options; import com.bigdata.rdf.sail.BigdataSailRepository; -import com.bigdata.rdf.store.LocalTripleStore; /** * Bigdata uses snapshot isolation for transactions while openrdf assumes that Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/repository/RepositoryConnectionTest.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/repository/RepositoryConnectionTest.java 2014-11-17 22:11:08 UTC (rev 8715) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/repository/RepositoryConnectionTest.java 2014-11-18 12:03:24 UTC (rev 8716) @@ -1719,19 +1719,19 @@ } testCon.commit(); - TupleQuery query = testCon.prepareTupleQuery(QueryLanguage.SPARQL, + final TupleQuery query = testCon.prepareTupleQuery(QueryLanguage.SPARQL, "SELECT * WHERE { ?s ?p ?o . ?s1 ?p1 ?o1 . ?s2 ?p2 ?o2 . ?s3 ?p3 ?o3 } ORDER BY ?s1 ?p1 ?o1 LIMIT 1000"); query.setMaxQueryTime(2); - TupleQueryResult result = query.evaluate(); - long startTime = System.currentTimeMillis(); + final TupleQueryResult result = query.evaluate(); + final long startTime = System.currentTimeMillis(); try { result.hasNext(); fail("Query should have been interrupted"); } catch (QueryInterruptedException e) { // Expected - long duration = System.currentTimeMillis() - startTime; + final long duration = System.currentTimeMillis() - startTime; assertTrue("Query not interrupted quickly enough, should have been ~2s, but was " + (duration / 1000) + "s", duration < 5000); This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
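Note on r8716: the pattern to take away is that RDR test data now lives in *.ttlx files and is loaded with the RDR-aware format constant (ServiceProviderHook.TURTLE_RDR) rather than plain RDFFormat.TURTLE. The same pattern works outside the test suite. The Java fragment below is a minimal sketch only, not code from the commit: the journal file name ("rdr.jnl"), the data file name ("data.ttlx"), and the literal property strings are illustrative placeholders; the add(...) call mirrors the loadData()/add() calls in TestProvenanceQuery and TestSids above.

import java.io.File;
import java.util.Properties;

import org.openrdf.repository.RepositoryConnection;

import com.bigdata.rdf.ServiceProviderHook;
import com.bigdata.rdf.sail.BigdataSail;
import com.bigdata.rdf.sail.BigdataSailRepository;

public class LoadRdrExample {

    public static void main(final String[] args) throws Exception {

        final Properties props = new Properties();
        // Illustrative settings: a journal file plus SIDs (RDR) mode enabled.
        props.setProperty("com.bigdata.journal.AbstractJournal.file", "rdr.jnl");
        props.setProperty(
                "com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers",
                "true");

        final BigdataSail sail = new BigdataSail(props);
        final BigdataSailRepository repo = new BigdataSailRepository(sail);
        repo.initialize();
        try {
            final RepositoryConnection cxn = repo.getConnection();
            try {
                cxn.begin();
                // TURTLE_RDR is the RDR-aware format registered for *.ttlx;
                // it accepts the <<s p o>> statements-about-statements syntax.
                cxn.add(new File("data.ttlx"), ""/* baseURI */,
                        ServiceProviderHook.TURTLE_RDR);
                cxn.commit();
            } finally {
                cxn.close();
            }
        } finally {
            repo.shutDown();
        }
    }
}

Loading the same file as RDFFormat.TURTLE would be rejected by the plain Turtle parser at the first <<...>> token, which is exactly why these tests were switched over.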
From: <tho...@us...> - 2014-11-17 22:11:22
Revision: 8715 http://sourceforge.net/p/bigdata/code/8715 Author: thompsonbry Date: 2014-11-17 22:11:08 +0000 (Mon, 17 Nov 2014) Log Message: ----------- Test suite and other bug fixes related to RDR support for 1.4.0 release. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/impl/sail/AbstractSailGraphTestCase.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/ServiceProviderHook.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParser.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParserFactory.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParserForConstruct.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParserForConstructFactory.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriter.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriterFactory.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriterForConstruct.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriterForConstructFactory.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/SPARQLJSONParserBase.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/SPARQLJSONWriterBase.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/ntriples/BigdataNTriplesParserFactory.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleParser.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleParserFactory.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleWriter.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleWriterFactory.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/graph/impl/bd/AbstractBigdataGraphTestCase.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestNTriplesWithSids.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/data/smallWeightedGraph.ttlx branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/data/ssspGraph.ttlx Removed Paths: ------------- branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/data/smallWeightedGraph.ttl branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/data/ssspGraph.ttl Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt 2014-11-17 22:11:08 UTC (rev 8715) @@ -51,16 +51,17 @@ 1.4.0: -- http://trac.bigdata.com/ticket/714 (Migrate to openrdf 2.7) -- http://trac.bigdata.com/ticket/745 (BackgroundTupleResult overrides final method close) -- http://trac.bigdata.com/ticket/813 (Documentation on BigData Reasoning) -- 
http://trac.bigdata.com/ticket/911 (workbench does not display errors well) -- http://trac.bigdata.com/ticket/1035 (DISTINCT PREDICATEs query is slow) -- http://trac.bigdata.com/ticket/1037 (SELECT COUNT(...) (DISTINCT|REDUCED) {single-triple-pattern} is slow) -- http://trac.bigdata.com/ticket/1044 (ORDER_BY ordering not preserved by projection operator) -- http://trac.bigdata.com/ticket/1047 (NQuadsParser hangs when loading latest dbpedia dump.) -- http://trac.bigdata.com/ticket/1052 (ASTComplexOptionalOptimizer did not account for Values clauses) -- http://trac.bigdata.com/ticket/1054 (BigdataGraphFactory create method cannot be invoked from the gremlin command line due to a Boolean vs boolean type mismatch.) +- http://trac.bigdata.com/ticket/714 (Migrate to openrdf 2.7) +- http://trac.bigdata.com/ticket/745 (BackgroundTupleResult overrides final method close) +- http://trac.bigdata.com/ticket/813 (Documentation on BigData Reasoning) +- http://trac.bigdata.com/ticket/911 (workbench does not display errors well) +- http://trac.bigdata.com/ticket/1035 (DISTINCT PREDICATEs query is slow) +- http://trac.bigdata.com/ticket/1037 (SELECT COUNT(...) (DISTINCT|REDUCED) {single-triple-pattern} is slow) +- http://trac.bigdata.com/ticket/1044 (ORDER_BY ordering not preserved by projection operator) +- http://trac.bigdata.com/ticket/1047 (NQuadsParser hangs when loading latest dbpedia dump.) +- http://trac.bigdata.com/ticket/1052 (ASTComplexOptionalOptimizer did not account for Values clauses) +- http://trac.bigdata.com/ticket/1054 (BigdataGraphFactory create method cannot be invoked from the gremlin command line due to a Boolean vs boolean type mismatch.) +- http://trac.bigdata.com/ticket/1061 (Server does not generate RDR aware JSON for RDF/SPARQL RESULTS) 1.3.4: Deleted: branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/data/smallWeightedGraph.ttl =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/data/smallWeightedGraph.ttl 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/data/smallWeightedGraph.ttl 2014-11-17 22:11:08 UTC (rev 8715) @@ -1,41 +0,0 @@ -# A graph using the RDR syntax to express link weights. -# -@prefix bd: <http://www.bigdata.com/> . -@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . -@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . -@prefix foaf: <http://xmlns.com/foaf/0.1/> . - - bd:1 foaf:knows bd:2 . -<<bd:1 foaf:knows bd:2 >> bd:weight "100"^^xsd:int . - -# Note: This uses a different link type. if the link type constraint is -# not respected, then the hops and weighted distance between (1) and -# (5) will be wrong (the hops will become 1 instead of 2, the weighted -# distance will be 100-23=77 less than expected.) - bd:1 foaf:knows2 bd:5 . -<<bd:1 foaf:knows2 bd:5 >> bd:weight "13"^^xsd:int . - -# This vertex property should be ignored by traversal if the test sets up -# a constraint that only "links" are visited by the GAS traversal. - bd:1 rdf:label "blue" . - -# Note: This uses a different link attribute type for the weight. if the -# link attribute type is used and not restricted to bd:weight2, then this -# link will be "visible" and the weighted distance between (1) and (2) will -# change. -#<<bd:1 foaf:knows bd:2 >> bd:weight2 "7"^^xsd:int . - - bd:1 foaf:knows bd:3 . -<<bd:1 foaf:knows bd:3 >> bd:weight "100"^^xsd:int . - - bd:2 foaf:knows bd:4 . 
-<<bd:2 foaf:knows bd:4 >> bd:weight "50"^^xsd:int . - - bd:3 foaf:knows bd:4 . -<<bd:3 foaf:knows bd:4 >> bd:weight "100"^^xsd:int . - - bd:3 foaf:knows bd:5 . -<<bd:3 foaf:knows bd:5 >> bd:weight "100"^^xsd:int . - - bd:4 foaf:knows bd:5 . -<<bd:4 foaf:knows bd:5 >> bd:weight "25"^^xsd:int . Added: branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/data/smallWeightedGraph.ttlx =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/data/smallWeightedGraph.ttlx (rev 0) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/data/smallWeightedGraph.ttlx 2014-11-17 22:11:08 UTC (rev 8715) @@ -0,0 +1,41 @@ +# A graph using the RDR syntax to express link weights. +# +@prefix bd: <http://www.bigdata.com/> . +@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . +@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . +@prefix foaf: <http://xmlns.com/foaf/0.1/> . + + bd:1 foaf:knows bd:2 . +<<bd:1 foaf:knows bd:2 >> bd:weight "100"^^xsd:int . + +# Note: This uses a different link type. if the link type constraint is +# not respected, then the hops and weighted distance between (1) and +# (5) will be wrong (the hops will become 1 instead of 2, the weighted +# distance will be 100-23=77 less than expected.) + bd:1 foaf:knows2 bd:5 . +<<bd:1 foaf:knows2 bd:5 >> bd:weight "13"^^xsd:int . + +# This vertex property should be ignored by traversal if the test sets up +# a constraint that only "links" are visited by the GAS traversal. + bd:1 rdf:label "blue" . + +# Note: This uses a different link attribute type for the weight. if the +# link attribute type is used and not restricted to bd:weight2, then this +# link will be "visible" and the weighted distance between (1) and (2) will +# change. +#<<bd:1 foaf:knows bd:2 >> bd:weight2 "7"^^xsd:int . + + bd:1 foaf:knows bd:3 . +<<bd:1 foaf:knows bd:3 >> bd:weight "100"^^xsd:int . + + bd:2 foaf:knows bd:4 . +<<bd:2 foaf:knows bd:4 >> bd:weight "50"^^xsd:int . + + bd:3 foaf:knows bd:4 . +<<bd:3 foaf:knows bd:4 >> bd:weight "100"^^xsd:int . + + bd:3 foaf:knows bd:5 . +<<bd:3 foaf:knows bd:5 >> bd:weight "100"^^xsd:int . + + bd:4 foaf:knows bd:5 . +<<bd:4 foaf:knows bd:5 >> bd:weight "25"^^xsd:int . Deleted: branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/data/ssspGraph.ttl =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/data/ssspGraph.ttl 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/data/ssspGraph.ttl 2014-11-17 22:11:08 UTC (rev 8715) @@ -1,20 +0,0 @@ -@prefix : <http://www.bigdata.com/ssspGraph/> . - -# Source, Target, Weight -# -------------------------------- -# 1 2 1.00 -# 1 3 1.00 -# 2 4 0.50 -# 3 4 1.00 -# 3 5 1.00 -# 4 5 0.25 - -# FIXME This does not have the link weights! Express as reified statements -# or using RDR. Or allow loading of trivial sparse matrix formats for the -# tests and use an assumed link type for all links. - -:1 :link :2 . -:1 :link :3 . -:2 :link :4 . -:3 :link :4 . -:4 :link :5 . 
Added: branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/data/ssspGraph.ttlx =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/data/ssspGraph.ttlx (rev 0) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/data/ssspGraph.ttlx 2014-11-17 22:11:08 UTC (rev 8715) @@ -0,0 +1,20 @@ +@prefix : <http://www.bigdata.com/ssspGraph/> . + +# Source, Target, Weight +# -------------------------------- +# 1 2 1.00 +# 1 3 1.00 +# 2 4 0.50 +# 3 4 1.00 +# 3 5 1.00 +# 4 5 0.25 + +# FIXME This does not have the link weights! Express as reified statements +# or using RDR. Or allow loading of trivial sparse matrix formats for the +# tests and use an assumed link type for all links. + +:1 :link :2 . +:1 :link :3 . +:2 :link :4 . +:3 :link :4 . +:4 :link :5 . Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/impl/sail/AbstractSailGraphTestCase.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/impl/sail/AbstractSailGraphTestCase.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/test/com/bigdata/rdf/graph/impl/sail/AbstractSailGraphTestCase.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -202,8 +202,8 @@ /** * The data file. */ - static private final String ssspGraph1 = "bigdata-gas/src/test/com/bigdata/rdf/graph/data/ssspGraph.ttl"; - static private final String ssspGraph2 = "src/test/com/bigdata/rdf/graph/data/ssspGraph.ttl"; + static private final String ssspGraph1 = "bigdata-gas/src/test/com/bigdata/rdf/graph/data/ssspGraph.ttlx"; + static private final String ssspGraph2 = "src/test/com/bigdata/rdf/graph/data/ssspGraph.ttlx"; public final URI link, v1, v2, v3, v4, v5; Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/ServiceProviderHook.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/ServiceProviderHook.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/ServiceProviderHook.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -29,13 +29,19 @@ import info.aduna.lang.service.ServiceRegistry; +import java.nio.charset.Charset; +import java.util.Arrays; import java.util.ServiceLoader; +import org.apache.log4j.Logger; import org.openrdf.query.QueryLanguage; import org.openrdf.query.resultio.TupleQueryResultParserRegistry; +import org.openrdf.query.resultio.TupleQueryResultWriterFactory; import org.openrdf.query.resultio.TupleQueryResultWriterRegistry; import org.openrdf.rio.RDFFormat; +import org.openrdf.rio.RDFParserFactory; import org.openrdf.rio.RDFParserRegistry; +import org.openrdf.rio.RDFWriterFactory; import org.openrdf.rio.RDFWriterRegistry; import com.bigdata.rdf.model.StatementEnum; @@ -80,28 +86,124 @@ * loader problems </a> * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ */ public class ServiceProviderHook { + private static final Logger log = Logger.getLogger(ServiceProviderHook.class); + static private boolean loaded = false; static { + + /* + * Note: These MUST be declared before the forceLoad() call or they will + * be NULL when that method runs. 
+ */ + + TURTLE_RDR = new RDFFormat("Turtle-RDR", + Arrays.asList("application/x-turtle-RDR"), + Charset.forName("UTF-8"), Arrays.asList("ttlx"), true, false); + + NTRIPLES_RDR = new RDFFormat("N-Triples-RDR", + "application/x-n-triples-RDR", Charset.forName("US-ASCII"), + "ntx", false, false); + + JSON_RDR = new RDFFormat("SPARQL/JSON", Arrays.asList( + "application/sparql-results+json", "application/json"), + Charset.forName("UTF-8"), Arrays.asList("srj", "json"), + RDFFormat.NO_NAMESPACES, RDFFormat.SUPPORTS_CONTEXTS); + forceLoad(); + } + + /** + * The extension MIME type for RDR data interchange using the RDR extension + * of TURTLE. + * + * @see <a href="http://trac.bigdata.com/ticket/1038" >RDR RDF parsers not + * always discovered </a> + * @see http://wiki.bigdata.com/wiki/index.php/Reification_Done_Right + */ + public static final RDFFormat TURTLE_RDR; + + /** + * The extension MIME type for RDR data interchange using the RDR extension + * of N-TRIPLES. + * + * @see <a href="http://trac.bigdata.com/ticket/1038" >RDR RDF parsers not + * always discovered </a> + * @see http://wiki.bigdata.com/wiki/index.php/Reification_Done_Right + */ + public static final RDFFormat NTRIPLES_RDR; + + /** + * The extension MIME type for RDR aware data interchange of RDF and SPARQL + * result stes using JSON. + */ + public static final RDFFormat JSON_RDR; /** - * This hook may be used to force the load of this class so it can ensure - * that the bigdata version of a service provider is used instead of the - * openrdf version. This is NOT optional. Without this hook, we do not have - * control over which version is resolved last in the processed - * <code>META-INF/services</code> files. - */ + * This hook may be used to force the load of this class so it can ensure + * that the bigdata version of a service provider is used instead of the + * openrdf version. This is NOT optional. Without this hook, we do not have + * control over which version is resolved last in the processed + * <code>META-INF/services</code> files. + * <p> + * Note: We need to use a synchronized pattern in order to ensure that any + * threads contending for this method awaits its completion. It would not + * be enough for a thread to know that the method was running. The thread + * needs to wait until the method is done. + */ synchronized static public void forceLoad() { if (loaded) return; - /* + log.warn("Running."); + + if (log.isInfoEnabled()) { + + for (RDFFormat f : RDFFormat.values()) { + log.info("RDFFormat: before: " + f); + } + for (RDFParserFactory f : RDFParserRegistry.getInstance().getAll()) { + log.info("RDFParserFactory: before: " + f); + } + for (RDFWriterFactory f : RDFWriterRegistry.getInstance().getAll()) { + log.info("RDFWriterFactory: before: " + f); + } + for (TupleQueryResultWriterFactory f : TupleQueryResultWriterRegistry + .getInstance().getAll()) { + log.info("TupleQueryResultWriterFactory: before: " + f); + } + + } +// /* +// * Force load of the openrdf service registry before we load our own +// * classes. +// */ +// { +// final String className = "info.aduna.lang.service.ServiceRegistry"; +// try { +// Class.forName(className); +// } catch (ClassNotFoundException ex) { +// log.error(ex); +// } +// } +// +// RDFFormat.register(NTRIPLES_RDR); +// RDFFormat.register(TURTLE_RDR); + + /* + * Register our RDFFormats. + * + * Note: They are NOT registered automatically by their constructors. 
+ */ + RDFFormat.register(TURTLE_RDR); + RDFFormat.register(NTRIPLES_RDR); + RDFFormat.register(JSON_RDR); + + /* * Force the class loader to resolve the register, which will cause it * to be populated with the service provides as declared in the various * META-INF/services/serviceIface files. @@ -112,16 +214,14 @@ { final RDFParserRegistry r = RDFParserRegistry.getInstance(); - -// r.add(new BigdataRDFXMLParserFactory()); - -// // Note: This ensures that the RDFFormat for NQuads is loaded. -// r.get(RDFFormat.NQUADS); - - r.add(new BigdataNTriplesParserFactory()); + + // RDR-enabled + r.add(new BigdataNTriplesParserFactory()); + assert r.has(new BigdataNTriplesParserFactory().getRDFFormat()); - // subclassed the turtle parser for RDR + // RDR-enabled r.add(new BigdataTurtleParserFactory()); + assert r.has(new BigdataTurtleParserFactory().getRDFFormat()); /* * Allows parsing of JSON SPARQL Results with an {s,p,o,[c]} header. @@ -180,9 +280,26 @@ // r.add(new PropertiesTextWriterFactory()); // // } - + + if (log.isInfoEnabled()) { + for (RDFFormat f : RDFFormat.values()) { + log.info("RDFFormat: after: " + f); + } + for (RDFParserFactory f : RDFParserRegistry.getInstance().getAll()) { + log.info("RDFParserFactory: after: " + f); + } + for (RDFWriterFactory f : RDFWriterRegistry.getInstance().getAll()) { + log.info("RDFWriterFactory: after: " + f); + } + for (TupleQueryResultWriterFactory f : TupleQueryResultWriterRegistry.getInstance().getAll()) { + log.info("TupleQueryResultWriterFactory: after: " + f); + } + } + loaded = true; } +// private static void registerFactory + } Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParser.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParser.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParser.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -45,9 +45,6 @@ * @see <a href="http://www.w3.org/TR/sparql11-results-json/">SPARQL 1.1 Query * Results JSON Format</a> * @author Peter Ansell - * - * @deprecated We now use the openrdf versions of the JSON parser / writer. The - * bigdata specific versions will go away in the future. */ public class BigdataSPARQLResultsJSONParser extends SPARQLJSONParserBase implements TupleQueryResultParser { Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParserFactory.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParserFactory.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParserFactory.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -19,15 +19,13 @@ import org.openrdf.query.resultio.TupleQueryResultFormat; import org.openrdf.query.resultio.TupleQueryResultParser; import org.openrdf.query.resultio.TupleQueryResultParserFactory; +import org.openrdf.query.resultio.sparqljson.SPARQLResultsJSONParser; /** * A {@link TupleQueryResultParserFactory} for parsers of SPARQL-1.1 JSON Tuple * Query Results. * * @author Peter Ansell - * - * @deprecated We now use the openrdf versions of the JSON parser / writer. The - * bigdata specific versions will go away in the future. 
*/ public class BigdataSPARQLResultsJSONParserFactory implements TupleQueryResultParserFactory { Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParserForConstruct.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParserForConstruct.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParserForConstruct.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -38,6 +38,8 @@ import org.openrdf.rio.RDFParser; import org.openrdf.rio.helpers.RDFParserBase; +import com.bigdata.rdf.ServiceProviderHook; + /** * Parser for SPARQL-1.1 JSON Results Format documents * @@ -61,7 +63,7 @@ @Override public RDFFormat getRDFFormat() { - return BigdataSPARQLResultsJSONWriterForConstructFactory.JSON; + return ServiceProviderHook.JSON_RDR; } @Override Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParserForConstructFactory.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParserForConstructFactory.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParserForConstructFactory.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -21,6 +21,8 @@ import org.openrdf.rio.RDFParser; import org.openrdf.rio.RDFParserFactory; +import com.bigdata.rdf.ServiceProviderHook; + /** * A {@link TupleQueryResultParserFactory} for parsers of SPARQL-1.1 JSON Tuple * Query Results. @@ -28,8 +30,6 @@ * @author Peter Ansell */ public class BigdataSPARQLResultsJSONParserForConstructFactory implements RDFParserFactory { - - public static final RDFFormat JSON = BigdataSPARQLResultsJSONWriterForConstructFactory.JSON; @Override public RDFParser getParser() { @@ -38,7 +38,7 @@ @Override public RDFFormat getRDFFormat() { - return JSON; + return ServiceProviderHook.JSON_RDR; } } Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriter.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriter.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriter.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -32,9 +32,6 @@ * A TupleQueryResultWriter that writes query results in the <a * href="http://www.w3.org/TR/rdf-sparql-json-res/">SPARQL Query Results JSON * Format</a>. - * - * @deprecated We now use the openrdf versions of the JSON parser / writer. The - * bigdata specific versions will go away in the future. 
*/ public class BigdataSPARQLResultsJSONWriter extends SPARQLJSONWriterBase implements TupleQueryResultWriter { Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriterFactory.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriterFactory.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriterFactory.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -33,6 +33,7 @@ /** * Returns {@link TupleQueryResultFormat#JSON}. */ + @Override public TupleQueryResultFormat getTupleQueryResultFormat() { return TupleQueryResultFormat.JSON; } @@ -40,6 +41,7 @@ /** * Returns a new instance of SPARQLResultsJSONWriter. */ + @Override public TupleQueryResultWriter getWriter(OutputStream out) { return new BigdataSPARQLResultsJSONWriter(out); } Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriterForConstruct.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriterForConstruct.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriterForConstruct.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -32,6 +32,8 @@ import org.openrdf.rio.RioSetting; import org.openrdf.rio.WriterConfig; +import com.bigdata.rdf.ServiceProviderHook; + /** * A TupleQueryResultWriter that writes query results in the <a * href="http://www.w3.org/TR/rdf-sparql-json-res/">SPARQL Query Results JSON @@ -59,7 +61,7 @@ @Override public RDFFormat getRDFFormat() { - return BigdataSPARQLResultsJSONWriterForConstructFactory.JSON; + return ServiceProviderHook.JSON_RDR; } @@ -80,8 +82,11 @@ @Override public void endRDF() throws RDFHandlerException { try { - writer.endDocument(); - } catch (IOException e) { + writer.endQueryResult(); +// writer.endDocument(); +// } catch (IOException e) { +// throw new RDFHandlerException(e); + } catch (TupleQueryResultHandlerException e) { throw new RDFHandlerException(e); } } Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriterForConstructFactory.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriterForConstructFactory.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriterForConstructFactory.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -18,15 +18,15 @@ import java.io.OutputStream; import java.io.Writer; -import java.nio.charset.Charset; -import java.util.Arrays; import org.openrdf.query.resultio.TupleQueryResultWriterFactory; import org.openrdf.rio.RDFFormat; import org.openrdf.rio.RDFWriter; import org.openrdf.rio.RDFWriterFactory; +import com.bigdata.rdf.ServiceProviderHook; + /** * A {@link TupleQueryResultWriterFactory} for writers of SPARQL/JSON query * results. @@ -38,18 +38,10 @@ // public static final RDFFormat JSON = new RDFFormat("N-Triples", "text/plain", // Charset.forName("US-ASCII"), "nt", NO_NAMESPACES, NO_CONTEXTS); - /** - * SPARQL Query Results JSON Format. 
- */ - public static final RDFFormat JSON = new RDFFormat("SPARQL/JSON", Arrays.asList( - "application/sparql-results+json", "application/json"), Charset.forName("UTF-8"), Arrays.asList( - "srj", "json"), RDFFormat.NO_NAMESPACES, RDFFormat.SUPPORTS_CONTEXTS); - - @Override public RDFFormat getRDFFormat() { - return JSON; - } + return ServiceProviderHook.JSON_RDR; + } @Override public RDFWriter getWriter(final Writer writer) { @@ -60,4 +52,5 @@ public RDFWriter getWriter(final OutputStream out) { return new BigdataSPARQLResultsJSONWriterForConstruct(out); } + } Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/SPARQLJSONParserBase.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/SPARQLJSONParserBase.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/SPARQLJSONParserBase.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -42,12 +42,14 @@ /** * Abstract base class for SPARQL Results JSON Parsers. Provides a common * implementation of both boolean and tuple parsing. - * + * <p> * Bigdata Changes: - * - Changed the visibility of the method parseValue() from private to protected - * so we could override it. - * - Pulled some code out of parseQueryResultInternal into its own method so - * that we can override it. + * <ul> + * <li>Changed the visibility of the method parseValue() from private to + * protected so we could override it.</li> + * <li>Pulled some code out of parseQueryResultInternal into its own method so + * that we can override it.</li> + * </ul> * * @author Peter Ansell * @author Sebastian Schaffert Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/SPARQLJSONWriterBase.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/SPARQLJSONWriterBase.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/SPARQLJSONWriterBase.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -47,11 +47,13 @@ /** * An abstract class to implement the base functionality for both * SPARQLBooleanJSONWriter and SPARQLResultsJSONWriter. + * <p> + * Bigdata Changes: + * <ul> + * <li>Changed the visibility of JsonGenerator jg from private to protected so + * we can use it in a subclass.</li> + * </ul> * - * Bigdata Changes: - * - Changed the visibility of JsonGenerator jg from private to protected - * so we can use it in a subclass. - * * @author Peter Ansell */ abstract class SPARQLJSONWriterBase extends QueryResultWriterBase implements QueryResultWriter { Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/ntriples/BigdataNTriplesParserFactory.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/ntriples/BigdataNTriplesParserFactory.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/ntriples/BigdataNTriplesParserFactory.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -9,24 +9,30 @@ import org.openrdf.rio.RDFParser; import org.openrdf.rio.RDFParserFactory; +import com.bigdata.rdf.ServiceProviderHook; + /** - * An {@link RDFParserFactory} for N-Triples parsers. + * An RDR-aware {@link RDFParserFactory} for N-Triples parsers. 
* * @author Arjohn Kampman * @openrdf + * + * @see http://wiki.bigdata.com/wiki/index.php/Reification_Done_Right */ public class BigdataNTriplesParserFactory implements RDFParserFactory { /** - * Returns {@link RDFFormat#NTRIPLES}. + * Returns {@link RDFFormat#NTRIPLES_RDR}. */ + @Override public RDFFormat getRDFFormat() { - return RDFFormat.NTRIPLES; + return ServiceProviderHook.NTRIPLES_RDR; } /** * Returns a new instance of BigdataNTriplesParser. */ + @Override public RDFParser getParser() { return new BigdataNTriplesParser(); } Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleParser.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleParser.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleParser.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -15,7 +15,6 @@ import org.openrdf.model.Value; import org.openrdf.rio.RDFHandlerException; import org.openrdf.rio.RDFParseException; -import org.openrdf.rio.helpers.BasicParserSettings; import org.openrdf.rio.turtle.TurtleParser; import org.openrdf.rio.turtle.TurtleUtil; Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleParserFactory.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleParserFactory.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleParserFactory.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -10,24 +10,33 @@ import org.openrdf.rio.RDFParserFactory; import org.openrdf.rio.turtle.TurtleParser; +import com.bigdata.rdf.ServiceProviderHook; + /** - * An {@link RDFParserFactory} for Turtle parsers. + * An RDR-aware {@link RDFParserFactory} for Turtle parsers. * * @author Arjohn Kampman * @openrdf + * + * @see http://wiki.bigdata.com/wiki/index.php/Reification_Done_Right */ public class BigdataTurtleParserFactory implements RDFParserFactory { /** - * Returns {@link RDFFormat#TURTLE}. + * Returns {@link ServiceProviderHook#TURTLE_RDR}. + * + * @see <a href="http://trac.bigdata.com/ticket/1038" >RDR RDF parsers not + * always discovered </a> */ + @Override public RDFFormat getRDFFormat() { - return RDFFormat.TURTLE; + return ServiceProviderHook.TURTLE_RDR; } /** * Returns a new instance of {@link TurtleParser}. */ + @Override public RDFParser getParser() { return new BigdataTurtleParser(); } Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleWriter.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleWriter.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleWriter.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -20,6 +20,7 @@ * An implementation of the RDFWriter interface that writes RDF documents in * Turtle format. The Turtle format is defined in <a * href="http://www.dajobe.org/2004/01/turtle/">in this document</a>. 
+ * @openrdf */ public class BigdataTurtleWriter extends TurtleWriter implements RDFWriter { Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleWriterFactory.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleWriterFactory.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleWriterFactory.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -12,31 +12,42 @@ import org.openrdf.rio.RDFWriter; import org.openrdf.rio.RDFWriterFactory; +import com.bigdata.rdf.ServiceProviderHook; + /** - * An {@link RDFWriterFactory} for Turtle writers. + * An RDR-aware {@link RDFWriterFactory} for Turtle writers. * * @author Arjohn Kampman + * @openrdf + * + * @see http://wiki.bigdata.com/wiki/index.php/Reification_Done_Right */ public class BigdataTurtleWriterFactory implements RDFWriterFactory { /** - * Returns {@link RDFFormat#TURTLE}. + * Returns {@link ServiceProviderHook#TURTLE_RDR}. + * + * @see <a href="http://trac.bigdata.com/ticket/1038" >RDR RDF parsers not + * always discovered </a> */ + @Override public RDFFormat getRDFFormat() { - return RDFFormat.TURTLE; + return ServiceProviderHook.TURTLE_RDR; } /** * Returns a new instance of {@link BigdataTurtleWriter}. */ - public RDFWriter getWriter(OutputStream out) { + @Override + public RDFWriter getWriter(final OutputStream out) { return new BigdataTurtleWriter(out); } /** * Returns a new instance of {@link BigdataTurtleWriter}. */ - public RDFWriter getWriter(Writer writer) { + @Override + public RDFWriter getWriter(final Writer writer) { return new BigdataTurtleWriter(writer); } } Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/graph/impl/bd/AbstractBigdataGraphTestCase.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/graph/impl/bd/AbstractBigdataGraphTestCase.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/graph/impl/bd/AbstractBigdataGraphTestCase.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -202,7 +202,7 @@ /** * The data file. */ - static private final String smallWeightedGraph = "bigdata-gas/src/test/com/bigdata/rdf/graph/data/smallWeightedGraph.ttl"; + static private final String smallWeightedGraph = "bigdata-gas/src/test/com/bigdata/rdf/graph/data/smallWeightedGraph.ttlx"; private final BigdataURI foafKnows, linkWeight, v1, v2, v3, v4, v5; Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestNTriplesWithSids.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestNTriplesWithSids.java 2014-11-17 19:29:53 UTC (rev 8714) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestNTriplesWithSids.java 2014-11-17 22:11:08 UTC (rev 8715) @@ -11,6 +11,7 @@ import org.openrdf.rio.RDFParseException; import org.openrdf.rio.RDFParserRegistry; +import com.bigdata.rdf.ServiceProviderHook; import com.bigdata.rdf.axioms.NoAxioms; import com.bigdata.rdf.model.BigdataStatement; import com.bigdata.rdf.model.BigdataURI; @@ -98,13 +99,13 @@ // Verify that the correct parser will be used. 
assertEquals("NTriplesParserClass", BigdataNTriplesParser.class.getName(), RDFParserRegistry - .getInstance().get(RDFFormat.NTRIPLES).getParser() + .getInstance().get(ServiceProviderHook.NTRIPLES_RDR).getParser() .getClass().getName()); final DataLoader dataLoader = store.getDataLoader(); final LoadStats loadStats = dataLoader.loadData(new StringReader( - data), getName()/* baseURL */, RDFFormat.NTRIPLES); + data), getName()/* baseURL */, ServiceProviderHook.NTRIPLES_RDR); if (log.isInfoEnabled()) log.info(store.dumpStore()); @@ -249,13 +250,13 @@ // Verify that the correct parser will be used. assertEquals("NTriplesParserClass", BigdataNTriplesParser.class.getName(), RDFParserRegistry - .getInstance().get(RDFFormat.NTRIPLES).getParser() + .getInstance().get(ServiceProviderHook.NTRIPLES_RDR).getParser() .getClass().getName()); final DataLoader dataLoader = store.getDataLoader(); final LoadStats loadStats = dataLoader.loadData(new StringReader( - data), getName()/* baseURL */, RDFFormat.NTRIPLES); + data), getName()/* baseURL */, ServiceProviderHook.NTRIPLES_RDR); if (log.isInfoEnabled()) log.info(store.dumpStore()); This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
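Note on r8715: two details are easy to miss. The RDR format constants must be constructed before forceLoad() runs (otherwise the factories would observe null fields, as the comment in the static initializer warns), and the formats are registered explicitly via RDFFormat.register() because the RDFFormat constructor does not register them. The net effect can be verified from client code. The following is a small sketch, assuming the registrations above; "data.ttlx" is a placeholder file name and the expected output follows the format declarations in this commit.

import org.openrdf.rio.RDFFormat;
import org.openrdf.rio.RDFParser;
import org.openrdf.rio.RDFParserRegistry;

import com.bigdata.rdf.ServiceProviderHook;

public class RdrFormatCheck {

    public static void main(final String[] args) {

        // Make sure the bigdata providers are registered before any lookup.
        ServiceProviderHook.forceLoad();

        // *.ttlx resolves to the RDR extension of TURTLE ("data.ttlx" is
        // just a placeholder name).
        final RDFFormat format = RDFFormat.forFileName("data.ttlx");
        System.out.println("data.ttlx -> " + format); // expected: Turtle-RDR

        // ... and the registry resolves the RDR-aware parser for it.
        final RDFParser parser = RDFParserRegistry.getInstance()
                .get(ServiceProviderHook.TURTLE_RDR).getParser();
        System.out.println("parser: " + parser.getClass().getName());
    }
}

If forceLoad() is skipped, classpath ordering decides whether the openrdf or the bigdata factory is resolved last from the META-INF/services files, which is the service-loader problem this hook exists to solve.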
From: <tho...@us...> - 2014-11-17 19:30:11
Revision: 8714 http://sourceforge.net/p/bigdata/code/8714 Author: thompsonbry Date: 2014-11-17 19:29:53 +0000 (Mon, 17 Nov 2014) Log Message: ----------- Fixed for SD RDR modes (fixes some CI errors) Updated 1.4.0 release notes. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/SD.java Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt 2014-11-17 18:39:34 UTC (rev 8713) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt 2014-11-17 19:29:53 UTC (rev 8714) @@ -54,6 +54,7 @@ - http://trac.bigdata.com/ticket/714 (Migrate to openrdf 2.7) - http://trac.bigdata.com/ticket/745 (BackgroundTupleResult overrides final method close) - http://trac.bigdata.com/ticket/813 (Documentation on BigData Reasoning) +- http://trac.bigdata.com/ticket/911 (workbench does not display errors well) - http://trac.bigdata.com/ticket/1035 (DISTINCT PREDICATEs query is slow) - http://trac.bigdata.com/ticket/1037 (SELECT COUNT(...) (DISTINCT|REDUCED) {single-triple-pattern} is slow) - http://trac.bigdata.com/ticket/1044 (ORDER_BY ordering not preserved by projection operator) Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java 2014-11-17 18:39:34 UTC (rev 8713) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java 2014-11-17 19:29:53 UTC (rev 8714) @@ -51,6 +51,7 @@ import org.openrdf.rio.RDFFormat; import org.openrdf.rio.RDFHandlerException; import org.openrdf.rio.RDFWriter; +import org.openrdf.rio.RDFWriterFactory; import org.openrdf.rio.RDFWriterRegistry; import com.bigdata.journal.IAtomicStore; @@ -435,14 +436,37 @@ if (format == null) format = RDFFormat.RDFXML; + RDFWriterFactory writerFactory = RDFWriterRegistry.getInstance().get( + format); + + if (writerFactory == null) { + + log.warn("No writer for format: format=" + format + ", Accept=\"" + + acceptStr + "\""); + + format = RDFFormat.RDFXML; + + writerFactory = RDFWriterRegistry.getInstance().get(format); + + } + +// if (writerFactory == null) { +// +// buildResponse(resp, HTTP_BADREQUEST, MIME_TEXT_PLAIN, +// "No writer for format: Accept=\"" + acceptStr +// + "\", format=" + format); +// +// return; +// +// } + resp.setStatus(HTTP_OK); resp.setContentType(format.getDefaultMIMEType()); final OutputStream os = resp.getOutputStream(); try { - final RDFWriter writer = RDFWriterRegistry.getInstance() - .get(format).getWriter(os); + final RDFWriter writer = writerFactory.getWriter(os); writer.startRDF(); final Iterator<Statement> itr = g.iterator(); while (itr.hasNext()) { Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java 
2014-11-17 18:39:34 UTC (rev 8713) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java 2014-11-17 19:29:53 UTC (rev 8714) @@ -46,7 +46,6 @@ import org.openrdf.sail.SailException; import com.bigdata.journal.ITx; -import com.bigdata.rdf.rio.IRDFParserOptions; import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; import com.bigdata.rdf.sail.webapp.client.MiniMime; @@ -151,9 +150,10 @@ * UpdateServlet fails to parse MIMEType when doing conneg. </a> */ - final RDFFormat format = RDFFormat - .forMIMEType(new MiniMime(contentType).getMimeType()); + final String mimeTypeStr = new MiniMime(contentType).getMimeType(); + final RDFFormat format = RDFFormat.forMIMEType(mimeTypeStr); + if (format == null) { buildResponse(resp, HTTP_BADREQUEST, MIME_TEXT_PLAIN, @@ -216,13 +216,12 @@ } /** - * - * @author <a href="mailto:tho...@us...">Bryan - * Thompson</a> - * - * TODO The {@link IRDFParserOptions} defaults should be coming from - * the KB instance, right? What does the REST API say about this? - */ + * + * @author <a href="mailto:tho...@us...">Bryan + * Thompson</a> + * + * TODO #1056 (Add ability to set RIO options to REST API and workbench) + */ private static class InsertWithBodyTask extends AbstractRestApiTask<Void> { private final String baseURI; Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/SD.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/SD.java 2014-11-17 18:39:34 UTC (rev 8713) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/SD.java 2014-11-17 19:29:53 UTC (rev 8714) @@ -295,6 +295,10 @@ static public final URI TURTLE = new URIImpl( "http://www.w3.org/ns/formats/Turtle"); + // RDR specific extension of TURTLE. + static public final URI TURTLE_RDR = new URIImpl( + "http://www.bigdata.com/ns/formats/Turtle-RDR"); + /** * Unique URI for N3. * @@ -335,7 +339,11 @@ */ static public final URI NQUADS = new URIImpl( "http://sw.deri.org/2008/07/n-quads/#n-quads"); - + + // RDR specific extension of N-Triples. + static public final URI NTRIPLES_RDR = new URIImpl( + "http://www.bigdata.com/ns/formats/N-Triples-RDR"); + /* * SPARQL results */ @@ -553,8 +561,6 @@ * URIs) * * @see #inputFormat - * - * TODO Add an explicit declaration for SIDS mode data interchange? */ protected void describeInputFormats() { @@ -566,7 +572,11 @@ g.add(aService, SD.inputFormat, SD.TRIG); // g.add(service, SD.inputFormat, SD.BINARY); // TODO BINARY g.add(aService, SD.inputFormat, SD.NQUADS); - + if (tripleStore.getStatementIdentifiers()) { + // RDR specific data interchange. + g.add(aService, SD.inputFormat, SD.NTRIPLES_RDR); + g.add(aService, SD.inputFormat, SD.TURTLE_RDR); + } g.add(aService, SD.inputFormat, SD.SPARQL_RESULTS_XML); g.add(aService, SD.inputFormat, SD.SPARQL_RESULTS_JSON); g.add(aService, SD.inputFormat, SD.SPARQL_RESULTS_CSV); This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
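The BigdataRDFServlet change above makes content negotiation defensive: if the RDFWriterRegistry has no factory for the negotiated RDFFormat, the servlet now logs a warning and falls back to RDF/XML instead of failing the request. A minimal sketch of that fallback pattern against the openrdf 2.7 RIO API follows; the class name and the sendGraph signature here are illustrative, not the servlet's actual code.

import java.io.OutputStream;
import org.openrdf.model.Graph;
import org.openrdf.model.Statement;
import org.openrdf.rio.RDFFormat;
import org.openrdf.rio.RDFWriter;
import org.openrdf.rio.RDFWriterFactory;
import org.openrdf.rio.RDFWriterRegistry;

public class ConnegFallback {

    /**
     * Write a graph in the negotiated format, falling back to RDF/XML
     * when no writer factory is registered for it (mirrors the fix to
     * BigdataRDFServlet above).
     */
    static void sendGraph(final Graph g, RDFFormat format,
            final OutputStream os) throws Exception {

        RDFWriterFactory factory = RDFWriterRegistry.getInstance().get(format);

        if (factory == null) {
            // No writer for the negotiated format: fall back rather than fail.
            format = RDFFormat.RDFXML;
            factory = RDFWriterRegistry.getInstance().get(format);
        }

        final RDFWriter writer = factory.getWriter(os);
        writer.startRDF();
        for (final Statement st : g) { // Graph extends Collection<Statement>
            writer.handleStatement(st);
        }
        writer.endRDF();
    }
}

The same defensive pattern appears on the parse side: InsertServlet now extracts the bare MIME type with MiniMime before calling RDFFormat.forMIMEType(), answering HTTP 400 when no parser format matches. And on the service-description side, SD.java advertises the RDR interchange formats (NTRIPLES_RDR, TURTLE_RDR) as sd:inputFormat only when the KB instance has statement identifiers enabled.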
From: <tho...@us...> - 2014-11-17 18:40:05
|
Revision: 8713 http://sourceforge.net/p/bigdata/code/8713 Author: thompsonbry Date: 2014-11-17 18:39:34 +0000 (Mon, 17 Nov 2014) Log Message: ----------- Merging in fixes for RDR support in the workbench. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_4_0/.classpath branches/BIGDATA_RELEASE_1_4_0/bigdata-war/src/html/css/style.css branches/BIGDATA_RELEASE_1_4_0/bigdata-war/src/html/index.html branches/BIGDATA_RELEASE_1_4_0/bigdata-war/src/html/js/workbench.js branches/BIGDATA_RELEASE_1_4_0/build.xml Modified: branches/BIGDATA_RELEASE_1_4_0/.classpath =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/.classpath 2014-11-17 14:40:18 UTC (rev 8712) +++ branches/BIGDATA_RELEASE_1_4_0/.classpath 2014-11-17 18:39:34 UTC (rev 8713) @@ -1,5 +1,6 @@ <?xml version="1.0" encoding="UTF-8"?> <classpath> + <classpathentry kind="lib" path="bigdata-rdf/lib/openrdf-sesame-2.7.13-onejar.jar"/> <classpathentry kind="src" path="bigdata/src/java"/> <classpathentry kind="src" path="bigdata-rdf/src/java"/> <classpathentry kind="src" path="bigdata-sails/src/java"/> @@ -98,6 +99,5 @@ <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/sesame-rio-testsuite-2.7.13.jar"/> <classpathentry exported="true" kind="lib" path="bigdata-sails/lib/sesame-sparql-testsuite-2.7.13.jar" sourcepath="/Users/mikepersonick/.m2/repository/org/openrdf/sesame/sesame-sparql-testsuite/2.7.13/sesame-sparql-testsuite-2.7.13-sources.jar"/> <classpathentry exported="true" kind="lib" path="bigdata-sails/lib/sesame-store-testsuite-2.7.13.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/openrdf-sesame-2.7.13-onejar.jar"/> <classpathentry kind="output" path="bin"/> </classpath> Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-war/src/html/css/style.css =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-war/src/html/css/style.css 2014-11-17 14:40:18 UTC (rev 8712) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-war/src/html/css/style.css 2014-11-17 18:39:34 UTC (rev 8713) @@ -153,6 +153,13 @@ overflow-x: scroll; } +.error-box { + margin: 20px; + overflow-x: scroll; + font-family: monospace; + white-space: pre; +} + .box:last-of-type { } Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata-war/src/html/index.html =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-war/src/html/index.html 2014-11-17 14:40:18 UTC (rev 8712) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-war/src/html/index.html 2014-11-17 18:39:34 UTC (rev 8713) @@ -59,12 +59,14 @@ <select id="rdf-type" disabled> <option value="n-quads">N-Quads</option> <option value="n-triples">N-Triples</option> + <option value="n-triples-RDR">N-Triples-RDR</option> <option value="n3">Notation3</option> <option value="rdf/xml">RDF/XML</option> <option value="json">JSON</option> <option value="trig">TriG</option> <option value="trix">TriX</option> <option value="turtle">Turtle</option> + <option value="turtle-RDR">Turtle-RDR</option> </select> </p> <a href="#" class="advanced-features-toggle">Advanced features</a> @@ -78,7 +80,7 @@ </div> - <div class="box" id="update-response"> + <div class="error-box" id="update-response"> <span></span> <iframe name="update-response-container"></iframe> </div> @@ -124,7 +126,7 @@ </div> - <div id="query-response" class="box"> + <div id="query-response" class="error-box"> </div> <div id="query-pagination" class="box"> Modified: 
branches/BIGDATA_RELEASE_1_4_0/bigdata-war/src/html/js/workbench.js =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata-war/src/html/js/workbench.js 2014-11-17 14:40:18 UTC (rev 8712) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata-war/src/html/js/workbench.js 2014-11-17 18:39:34 UTC (rev 8713) @@ -18,9 +18,11 @@ // key is value of RDF type selector, value is name of CodeMirror mode var RDF_MODES = { 'n-triples': 'ntriples', + 'n-triples-RDR': 'ntriples-RDR', 'rdf/xml': 'xml', 'json': 'json', - 'turtle': 'turtle' + 'turtle': 'turtle', + 'turtle-RDR': 'turtle' }; var FILE_CONTENTS = null; // file/update editor type handling @@ -29,6 +31,7 @@ var RDF_TYPES = { 'nq': 'n-quads', 'nt': 'n-triples', + 'ntx': 'n-triples-RDR', 'n3': 'n3', 'rdf': 'rdf/xml', 'rdfs': 'rdf/xml', @@ -38,17 +41,20 @@ 'trig': 'trig', 'trix': 'trix', //'xml': 'trix', - 'ttl': 'turtle' + 'ttl': 'turtle', + 'ttlx': 'turtle-RDR' }; var RDF_CONTENT_TYPES = { 'n-quads': 'text/x-nquads', 'n-triples': 'text/plain', + 'n-triples-RDR': 'application/x-n-triples-RDR', 'n3': 'text/rdf+n3', 'rdf/xml': 'application/rdf+xml', 'json': 'application/sparql-results+json', 'trig': 'application/x-trig', 'trix': 'application/trix', - 'turtle': 'application/x-turtle' + 'turtle': 'application/x-turtle', + 'turtle-RDR': 'application/x-turtle-RDR' }; var SPARQL_UPDATE_COMMANDS = [ 'INSERT', @@ -104,7 +110,9 @@ var EXPORT_EXTENSIONS = { 'application/rdf+xml': ['RDF/XML', 'rdf', true], 'application/n-triples': ['N-Triples', 'nt', true], + 'application/x-n-triples-RDR': ['N-Triples-RDR', 'ntx', true], 'application/x-turtle': ['Turtle', 'ttl', true], + 'application/x-turtle-RDR': ['Turtle-RDR', 'ttlx', true], 'text/rdf+n3': ['N3', 'n3', true], 'application/trix': ['TriX', 'trix', true], 'application/x-trig': ['TRIG', 'trig', true], @@ -783,6 +791,7 @@ if(jqXHR.status === 0) { message += 'Could not contact server'; } else { + var response = $('<div>').append(jqXHR.responseText); if(response.find('pre').length === 0) { message += response.text(); } else { @@ -917,7 +926,7 @@ var settings = { type: 'POST', data: $('#query-form').serialize(), - headers: { 'Accept': 'application/sparql-results+json, application/rdf+xml' }, + headers: { 'Accept': 'application/sparql-results+json' }, success: showQueryResults, error: queryResultsError }; @@ -1737,7 +1746,7 @@ /* Utility functions */ function getSID(binding) { - return '<<\n ' + abbreviate(binding.value.s.value) + '\n ' + abbreviate(binding.value.p.value) + '\n ' + abbreviate(binding.value.o.value) + '\n>>'; + return '<<\n ' + abbreviate(binding.subject.value) + '\n ' + abbreviate(binding.predicate.value) + '\n ' + abbreviate(binding.object.value) + '\n>>'; } function abbreviate(uri) { Modified: branches/BIGDATA_RELEASE_1_4_0/build.xml =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/build.xml 2014-11-17 14:40:18 UTC (rev 8712) +++ branches/BIGDATA_RELEASE_1_4_0/build.xml 2014-11-17 18:39:34 UTC (rev 8713) @@ -75,11 +75,12 @@ </path> <!-- runtime classpath w/o install. --> - <path id="runtime.classpath"> - <pathelement location="${build.dir}/classes" /> - <path refid="build.classpath" /> + <path id="runtime.classpath"> + <fileset file="${bigdata.dir}/bigdata-rdf/lib/openrdf-sesame-${sesame.version}-onejar.jar"/> + <pathelement location="${build.dir}/classes" /> + <path refid="build.classpath" /> </path> - + <!-- classpath as installed. 
--> <!-- @todo .so and .dll --> <path id="install.classpath"> This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
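The workbench changes in this revision register the two RDR interchange formats end to end: the file-extension map gains .ntx and .ttlx, the content-type map gains the corresponding RDR MIME types, and getSID() is fixed to read the subject/predicate/object fields of a JSON results binding when rendering a statement as << s p o >>. Transliterated into Java for illustration below; the class name and the selection of entries are hypothetical, and the real tables live in workbench.js.

import java.util.HashMap;
import java.util.Map;

public class RdrWorkbenchTables {

    // File extension -> workbench format key; RDR entries only (from RDF_TYPES).
    static final Map<String, String> RDF_TYPES = new HashMap<>();

    // Workbench format key -> MIME type; RDR entries only (from RDF_CONTENT_TYPES).
    static final Map<String, String> RDF_CONTENT_TYPES = new HashMap<>();

    static {
        RDF_TYPES.put("ntx", "n-triples-RDR");
        RDF_TYPES.put("ttlx", "turtle-RDR");
        RDF_CONTENT_TYPES.put("n-triples-RDR", "application/x-n-triples-RDR");
        RDF_CONTENT_TYPES.put("turtle-RDR", "application/x-turtle-RDR");
    }

    // Mirrors the fixed getSID(): wrap s/p/o in the RDR << ... >> syntax.
    static String formatSid(final String s, final String p, final String o) {
        return "<<\n  " + s + "\n  " + p + "\n  " + o + "\n>>";
    }
}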
From: <tho...@us...> - 2014-11-17 14:40:28
|
Revision: 8712 http://sourceforge.net/p/bigdata/code/8712 Author: thompsonbry Date: 2014-11-17 14:40:18 +0000 (Mon, 17 Nov 2014) Log Message: ----------- Adding 1.4.0 release notes. Added Paths: ----------- branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt Added: branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt (rev 0) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata/src/releases/RELEASE_1_4_0.txt 2014-11-17 14:40:18 UTC (rev 8712) @@ -0,0 +1,552 @@ +This is a major release of bigdata(R). + +Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal), highly available replication cluster mode (HAJournalServer), and a horizontally sharded cluster mode (BigdataFederation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The HAJournalServer adds replication, online backup, horizontal scaling of query, and high availability. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation. + +Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the HAJournalServer for high availability and linear scaling in query throughput. Choose the BigdataFederation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput. + +See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7]. + +Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script. + +Starting with the 1.3.0 release, we offer a tarball artifact [10] for easy installation of the HA replication cluster. + +You can download the WAR (standalone) or HA artifacts from: + +http://sourceforge.net/projects/bigdata/ + +You can checkout this release from: + +https://svn.code.sf.net/p/bigdata/code/tags/BIGDATA_RELEASE_1_4_0 + +New features in 1.4.x: + +- Openrdf 2.7 support. +- Numerous bug fixes and performance enhancements. 
+ +Feature summary: + +- Highly Available Replication Clusters (HAJournalServer [10]) +- Single machine data storage to ~50B triples/quads (RWStore); +- Clustered data storage is essentially unlimited (BigdataFederation); +- Simple embedded and/or webapp deployment (NanoSparqlServer); +- Triples, quads, or triples with provenance (RDR/SIDs); +- Fast RDFS+ inference and truth maintenance; +- Fast 100% native SPARQL 1.1 evaluation; +- Integrated "analytic" query package; +- %100 Java memory manager leverages the JVM native heap (no GC); +- RDF Graph Mining Service (GASService) [12]. +- Reification Done Right (RDR) support [11]. +- RDF/SPARQL workbench. +- Blueprints API. + +Road map [3]: + +- Column-wise indexing; +- Runtime Query Optimizer for quads; +- New scale-out platform based on MapGraph (100x => 10000x faster) + +Change log: + + Note: Versions with (*) MAY require data migration. For details, see [9]. + +1.4.0: + +- http://trac.bigdata.com/ticket/714 (Migrate to openrdf 2.7) +- http://trac.bigdata.com/ticket/745 (BackgroundTupleResult overrides final method close) +- http://trac.bigdata.com/ticket/813 (Documentation on BigData Reasoning) +- http://trac.bigdata.com/ticket/1035 (DISTINCT PREDICATEs query is slow) +- http://trac.bigdata.com/ticket/1037 (SELECT COUNT(...) (DISTINCT|REDUCED) {single-triple-pattern} is slow) +- http://trac.bigdata.com/ticket/1044 (ORDER_BY ordering not preserved by projection operator) +- http://trac.bigdata.com/ticket/1047 (NQuadsParser hangs when loading latest dbpedia dump.) +- http://trac.bigdata.com/ticket/1052 (ASTComplexOptionalOptimizer did not account for Values clauses) +- http://trac.bigdata.com/ticket/1054 (BigdataGraphFactory create method cannot be invoked from the gremlin command line due to a Boolean vs boolean type mismatch.) + +1.3.4: + +- http://trac.bigdata.com/ticket/946 (Empty PROJECTION causes IllegalArgumentException) +- http://trac.bigdata.com/ticket/1036 (Journal leaks storage with SPARQL UPDATE and REST API) +- http://trac.bigdata.com/ticket/1008 (remote service queries should put parameters in the request body when using POST) + +1.3.3: + +- http://trac.bigdata.com/ticket/980 (Object position of query hint is not a Literal (partial resolution - see #1028 as well)) +- http://trac.bigdata.com/ticket/1018 (Add the ability to track and cancel all queries issued through a BigdataSailRemoteRepositoryConnection) +- http://trac.bigdata.com/ticket/1021 (Add critical section protection to AbstractJournal.abort() and BigdataSailConnection.rollback()) +- http://trac.bigdata.com/ticket/1024 (GregorianCalendar? 
does weird things before 1582) +- http://trac.bigdata.com/ticket/1026 (SPARQL UPDATE with runtime errors causes problems with lexicon indices) +- http://trac.bigdata.com/ticket/1028 (very rare NotMaterializedException: XSDBoolean(true)) +- http://trac.bigdata.com/ticket/1029 (RWStore commit state not correctly rolled back if abort fails on empty journal) +- http://trac.bigdata.com/ticket/1030 (RWStorage stats cleanup) + +1.3.2: + +- http://trac.bigdata.com/ticket/1016 (Jetty/LBS issues when deployed as WAR under tomcat) +- http://trac.bigdata.com/ticket/1010 (Upgrade apache http components to 1.3.1 (security)) +- http://trac.bigdata.com/ticket/1005 (Invalidate BTree objects if error occurs during eviction) +- http://trac.bigdata.com/ticket/1004 (Concurrent binding problem) +- http://trac.bigdata.com/ticket/1002 (Concurrency issues in JVMHashJoinUtility caused by MAX_PARALLEL query hint override) +- http://trac.bigdata.com/ticket/1000 (Add configuration option to turn off bottom-up evaluation) +- http://trac.bigdata.com/ticket/999 (Extend BigdataSailFactory to take arbitrary properties) +- http://trac.bigdata.com/ticket/998 (SPARQL Update through BigdataGraph) +- http://trac.bigdata.com/ticket/996 (Add custom prefix support for query results) +- http://trac.bigdata.com/ticket/995 (Allow general purpose SPARQL queries through BigdataGraph) +- http://trac.bigdata.com/ticket/992 (Deadlock between AbstractRunningQuery.cancel(), QueryLog.log(), and ArbitraryLengthPathTask) +- http://trac.bigdata.com/ticket/990 (Query hints not recognized in FILTERs) +- http://trac.bigdata.com/ticket/989 (Stored query service) +- http://trac.bigdata.com/ticket/988 (Bad performance for FILTER EXISTS) +- http://trac.bigdata.com/ticket/987 (maven build is broken) +- http://trac.bigdata.com/ticket/986 (Improve locality for small allocation slots) +- http://trac.bigdata.com/ticket/985 (Deadlock in BigdataTriplePatternMaterializer) +- http://trac.bigdata.com/ticket/975 (HA Health Status Page) +- http://trac.bigdata.com/ticket/974 (Name2Addr.indexNameScan(prefix) uses scan + filter) +- http://trac.bigdata.com/ticket/973 (RWStore.commit() should be more defensive) +- http://trac.bigdata.com/ticket/971 (Clarify HTTP Status codes for CREATE NAMESPACE operation) +- http://trac.bigdata.com/ticket/968 (no link to wiki from workbench) +- http://trac.bigdata.com/ticket/966 (Failed to get namespace under concurrent update) +- http://trac.bigdata.com/ticket/965 (Can not run LBS mode with HA1 setup) +- http://trac.bigdata.com/ticket/961 (Clone/modify namespace to create a new one) +- http://trac.bigdata.com/ticket/960 (Export namespace properties in XML/Java properties text format) +- http://trac.bigdata.com/ticket/938 (HA Load Balancer) +- http://trac.bigdata.com/ticket/936 (Support larger metabits allocations) +- http://trac.bigdata.com/ticket/932 (Bigdata/Rexster integration) +- http://trac.bigdata.com/ticket/919 (Formatted Layout for Status pages) +- http://trac.bigdata.com/ticket/899 (REST API Query Cancellation) +- http://trac.bigdata.com/ticket/885 (Panels do not appear on startup in Firefox) +- http://trac.bigdata.com/ticket/884 (Executing a new query should clear the old query results from the console) +- http://trac.bigdata.com/ticket/882 (Abbreviate URIs that can be namespaced with one of the defined common namespaces) +- http://trac.bigdata.com/ticket/880 (Can't explore an absolute URI with < >) +- http://trac.bigdata.com/ticket/878 (Explore page looks weird when empty) +- http://trac.bigdata.com/ticket/873 (Allow user 
to go use browser back & forward buttons to view explore history) +- http://trac.bigdata.com/ticket/865 (OutOfMemoryError instead of Timeout for SPARQL Property Paths) +- http://trac.bigdata.com/ticket/858 (Change explore URLs to include URI being clicked so user can see what they've clicked on before) +- http://trac.bigdata.com/ticket/855 (AssertionError: Child does not have persistent identity) +- http://trac.bigdata.com/ticket/850 (Search functionality in workbench) +- http://trac.bigdata.com/ticket/847 (Query results panel should recognize well known namespaces for easier reading) +- http://trac.bigdata.com/ticket/845 (Display the properties for a namespace) +- http://trac.bigdata.com/ticket/843 (Create new tabs for status & performance counters, and add per namespace service/VoID description links) +- http://trac.bigdata.com/ticket/837 (Configurator for new namespaces) +- http://trac.bigdata.com/ticket/836 (Allow user to create namespace in the workbench) +- http://trac.bigdata.com/ticket/830 (Output RDF data from queries in table format) +- http://trac.bigdata.com/ticket/829 (Export query results) +- http://trac.bigdata.com/ticket/828 (Save selected namespace in browser) +- http://trac.bigdata.com/ticket/827 (Explore tab in workbench) +- http://trac.bigdata.com/ticket/826 (Create shortcut to execute load/query) +- http://trac.bigdata.com/ticket/823 (Disable textarea when a large file is selected) +- http://trac.bigdata.com/ticket/820 (Allow non-file:// URLs to be loaded) +- http://trac.bigdata.com/ticket/819 (Retrieve default namespace on page load) +- http://trac.bigdata.com/ticket/772 (Query timeout only checked at operator start/stop) +- http://trac.bigdata.com/ticket/765 (order by expr skips invalid expressions) +- http://trac.bigdata.com/ticket/587 (JSP page to configure KBs) +- http://trac.bigdata.com/ticket/343 (Stochastic assert in AbstractBTree#writeNodeOrLeaf() in CI) + +1.3.1: + +- http://trac.bigdata.com/ticket/242 (Deadlines do not play well with GROUP_BY, ORDER_BY, etc.) +- http://trac.bigdata.com/ticket/256 (Amortize RTO cost) +- http://trac.bigdata.com/ticket/257 (Support BOP fragments in the RTO.) +- http://trac.bigdata.com/ticket/258 (Integrate RTO into SAIL) +- http://trac.bigdata.com/ticket/259 (Dynamically increase RTO sampling limit.) +- http://trac.bigdata.com/ticket/526 (Reification done right) +- http://trac.bigdata.com/ticket/580 (Problem with the bigdata RDF/XML parser with sids) +- http://trac.bigdata.com/ticket/622 (NSS using jetty+windows can lose connections (windows only; jdk 6/7 bug)) +- http://trac.bigdata.com/ticket/624 (HA Load Balancer) +- http://trac.bigdata.com/ticket/629 (Graph processing API) +- http://trac.bigdata.com/ticket/721 (Support HA1 configurations) +- http://trac.bigdata.com/ticket/730 (Allow configuration of embedded NSS jetty server using jetty-web.xml) +- http://trac.bigdata.com/ticket/759 (multiple filters interfere) +- http://trac.bigdata.com/ticket/763 (Stochastic results with Analytic Query Mode) +- http://trac.bigdata.com/ticket/774 (Converge on Java 7.) 
+- http://trac.bigdata.com/ticket/779 (Resynchronization of socket level write replication protocol (HA)) +- http://trac.bigdata.com/ticket/780 (Incremental or asynchronous purge of HALog files) +- http://trac.bigdata.com/ticket/782 (Wrong serialization version) +- http://trac.bigdata.com/ticket/784 (Describe Limit/offset don't work as expected) +- http://trac.bigdata.com/ticket/787 (Update documentations and samples, they are OUTDATED) +- http://trac.bigdata.com/ticket/788 (Name2Addr does not report all root causes if the commit fails.) +- http://trac.bigdata.com/ticket/789 (ant task to build sesame fails, docs for setting up bigdata for sesame are ancient) +- http://trac.bigdata.com/ticket/790 (should not be pruning any children) +- http://trac.bigdata.com/ticket/791 (Clean up query hints) +- http://trac.bigdata.com/ticket/793 (Explain reports incorrect value for opCount) +- http://trac.bigdata.com/ticket/796 (Filter assigned to sub-query by query generator is dropped from evaluation) +- http://trac.bigdata.com/ticket/797 (add sbt setup to getting started wiki) +- http://trac.bigdata.com/ticket/798 (Solution order not always preserved) +- http://trac.bigdata.com/ticket/799 (mis-optimation of quad pattern vs triple pattern) +- http://trac.bigdata.com/ticket/802 (Optimize DatatypeFactory instantiation in DateTimeExtension) +- http://trac.bigdata.com/ticket/803 (prefixMatch does not work in full text search) +- http://trac.bigdata.com/ticket/804 (update bug deleting quads) +- http://trac.bigdata.com/ticket/806 (Incorrect AST generated for OPTIONAL { SELECT }) +- http://trac.bigdata.com/ticket/808 (Wildcard search in bigdata for type suggessions) +- http://trac.bigdata.com/ticket/810 (Expose GAS API as SPARQL SERVICE) +- http://trac.bigdata.com/ticket/815 (RDR query does too much work) +- http://trac.bigdata.com/ticket/816 (Wildcard projection ignores variables inside a SERVICE call.) +- http://trac.bigdata.com/ticket/817 (Unexplained increase in journal size) +- http://trac.bigdata.com/ticket/821 (Reject large files, rather then storing them in a hidden variable) +- http://trac.bigdata.com/ticket/831 (UNION with filter issue) +- http://trac.bigdata.com/ticket/841 (Using "VALUES" in a query returns lexical error) +- http://trac.bigdata.com/ticket/848 (Fix SPARQL Results JSON writer to write the RDR syntax) +- http://trac.bigdata.com/ticket/849 (Create writers that support the RDR syntax) +- http://trac.bigdata.com/ticket/851 (RDR GAS interface) +- http://trac.bigdata.com/ticket/852 (RemoteRepository.cancel() does not consume the HTTP response entity.) +- http://trac.bigdata.com/ticket/853 (Follower does not accept POST of idempotent operations (HA)) +- http://trac.bigdata.com/ticket/854 (Allow override of maximum length before converting an HTTP GET to an HTTP POST) +- http://trac.bigdata.com/ticket/855 (AssertionError: Child does not have persistent identity) +- http://trac.bigdata.com/ticket/862 (Create parser for JSON SPARQL Results) +- http://trac.bigdata.com/ticket/863 (HA1 commit failure) +- http://trac.bigdata.com/ticket/866 (Batch remove API for the SAIL) +- http://trac.bigdata.com/ticket/867 (NSS concurrency problem with list namespaces and create namespace) +- http://trac.bigdata.com/ticket/869 (HA5 test suite) +- http://trac.bigdata.com/ticket/872 (Full text index range count optimization) +- http://trac.bigdata.com/ticket/874 (FILTER not applied when there is UNION in the same join group) +- http://trac.bigdata.com/ticket/876 (When I upload a file I want to see the filename.) 
+- http://trac.bigdata.com/ticket/877 (RDF Format selector is invisible) +- http://trac.bigdata.com/ticket/883 (CANCEL Query fails on non-default kb namespace on HA follower.) +- http://trac.bigdata.com/ticket/886 (Provide workaround for bad reverse DNS setups.) +- http://trac.bigdata.com/ticket/887 (BIND is leaving a variable unbound) +- http://trac.bigdata.com/ticket/892 (HAJournalServer does not die if zookeeper is not running) +- http://trac.bigdata.com/ticket/893 (large sparql insert optimization slow?) +- http://trac.bigdata.com/ticket/894 (unnecessary synchronization) +- http://trac.bigdata.com/ticket/895 (stack overflow in populateStatsMap) +- http://trac.bigdata.com/ticket/902 (Update Basic Bigdata Chef Cookbook) +- http://trac.bigdata.com/ticket/904 (AssertionError: PropertyPathNode got to ASTJoinOrderByType.optimizeJoinGroup) +- http://trac.bigdata.com/ticket/905 (unsound combo query optimization: union + filter) +- http://trac.bigdata.com/ticket/906 (DC Prefix Button Appends "</li>") +- http://trac.bigdata.com/ticket/907 (Add a quick-start ant task for the BD Server "ant start") +- http://trac.bigdata.com/ticket/912 (Provide a configurable IAnalyzerFactory) +- http://trac.bigdata.com/ticket/913 (Blueprints API Implementation) +- http://trac.bigdata.com/ticket/914 (Settable timeout on SPARQL Query (REST API)) +- http://trac.bigdata.com/ticket/915 (DefaultAnalyzerFactory issues) +- http://trac.bigdata.com/ticket/920 (Content negotiation orders accept header scores in reverse) +- http://trac.bigdata.com/ticket/939 (NSS does not start from command line: bigdata-war/src not found.) +- http://trac.bigdata.com/ticket/940 (ProxyServlet in web.xml breaks tomcat WAR (HA LBS) + +1.3.0: + +- http://trac.bigdata.com/ticket/530 (Journal HA) +- http://trac.bigdata.com/ticket/621 (Coalesce write cache records and install reads in cache) +- http://trac.bigdata.com/ticket/623 (HA TXS) +- http://trac.bigdata.com/ticket/639 (Remove triple-buffering in RWStore) +- http://trac.bigdata.com/ticket/645 (HA backup) +- http://trac.bigdata.com/ticket/646 (River not compatible with newer 1.6.0 and 1.7.0 JVMs) +- http://trac.bigdata.com/ticket/648 (Add a custom function to use full text index for filtering.) 
+- http://trac.bigdata.com/ticket/651 (RWS test failure) +- http://trac.bigdata.com/ticket/652 (Compress write cache blocks for replication and in HALogs) +- http://trac.bigdata.com/ticket/662 (Latency on followers during commit on leader) +- http://trac.bigdata.com/ticket/663 (Issue with OPTIONAL blocks) +- http://trac.bigdata.com/ticket/664 (RWStore needs post-commit protocol) +- http://trac.bigdata.com/ticket/665 (HA3 LOAD non-responsive with node failure) +- http://trac.bigdata.com/ticket/666 (Occasional CI deadlock in HALogWriter testConcurrentRWWriterReader) +- http://trac.bigdata.com/ticket/670 (Accumulating HALog files cause latency for HA commit) +- http://trac.bigdata.com/ticket/671 (Query on follower fails during UPDATE on leader) +- http://trac.bigdata.com/ticket/673 (DGC in release time consensus protocol causes native thread leak in HAJournalServer at each commit) +- http://trac.bigdata.com/ticket/674 (WCS write cache compaction causes errors in RWS postHACommit()) +- http://trac.bigdata.com/ticket/676 (Bad patterns for timeout computations) +- http://trac.bigdata.com/ticket/677 (HA deadlock under UPDATE + QUERY) +- http://trac.bigdata.com/ticket/678 (DGC Thread and Open File Leaks: sendHALogForWriteSet()) +- http://trac.bigdata.com/ticket/679 (HAJournalServer can not restart due to logically empty log file) +- http://trac.bigdata.com/ticket/681 (HAJournalServer deadlock: pipelineRemove() and getLeaderId()) +- http://trac.bigdata.com/ticket/684 (Optimization with skos altLabel) +- http://trac.bigdata.com/ticket/686 (Consensus protocol does not detect clock skew correctly) +- http://trac.bigdata.com/ticket/687 (HAJournalServer Cache not populated) +- http://trac.bigdata.com/ticket/689 (Missing URL encoding in RemoteRepositoryManager) +- http://trac.bigdata.com/ticket/690 (Error when using the alias "a" instead of rdf:type for a multipart insert) +- http://trac.bigdata.com/ticket/691 (Failed to re-interrupt thread in HAJournalServer) +- http://trac.bigdata.com/ticket/692 (Failed to re-interrupt thread) +- http://trac.bigdata.com/ticket/693 (OneOrMorePath SPARQL property path expression ignored) +- http://trac.bigdata.com/ticket/694 (Transparently cancel update/query in RemoteRepository) +- http://trac.bigdata.com/ticket/695 (HAJournalServer reports "follower" but is in SeekConsensus and is not participating in commits.) 
+- http://trac.bigdata.com/ticket/701 (Problems in BackgroundTupleResult) +- http://trac.bigdata.com/ticket/702 (InvocationTargetException on /namespace call) +- http://trac.bigdata.com/ticket/704 (ask does not return json) +- http://trac.bigdata.com/ticket/705 (Race between QueryEngine.putIfAbsent() and shutdownNow()) +- http://trac.bigdata.com/ticket/706 (MultiSourceSequentialCloseableIterator.nextSource() can throw NPE) +- http://trac.bigdata.com/ticket/707 (BlockingBuffer.close() does not unblock threads) +- http://trac.bigdata.com/ticket/708 (BIND heisenbug - race condition on select query with BIND) +- http://trac.bigdata.com/ticket/711 (sparql protocol: mime type application/sparql-query) +- http://trac.bigdata.com/ticket/712 (SELECT ?x { OPTIONAL { ?x eg:doesNotExist eg:doesNotExist } } incorrect) +- http://trac.bigdata.com/ticket/715 (Interrupt of thread submitting a query for evaluation does not always terminate the AbstractRunningQuery) +- http://trac.bigdata.com/ticket/716 (Verify that IRunningQuery instances (and nested queries) are correctly cancelled when interrupted) +- http://trac.bigdata.com/ticket/718 (HAJournalServer needs to handle ZK client connection loss) +- http://trac.bigdata.com/ticket/720 (HA3 simultaneous service start failure) +- http://trac.bigdata.com/ticket/723 (HA asynchronous tasks must be canceled when invariants are changed) +- http://trac.bigdata.com/ticket/725 (FILTER EXISTS in subselect) +- http://trac.bigdata.com/ticket/726 (Logically empty HALog for committed transaction) +- http://trac.bigdata.com/ticket/727 (DELETE/INSERT fails with OPTIONAL non-matching WHERE) +- http://trac.bigdata.com/ticket/728 (Refactor to create HAClient) +- http://trac.bigdata.com/ticket/729 (ant bundleJar not working) +- http://trac.bigdata.com/ticket/731 (CBD and Update leads to 500 status code) +- http://trac.bigdata.com/ticket/732 (describe statement limit does not work) +- http://trac.bigdata.com/ticket/733 (Range optimizer not optimizing Slice service) +- http://trac.bigdata.com/ticket/734 (two property paths interfere) +- http://trac.bigdata.com/ticket/736 (MIN() malfunction) +- http://trac.bigdata.com/ticket/737 (class cast exception) +- http://trac.bigdata.com/ticket/739 (Inconsistent treatment of bind and optional property path) +- http://trac.bigdata.com/ticket/741 (ctc-striterators should build as independent top-level project (Apache2)) +- http://trac.bigdata.com/ticket/743 (AbstractTripleStore.destroy() does not filter for correct prefix) +- http://trac.bigdata.com/ticket/746 (Assertion error) +- http://trac.bigdata.com/ticket/747 (BOUND bug) +- http://trac.bigdata.com/ticket/748 (incorrect join with subselect renaming vars) +- http://trac.bigdata.com/ticket/754 (Failure to setup SERVICE hook and changeLog for Unisolated and Read/Write connections) +- http://trac.bigdata.com/ticket/755 (Concurrent QuorumActors can interfere leading to failure to progress) +- http://trac.bigdata.com/ticket/756 (order by and group_concat) +- http://trac.bigdata.com/ticket/760 (Code review on 2-phase commit protocol) +- http://trac.bigdata.com/ticket/764 (RESYNC failure (HA)) +- http://trac.bigdata.com/ticket/770 (alpp ordering) +- http://trac.bigdata.com/ticket/772 (Query timeout only checked at operator start/stop.) 
+- http://trac.bigdata.com/ticket/776 (Closed as duplicate of #490) +- http://trac.bigdata.com/ticket/778 (HA Leader fail results in transient problem with allocations on other services) +- http://trac.bigdata.com/ticket/783 (Operator Alerts (HA)) + +1.2.4: + +- http://trac.bigdata.com/ticket/777 (ConcurrentModificationException in ASTComplexOptionalOptimizer) + +1.2.3: + +- http://trac.bigdata.com/ticket/168 (Maven Build) +- http://trac.bigdata.com/ticket/196 (Journal leaks memory). +- http://trac.bigdata.com/ticket/235 (Occasional deadlock in CI runs in com.bigdata.io.writecache.TestAll) +- http://trac.bigdata.com/ticket/312 (CI (mock) quorums deadlock) +- http://trac.bigdata.com/ticket/405 (Optimize hash join for subgroups with no incoming bound vars.) +- http://trac.bigdata.com/ticket/412 (StaticAnalysis#getDefinitelyBound() ignores exogenous variables.) +- http://trac.bigdata.com/ticket/485 (RDFS Plus Profile) +- http://trac.bigdata.com/ticket/495 (SPARQL 1.1 Property Paths) +- http://trac.bigdata.com/ticket/519 (Negative parser tests) +- http://trac.bigdata.com/ticket/531 (SPARQL UPDATE for SOLUTION SETS) +- http://trac.bigdata.com/ticket/535 (Optimize JOIN VARS for Sub-Selects) +- http://trac.bigdata.com/ticket/555 (Support PSOutputStream/InputStream at IRawStore) +- http://trac.bigdata.com/ticket/559 (Use RDFFormat.NQUADS as the format identifier for the NQuads parser) +- http://trac.bigdata.com/ticket/570 (MemoryManager Journal does not implement all methods). +- http://trac.bigdata.com/ticket/575 (NSS Admin API) +- http://trac.bigdata.com/ticket/577 (DESCRIBE with OFFSET/LIMIT needs to use sub-select) +- http://trac.bigdata.com/ticket/578 (Concise Bounded Description (CBD)) +- http://trac.bigdata.com/ticket/579 (CONSTRUCT should use distinct SPO filter) +- http://trac.bigdata.com/ticket/583 (VoID in ServiceDescription) +- http://trac.bigdata.com/ticket/586 (RWStore immedateFree() not removing Checkpoint addresses from the historical index cache.) +- http://trac.bigdata.com/ticket/590 (nxparser fails with uppercase language tag) +- http://trac.bigdata.com/ticket/592 (Optimize RWStore allocator sizes) +- http://trac.bigdata.com/ticket/593 (Ugrade to Sesame 2.6.10) +- http://trac.bigdata.com/ticket/594 (WAR was deployed using TRIPLES rather than QUADS by default) +- http://trac.bigdata.com/ticket/596 (Change web.xml parameter names to be consistent with Jini/River) +- http://trac.bigdata.com/ticket/597 (SPARQL UPDATE LISTENER) +- http://trac.bigdata.com/ticket/598 (B+Tree branching factor and HTree addressBits are confused in their NodeSerializer implementations) +- http://trac.bigdata.com/ticket/599 (BlobIV for blank node : NotMaterializedException) +- http://trac.bigdata.com/ticket/600 (BlobIV collision counter hits false limit.) 
+- http://trac.bigdata.com/ticket/601 (Log uncaught exceptions) +- http://trac.bigdata.com/ticket/602 (RWStore does not discard logged deletes on reset()) +- http://trac.bigdata.com/ticket/607 (History service / index) +- http://trac.bigdata.com/ticket/608 (LOG BlockingBuffer not progressing at INFO or lower level) +- http://trac.bigdata.com/ticket/609 (bigdata-ganglia is required dependency for Journal) +- http://trac.bigdata.com/ticket/611 (The code that processes SPARQL Update has a typo) +- http://trac.bigdata.com/ticket/612 (Bigdata scale-up depends on zookeper) +- http://trac.bigdata.com/ticket/613 (SPARQL UPDATE response inlines large DELETE or INSERT triple graphs) +- http://trac.bigdata.com/ticket/614 (static join optimizer does not get ordering right when multiple tails share vars with ancestry) +- http://trac.bigdata.com/ticket/615 (AST2BOpUtility wraps UNION with an unnecessary hash join) +- http://trac.bigdata.com/ticket/616 (Row store read/update not isolated on Journal) +- http://trac.bigdata.com/ticket/617 (Concurrent KB create fails with "No axioms defined?") +- http://trac.bigdata.com/ticket/618 (DirectBufferPool.poolCapacity maximum of 2GB) +- http://trac.bigdata.com/ticket/619 (RemoteRepository class should use application/x-www-form-urlencoded for large POST requests) +- http://trac.bigdata.com/ticket/620 (UpdateServlet fails to parse MIMEType when doing conneg.) +- http://trac.bigdata.com/ticket/626 (Expose performance counters for read-only indices) +- http://trac.bigdata.com/ticket/627 (Environment variable override for NSS properties file) +- http://trac.bigdata.com/ticket/628 (Create a bigdata-client jar for the NSS REST API) +- http://trac.bigdata.com/ticket/631 (ClassCastException in SIDs mode query) +- http://trac.bigdata.com/ticket/632 (NotMaterializedException when a SERVICE call needs variables that are provided as query input bindings) +- http://trac.bigdata.com/ticket/633 (ClassCastException when binding non-uri values to a variable that occurs in predicate position) +- http://trac.bigdata.com/ticket/638 (Change DEFAULT_MIN_RELEASE_AGE to 1ms) +- http://trac.bigdata.com/ticket/640 (Conditionally rollback() BigdataSailConnection if dirty) +- http://trac.bigdata.com/ticket/642 (Property paths do not work inside of exists/not exists filters) +- http://trac.bigdata.com/ticket/643 (Add web.xml parameters to lock down public NSS end points) +- http://trac.bigdata.com/ticket/644 (Bigdata2Sesame2BindingSetIterator can fail to notice asynchronous close()) +- http://trac.bigdata.com/ticket/650 (Can not POST RDF to a graph using REST API) +- http://trac.bigdata.com/ticket/654 (Rare AssertionError in WriteCache.clearAddrMap()) +- http://trac.bigdata.com/ticket/655 (SPARQL REGEX operator does not perform case-folding correctly for Unicode data) +- http://trac.bigdata.com/ticket/656 (InFactory bug when IN args consist of a single literal) +- http://trac.bigdata.com/ticket/647 (SIDs mode creates unnecessary hash join for GRAPH group patterns) +- http://trac.bigdata.com/ticket/667 (Provide NanoSparqlServer initialization hook) +- http://trac.bigdata.com/ticket/669 (Doubly nested subqueries yield no results with LIMIT) +- http://trac.bigdata.com/ticket/675 (Flush indices in parallel during checkpoint to reduce IO latency) +- http://trac.bigdata.com/ticket/682 (AtomicRowFilter UnsupportedOperationException) + +1.2.2: + +- http://trac.bigdata.com/ticket/586 (RWStore immedateFree() not removing Checkpoint addresses from the historical index cache.) 
+- http://trac.bigdata.com/ticket/602 (RWStore does not discard logged deletes on reset()) +- http://trac.bigdata.com/ticket/603 (Prepare critical maintenance release as branch of 1.2.1) + +1.2.1: + +- http://trac.bigdata.com/ticket/533 (Review materialization for inline IVs) +- http://trac.bigdata.com/ticket/539 (NotMaterializedException with REGEX and Vocab) +- http://trac.bigdata.com/ticket/540 (SPARQL UPDATE using NSS via index.html) +- http://trac.bigdata.com/ticket/541 (MemoryManaged backed Journal mode) +- http://trac.bigdata.com/ticket/546 (Index cache for Journal) +- http://trac.bigdata.com/ticket/549 (BTree can not be cast to Name2Addr (MemStore recycler)) +- http://trac.bigdata.com/ticket/550 (NPE in Leaf.getKey() : root cause was user error) +- http://trac.bigdata.com/ticket/558 (SPARQL INSERT not working in same request after INSERT DATA) +- http://trac.bigdata.com/ticket/562 (Sub-select in INSERT cause NPE in UpdateExprBuilder) +- http://trac.bigdata.com/ticket/563 (DISTINCT ORDER BY) +- http://trac.bigdata.com/ticket/567 (Failure to set cached value on IV results in incorrect behavior for complex UPDATE operation) +- http://trac.bigdata.com/ticket/568 (DELETE WHERE fails with Java AssertionError) +- http://trac.bigdata.com/ticket/569 (LOAD-CREATE-LOAD using virgin journal fails with "Graph exists" exception) +- http://trac.bigdata.com/ticket/571 (DELETE/INSERT WHERE handling of blank nodes) +- http://trac.bigdata.com/ticket/573 (NullPointerException when attempting to INSERT DATA containing a blank node) + +1.2.0: (*) + +- http://trac.bigdata.com/ticket/92 (Monitoring webapp) +- http://trac.bigdata.com/ticket/267 (Support evaluation of 3rd party operators) +- http://trac.bigdata.com/ticket/337 (Compact and efficient movement of binding sets between nodes.) 
+- http://trac.bigdata.com/ticket/433 (Cluster leaks threads under read-only index operations: DGC thread leak) +- http://trac.bigdata.com/ticket/437 (Thread-local cache combined with unbounded thread pools causes effective memory leak: termCache memory leak & thread-local buffers) +- http://trac.bigdata.com/ticket/438 (KeyBeforePartitionException on cluster) +- http://trac.bigdata.com/ticket/439 (Class loader problem) +- http://trac.bigdata.com/ticket/441 (Ganglia integration) +- http://trac.bigdata.com/ticket/443 (Logger for RWStore transaction service and recycler) +- http://trac.bigdata.com/ticket/444 (SPARQL query can fail to notice when IRunningQuery.isDone() on cluster) +- http://trac.bigdata.com/ticket/445 (RWStore does not track tx release correctly) +- http://trac.bigdata.com/ticket/446 (HTTP Repostory broken with bigdata 1.1.0) +- http://trac.bigdata.com/ticket/448 (SPARQL 1.1 UPDATE) +- http://trac.bigdata.com/ticket/449 (SPARQL 1.1 Federation extension) +- http://trac.bigdata.com/ticket/451 (Serialization error in SIDs mode on cluster) +- http://trac.bigdata.com/ticket/454 (Global Row Store Read on Cluster uses Tx) +- http://trac.bigdata.com/ticket/456 (IExtension implementations do point lookups on lexicon) +- http://trac.bigdata.com/ticket/457 ("No such index" on cluster under concurrent query workload) +- http://trac.bigdata.com/ticket/458 (Java level deadlock in DS) +- http://trac.bigdata.com/ticket/460 (Uncaught interrupt resolving RDF terms) +- http://trac.bigdata.com/ticket/461 (KeyAfterPartitionException / KeyBeforePartitionException on cluster) +- http://trac.bigdata.com/ticket/463 (NoSuchVocabularyItem with LUBMVocabulary for DerivedNumericsExtension) +- http://trac.bigdata.com/ticket/464 (Query statistics do not update correctly on cluster) +- http://trac.bigdata.com/ticket/465 (Too many GRS reads on cluster) +- http://trac.bigdata.com/ticket/469 (Sail does not flush assertion buffers before query) +- http://trac.bigdata.com/ticket/472 (acceptTaskService pool size on cluster) +- http://trac.bigdata.com/ticket/475 (Optimize serialization for query messages on cluster) +- http://trac.bigdata.com/ticket/476 (Test suite for writeCheckpoint() and recycling for BTree/HTree) +- http://trac.bigdata.com/ticket/478 (Cluster does not map input solution(s) across shards) +- http://trac.bigdata.com/ticket/480 (Error releasing deferred frees using 1.0.6 against a 1.0.4 journal) +- http://trac.bigdata.com/ticket/481 (PhysicalAddressResolutionException against 1.0.6) +- http://trac.bigdata.com/ticket/482 (RWStore reset() should be thread-safe for concurrent readers) +- http://trac.bigdata.com/ticket/484 (Java API for NanoSparqlServer REST API) +- http://trac.bigdata.com/ticket/491 (AbstractTripleStore.destroy() does not clear the locator cache) +- http://trac.bigdata.com/ticket/492 (Empty chunk in ThickChunkMessage (cluster)) +- http://trac.bigdata.com/ticket/493 (Virtual Graphs) +- http://trac.bigdata.com/ticket/496 (Sesame 2.6.3) +- http://trac.bigdata.com/ticket/497 (Implement STRBEFORE, STRAFTER, and REPLACE) +- http://trac.bigdata.com/ticket/498 (Bring bigdata RDF/XML parser up to openrdf 2.6.3.) 
+- http://trac.bigdata.com/ticket/500 (SPARQL 1.1 Service Description) +- http://www.openrdf.org/issues/browse/SES-884 (Aggregation with an solution set as input should produce an empty solution as output) +- http://www.openrdf.org/issues/browse/SES-862 (Incorrect error handling for SPARQL aggregation; fix in 2.6.1) +- http://www.openrdf.org/issues/browse/SES-873 (Order the same Blank Nodes together in ORDER BY) +- http://trac.bigdata.com/ticket/501 (SPARQL 1.1 BINDINGS are ignored) +- http://trac.bigdata.com/ticket/503 (Bigdata2Sesame2BindingSetIterator throws QueryEvaluationException were it should throw NoSuchElementException) +- http://trac.bigdata.com/ticket/504 (UNION with Empty Group Pattern) +- http://trac.bigdata.com/ticket/505 (Exception when using SPARQL sort & statement identifiers) +- http://trac.bigdata.com/ticket/506 (Load, closure and query performance in 1.1.x versus 1.0.x) +- http://trac.bigdata.com/ticket/508 (LIMIT causes hash join utility to log errors) +- http://trac.bigdata.com/ticket/513 (Expose the LexiconConfiguration to Function BOPs) +- http://trac.bigdata.com/ticket/515 (Query with two "FILTER NOT EXISTS" expressions returns no results) +- http://trac.bigdata.com/ticket/516 (REGEXBOp should cache the Pattern when it is a constant) +- http://trac.bigdata.com/ticket/517 (Java 7 Compiler Compatibility) +- http://trac.bigdata.com/ticket/518 (Review function bop subclass hierarchy, optimize datatype bop, etc.) +- http://trac.bigdata.com/ticket/520 (CONSTRUCT WHERE shortcut) +- http://trac.bigdata.com/ticket/521 (Incremental materialization of Tuple and Graph query results) +- http://trac.bigdata.com/ticket/525 (Modify the IChangeLog interface to support multiple agents) +- http://trac.bigdata.com/ticket/527 (Expose timestamp of LexiconRelation to function bops) +- http://trac.bigdata.com/ticket/532 (ClassCastException during hash join (can not be cast to TermId)) +- http://trac.bigdata.com/ticket/533 (Review materialization for inline IVs) +- http://trac.bigdata.com/ticket/534 (BSBM BI Q5 error using MERGE JOIN) + +1.1.0 (*) + + - http://trac.bigdata.com/ticket/23 (Lexicon joins) + - http://trac.bigdata.com/ticket/109 (Store large literals as "blobs") + - http://trac.bigdata.com/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) + - http://trac.bigdata.com/ticket/203 (Implement an persistence capable hash table to support analytic query) + - http://trac.bigdata.com/ticket/209 (AccessPath should visit binding sets rather than elements for high level query.) + - http://trac.bigdata.com/ticket/227 (SliceOp appears to be necessary when operator plan should suffice without) + - http://trac.bigdata.com/ticket/232 (Bottom-up evaluation semantics). + - http://trac.bigdata.com/ticket/246 (Derived xsd numeric data types must be inlined as extension types.) + - http://trac.bigdata.com/ticket/254 (Revisit pruning of intermediate variable bindings during query execution) + - http://trac.bigdata.com/ticket/261 (Lift conditions out of subqueries.) + - http://trac.bigdata.com/ticket/300 (Native ORDER BY) + - http://trac.bigdata.com/ticket/324 (Inline predeclared URIs and namespaces in 2-3 bytes) + - http://trac.bigdata.com/ticket/330 (NanoSparqlServer does not locate "html" resources when run from jar) + - http://trac.bigdata.com/ticket/334 (Support inlining of unicode data in the statement indices.) 
+ - http://trac.bigdata.com/ticket/364 (Scalable default graph evaluation) + - http://trac.bigdata.com/ticket/368 (Prune variable bindings during query evaluation) + - http://trac.bigdata.com/ticket/370 (Direct translation of openrdf AST to bigdata AST) + - http://trac.bigdata.com/ticket/373 (Fix StrBOp and other IValueExpressions) + - http://trac.bigdata.com/ticket/377 (Optimize OPTIONALs with multiple statement patterns.) + - http://trac.bigdata.com/ticket/380 (Native SPARQL evaluation on cluster) + - http://trac.bigdata.com/ticket/387 (Cluster does not compute closure) + - http://trac.bigdata.com/ticket/395 (HTree hash join performance) + - http://trac.bigdata.com/ticket/401 (inline xsd:unsigned datatypes) + - http://trac.bigdata.com/ticket/408 (xsd:string cast fails for non-numeric data) + - http://trac.bigdata.com/ticket/421 (New query hints model.) + - http://trac.bigdata.com/ticket/431 (Use of read-only tx per query defeats cache on cluster) + +1.0.3 + + - http://trac.bigdata.com/ticket/217 (BTreeCounters does not track bytes released) + - http://trac.bigdata.com/ticket/269 (Refactor performance counters using accessor interface) + - http://trac.bigdata.com/ticket/329 (B+Tree should delete bloom filter when it is disabled.) + - http://trac.bigdata.com/ticket/372 (RWStore does not prune the CommitRecordIndex) + - http://trac.bigdata.com/ticket/375 (Persistent memory leaks (RWStore/DISK)) + - http://trac.bigdata.com/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException) + - http://trac.bigdata.com/ticket/391 (Release age advanced on WORM mode journal) + - http://trac.bigdata.com/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer) + - http://trac.bigdata.com/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API) + - http://trac.bigdata.com/ticket/394 (log4j configuration error message in WAR deployment) + - http://trac.bigdata.com/ticket/399 (Add a fast range count method to the REST API) + - http://trac.bigdata.com/ticket/422 (Support temp triple store wrapped by a BigdataSail) + - http://trac.bigdata.com/ticket/424 (NQuads support for NanoSparqlServer) + - http://trac.bigdata.com/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out) + - http://trac.bigdata.com/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out) + - http://trac.bigdata.com/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit) + - http://trac.bigdata.com/ticket/435 (Address is 0L) + - http://trac.bigdata.com/ticket/436 (TestMROWTransactions failure in CI) + +1.0.2 + + - http://trac.bigdata.com/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.) + - http://trac.bigdata.com/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) + - http://trac.bigdata.com/ticket/356 (Query not terminated by error.) + - http://trac.bigdata.com/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://trac.bigdata.com/ticket/361 (IRunningQuery not closed promptly.) + - http://trac.bigdata.com/ticket/371 (DataLoader fails to load resources available from the classpath.) + - http://trac.bigdata.com/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.) + - http://trac.bigdata.com/ticket/378 (ClosedByInterruptException during heavy query mix.) + - http://trac.bigdata.com/ticket/379 (NotSerializableException for SPOAccessPath.) 
+ - http://trac.bigdata.com/ticket/382 (Change dependencies to Apache River 2.2.0) + +1.0.1 (*) + + - http://trac.bigdata.com/ticket/107 (Unicode clean schema names in the sparse row store). + - http://trac.bigdata.com/ticket/124 (TermIdEncoder should use more bits for scale-out). + - http://trac.bigdata.com/ticket/225 (OSX requires specialized performance counter collection classes). + - http://trac.bigdata.com/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used). + - http://trac.bigdata.com/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance). + - http://trac.bigdata.com/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)). + - http://trac.bigdata.com/ticket/352 (ClassCastException when querying with binding-values that are not known to the database). + - http://trac.bigdata.com/ticket/353 (UnsupportedOperatorException for some SPARQL queries). + - http://trac.bigdata.com/ticket/355 (Query failure when comparing with non materialized value). + - http://trac.bigdata.com/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".) + - http://trac.bigdata.com/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://trac.bigdata.com/ticket/362 (log4j - slf4j bridge.) + +For more information about bigdata(R), please see the following links: + +[1] http://wiki.bigdata.com/wiki/index.php/Main_Page +[2] http://wiki.bigdata.com/wiki/index.php/GettingStarted +[3] http://wiki.bigdata.com/wiki/index.php/Roadmap +[4] http://www.bigdata.com/bigdata/docs/api/ +[5] http://sourceforge.net/projects/bigdata/ +[6] http://www.bigdata.com/blog +[7] http://www.systap.com/bigdata.htm +[8] http://sourceforge.net/projects/bigdata/files/bigdata/ +[9] http://wiki.bigdata.com/wiki/index.php/DataMigration +[10] http://wiki.bigdata.com/wiki/index.php/HAJournalServer +[11] http://www.bigdata.com/whitepapers/reifSPARQL.pdf +[12] http://wiki.bigdata.com/wiki/index.php/RDF_GAS_API + +About bigdata: + +Bigdata(R) is a horizontally-scaled, general purpose storage and computing fabric for ordered data (B+Trees), designed to operate on either a single server or a cluster of commodity hardware. Bigdata(R) uses dynamically partitioned key-range shards in order to remove any realistic scaling limits - in principle, bigdata(R) may be deployed on 10s, 100s, or even thousands of machines and new capacity may be added incrementally without requiring the full reload of all data. The bigdata(R) RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), and datum level provenance. This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
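For the single-machine Journal deployment recommended in the notes above, an embedded setup is property-driven: point the journal at a backing file and wrap the BigdataSail in a Sesame repository. The following is a rough sketch under the 1.4.0 API; the file path is a placeholder, option defaults vary by release, and the wiki links [1,2] carry the authoritative getting-started code.

import java.util.Properties;

import com.bigdata.journal.Options;
import com.bigdata.rdf.sail.BigdataSail;
import com.bigdata.rdf.sail.BigdataSailRepository;

public class EmbeddedJournal {

    public static void main(final String[] args) throws Exception {

        final Properties props = new Properties();

        // Backing file for the single-machine Journal mode (placeholder path).
        props.setProperty(Options.FILE, "bigdata.jnl");

        final BigdataSail sail = new BigdataSail(props);
        final BigdataSailRepository repo = new BigdataSailRepository(sail);
        repo.initialize();
        try {
            // ... obtain a connection, load RDF, evaluate SPARQL ...
        } finally {
            repo.shutDown();
        }
    }
}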
From: <tho...@us...> - 2014-11-15 20:36:28
|
Revision: 8711 http://sourceforge.net/p/bigdata/code/8711
Author: thompsonbry
Date: 2014-11-15 20:36:20 +0000 (Sat, 15 Nov 2014)
Log Message:
-----------
A few changes missed in the last commit to SVN (deletes of older jars).
Removed Paths:
-------------
    branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/lib/nxparser-1.2.3.jar
    branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/lib/openrdf-sesame-2.6.10-onejar.jar
    branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/lib/sesame-rio-testsuite-2.6.10.jar
    branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/nquads/
    branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/lib/sesame-sparql-testsuite-2.6.10.jar
    branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/lib/sesame-store-testsuite-2.6.10.jar

Deleted: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/lib/nxparser-1.2.3.jar
===================================================================
(Binary files differ)

Deleted: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/lib/openrdf-sesame-2.6.10-onejar.jar
===================================================================
(Binary files differ)

Deleted: branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/lib/sesame-rio-testsuite-2.6.10.jar
===================================================================
(Binary files differ)

Deleted: branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/lib/sesame-sparql-testsuite-2.6.10.jar
===================================================================
(Binary files differ)

Deleted: branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/lib/sesame-store-testsuite-2.6.10.jar
===================================================================
(Binary files differ)

This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2014-11-15 20:33:55
Revision: 8710 http://sourceforge.net/p/bigdata/code/8710 Author: thompsonbry Date: 2014-11-15 20:33:32 +0000 (Sat, 15 Nov 2014) Log Message: ----------- Merge from git master to branches/BIGDATA_RELEASE_1_4_0 for CI leading up to the 1.4.0 release. See #1042 Modified Paths: -------------- branches/BIGDATA_RELEASE_1_4_0/.classpath branches/BIGDATA_RELEASE_1_4_0/.project branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/Depends.java branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/bop/Var.java branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/bop/join/HashIndexOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/bop/join/NestedLoopJoinOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/bop/join/PipelineJoin.java branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/htree/DirectoryPage.java branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/relation/accesspath/AccessPath.java branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/service/DataService.java branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/striterator/AbstractKeyOrder.java branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/util/config/LogUtil.java branches/BIGDATA_RELEASE_1_4_0/bigdata/src/resources/logging/log4j-dev.properties branches/BIGDATA_RELEASE_1_4_0/bigdata/src/resources/logging/log4j.properties branches/BIGDATA_RELEASE_1_4_0/bigdata/src/test/com/bigdata/bop/join/TestAll.java branches/BIGDATA_RELEASE_1_4_0/bigdata/src/test/com/bigdata/btree/AbstractBTreeTestCase.java branches/BIGDATA_RELEASE_1_4_0/bigdata/src/test/com/bigdata/journal/ProxyTestCase.java branches/BIGDATA_RELEASE_1_4_0/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraph.java branches/BIGDATA_RELEASE_1_4_0/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraphEmbedded.java branches/BIGDATA_RELEASE_1_4_0/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraphFactory.java branches/BIGDATA_RELEASE_1_4_0/bigdata-gas/src/java/com/bigdata/rdf/graph/util/AbstractGraphFixture.java branches/BIGDATA_RELEASE_1_4_0/bigdata-gom/src/test/com/bigdata/gom/TestAll.java branches/BIGDATA_RELEASE_1_4_0/bigdata-gom/src/test/com/bigdata/gom/TestNumericBNodes.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/ServiceProviderHook.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/ConcatBOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/DatatypeBOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/DateBOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/DigestBOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/IriBOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/IsNumericBOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/NumericBOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/StrAfterBOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/StrBeforeBOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/StrdtBOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/StrlangBOp.java 
branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/AbstractIV.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/uri/IPv4AddrIV.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/model/BNodeContextFactory.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataURIImpl.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValueFactory.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValueFactoryImpl.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/RDFParserOptions.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParser.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParserFactory.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriter.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/ntriples/BigdataNTriplesParser.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleParser.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/turtle/BigdataTurtleWriter.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/AssignmentNode.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/BindingsClause.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/CompiledSolutionSetStats.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/FunctionNode.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/FunctionRegistry.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/GroupMemberValueExpressionNodeBase.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/QueryBase.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/QueryHints.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/QueryRoot.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/StatementPatternNode.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/StaticAnalysis.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpJoins.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdate.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdateContext.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUtility.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/ASTConstructIterator.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/ASTEvalHelper.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/ASTBottomUpOptimizer.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/ASTComplexOptionalOptimizer.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/ASTGraphGroupOptimizer.java 
branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/ASTJoinOrderByTypeOptimizer.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/DefaultOptimizerList.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/spo/DistinctTermAdvancer.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOKeyOrder.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPORelation.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/resources/service-providers/META-INF/services/org.openrdf.query.resultio.TupleQueryResultWriterFactory branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/resources/service-providers/META-INF/services/org.openrdf.rio.RDFParserFactory branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/resources/service-providers/META-INF/services/org.openrdf.rio.RDFWriterFactory branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/bop/rdf/filter/TestNativeDistinctFilter.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/internal/TestUnsignedIVs.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/internal/constraints/TestStrAfterBOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestAll_RIO.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/AbstractSolutionSetStatsTestCase.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestAll.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestNegation.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/service/TestRemoteSparql10QueryBuilder.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/service/TestRemoteSparql11QueryBuilder.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/service/TestRemoteSparqlBuilderFactory.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/optimizers/AbstractOptimizerTestCase.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/optimizers/TestASTEmptyGroupOptimizer.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/optimizers/TestAll.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/spo/TestSPOKeyOrder.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/store/TestStatementIdentifiers.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/store/TestTripleStore.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailUpdate.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/CreateKBTask.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/remote/BigdataSailRemoteRepositoryConnection.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ASTVisitorBase.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/BaseDeclProcessor.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/BigdataExprBuilder.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/BlankNodeVarProcessor.java 
branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/GroupGraphPatternBuilder.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/PrefixDeclProcessor.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/UpdateExprBuilder.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ValueExprBuilder.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/SyntaxTreeBuilder.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/SyntaxTreeBuilderConstants.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/SyntaxTreeBuilderTokenManager.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/SyntaxTreeBuilderTreeConstants.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/SyntaxTreeBuilderVisitor.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/sparql.jj branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/sparql.jjt branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/ConnegUtil.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/BackgroundTupleResult.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/RemoteRepository.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/ProxyBigdataSailTestCase.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestProvenanceQuery.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/sparql/AbstractBigdataExprBuilderTestCase.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/sparql/Bigdata2ASTSPARQL11SyntaxTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/sparql/Bigdata2ASTSPARQLSyntaxTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/sparql/TestBigdataExprBuilder.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/sparql/TestBindingsClause.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/sparql/TestGroupGraphPatternBuilder.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/sparql/TestSubqueryPatterns.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/sparql/TestTriplePatternBuilder.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/sparql/TestUpdateExprBuilder.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/sparql/TestValueExprBuilder.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/sparql/TestVirtualGraphs.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataComplexSparqlQueryTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataConnectionTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataEmbeddedFederationSparqlTest.java 
branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataFederationSparqlTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataSPARQLUpdateConformanceTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataSPARQLUpdateTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataSparqlFullRWTxTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataSparqlTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataStoreTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/AbstractNamedGraphUpdateTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/AbstractSimpleInsertTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/ProxySuiteHelper.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/ProxyTestCase.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestAll.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestAskJsonTrac704.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestBigdataSailRemoteRepository.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestFederatedQuery.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestHelper.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlClient.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestPostNotURLEncoded.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestRelease123Protocol.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestSparqlUpdate.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/query/parser/sparql/ComplexSPARQLQueryTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/query/parser/sparql/SPARQLUpdateTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/sail/RDFStoreTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-war/src/html/js/workbench.js branches/BIGDATA_RELEASE_1_4_0/build.properties branches/BIGDATA_RELEASE_1_4_0/build.xml branches/BIGDATA_RELEASE_1_4_0/pom.xml Added Paths: ----------- branches/BIGDATA_RELEASE_1_4_0/.settings/org.eclipse.core.resources.prefs branches/BIGDATA_RELEASE_1_4_0/bigdata/LEGAL/hamcrest-license.txt branches/BIGDATA_RELEASE_1_4_0/bigdata/lib/hamcrest-core-1.3.jar branches/BIGDATA_RELEASE_1_4_0/bigdata/lib/junit-4.11.jar branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/bop/join/DistinctTermScanOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/bop/join/FastRangeCountOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata/src/test/com/bigdata/bop/join/TestDistinctTermScanOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata/src/test/com/bigdata/bop/join/TestFastRangeCountOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraphListener.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/lib/openrdf-sesame-2.7.13-onejar.jar branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/lib/sesame-rio-testsuite-2.7.13.jar 
branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/NowBOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/UUIDBOp.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParserForConstruct.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONParserForConstructFactory.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriterFactory.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriterForConstruct.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriterForConstructFactory.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/SPARQLJSONParserBase.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/SPARQLJSONWriterBase.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/ASTDistinctTermScanOptimizer.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/ASTFastRangeCountOptimizer.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/ASTValuesOptimizer.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/resources/service-providers/META-INF/services/org.openrdf.query.resultio.TupleQueryResultParserFactory branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestDistinctTermScanOptimizer.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestFastRangeCountOptimizer.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_quads_01.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_quads_01.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_quads_01.trig branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_quads_01b.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_quads_01b.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_quads_01c.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_quads_01c.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_quads_02.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_quads_02.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_quads_03.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_quads_03.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_quads_correctRejection_01.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_quads_correctRejection_01.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_quads_correctRejection_02.rq 
branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_quads_correctRejection_02.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_triples_01.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_triples_01.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_triples_01.ttl branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_triples_02.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_triples_02.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_triples_03.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_triples_03.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_triples_correctRejection_01.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_triples_correctRejection_01.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_triples_correctRejection_01.ttl branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_triples_subQuery_01.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/distinctTermScan_triples_subQuery_01.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/fastRangeCount_quads_01.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/fastRangeCount_quads_01.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/fastRangeCount_quads_01.trig branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/fastRangeCount_quads_02.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/fastRangeCount_quads_02.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/fastRangeCount_quads_03.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/fastRangeCount_quads_03.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/fastRangeCount_quads_04.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/fastRangeCount_quads_04.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/fastRangeCount_quads_05.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/fastRangeCount_quads_05.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/fastRangeCount_triples_01.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/fastRangeCount_triples_01.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/fastRangeCount_triples_01.ttl branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/fastRangeCount_triples_02.rq branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/fastRangeCount_triples_02.srx branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/optimizers/TestASTDistinctTermScanOptimizer.java 
branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/optimizers/TestASTFastRangeCountOptimizer.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/lib/sesame-sparql-testsuite-2.7.13.jar branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/lib/sesame-store-testsuite-2.7.13.jar branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/ASTInlineData.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/ASTSTRUUID.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/ASTUUID.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/ASTUnparsedQuadDataBlock.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/org/openrdf/query/parser/sparql/ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/org/openrdf/query/parser/sparql/manifest/ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/java/org/openrdf/query/parser/sparql/manifest/SPARQL11ManifestTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/model/ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/model/util/ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/model/util/ModelUtil.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/query/parser/sparql/manifest/ branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/query/parser/sparql/manifest/SPARQLQueryTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/query/parser/sparql/manifest/SPARQLUpdateConformanceTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/repository/RepositoryConnectionTest.java Removed Paths: ------------- branches/BIGDATA_RELEASE_1_4_0/bigdata/lib/junit-3.8.1.jar branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriterFactoryForConstruct.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/json/BigdataSPARQLResultsJSONWriterFactoryForSelect.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/nquads/ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/rdfxml/ branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestRDFXMLInterchangeWithStatementIdentifiers.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/rdfxml/RDFWriterTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/rdfxml/RDFXMLParserTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/rdfxml/RDFXMLParserTestCase.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/rdfxml/RDFXMLWriterTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/rdfxml/RDFXMLWriterTestCase.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/rdfxml/TestAll.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/rdfxml/TestRDFXMLParserFactory.java branches/BIGDATA_RELEASE_1_4_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/rdfxml/TestRDFXMLWriterFactory.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/query/parser/sparql/EarlReport.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/query/parser/sparql/SPARQL11SyntaxTest.java branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/query/parser/sparql/SPARQLASTQueryTest.java 
branches/BIGDATA_RELEASE_1_4_0/bigdata-sails/src/test/org/openrdf/query/parser/sparql/SPARQLQueryTest.java Modified: branches/BIGDATA_RELEASE_1_4_0/.classpath =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/.classpath 2014-11-15 16:46:45 UTC (rev 8709) +++ branches/BIGDATA_RELEASE_1_4_0/.classpath 2014-11-15 20:33:32 UTC (rev 8710) @@ -40,7 +40,6 @@ <classpathentry exported="true" kind="lib" path="bigdata/lib/unimi/colt-1.2.0.jar"/> <classpathentry exported="true" kind="lib" path="bigdata/lib/icu/icu4j-4.8.jar"/> <classpathentry exported="true" kind="lib" path="bigdata/lib/icu/icu4j-charset-4.8.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/junit-3.8.1.jar" sourcepath="/root/.m2/repository/junit/junit/3.8.1/junit-3.8.1-sources.jar"/> <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/browser.jar"/> <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/classserver.jar"/> <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/fiddler.jar"/> @@ -60,8 +59,8 @@ <classpathentry exported="true" kind="lib" path="bigdata/lib/unimi/fastutil-5.1.5.jar"/> <classpathentry exported="true" kind="lib" path="bigdata/lib/lucene/lucene-analyzers-3.0.0.jar"/> <classpathentry exported="true" kind="lib" path="bigdata/lib/lucene/lucene-core-3.0.0.jar"/> - <classpathentry kind="lib" path="bigdata/lib/jetty/jetty-jmx-9.1.4.v20140401.jar"/> - <classpathentry kind="lib" path="bigdata/lib/jetty/jetty-jndi-9.1.4.v20140401.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-jmx-9.1.4.v20140401.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-jndi-9.1.4.v20140401.jar"/> <classpathentry exported="true" kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/> <classpathentry exported="true" kind="lib" path="bigdata/lib/high-scale-lib-v1.1.2.jar"/> <classpathentry exported="true" kind="lib" path="bigdata/lib/junit-ext-1.1-b3-dev.jar"/> @@ -75,11 +74,6 @@ <classpathentry exported="true" kind="lib" path="bigdata-sails/lib/httpcomponents/httpmime-4.1.3.jar"/> <classpathentry exported="true" kind="lib" path="bigdata-sails/lib/httpcomponents/commons-io-2.1.jar"/> <classpathentry exported="true" kind="lib" path="bigdata/lib/apache/log4j-1.2.17.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/openrdf-sesame-2.6.10-onejar.jar" sourcepath="/Users/bryan/Documents/workspace/org.openrdf.sesame-2.6.10"/> - <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/sesame-rio-testsuite-2.6.10.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-sails/lib/sesame-sparql-testsuite-2.6.10.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-sails/lib/sesame-store-testsuite-2.6.10.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/nxparser-1.2.3.jar"/> <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-client-9.1.4.v20140401.jar"/> <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-continuation-9.1.4.v20140401.jar"/> <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-http-9.1.4.v20140401.jar"/> @@ -93,11 +87,17 @@ <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-webapp-9.1.4.v20140401.jar" sourcepath="/Users/bryan/Downloads/org.eclipse.jetty.project-jetty-9.1.4.v20140401"/> <classpathentry exported="true" kind="lib" 
path="bigdata/lib/jetty/jetty-xml-9.1.4.v20140401.jar"/> <classpathentry exported="true" kind="lib" path="bigdata-sails/lib/jackson-core-2.2.3.jar"/> - <classpathentry kind="lib" path="bigdata-blueprints/lib/jettison-1.3.3.jar"/> - <classpathentry kind="lib" path="bigdata-blueprints/lib/blueprints-core-2.5.0.jar"/> - <classpathentry kind="lib" path="bigdata-blueprints/lib/blueprints-test-2.5.0.jar"/> - <classpathentry kind="lib" path="bigdata-blueprints/lib/rexster-core-2.5.0.jar"/> - <classpathentry kind="lib" path="bigdata-blueprints/lib/commons-configuration-1.10.jar"/> - <classpathentry kind="lib" path="bigdata-sails/lib/httpcomponents/commons-fileupload-1.3.1.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-blueprints/lib/jettison-1.3.3.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-blueprints/lib/blueprints-core-2.5.0.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-blueprints/lib/blueprints-test-2.5.0.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-blueprints/lib/rexster-core-2.5.0.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-blueprints/lib/commons-configuration-1.10.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/junit-4.11.jar" sourcepath="/Users/mikepersonick/.m2/repository/junit/junit/4.11/junit-4.11-sources.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/hamcrest-core-1.3.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-sails/lib/httpcomponents/commons-fileupload-1.3.1.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/sesame-rio-testsuite-2.7.13.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-sails/lib/sesame-sparql-testsuite-2.7.13.jar" sourcepath="/Users/mikepersonick/.m2/repository/org/openrdf/sesame/sesame-sparql-testsuite/2.7.13/sesame-sparql-testsuite-2.7.13-sources.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-sails/lib/sesame-store-testsuite-2.7.13.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/openrdf-sesame-2.7.13-onejar.jar"/> <classpathentry kind="output" path="bin"/> </classpath> Modified: branches/BIGDATA_RELEASE_1_4_0/.project =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/.project 2014-11-15 16:46:45 UTC (rev 8709) +++ branches/BIGDATA_RELEASE_1_4_0/.project 2014-11-15 20:33:32 UTC (rev 8710) @@ -1,6 +1,6 @@ <?xml version="1.0" encoding="UTF-8"?> <projectDescription> - <name>BIGDATA_RELEASE_1_1_0</name> + <name>bigdata</name> <comment></comment> <projects> </projects> Added: branches/BIGDATA_RELEASE_1_4_0/.settings/org.eclipse.core.resources.prefs =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/.settings/org.eclipse.core.resources.prefs (rev 0) +++ branches/BIGDATA_RELEASE_1_4_0/.settings/org.eclipse.core.resources.prefs 2014-11-15 20:33:32 UTC (rev 8710) @@ -0,0 +1,2 @@ +eclipse.preferences.version=1 +encoding//bigdata-sails/src/test/org/openrdf/repository/RepositoryConnectionTest.java=UTF-8 Added: branches/BIGDATA_RELEASE_1_4_0/bigdata/LEGAL/hamcrest-license.txt =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata/LEGAL/hamcrest-license.txt (rev 0) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata/LEGAL/hamcrest-license.txt 2014-11-15 20:33:32 UTC (rev 8710) @@ -0,0 +1,27 @@ +BSD License + +Copyright (c) 2000-2006, www.hamcrest.org +All rights reserved. 
+ +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + +Redistributions of source code must retain the above copyright notice, this list of +conditions and the following disclaimer. Redistributions in binary form must reproduce +the above copyright notice, this list of conditions and the following disclaimer in +the documentation and/or other materials provided with the distribution. + +Neither the name of Hamcrest nor the names of its contributors may be used to endorse +or promote products derived from this software without specific prior written +permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY +EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES +OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT +SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED +TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR +BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY +WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH +DAMAGE. \ No newline at end of file Added: branches/BIGDATA_RELEASE_1_4_0/bigdata/lib/hamcrest-core-1.3.jar =================================================================== (Binary files differ) Index: branches/BIGDATA_RELEASE_1_4_0/bigdata/lib/hamcrest-core-1.3.jar =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata/lib/hamcrest-core-1.3.jar 2014-11-15 16:46:45 UTC (rev 8709) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata/lib/hamcrest-core-1.3.jar 2014-11-15 20:33:32 UTC (rev 8710) Property changes on: branches/BIGDATA_RELEASE_1_4_0/bigdata/lib/hamcrest-core-1.3.jar ___________________________________________________________________ Added: svn:mime-type ## -0,0 +1 ## +application/octet-stream \ No newline at end of property Deleted: branches/BIGDATA_RELEASE_1_4_0/bigdata/lib/junit-3.8.1.jar =================================================================== (Binary files differ) Added: branches/BIGDATA_RELEASE_1_4_0/bigdata/lib/junit-4.11.jar =================================================================== (Binary files differ) Index: branches/BIGDATA_RELEASE_1_4_0/bigdata/lib/junit-4.11.jar =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata/lib/junit-4.11.jar 2014-11-15 16:46:45 UTC (rev 8709) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata/lib/junit-4.11.jar 2014-11-15 20:33:32 UTC (rev 8710) Property changes on: branches/BIGDATA_RELEASE_1_4_0/bigdata/lib/junit-4.11.jar ___________________________________________________________________ Added: svn:mime-type ## -0,0 +1 ## +application/octet-stream \ No newline at end of property Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/Depends.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/Depends.java 2014-11-15 16:46:45 UTC (rev 8709) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/Depends.java 2014-11-15 20:33:32 UTC (rev 8710) @@ -67,6 +67,7 @@ @SuppressWarnings("unused") private static class OrderByLicense 
implements Comparator<Dependency> { + @Override public int compare(Dependency o1, Dependency o2) { return o1.licenseURL().compareTo(o2.licenseURL()); } @@ -244,9 +245,9 @@ "http://site.icu-project.org/", "http://source.icu-project.org/repos/icu/icu/trunk/license.html"); - private final static Dep nxparser = new Dep("nxparser", - "http://sw.deri.org/2006/08/nxparser/", - "http://sw.deri.org/2006/08/nxparser/license.txt"); +// private final static Dep nxparser = new Dep("nxparser", +// "http://sw.deri.org/2006/08/nxparser/", +// "http://sw.deri.org/2006/08/nxparser/license.txt"); private final static Dep nanohttp = new Dep("nanohttp", "http://elonen.iki.fi/code/nanohttpd/", @@ -281,6 +282,12 @@ "https://github.com/tinkerpop/rexster", "https://github.com/tinkerpop/rexster/blob/master/LICENSE.txt"); + // Note: This is a test-only dependency at this time. + @SuppressWarnings("unused") + private final static Dep hamcrestCore = new Dep("hamcrest-core", + "https://code.google.com/p/hamcrest/", + "http://opensource.org/licenses/BSD-3-Clause"); + static private final Dep[] depends; static { depends = new Dep[] { // @@ -304,7 +311,7 @@ slf4j,// sesame,// icu,// - nxparser,// +// nxparser,// nanohttp,// jetty,// servletApi,// Modified: branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/bop/Var.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/bop/Var.java 2014-11-15 16:46:45 UTC (rev 8709) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/bop/Var.java 2014-11-15 20:33:32 UTC (rev 8710) @@ -53,12 +53,14 @@ final private String name; + @Override final public boolean isVar() { return true; } + @Override final public boolean isConstant() { return false; @@ -86,6 +88,7 @@ * part of the canonicalizing mapping). Because we override clone we do not * need to provide the deep copy constructor (it is never invoked). */ + @Override final public Var<E> clone() { return this; @@ -96,6 +99,7 @@ * @todo Why two versions of equals? This one is coming from * IConstantOrVariable. */ + @Override public final boolean equals(final IVariableOrConstant<E> o) { if (this == o) @@ -111,6 +115,7 @@ } + @Override public final boolean equals(final Object o) { if (this == o) @@ -126,18 +131,21 @@ } + @Override public final int hashCode() { return name.hashCode(); } + @Override public String toString() { return name; } + @Override public boolean isWildcard() { return name.length() == 1 && name.charAt(0) == '*'; @@ -153,6 +161,7 @@ // // } + @Override public E get() { throw new UnsupportedOperationException(); @@ -171,6 +180,7 @@ } + @Override public String getName() { return name; Added: branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/bop/join/DistinctTermScanOp.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/bop/join/DistinctTermScanOp.java (rev 0) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/bop/join/DistinctTermScanOp.java 2014-11-15 20:33:32 UTC (rev 8710) @@ -0,0 +1,476 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2010. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. 
+ +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +/* + * Created on Aug 25, 2010 + */ + +package com.bigdata.bop.join; + +import java.util.Iterator; +import java.util.Map; +import java.util.concurrent.Callable; +import java.util.concurrent.FutureTask; + +import com.bigdata.bop.BOp; +import com.bigdata.bop.BOpContext; +import com.bigdata.bop.BOpUtility; +import com.bigdata.bop.Constant; +import com.bigdata.bop.IBindingSet; +import com.bigdata.bop.IConstraint; +import com.bigdata.bop.IPredicate; +import com.bigdata.bop.IVariable; +import com.bigdata.bop.NV; +import com.bigdata.bop.PipelineOp; +import com.bigdata.bop.bindingSet.ListBindingSet; +import com.bigdata.bop.engine.BOpStats; +import com.bigdata.btree.IRangeQuery; +import com.bigdata.btree.ITuple; +import com.bigdata.btree.filter.TupleFilter; +import com.bigdata.rdf.internal.IV; +import com.bigdata.rdf.internal.IVUtility; +import com.bigdata.rdf.lexicon.ITermIVFilter; +import com.bigdata.rdf.spo.DistinctTermAdvancer; +import com.bigdata.rdf.spo.SPOKeyOrder; +import com.bigdata.rdf.spo.SPORelation; +import com.bigdata.relation.IRelation; +import com.bigdata.relation.accesspath.AccessPath; +import com.bigdata.relation.accesspath.IAccessPath; +import com.bigdata.relation.accesspath.IBlockingBuffer; +import com.bigdata.relation.accesspath.UnsyncLocalOutputBuffer; +import com.bigdata.striterator.ChunkedWrappedIterator; +import com.bigdata.striterator.IChunkedIterator; +import com.bigdata.striterator.IKeyOrder; + +import cutthecrap.utils.striterators.Resolver; +import cutthecrap.utils.striterators.Striterator; + +/** + * This operator performs a distinct terms scan for an {@link IPredicate}, + * binding the distinct values for the specified variable(s) from the + * {@link IAccessPath} for the {@link IPredicate}. This is done using a + * {@link DistinctTermAdvancer} to skip over any duplicate solutions in the + * index. Thus the cost of this operator is O(N) where N is the number of + * distinct solutions that exist in the index. + * + * @see <a href="http://trac.bigdata.com/ticket/1035" > DISTINCT PREDICATEs + * query is slow </a> + * @see DistinctTermAdvancer + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + */ +public class DistinctTermScanOp<E> extends PipelineOp { + + /** + * + */ + private static final long serialVersionUID = 1L; + + public interface Annotations extends AccessPathJoinAnnotations { + + /** + * The name of the variable whose distinct projection against the + * {@link IAccessPath} associated with the as-bound {@link IPredicate} + * is output by this operator. + */ + String DISTINCT_VAR = DistinctTermScanOp.class.getName() + + ".distinctVar"; + + } + + /** + * Deep copy constructor. + * + * @param op + */ + public DistinctTermScanOp(final DistinctTermScanOp<E> op) { + + super(op); + + } + + /** + * Shallow copy constructor. + * + * @param args + * @param annotations + */ + public DistinctTermScanOp(final BOp[] args, + final Map<String, Object> annotations) { + + super(args, annotations); + + // MUST be given. 
+ getDistinctVar(); + getRequiredProperty(Annotations.PREDICATE); + + if (isOptional()) { + + /* + * TODO OPTIONAL is not implemented for this operator. + */ + + throw new UnsupportedOperationException(); + + } + + } + + public DistinctTermScanOp(final BOp[] args, final NV... annotations) { + + this(args, NV.asMap(annotations)); + + } + + /** + * @see Annotations#DISTINCT_VAR + */ + protected IVariable<?> getDistinctVar() { + + return (IVariable<?>) getRequiredProperty(Annotations.DISTINCT_VAR); + + } + + /** + * @see Annotations#SELECT + */ + protected IVariable<?>[] getSelect() { + + return getProperty(Annotations.SELECT, null/* defaultValue */); + + } + + /** + * @see Annotations#CONSTRAINTS + */ + protected IConstraint[] constraints() { + + return getProperty(Annotations.CONSTRAINTS, null/* defaultValue */); + + } + + @SuppressWarnings("unchecked") + public IPredicate<E> getPredicate() { + + return (IPredicate<E>) getRequiredProperty(Annotations.PREDICATE); + + } + + /** + * Return the value of {@link IPredicate#isOptional()} for the + * {@link IPredicate} associated with this join. + * + * @see IPredicate.Annotations#OPTIONAL + */ + private boolean isOptional() { + + return getPredicate().isOptional(); + + } + + @Override + public FutureTask<Void> eval(final BOpContext<IBindingSet> context) { + + return new FutureTask<Void>(new ChunkTask<E>(this, context)); + + } + + /** + * Copy the source to the sink. + */ + static private class ChunkTask<E> implements Callable<Void> { + + private final DistinctTermScanOp<E> op; + + private final BOpContext<IBindingSet> context; + + /** + * The variable that gets bound to the distinct values by the scan. + */ + private final IVariable<?> distinctVar; + + /** + * The source for the elements to be joined. + */ + private final IPredicate<E> predicate; + + /** + * The relation associated with the {@link #predicate} operand. + */ + private final IRelation<E> relation; + + ChunkTask(final DistinctTermScanOp<E> op, + final BOpContext<IBindingSet> context) { + + this.op = op; + + this.context = context; + + this.distinctVar = op.getDistinctVar(); + + this.predicate = op.getPredicate(); + + this.relation = context.getRelation(predicate); + + } + + @Override + public Void call() throws Exception { + + final BOpStats stats = context.getStats(); + + // Convert source solutions to array (assumes low cardinality). + final IBindingSet[] leftSolutions = BOpUtility.toArray( + context.getSource(), stats); + + // default sink + final IBlockingBuffer<IBindingSet[]> sink = context.getSink(); + + final UnsyncLocalOutputBuffer<IBindingSet> unsyncBuffer = new UnsyncLocalOutputBuffer<IBindingSet>( + op.getChunkCapacity(), sink); + + final IVariable<?>[] selectVars = op.getSelect(); + + final IConstraint[] constraints = op.constraints(); + + try { + + /* + * TODO If there are multiple left solutions (from the pipeline) + * then we could generate their fromKeys and order them to + * improve cache locality. See PipelineJoin for an example of + * how this is done. For the distinct-term-scan this could + * provide a reasonable improvement in cache locality for the + * index. + */ + + // For each source solution. + for (IBindingSet bindingSet : leftSolutions) { + + // constrain the predicate to the given bindings. + IPredicate<E> asBound = predicate.asBound(bindingSet); + + if (asBound == null) { + + /* + * This can happen for a SIDS mode join if some of the + * (s,p,o,[c]) and SID are bound on entry and they can not + * be unified. 
For example, the s position might be + * inconsistent with the Subject that can be decoded from + * the SID binding. + * + * @see #815 (RDR query does too much work) + */ + + continue; + + } + +// if (partitionId != -1) { +// +// /* +// * Constrain the predicate to the desired index +// * partition. +// * +// * Note: we do this for scale-out joins since the +// * access path will be evaluated by a JoinTask +// * dedicated to this index partition, which is part +// * of how we give the JoinTask to gain access to the +// * local index object for an index partition. +// */ +// +// asBound = asBound.setPartitionId(partitionId); +// +// } + + /** + * The {@link IAccessPath} corresponding to the asBound + * {@link IPredicate} for this join dimension. The asBound + * {@link IPredicate} is {@link IAccessPath#getPredicate()}. + * + * FIXME What do we do if there is a local filter or an + * access path filter? Do we have to NOT generate this + * operator? It is probably not safe to ignore those + * filters.... + */ + final IAccessPath<E> accessPath = context.getAccessPath( + relation, asBound); + + if (accessPath.getPredicate().getIndexLocalFilter() != null) { + // index has local filter. requires scan. + throw new AssertionError(); + } + + if (accessPath.getPredicate().getAccessPathFilter() != null) { + // access path filter exists. requires scan. + throw new AssertionError(); + } + + // TODO Cast to AccessPath is not type safe. + final IChunkedIterator<IV> rightItr = distinctTermScan( + (AccessPath<E>) accessPath, null/* termIdFilter */); + + while (rightItr.hasNext()) { + + // New binding set. + final IBindingSet right = new ListBindingSet(); + + // Bind the distinctTermVar. + right.set(distinctVar, new Constant(rightItr.next())); + + // See if the solutions join. + final IBindingSet outSolution = BOpContext.bind(// + bindingSet,// left + right,// + constraints,// + selectVars// + ); + + if (outSolution != null) { + + // Output the solution. + unsyncBuffer.add(outSolution); + + } + + } + + } + + // flush the unsync buffer. + unsyncBuffer.flush(); + + // flush the sink. + sink.flush(); + + // Done. + return null; + + } finally { + + sink.close(); + + context.getSource().close(); + + } + + } + + /** + * Efficient scan of the distinct term identifiers that appear in the + * first position of the keys for the statement index corresponding to + * the specified {@link IKeyOrder}. For example, using + * {@link SPOKeyOrder#POS} will give you the term identifiers for the + * distinct predicates actually in use within statements in the + * {@link SPORelation}. + * + * @param keyOrder + * The selected index order. + * @param fromKey + * The first key for the scan -or- <code>null</code> to start + * the scan at the head of the index. + * @param toKey + * The last key (exclusive upper bound) for the scan -or- + * <code>null</code> to scan until the end of the index. + * @param termIdFilter + * An optional filter on the visited {@link IV}s. + * + * @return An iterator visiting the distinct term identifiers. + * + * TODO Move this method to {@link AccessPath}. Also, refactor + * {@link SPORelation#distinctTermScan(IKeyOrder)} to use this + * code. 
+ */ + private static <E> IChunkedIterator<IV> distinctTermScan( + final AccessPath<E> ap, final ITermIVFilter termIdFilter) { + + final IKeyOrder<E> keyOrder = ap.getKeyOrder(); + + final byte[] fromKey = ap.getFromKey(); + + final byte[] toKey = ap.getToKey(); + + final DistinctTermAdvancer filter = new DistinctTermAdvancer( + keyOrder.getKeyArity()); + + /* + * Layer in the logic to advance to the tuple that will have the + * next distinct term identifier in the first position of the key. + */ + + if (termIdFilter != null) { + + /* + * Layer in a filter for only the desired term types. + */ + + filter.addFilter(new TupleFilter<E>() { + + private static final long serialVersionUID = 1L; + + @Override + protected boolean isValid(final ITuple<E> tuple) { + + final byte[] key = tuple.getKey(); + + final IV iv = IVUtility.decode(key); + + return termIdFilter.isValid(iv); + + } + + }); + + } + + @SuppressWarnings("unchecked") + final Iterator<IV> itr = new Striterator(ap.getIndex(/*keyOrder*/) + .rangeIterator(fromKey, toKey,// + 0/* capacity */, IRangeQuery.KEYS | IRangeQuery.CURSOR, + filter)).addFilter(new Resolver() { + + private static final long serialVersionUID = 1L; + + /** + * Resolve tuple to IV. + */ + @Override + protected IV resolve(final Object obj) { + + final byte[] key = ((ITuple<?>) obj).getKey(); + + return IVUtility.decode(key); + + } + + }); + + return new ChunkedWrappedIterator<IV>(itr, ap.getChunkCapacity(), + IV.class); + + } + + } // class ChunkTask + + +} Added: branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/bop/join/FastRangeCountOp.java =================================================================== --- branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/bop/join/FastRangeCountOp.java (rev 0) +++ branches/BIGDATA_RELEASE_1_4_0/bigdata/src/java/com/bigdata/bop/join/FastRangeCountOp.java 2014-11-15 20:33:32 UTC (rev 8710) @@ -0,0 +1,391 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2010. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. 
+ +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +/* + * Created on Aug 25, 2010 + */ + +package com.bigdata.bop.join; + +import java.math.BigInteger; +import java.util.Map; +import java.util.concurrent.Callable; +import java.util.concurrent.FutureTask; + +import com.bigdata.bop.BOp; +import com.bigdata.bop.BOpContext; +import com.bigdata.bop.BOpUtility; +import com.bigdata.bop.Constant; +import com.bigdata.bop.IBindingSet; +import com.bigdata.bop.IConstraint; +import com.bigdata.bop.IPredicate; +import com.bigdata.bop.IVariable; +import com.bigdata.bop.NV; +import com.bigdata.bop.PipelineOp; +import com.bigdata.bop.bindingSet.ListBindingSet; +import com.bigdata.bop.engine.BOpStats; +import com.bigdata.rdf.internal.impl.literal.XSDIntegerIV; +import com.bigdata.relation.IRelation; +import com.bigdata.relation.accesspath.IAccessPath; +import com.bigdata.relation.accesspath.IBlockingBuffer; +import com.bigdata.relation.accesspath.UnsyncLocalOutputBuffer; + +/** + * This operator reports the fast-range count for an as-bound {@link IPredicate} + * . The cost of this operator is two key probes. Unlike a normal access path, + * this operator does not bind variables to data in tuples in the underlying + * index. Instead it binds a pre-identified variable to the aggregate (COUNT) of + * the tuple range spanned by the {@link IPredicate}. + * + * @see <a href="http://trac.bigdata.com/ticket/1037" > Rewrite SELECT + * COUNT(...) (DISTINCT|REDUCED) {single-triple-pattern} as ESTCARD </a> + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + */ +public class FastRangeCountOp<E> extends PipelineOp { + + /** + * +... [truncated message content] |
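The javadoc of the DistinctTermScanOp added in r8710 above explains the trick: a DistinctTermAdvancer skips past every index tuple that shares the current leading key component, so the scan costs O(N) in the number of distinct terms rather than in the number of tuples. The real operator advances over B+Tree byte[] keys; the sketch below is only a toy model of the same seek-past-duplicates idea over an in-memory sorted set, with all names and data invented for illustration.

    import java.util.NavigableSet;
    import java.util.TreeSet;

    // Toy model of the distinct-term scan: one seek per distinct subject
    // (this is NOT the DistinctTermAdvancer itself, just the idea).
    public class DistinctTermScanDemo {

        // Encode an (S,P,O) tuple as one sortable composite key.
        private static String key(String s, String p, String o) {
            return s + '\u0000' + p + '\u0000' + o;
        }

        public static void main(String[] args) {
            // A sorted "SPO index" standing in for the B+Tree.
            NavigableSet<String> spo = new TreeSet<>();
            spo.add(key("s1", "p1", "o1"));
            spo.add(key("s1", "p1", "o2"));
            spo.add(key("s1", "p2", "o1"));
            spo.add(key("s2", "p1", "o1"));
            spo.add(key("s3", "p5", "o9"));

            // Visit each distinct subject: O(#distinct), not O(#tuples).
            String cursor = spo.isEmpty() ? null : spo.first();
            while (cursor != null) {
                String subject = cursor.substring(0, cursor.indexOf('\u0000'));
                System.out.println(subject); // prints s1, s2, s3

                // "Advance": seek the first key greater than every key sharing
                // this subject prefix ('\u0001' is the successor of the
                // '\u0000' separator used by key()).
                cursor = spo.ceiling(subject + '\u0001');
            }
        }
    }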
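Likewise, the FastRangeCountOp in the same commit answers SELECT COUNT(...) over a single triple pattern with "two key probes" instead of a scan over the matching tuples. The bigdata B+Tree derives the range count from counters in its interior nodes; the following toy sketch (sorted array, distinct keys, invented data) shows why two probes are enough in principle.

    import java.util.Arrays;

    // Toy model of a fast range count: two O(log N) key probes bound the
    // range, and the count is the difference of the two positions -- no
    // tuple in between is ever visited.
    public class FastRangeCountDemo {

        // Index of the first element >= key (insertion point on a miss).
        private static int probe(long[] index, long key) {
            int pos = Arrays.binarySearch(index, key);
            return pos >= 0 ? pos : -(pos + 1);
        }

        public static void main(String[] args) {
            // A sorted "index" of keys (stand-ins for encoded SPO keys).
            long[] index = {3, 7, 9, 12, 15, 21, 30, 42};

            long fromKey = 7;  // inclusive lower bound
            long toKey = 30;   // exclusive upper bound

            int count = probe(index, toKey) - probe(index, fromKey);

            // rangeCount(7, 30) = 5, i.e. {7, 9, 12, 15, 21}
            System.out.println("rangeCount(" + fromKey + ", " + toKey + ") = " + count);
        }
    }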
From: <tho...@us...> - 2014-11-15 16:46:48
Revision: 8709
          http://sourceforge.net/p/bigdata/code/8709
Author:   thompsonbry
Date:     2014-11-15 16:46:45 +0000 (Sat, 15 Nov 2014)

Log Message:
-----------
Branch for 1.4.x maintenance and development.

Added Paths:
-----------
    branches/BIGDATA_RELEASE_1_4_0/