From: <tho...@us...> - 2014-06-16 16:22:52

Revision: 8484  http://sourceforge.net/p/bigdata/code/8484
Author:   thompsonbry
Date:     2014-06-16 16:22:43 +0000 (Mon, 16 Jun 2014)

Log Message:
-----------
@Override annotations.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/relation/AbstractRelation.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/relation/AbstractRelation.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/relation/AbstractRelation.java	2014-06-16 14:17:57 UTC (rev 8483)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/relation/AbstractRelation.java	2014-06-16 16:22:43 UTC (rev 8484)
@@ -96,6 +96,7 @@
      *
      * @return The index name.
      */
+    @Override
     public String getFQN(final IKeyOrder<? extends E> keyOrder) {

         return getFQN(this, keyOrder);

@@ -161,6 +162,7 @@
      * construct to return the correct hard reference. This behavior
      * should be encapsulated.
      */
+    @Override
     public IIndex getIndex(final IKeyOrder<? extends E> keyOrder) {

         return getIndex(getFQN(keyOrder));

@@ -302,12 +304,14 @@

     }

+    @Override
     final public IAccessPath<E> getAccessPath(final IPredicate<E> predicate) {

         return getAccessPath(getKeyOrder(predicate), predicate);

     }

+    @Override
     final public IAccessPath<E> getAccessPath(final IKeyOrder<E> keyOrder,
             final IPredicate<E> predicate) {

@@ -315,6 +319,7 @@

     }

+    @Override
     @SuppressWarnings("unchecked")
     final public IAccessPath<E> getAccessPath(
             final IIndexManager localIndexManager, //
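This and the following commits add @Override where methods already override inherited declarations. As a brief illustration of why that is worth a commit (hypothetical types, not from the bigdata source), the annotation turns an accidental overload into a compile-time error:

{{{
// Minimal sketch with hypothetical names: @Override makes the compiler
// verify that a method really overrides something, so a signature typo
// cannot silently create an unrelated overload.
interface IRelationSketch {
    String getFQN(Object keyOrder);
}

class RelationSketch implements IRelationSketch {

    @Override
    public String getFQN(final Object keyOrder) { // verified override
        return "kb." + keyOrder;
    }

    // @Override  // uncommenting this makes the method a compile error,
    public String getFQN(final String keyOrder) { // an overload, not an override
        return "kb." + keyOrder;
    }
}
}}}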
From: <tho...@us...> - 2014-06-16 14:18:07

Revision: 8483  http://sourceforge.net/p/bigdata/code/8483
Author:   thompsonbry
Date:     2014-06-16 14:17:57 +0000 (Mon, 16 Jun 2014)

Log Message:
-----------
It looks like the ArbitraryLengthPathOp could be more defensive to avoid an NPE: ArbitraryLengthPathOp.java line 778

{{{
if (parentSolutionIn.isBound(gearing.outVar)) {

    // do this later now
    if (!bs.get(gearing.tVarOut).equals(parentSolutionIn.get(gearing.outVar))) {
}}}

Since we already know that there is a binding for gearing.outVar, this could be written as:

{{{
if (parentSolutionIn.isBound(gearing.outVar)) {

    // do this now: note already known to be bound per test above.
    final IConstant<?> poutVar = parentSolutionIn.get(gearing.outVar);

    if (!poutVar.equals(bs.get(gearing.tVarOut))) {
}}}

This was noticed when observing an NPE when {{{bs.get(gearing.tVarOut)}}} evaluated to null. This is not the root cause of the problem. I am still looking for that.

I have enabled the property-path test suite for the BigdataEmbeddedFederationSparqlTest. This test suite is not automatically run in CI due to resource leaks (which is documented on another ticket). However, you can now trivially recreate the problem by uncommenting the following line in BigdataSparqlTest and running the BigdataEmbeddedFederationSparqlTest.

{{{
static final Collection<String> testURIs = Arrays.asList(new String[] {

    // property paths
//  "http://www.w3.org/2001/sw/DataAccess/tests/data-r2/syntax-sparql1/manifest#sparql11-collection-01",
}}}

When run locally, the test fails as follows. The failure is the same as the one documented above. It is attempting to bind a null value onto a variable. The root cause is likely to be a failure to flow the solutions back to the query controller such that the results from the sub-query appear as unbound on the query controller. It could also be a failure to run the sub-query from the query controller. I have not diagnosed this further.

{{{
org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=eb7362c8-a987-4448-9113-99816a82311d,bopId=14,partitionId=-1,sinkId=17,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException
	at com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:188)
	at org.openrdf.query.impl.TupleQueryResultImpl.hasNext(TupleQueryResultImpl.java:90)
	at info.aduna.iteration.Iterations.addAll(Iterations.java:71)
	at org.openrdf.query.impl.MutableTupleQueryResult.<init>(MutableTupleQueryResult.java:86)
	at org.openrdf.query.impl.MutableTupleQueryResult.<init>(MutableTupleQueryResult.java:92)
	at org.openrdf.query.parser.sparql.SPARQLQueryTest.compareTupleQueryResults(SPARQLQueryTest.java:244)
	at org.openrdf.query.parser.sparql.SPARQLASTQueryTest.runTest(SPARQLASTQueryTest.java:196)
	at junit.framework.TestCase.runBare(TestCase.java:127)
	at junit.framework.TestResult$1.protect(TestResult.java:106)
	at junit.framework.TestResult.runProtected(TestResult.java:124)
	at junit.framework.TestResult.run(TestResult.java:109)
	at junit.framework.TestCase.run(TestCase.java:118)
	at junit.framework.TestSuite.runTest(TestSuite.java:208)
	at junit.framework.TestSuite.run(TestSuite.java:203)
	at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130)
	at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=eb7362c8-a987-4448-9113-99816a82311d,bopId=14,partitionId=-1,sinkId=17,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException
	at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1523)
	at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator._hasNext(BlockingBuffer.java:1710)
	at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.hasNext(BlockingBuffer.java:1563)
	at com.bigdata.striterator.AbstractChunkedResolverator._hasNext(AbstractChunkedResolverator.java:365)
	at com.bigdata.striterator.AbstractChunkedResolverator.hasNext(AbstractChunkedResolverator.java:341)
	at com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:134)
	... 19 more
Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=eb7362c8-a987-4448-9113-99816a82311d,bopId=14,partitionId=-1,sinkId=17,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:188)
	at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1454)
	... 24 more
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=eb7362c8-a987-4448-9113-99816a82311d,bopId=14,partitionId=-1,sinkId=17,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException
	at com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:59)
	at com.bigdata.rdf.sail.RunningQueryCloseableIterator.close(RunningQueryCloseableIterator.java:73)
	at com.bigdata.striterator.ChunkedWrappedIterator.close(ChunkedWrappedIterator.java:180)
	at com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:297)
	at com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:1)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=eb7362c8-a987-4448-9113-99816a82311d,bopId=14,partitionId=-1,sinkId=17,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException
	at com.bigdata.util.concurrent.Haltable.get(Haltable.java:273)
	at com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:1477)
	at com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:1)
	at com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:46)
	... 8 more
Caused by: java.lang.Exception: task=ChunkTask{query=eb7362c8-a987-4448-9113-99816a82311d,bopId=14,partitionId=-1,sinkId=17,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException
	at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1335)
	at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTaskWrapper.run(ChunkedRunningQuery.java:894)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at com.bigdata.concurrent.FutureTaskMon.run(FutureTaskMon.java:63)
	at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkFutureTask.run(ChunkedRunningQuery.java:789)
	... 3 more
Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:188)
	at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1315)
	... 8 more
Caused by: java.lang.IllegalArgumentException
	at com.bigdata.bop.bindingSet.ListBindingSet.set(ListBindingSet.java:430)
	at com.bigdata.bop.ContextBindingSet.set(ContextBindingSet.java:74)
	at com.bigdata.bop.paths.ArbitraryLengthPathOp$ArbitraryLengthPathTask.processChunk(ArbitraryLengthPathOp.java:816)
	at com.bigdata.bop.paths.ArbitraryLengthPathOp$ArbitraryLengthPathTask.call(ArbitraryLengthPathOp.java:270)
	at com.bigdata.bop.paths.ArbitraryLengthPathOp$ArbitraryLengthPathTask.call(ArbitraryLengthPathOp.java:1)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1314)
	... 8 more
}}}

See #942 (Property path failures in scale-out).

Revision Links:
--------------
    http://sourceforge.net/p/bigdata/code/2

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/bop/paths/ArbitraryLengthPathOp.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataEmbeddedFederationSparqlTest.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataSparqlTest.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/bop/paths/ArbitraryLengthPathOp.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/bop/paths/ArbitraryLengthPathOp.java	2014-06-16 11:23:44 UTC (rev 8482)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/bop/paths/ArbitraryLengthPathOp.java	2014-06-16 14:17:57 UTC (rev 8483)
@@ -777,13 +777,14 @@
                  */
                 if (parentSolutionIn.isBound(gearing.outVar)) {

-                    // do this later now
-
-                    if (!bs.get(gearing.tVarOut).equals(parentSolutionIn.get(gearing.outVar))) {
-
-                        if (log.isDebugEnabled()) {
-                            log.debug("transitive output does not match incoming binding for output var, dropping");
-                        }
+                    // do this now: note already known to be bound per test above.
+                    final IConstant<?> poutVar = parentSolutionIn.get(gearing.outVar);
+
+                    if (!poutVar.equals(bs.get(gearing.tVarOut))) {
+
+                        if (log.isDebugEnabled()) {
+                            log.debug("transitive output does not match incoming binding for output var, dropping");
+                        }

                         continue;

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataEmbeddedFederationSparqlTest.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataEmbeddedFederationSparqlTest.java	2014-06-16 11:23:44 UTC (rev 8482)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataEmbeddedFederationSparqlTest.java	2014-06-16 14:17:57 UTC (rev 8483)
@@ -65,7 +65,6 @@
  * {@link EmbeddedFederation}.
  *
  * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- * @version $Id$
  */
 public class BigdataEmbeddedFederationSparqlTest extends BigdataSparqlTest {

@@ -110,7 +109,7 @@
         if(hideDatasetTests)
             suite1 = filterOutTests(suite1,"dataset");

-        suite1 = filterOutTests(suite1, "property-paths");
+//      suite1 = filterOutTests(suite1, "property-paths");

         /**
          * BSBM BI use case query 5

@@ -157,6 +156,7 @@
         final Factory factory = new Factory() {

+            @Override
             public SPARQLQueryTest createSPARQLQueryTest(String testURI,
                     String name, String queryFileURL, String resultFileURL,
                     Dataset dataSet, boolean laxCardinality) {

@@ -166,6 +166,7 @@
             }

+            @Override
             public SPARQLQueryTest createSPARQLQueryTest(String testURI,
                     String name, String queryFileURL, String resultFileURL,
                     Dataset dataSet, boolean laxCardinality, boolean checkOrder) {

@@ -173,6 +174,7 @@
                 return new BigdataEmbeddedFederationSparqlTest(testURI, name,
                         queryFileURL, resultFileURL, dataSet, laxCardinality,
                         checkOrder) {

+                    @Override
                     protected Properties getProperties() {

                         final Properties p = new Properties(super

@@ -295,7 +297,8 @@

     }

-    protected void tearDownBackend(IIndexManager backend) {
+    @Override
+    protected void tearDownBackend(final IIndexManager backend) {

         backend.destroy();

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataSparqlTest.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataSparqlTest.java	2014-06-16 11:23:44 UTC (rev 8482)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataSparqlTest.java	2014-06-16 14:17:57 UTC (rev 8483)
@@ -67,7 +67,6 @@
  * a {@link Journal} without full read/write transaction support.
  *
  * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- * @version $Id$
 */
 public class BigdataSparqlTest
 //extends SPARQLQueryTest // Sesame TupleExpr based evaluation
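The defensive rewrite in r8483 rests on a general idiom: when comparing two values with equals(), invoke it on the reference already known to be non-null. A self-contained rendering of that idiom (hypothetical helper, not bigdata code):

{{{
// Sketch of the idiom behind r8483 (hypothetical helper class): when one
// operand is non-null by contract, invoke equals() on it so a null on the
// other side compares unequal instead of throwing a NullPointerException.
public final class NullSafeCompare {

    static boolean sameBinding(final Object known /* non-null by contract */,
            final Object maybe /* may legitimately be null */) {

        // NPE-prone form: maybe.equals(known) throws when 'maybe' is null.

        return known.equals(maybe); // defensive form used by the commit.
    }

    public static void main(final String[] args) {
        System.out.println(sameBinding("x", "x"));  // true
        System.out.println(sameBinding("x", null)); // false, no NPE
    }
}
}}}

As the log message notes, this only removes the symptom; the null binding itself is the deeper bug tracked under #942.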
From: <tho...@us...> - 2014-06-16 11:23:46

Revision: 8482  http://sourceforge.net/p/bigdata/code/8482
Author:   thompsonbry
Date:     2014-06-16 11:23:22 +0000 (Mon, 16 Jun 2014)

Log Message:
-----------
@Override tag.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailEmbeddedFederationWithQuads.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailEmbeddedFederationWithQuads.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailEmbeddedFederationWithQuads.java	2014-06-16 11:23:22 UTC (rev 8481)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailEmbeddedFederationWithQuads.java	2014-06-16 11:23:44 UTC (rev 8482)
@@ -55,7 +55,6 @@
  * pipeline join algorithm.
  *
  * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- * @version $Id$
 */
 public class TestBigdataSailEmbeddedFederationWithQuads
         extends AbstractBigdataSailTestCase {

@@ -151,7 +150,8 @@
         return suite;

     }
-
+
+    @Override
     public Properties getProperties() {

         final Properties properties = new Properties(super.getProperties());
From: <tho...@us...> - 2014-06-16 11:23:26

Revision: 8481  http://sourceforge.net/p/bigdata/code/8481
Author:   thompsonbry
Date:     2014-06-16 11:23:22 +0000 (Mon, 16 Jun 2014)

Log Message:
-----------
@Override

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/AbstractBigdataSailTestCase.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/AbstractBigdataSailTestCase.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/AbstractBigdataSailTestCase.java	2014-06-16 11:22:52 UTC (rev 8480)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/AbstractBigdataSailTestCase.java	2014-06-16 11:23:22 UTC (rev 8481)
@@ -128,6 +128,7 @@
      *
      * @return A new properties object.
      */
+    @Override
     public Properties getProperties() {

         if( m_properties == null ) {
From: <tho...@us...> - 2014-06-16 11:23:02

Revision: 8480  http://sourceforge.net/p/bigdata/code/8480
Author:   thompsonbry
Date:     2014-06-16 11:22:52 +0000 (Mon, 16 Jun 2014)

Log Message:
-----------
Added @Override. Removed System.err.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataSPARQLUpdateConformanceTest.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataSPARQLUpdateConformanceTest.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataSPARQLUpdateConformanceTest.java	2014-06-16 07:45:27 UTC (rev 8479)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataSPARQLUpdateConformanceTest.java	2014-06-16 11:22:52 UTC (rev 8480)
@@ -65,14 +65,16 @@
     }

     public static Test suite() throws Exception {
+
         final Test suite = SPARQL11ManifestTest.suite(new Factory() {

+            @Override
             public BigdataSPARQLUpdateConformanceTest createSPARQLUpdateConformanceTest(
                     String testURI, String name, String requestFile,
                     URI defaultGraphURI, Map<String, URI> inputNamedGraphs,
                     URI resultDefaultGraphURI,
                     Map<String, URI> resultNamedGraphs) {

-                System.err.println(">>>>> "+testURI);// FIXME REMOVE.
+
                 return new BigdataSPARQLUpdateConformanceTest(testURI, name,
                         requestFile, defaultGraphURI, inputNamedGraphs,
                         resultDefaultGraphURI, resultNamedGraphs);
From: <mar...@us...> - 2014-06-16 07:45:30

Revision: 8479  http://sourceforge.net/p/bigdata/code/8479
Author:   martyncutcher
Date:     2014-06-16 07:45:27 +0000 (Mon, 16 Jun 2014)

Log Message:
-----------
Branch to develop async write cache service to increase potential write parallelism.

Added Paths:
-----------
    branches/BIGDATA_ASYNC_WCS/
From: <tho...@us...> - 2014-06-14 12:59:23

Revision: 8478  http://sourceforge.net/p/bigdata/code/8478
Author:   thompsonbry
Date:     2014-06-14 12:59:18 +0000 (Sat, 14 Jun 2014)

Log Message:
-----------
Added performance counters that break down all of the latencies associated with both single-phase (local) and 2-phase (HA) commits as an aid in performance optimization.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java	2014-06-13 16:25:45 UTC (rev 8477)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java	2014-06-14 12:59:18 UTC (rev 8478)
@@ -93,7 +93,9 @@
 import com.bigdata.config.LongRangeValidator;
 import com.bigdata.config.LongValidator;
 import com.bigdata.counters.AbstractStatisticsCollector;
+import com.bigdata.counters.CAT;
 import com.bigdata.counters.CounterSet;
+import com.bigdata.counters.ICounterSetAccess;
 import com.bigdata.counters.Instrument;
 import com.bigdata.ha.CommitRequest;
 import com.bigdata.ha.CommitResponse;

@@ -1385,6 +1387,7 @@
      * into a btree and the btree api could be used to change the
      * persistent properties as necessary.
      */
+    @Override
    final public Properties getProperties() {

         return new Properties(properties);

@@ -1411,6 +1414,7 @@
      * this service, but tasks run here may submit tasks to the
      * {@link ConcurrencyManager}.
      */
+    @Override
    abstract public ExecutorService getExecutorService();

    /**
@@ -1422,6 +1426,7 @@
      *
      * @see #shutdownNow()
      */
+    @Override
    synchronized public void shutdown() {

         // Note: per contract for shutdown.

@@ -1445,6 +1450,7 @@
      *
      * @see #shutdown()
      */
+    @Override
    synchronized public void shutdownNow() {

         // Note: per contract for shutdownNow()

@@ -1465,6 +1471,7 @@
    /**
     * Closes out the journal iff it is still open.
     */
+    @Override
    protected void finalize() throws Throwable {

         if (_bufferStrategy.isOpen()) {

@@ -1481,6 +1488,7 @@
    /**
     * Return counters reporting on various aspects of the journal.
     */
+    @Override
    public CounterSet getCounters() {

         return CountersFactory.getCounters(this);

@@ -1509,6 +1517,7 @@
         final WeakReference<AbstractJournal> ref = new WeakReference<AbstractJournal>(jnl);

         counters.addCounter("file", new Instrument<String>() {
+            @Override
             public void sample() {
                 final AbstractJournal jnl = ref.get();
                 if (jnl != null) {

@@ -1522,6 +1531,7 @@
 //                  + jnl.getFile()));

         counters.addCounter("createTime", new Instrument<Long>() {
+            @Override
             public void sample() {
                 final AbstractJournal jnl = ref.get();
                 if (jnl != null) {

@@ -1534,6 +1544,7 @@
         });

         counters.addCounter("closeTime", new Instrument<Long>() {
+            @Override
             public void sample() {
                 final AbstractJournal jnl = ref.get();
                 if (jnl != null) {

@@ -1546,6 +1557,7 @@
         });

         counters.addCounter("commitCount", new Instrument<Long>() {
+            @Override
             public void sample() {
                 final AbstractJournal jnl = ref.get();
                 if (jnl != null) {

@@ -1558,6 +1570,7 @@
         });

         counters.addCounter("historicalIndexCacheSize", new Instrument<Integer>() {
+            @Override
             public void sample() {
                 final AbstractJournal jnl = ref.get();
                 if (jnl != null) {

@@ -1567,6 +1580,7 @@
         });

         counters.addCounter("indexCacheSize", new Instrument<Integer>() {
+            @Override
             public void sample() {
                 final AbstractJournal jnl = ref.get();
                 if (jnl != null) {

@@ -1576,6 +1590,7 @@
         });

         counters.addCounter("liveIndexCacheSize", new Instrument<Integer>() {
+            @Override
             public void sample() {
                 final AbstractJournal jnl = ref.get();
                 if (jnl != null) {

@@ -1587,12 +1602,17 @@
             }
         });

-        counters.attach(jnl._bufferStrategy.getCounters());
+        // backing strategy performance counters.
+        counters.attach(jnl._bufferStrategy.getCounters());

-        return counters;
+        // commit protocol performance counters.
+        counters.makePath("commit")
+                .attach(jnl.commitCounters.getCounters());

-    }
+        return counters;

+    }
+
     }

 //    /**

@@ -3070,6 +3090,134 @@
     }

     /**
+     * Performance counters for the journal-level commit operations.
+     */
+    private static class CommitCounters implements ICounterSetAccess {
+        /**
+         * Elapsed nanoseconds for the {@link ICommitter#handleCommit(long)}
+         * (flushing dirty pages from the indices into the write cache service).
+         */
+        private final CAT elapsedNotifyCommittersNanos = new CAT();
+        /**
+         * Elapsed nanoseconds for {@link CommitState#writeCommitRecord()}.
+         */
+        private final CAT elapsedWriteCommitRecordNanos = new CAT();
+        /**
+         * Elapsed nanoseconds for flushing the write set from the write cache
+         * service to the backing store (this is the bulk of the disk IO unless
+         * the write cache service fills up during a long running commit, in
+         * which case there is also incremental eviction).
+         */
+        private final CAT elapsedFlushWriteSetNanos = new CAT();
+        /**
+         * Elapsed nanoseconds for the simple atomic commit (non-HA). This
+         * consists of sync'ing the disk (iff double-sync is enabled), writing
+         * the root block, and then sync'ing the disk.
+         */
+        private final CAT elapsedSimpleCommitNanos = new CAT();
+        /**
+         * Elapsed nanoseconds for the entire commit protocol.
+         */
+        private final CAT elapsedTotalCommitNanos = new CAT();
+
+        //
+        // HA counters
+        //
+
+        /**
+         * Elapsed nanoseconds for GATHER (consensus release time protocol : HA
+         * only).
+         */
+        private final CAT elapsedGatherNanos = new CAT();
+        /**
+         * Elapsed nanoseconds for PREPARE (2-phase commit: HA only).
+         */
+        private final CAT elapsedPrepare2PhaseNanos = new CAT();
+        /**
+         * Elapsed nanoseconds for COMMIT2PHASE (2-phase commit: HA only).
+         */
+        private final CAT elapsedCommit2PhaseNanos = new CAT();
+
+        @Override
+        public CounterSet getCounters() {
+
+            final CounterSet root = new CounterSet();
+
+            root.addCounter("notifyCommittersSecs", new Instrument<Double>() {
+                @Override
+                public void sample() {
+                    final double secs = (elapsedNotifyCommittersNanos.get() / 1000000000.);
+                    setValue(secs);
+                }
+            });
+
+            root.addCounter("writeCommitRecordSecs", new Instrument<Double>() {
+                @Override
+                public void sample() {
+                    final double secs = (elapsedWriteCommitRecordNanos.get() / 1000000000.);
+                    setValue(secs);
+                }
+            });
+
+            root.addCounter("flushWriteSetSecs", new Instrument<Double>() {
+                @Override
+                public void sample() {
+                    final double secs = (elapsedFlushWriteSetNanos.get() / 1000000000.);
+                    setValue(secs);
+                }
+            });
+
+            root.addCounter("simpleCommitSecs", new Instrument<Double>() {
+                @Override
+                public void sample() {
+                    final double secs = (elapsedSimpleCommitNanos.get() / 1000000000.);
+                    setValue(secs);
+                }
+            });
+
+            root.addCounter("totalCommitSecs", new Instrument<Double>() {
+                @Override
+                public void sample() {
+                    final double secs = (elapsedTotalCommitNanos.get() / 1000000000.);
+                    setValue(secs);
+                }
+            });
+
+            //
+            // HA
+            //
+
+            root.addCounter("gatherSecs", new Instrument<Double>() {
+                @Override
+                public void sample() {
+                    final double secs = (elapsedGatherNanos.get() / 1000000000.);
+                    setValue(secs);
+                }
+            });
+
+            root.addCounter("prepare2PhaseSecs", new Instrument<Double>() {
+                @Override
+                public void sample() {
+                    final double secs = (elapsedPrepare2PhaseNanos.get() / 1000000000.);
+                    setValue(secs);
+                }
+            });
+
+            root.addCounter("commit2PhaseSecs", new Instrument<Double>() {
+                @Override
+                public void sample() {
+                    final double secs = (elapsedCommit2PhaseNanos.get() / 1000000000.);
+                    setValue(secs);
+                }
+            });
+
+            return root;
+
+        }
+    }
+    final private CommitCounters commitCounters = new CommitCounters();
+
+    /**
      * Class to which we attach all of the little pieces of state during
      * {@link AbstractJournal#commitNow(long)}.
      * <p>

@@ -3187,6 +3335,8 @@
          */
         private boolean notifyCommitters() {

+            final long beginNanos = System.nanoTime();
+
             /*
              * First, run each of the committers accumulating the updated root
              * addresses in an array. In general, these are btrees and they may

@@ -3233,6 +3383,9 @@
             rootAddrs[PREV_ROOTBLOCK] = store.m_rootBlockCommitter
                     .handleCommit(commitTime);

+            store.commitCounters.elapsedNotifyCommittersNanos.add(System
+                    .nanoTime() - beginNanos);
+
             // Will do commit.
             return true;

@@ -3256,6 +3409,8 @@
          */
         private void writeCommitRecord() {

+            final long beginNanos = System.nanoTime();
+
             /*
              * Before flushing the commitRecordIndex we need to check for
              * deferred frees that will prune the index.

@@ -3300,6 +3455,9 @@
             commitRecordIndexAddr = store._commitRecordIndex
                     .writeCheckpoint();

+            store.commitCounters.elapsedWriteCommitRecordNanos.add(System.nanoTime()
+                    - beginNanos);
+
         }

        /**
@@ -3347,8 +3505,13 @@
          */
         private void flushWriteSet() {

+            final long beginNanos = System.nanoTime();
+
             _bufferStrategy.commit();

+            store.commitCounters.elapsedFlushWriteSetNanos.add(System.nanoTime()
+                    - beginNanos);
+
         }

        /**
@@ -3431,6 +3594,8 @@
          */
         private void gatherPhase() {

+            final long beginNanos = System.nanoTime();
+
             /*
              * If not HA, do not do GATHER.
              */

@@ -3497,6 +3662,9 @@

                 store._gatherLock.unlock();

+                store.commitCounters.elapsedGatherNanos.add(System.nanoTime()
+                        - beginNanos);
+
             }

         }

@@ -3515,6 +3683,8 @@
          */
         private void commitSimple() {

+            final long beginNanos = System.nanoTime();
+
             /*
              * Force application data to stable storage _before_
              * we update the root blocks. This option guarantees

@@ -3579,6 +3749,9 @@
             if (txLog.isInfoEnabled())
                 txLog.info("COMMIT: commitTime=" + commitTime);

+            store.commitCounters.elapsedSimpleCommitNanos.add(System.nanoTime()
+                    - beginNanos);
+
         }

        /**
@@ -3625,6 +3798,7 @@
         private void prepare2Phase() throws InterruptedException,
                 TimeoutException, IOException {

+            final long beginNanos = System.nanoTime();
             boolean didPrepare = false;
             try {

@@ -3687,6 +3861,9 @@

                 }

             }
+
+            store.commitCounters.elapsedPrepare2PhaseNanos.add(System
+                    .nanoTime() - beginNanos);

         }

@@ -3703,6 +3880,7 @@
          */
         private void commit2Phase() throws Exception {

+            final long beginNanos = System.nanoTime();
             boolean didCommit = false;
             try {

@@ -3778,6 +3956,9 @@

             }

+            store.commitCounters.elapsedCommit2PhaseNanos.add(System
+                    .nanoTime() - beginNanos);
+
         }

     }

@@ -3815,6 +3996,8 @@
    // Note: Overridden by StoreManager (DataService).
    protected long commitNow(final long commitTime) {

+        final long beginNanos = System.nanoTime();
+
         final WriteLock lock = _fieldReadWriteLock.writeLock();

         lock.lock();

@@ -3928,6 +4111,10 @@
         } finally {

             lock.unlock();
+
+            commitCounters.elapsedTotalCommitNanos.add(System.nanoTime()
+                    - beginNanos);
+
         }

    }
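The instrumentation in r8478 applies one pattern throughout: snapshot System.nanoTime() on entry and accumulate the delta into a striped concurrent counter on exit, reporting the total in seconds. A stripped-down sketch of that pattern, approximating bigdata's CAT class with java.util.concurrent.atomic.LongAdder (an assumption for illustration; both are striped counters built for write-heavy concurrency):

{{{
import java.util.concurrent.atomic.LongAdder;

// Minimal sketch of the r8478 timing pattern. The try/finally here is a
// hardening choice of this sketch; the commit code adds the delta at the
// normal exit of each phase.
public class CommitTimerSketch {

    private final LongAdder elapsedFlushWriteSetNanos = new LongAdder();

    void flushWriteSet() {
        final long beginNanos = System.nanoTime();
        try {
            // ... flush dirty pages to the backing store ...
        } finally {
            // accumulate elapsed time for this phase.
            elapsedFlushWriteSetNanos.add(System.nanoTime() - beginNanos);
        }
    }

    double flushWriteSetSecs() {
        // reported in seconds, as the "flushWriteSetSecs" counter does.
        return elapsedFlushWriteSetNanos.sum() / 1_000_000_000.;
    }
}
}}}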
From: <mar...@us...> - 2014-06-13 16:25:53

Revision: 8477  http://sourceforge.net/p/bigdata/code/8477
Author:   martyncutcher
Date:     2014-06-13 16:25:45 +0000 (Fri, 13 Jun 2014)

Log Message:
-----------
Amended main to provide single long running reader and random Sail shutdown.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java	2014-06-13 15:35:35 UTC (rev 8476)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java	2014-06-13 16:25:45 UTC (rev 8477)
@@ -204,13 +204,15 @@
             }

             for (int rdrs = 0; rdrs < nreaders; rdrs++) {
+
+                final int nreads = rdrs == 0 ? Integer.MAX_VALUE : 60; // reasonably long running hopefully

                 lastReaderFuture = readers.submit(new Reader(r,
-                        60/* nread */, nwriters, sail, failex,
+                        nreads, nwriters, sail, failex,
                         commits, nreadersDone, subs));

             }
-
+
             // let the writers run riot for a time, checking for failure
             while (true) {
 //                final boolean bothDone = lastWriterFuture.isDone()

@@ -728,7 +730,7 @@

         props.load(new FileInputStream(propertyFile));

-        BigdataSail sail = new BigdataSail(props);
+        final AtomicReference<BigdataSail> sail = new AtomicReference<BigdataSail>(new BigdataSail(props));

         final int nreaderThreads = (int) getLongArg(args, "-nreaderthreads", 20); // 20

@@ -737,20 +739,46 @@
         final long nreaders = getLongArg(args, "-nreaders", 400); // 100000;

         final long nruns = getLongArg(args, "-nruns", 1); // 1000;
+
+        final Thread sailShutdown = new Thread() {
+            public void run() {
+                final Random r = new Random();
+                while(true) {
+                    try {
+                        Thread.sleep(r.nextInt(50000));
+                        if (sail.get().isOpen()) {
+                            log.warn("SHUTDOWN NOW");
+                            sail.get().shutDown();
+                        }
+                    } catch (InterruptedException e) {
+                        break;
+                    } catch (SailException e) {
+                        log.warn(e);
+                    }
+                }
+            }
+        };
+
+        sailShutdown.start();

         for (int i = 0; i < nruns; i++) {
-            domultiple_csem_transaction2(sail, (int) nreaderThreads,
-                    (int) nwriters, (int) nreaders, false /*no tear down*/);
-
-            // reopen for second run - should be open if !teardown
-            if (sail.isOpen())
-                sail.shutDown();
+            try {
+                domultiple_csem_transaction2(sail.get(), (int) nreaderThreads,
+                        (int) nwriters, (int) nreaders, false /*no tear down*/);
+
+                // reopen for second run - should be open if !teardown
+                if (sail.get().isOpen())
+                    sail.get().shutDown();
+            } catch (Throwable e) {
+                log.warn("OOPS", e); // There will be a number of expected causes, eg IllegalStateException - service not available
+            }

-            sail = new BigdataSail(props);
+            sail.set(new BigdataSail(props));

             System.out.println("Completed run: " + i);
         }
-
+
+        sailShutdown.interrupt();
     }
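The change publishes the sail through an AtomicReference so both the asynchronous shutdown thread and the test loop always see the current instance. A reduced, self-contained sketch of this "random disruptor" pattern (the Resource type and timings are hypothetical placeholders):

{{{
import java.util.Random;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Reduced sketch of the r8477 pattern: a chaos thread randomly closes the
// shared resource while the main loop keeps replacing it, so workers must
// tolerate "service not available" style failures.
public class RandomShutdownSketch {

    interface Resource { boolean isOpen(); void shutDown(); }

    public static void run(final Supplier<Resource> factory, final int nruns)
            throws InterruptedException {

        final AtomicReference<Resource> ref =
                new AtomicReference<Resource>(factory.get());

        final Thread disruptor = new Thread(() -> {
            final Random r = new Random();
            try {
                while (true) {
                    Thread.sleep(r.nextInt(5000)); // random delay, then kill.
                    if (ref.get().isOpen())
                        ref.get().shutDown();
                }
            } catch (InterruptedException e) {
                // normal exit path for the disruptor.
            }
        });
        disruptor.start();

        for (int i = 0; i < nruns; i++) {
            try {
                // ... exercise ref.get() with readers/writers here ...
                if (ref.get().isOpen())
                    ref.get().shutDown();
            } catch (Throwable t) {
                // expected: the disruptor may have closed it mid-run.
            }
            ref.set(factory.get()); // reopen for the next run.
        }

        disruptor.interrupt();
    }
}
}}}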
From: <mar...@us...> - 2014-06-13 15:35:43

Revision: 8476  http://sourceforge.net/p/bigdata/code/8476
Author:   martyncutcher
Date:     2014-06-13 15:35:35 +0000 (Fri, 13 Jun 2014)

Log Message:
-----------
Revert last commit property changes since they were not needed.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/AbstractBigdataSailTestCase.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/AbstractBigdataSailTestCase.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/AbstractBigdataSailTestCase.java	2014-06-13 14:06:22 UTC (rev 8475)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/AbstractBigdataSailTestCase.java	2014-06-13 15:35:35 UTC (rev 8476)
@@ -125,11 +125,10 @@
      * returned by this method and the appropriate properties must be provided
      * either through the command line or in a properties file.
      * </p>
-     * @param storeFile
      *
      * @return A new properties object.
      */
-    public Properties getProperties(final String storeFile) {
+    public Properties getProperties() {

         if( m_properties == null ) {

@@ -150,16 +149,12 @@
             // transient means that there is nothing to delete after the test.
 //            m_properties.setProperty(Options.BUFFER_MODE,BufferMode.Transient.toString());
             m_properties.setProperty(Options.BUFFER_MODE,BufferMode.Disk.toString());
-
-            if (storeFile != null) { // overrides if one set by super class
-                m_properties.setProperty(Options.FILE,storeFile);
-            }

             /*
              * If an explicit filename is not specified...
              */
             if(m_properties.get(Options.FILE)==null) {
-
+
                 /*
                  * Use a temporary file for the test. Such files are always deleted when
                  * the journal is closed or the VM exits.

@@ -177,10 +172,6 @@

     }

-    public Properties getProperties() {
-        return getProperties(null);
-    }
-
     /**
      * This method is invoked from methods that MUST be proxied to this class.
      * {@link GenericProxyTestCase} extends this class, as do the concrete
From: <mar...@us...> - 2014-06-13 14:06:26

Revision: 8475  http://sourceforge.net/p/bigdata/code/8475
Author:   martyncutcher
Date:     2014-06-13 14:06:22 +0000 (Fri, 13 Jun 2014)

Log Message:
-----------
Added main() to com.bigdata.rdf.sail.TestMROWTransactions to simplify long running tests.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/AbstractBigdataSailTestCase.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/AbstractBigdataSailTestCase.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/AbstractBigdataSailTestCase.java	2014-06-12 10:43:06 UTC (rev 8474)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/AbstractBigdataSailTestCase.java	2014-06-13 14:06:22 UTC (rev 8475)
@@ -125,10 +125,11 @@
      * returned by this method and the appropriate properties must be provided
      * either through the command line or in a properties file.
      * </p>
+     * @param storeFile
      *
      * @return A new properties object.
      */
-    public Properties getProperties() {
+    public Properties getProperties(final String storeFile) {

         if( m_properties == null ) {

@@ -149,12 +150,16 @@
             // transient means that there is nothing to delete after the test.
 //            m_properties.setProperty(Options.BUFFER_MODE,BufferMode.Transient.toString());
             m_properties.setProperty(Options.BUFFER_MODE,BufferMode.Disk.toString());
+
+            if (storeFile != null) { // overrides if one set by super class
+                m_properties.setProperty(Options.FILE,storeFile);
+            }

             /*
              * If an explicit filename is not specified...
              */
             if(m_properties.get(Options.FILE)==null) {
-
+
                 /*
                  * Use a temporary file for the test. Such files are always deleted when
                  * the journal is closed or the VM exits.

@@ -172,6 +177,10 @@

     }

+    public Properties getProperties() {
+        return getProperties(null);
+    }
+
     /**
      * This method is invoked from methods that MUST be proxied to this class.
      * {@link GenericProxyTestCase} extends this class, as do the concrete

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java	2014-06-12 10:43:06 UTC (rev 8474)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java	2014-06-13 14:06:22 UTC (rev 8475)
@@ -28,6 +28,7 @@

 import info.aduna.iteration.CloseableIteration;

+import java.io.FileInputStream;
 import java.util.Properties;
 import java.util.Random;
 import java.util.concurrent.Callable;

@@ -113,7 +114,6 @@
     void domultiple_csem_transaction2(final int retentionMillis,
             final int nreaderThreads, final int nwriters, final int nreaders,
             final boolean isolatableIndices) throws Exception {
-
         if (log.isInfoEnabled()) {
             log.info("=================================================================================");
             log.info("retentionMillis=" + retentionMillis + ", nreaderThreads="

@@ -122,6 +122,15 @@
             log.info("=================================================================================");
         }

+        final BigdataSail sail = getSail(getProperties(retentionMillis,
+                isolatableIndices));
+
+        domultiple_csem_transaction2(sail, nreaderThreads, nwriters, nreaders, true);
+    }
+
+    static void domultiple_csem_transaction2( final BigdataSail sail,
+            final int nreaderThreads, final int nwriters, final int nreaders, final boolean teardown) throws Exception {
+
         /**
          * The most likely problem is related to the session protection in the
          * RWStore. In development we saw problems when concurrent transactions

@@ -151,8 +160,6 @@
         final AtomicReference<Throwable> failex = new AtomicReference<Throwable>(null);
         // Set [true] iff there are no failures by the time we cancel the running tasks.
         final AtomicBoolean success = new AtomicBoolean(false);
-        final BigdataSail sail = getSail(getProperties(retentionMillis,
-                isolatableIndices));
 //        log.warn("Journal: "+sail.getDatabase().getIndexManager()+", file="+((Journal)sail.getDatabase().getIndexManager()).getFile());
         try {

@@ -235,7 +242,6 @@
             final Throwable ex = failex.get();
             if (ex != null) {
                 fail("Test failed: firstCause=" + ex
-                        + ", retentionMillis=" + retentionMillis
                         + ", nreaderThreads=" + nreaderThreads
                         + ", nwriters=" + nwriters + ", nreaders=" + nreaders
                         + ", indexManager="

@@ -253,17 +259,19 @@
                 readers.shutdownNow();
             }
         } finally {
-            try {
-                sail.__tearDownUnitTest();
-            } catch (Throwable t) {
-                /*
-                 * FIXME The test helper tear down should not throw anything,
-                 * but it can do so if a tx has been asynchronously closed. This
-                 * has to do with the logic that openrdf uses to close open
-                 * transactions when the sail is shutdown by the caller.
-                 */
-                log.error("Problem with test shutdown: " + t, t);
-            }
+            if (teardown) {
+                try {
+                    sail.__tearDownUnitTest();
+                } catch (Throwable t) {
+                    /*
+                     * FIXME The test helper tear down should not throw anything,
+                     * but it can do so if a tx has been asynchronously closed. This
+                     * has to do with the logic that openrdf uses to close open
+                     * transactions when the sail is shutdown by the caller.
+                     */
+                    log.error("Problem with test shutdown: " + t, t);
+                }
+            }

         }

@@ -326,13 +334,13 @@
                         log.info("Commit #" + commits);

                 } catch (Throwable ise) {
-                    log.warn(ise, ise);
                     if (InnerCause.isInnerCause(ise, InterruptedException.class)) {
                         // ignore
                     } else if (InnerCause.isInnerCause(ise, MyBTreeException.class)
                             && aborts.get() < maxAborts) {
                         // ignore
                     } else {
+                        log.warn(ise, ise);
                         // Set the first cause (but not for the forced abort).
                         if (failex
                                 .compareAndSet(null/* expected */, ise/* newValue */)) {

@@ -532,7 +540,7 @@
 //
 //    }

-    protected URI uri(String s) {
+    protected static URI uri(String s) {

         return new URIImpl(BD.NAMESPACE + s);

     }

@@ -568,7 +576,33 @@
             final boolean isolatableIndices) {

         final Properties props = getProperties();
+
+        props.setProperty(BigdataSail.Options.TRUTH_MAINTENANCE, "false");
+        props.setProperty(BigdataSail.Options.AXIOMS_CLASS, NoAxioms.class.getName());
+        props.setProperty(BigdataSail.Options.VOCABULARY_CLASS, NoVocabulary.class.getName());
+        props.setProperty(BigdataSail.Options.JUSTIFY, "false");
+        props.setProperty(BigdataSail.Options.TEXT_INDEX, "false");
+        // props.setProperty(Options.WRITE_CACHE_BUFFER_COUNT, "3");
+        // ensure using RWStore
+        props.setProperty(Options.BUFFER_MODE, BufferMode.DiskRW.toString());
+        // props.setProperty(RWStore.Options.MAINTAIN_BLACKLIST, "false");
+        // props.setProperty(RWStore.Options.OVERWRITE_DELETE, "true");
+        // props.setProperty(Options.CREATE_TEMP_FILE, "false");
+        // props.setProperty(Options.FILE, "/Volumes/SSDData/csem.jnl");
+
+        // props.setProperty(IndexMetadata.Options.WRITE_RETENTION_QUEUE_CAPACITY, "20");
+        // props.setProperty(IndexMetadata.Options.WRITE_RETENTION_QUEUE_SCAN, "0");
+        props.setProperty(IndexMetadata.Options.WRITE_RETENTION_QUEUE_CAPACITY, "500");
+        props.setProperty(IndexMetadata.Options.WRITE_RETENTION_QUEUE_SCAN, "10");
+
+        setProperties(props, retention, isolatableIndices);
+
+        return props;
+    }
+
+    static void setProperties(final Properties props, final int retention,
+            final boolean isolatableIndices) {

         props.setProperty(BigdataSail.Options.ISOLATABLE_INDICES,
                 Boolean.toString(isolatableIndices));

@@ -599,8 +633,9 @@
                     + ".com.bigdata.btree.BTree.className",
                     MyBTree.class.getName());
         }
-        return props;
     }
+
+

     /**
      * Helper class for force abort of a B+Tree write.

@@ -649,4 +684,73 @@

     }

+    /** utilities for main subclass support **/
+    static long getLongArg(final String[] args, final String arg, final long def) {
+        final String sv = getArg(args, arg, null);
+
+        return sv == null ? def : Long.parseLong(sv);
+    }
+
+    static String getArg(final String[] args, final String arg, final String def) {
+        for (int p = 0; p < args.length; p+=2) {
+            if (arg.equals(args[p]))
+                return args[p+1];
+        }
+
+        return def;
+    }
+
+    /**
+     * Command line variant to allow stress testing without JUnit support
+     *
+     * Invokes the same domultiple_csem_transaction2 method.
+     *
+     * A property file is required. Note that if a file is specified then
+     * it will be re-opened and not removed for each run as specified by
+     * nruns.
+     *
+     * Optional arguments
+     * -nruns - number of runs through the test
+     * -nreaderthreads - reader threads
+     * -nwriters - writer tasks
+     * -nreaders - reader tasks
+     */
+    public static void main(String[] args) throws Exception {
+
+        final String propertyFile = getArg(args, "-propertyfile", null);
+        if (propertyFile == null) {
+            System.out.println("-propertyfile <properties> must be specified");
+            return;
+        }
+
+        final Properties props = new Properties();
+
+        props.load(new FileInputStream(propertyFile));
+
+        BigdataSail sail = new BigdataSail(props);
+
+        final int nreaderThreads = (int) getLongArg(args, "-nreaderthreads", 20); // 20
+
+        final long nwriters = getLongArg(args, "-nwriters", 100); // 1000000;
+
+        final long nreaders = getLongArg(args, "-nreaders", 400); // 100000;
+
+        final long nruns = getLongArg(args, "-nruns", 1); // 1000;
+
+        for (int i = 0; i < nruns; i++) {
+            domultiple_csem_transaction2(sail, (int) nreaderThreads,
+                    (int) nwriters, (int) nreaders, false /*no tear down*/);
+
+            // reopen for second run - should be open if !teardown
+            if (sail.isOpen())
+                sail.shutDown();
+
+            sail = new BigdataSail(props);
+
+            System.out.println("Completed run: " + i);
+        }
+
+    }
+
 }
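The ad hoc flag parser added here treats the argument vector as "-name value" pairs with a default when the flag is absent, so an invocation might look like "java ... TestMROWTransactions -propertyfile RWStore.properties -nruns 10" (the property file name is a placeholder). A self-contained check of the parser's behavior, with hypothetical sample values:

{{{
// Demonstrates the getArg/getLongArg helpers from r8475 on hypothetical
// sample input; the helpers themselves are copied from the diff above.
public class ArgSketch {

    static String getArg(final String[] args, final String arg, final String def) {
        for (int p = 0; p < args.length; p += 2) {
            if (arg.equals(args[p]))
                return args[p + 1];
        }
        return def;
    }

    static long getLongArg(final String[] args, final String arg, final long def) {
        final String sv = getArg(args, arg, null);
        return sv == null ? def : Long.parseLong(sv);
    }

    public static void main(final String[] a) {
        final String[] args = { "-propertyfile", "RWStore.properties", "-nruns", "10" };
        System.out.println(getArg(args, "-propertyfile", null)); // RWStore.properties
        System.out.println(getLongArg(args, "-nruns", 1));       // 10
        System.out.println(getLongArg(args, "-nreaders", 400));  // 400 (default)
    }
}
}}}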
From: <tho...@us...> - 2014-06-12 10:43:14

Revision: 8474  http://sourceforge.net/p/bigdata/code/8474
Author:   thompsonbry
Date:     2014-06-12 10:43:06 +0000 (Thu, 12 Jun 2014)

Log Message:
-----------
Bug fix to BigdataTriplePatternMaterializer. It was using a non-blocking add() rather than a blocking put() for the output queue. This could lead to a queue overrun.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/store/BigdataTriplePatternMaterializer.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/store/BigdataTriplePatternMaterializer.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/store/BigdataTriplePatternMaterializer.java	2014-06-12 08:50:08 UTC (rev 8473)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/store/BigdataTriplePatternMaterializer.java	2014-06-12 10:43:06 UTC (rev 8474)
@@ -39,7 +39,6 @@
 import org.apache.system.SystemUtil;

 import com.bigdata.rdf.spo.ISPO;
-import com.bigdata.rdf.spo.SPOAccessPath;
 import com.bigdata.relation.accesspath.BlockingBuffer;
 import com.bigdata.relation.accesspath.IAccessPath;
 import com.bigdata.striterator.AbstractChunkedResolverator;

@@ -274,7 +273,7 @@
 //                    throw new AssertionError(Arrays.toString(a));
 //                }
 //            }
-            out.add(a);
+            out.put(a);
             n += a.length;
         }
         return n;
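The add()/put() distinction here mirrors the java.util.concurrent.BlockingQueue contract: add() fails immediately when a bounded queue is at capacity, while put() blocks until space frees up, giving the producer backpressure. A small demonstration with a standard ArrayBlockingQueue (bigdata's BlockingBuffer is a distinct class with its own API; this sketch only illustrates the analogous contract that motivated the fix):

{{{
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrates the add() vs put() contract on a bounded queue.
public class AddVsPut {

    public static void main(final String[] args) throws InterruptedException {

        final BlockingQueue<int[]> out = new ArrayBlockingQueue<int[]>(1);

        out.add(new int[] { 1 }); // ok: queue has room.

        try {
            out.add(new int[] { 2 }); // non-blocking: throws when full.
        } catch (IllegalStateException expected) {
            System.out.println("add() on a full queue: " + expected);
        }

        // drain in another thread so a blocking put() can complete.
        new Thread(() -> {
            try {
                Thread.sleep(100);
                out.take();
            } catch (InterruptedException ignored) {
            }
        }).start();

        out.put(new int[] { 2 }); // blocks until the consumer makes room.
        System.out.println("put() completed once space was available.");
    }
}
}}}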
From: <mar...@us...> - 2014-06-12 08:50:17

Revision: 8473  http://sourceforge.net/p/bigdata/code/8473
Author:   martyncutcher
Date:     2014-06-12 08:50:08 +0000 (Thu, 12 Jun 2014)

Log Message:
-----------
Add an MROW main test to enable parameterised stress testing of multiple reader/one writer scenario. See TestRWJournal$TestMROW.main() for documentation.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/journal/AbstractMROWTestCase.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/journal/AbstractMROWTestCase.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/journal/AbstractMROWTestCase.java	2014-06-11 16:35:53 UTC (rev 8472)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/journal/AbstractMROWTestCase.java	2014-06-12 08:50:08 UTC (rev 8473)
@@ -119,7 +119,7 @@

     }

-    /**
+    /**
      * A correctness/stress/performance test with a pool of concurrent clients
      * designed to verify MROW operations. If the store passes these tests, then
      * {@link StressTestConcurrentTx} is designed to reveal concurrency problems

@@ -153,10 +153,11 @@
      *
      * @param nerr
      *            Used to report the #of errors back as a side-effect.
+     * @param readAll
      */
     static public void doMROWTest(IRawStore store,
-            int nwrites, long writeDelayMillis, long timeout, int nclients,
-            int ntrials, int reclen, int nreads, final AtomicInteger nerr)
+            long nwrites, long writeDelayMillis, long timeout, int nclients,
+            long ntrials, int reclen, long nreads, final AtomicInteger nerr, boolean readAll)
             throws Exception {

@@ -172,7 +173,7 @@
          * Pre-write 25% of the records so that clients have something to
          * choose from when they start running.
          */
-        final int npreWrites = nwrites/4;
+        final long npreWrites = nwrites/4;

         for( int i=0; i<npreWrites; i++) {

             // write a single record.

@@ -182,7 +183,7 @@
         System.err.println("Pre-wrote "+npreWrites+" records");

         // start the writer.
-        writerExecutor.submit(writerTask);
+        final Future writer = writerExecutor.submit(writerTask);

         // Concurrent readers.
         ExecutorService readerExecutor = Executors.newFixedThreadPool(

@@ -209,6 +210,23 @@

         final long elapsed = System.currentTimeMillis() - begin;

+        // wait for writer
+        writer.get();
+
+        // wait for all readers to finish
+        if (readAll) {
+            for (Future<Long> rdr : results) {
+                try {
+                    rdr.get();
+                } catch (Exception e) {
+                    // ignore
+                }
+            }
+        }
+
+        final long elapsed2 = System.currentTimeMillis() - begin;
+        System.err.println("All " + ntrials + " readers processed " + nreads + " in " + elapsed2 + "ms");
+
         // force the writer to terminate.
         writerExecutor.shutdownNow();

@@ -241,7 +259,7 @@
         int nok = 0; // #of trials that successfully committed.
         int ncancelled = 0; // #of trials that did not complete in time.
 //        int nerr = 0;
-        Throwable[] errors = new Throwable[ntrials];
+        Throwable[] errors = new Throwable[(int) ntrials];

         while(itr.hasNext()) {

@@ -279,7 +297,18 @@

     }

+    /**
+     * Compatibility method defaulting readAll to false
+     */
+    static void doMROWTest(IRawStore store, int nwrites, int writeDelayMillis,
+            long timeout, int nclients, int ntrials, int reclen, int nreads,
+            AtomicInteger nerr) throws Exception {

+        doMROWTest(store, nwrites, writeDelayMillis, timeout, nclients,
+                ntrials, reclen, nreads, nerr, false /*readAll*/);
+    }
+
     /**
      * A ground truth record as generated by a {@link WriterTask}.
      *
      * @author <a href="mailto:tho...@us...">Bryan Thompson</a>

@@ -307,7 +336,7 @@
         private final IRawStore store;

         private final int reclen;
-        private final int nwrites;
+        private final long nwrites;
         private final long writeDelayMillis;

         /**

@@ -341,17 +370,17 @@

         }

-        public WriterTask(IRawStore store, int reclen, int nwrites, long writeDelayMillis) {
+        public WriterTask(IRawStore store, int reclen, long nwrites2, long writeDelayMillis) {

             this.store = store;

             this.reclen = reclen;

-            this.nwrites = nwrites;
+            this.nwrites = nwrites2;

             this.writeDelayMillis = writeDelayMillis;

-            this.records = new Record[nwrites];
+            this.records = new Record[(int) nwrites2];

         }

@@ -374,7 +403,8 @@
          */
         public Integer call() throws Exception {

-            for (int i = nrecs; i < nwrites; i++) {
+            final long modCommit = 1 + (nwrites / 20);
+            for (long i = nrecs; i < nwrites; i++) {

                 write();

@@ -397,9 +427,13 @@
 //                    elapsed = System.nanoTime() - begin;
 //
 //                }
+
+                if (i % modCommit == 0 && store instanceof Journal) {
+                    ((Journal) store).commit();
+                }

             }
-
+
             System.err.println("Writer done: nwritten="+nrecs);

             return nrecs;

@@ -430,7 +464,7 @@
         private final IRawStore store;

         private final WriterTask writer;
-        private final int nops;
+        private final long nops;

         final Random r = new Random();

@@ -438,15 +472,15 @@
          *
          * @param store
          * @param writer
-         * @param nops #of reads to perform.
+         * @param nreads #of reads to perform.
          */
-        public ReaderTask(final IRawStore store, final WriterTask writer, final int nops) {
+        public ReaderTask(final IRawStore store, final WriterTask writer, final long nreads) {

             this.store = store;

             this.writer = writer;

-            this.nops = nops;
+            this.nops = nreads;

         }

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java	2014-06-11 16:35:53 UTC (rev 8472)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java	2014-06-12 08:50:08 UTC (rev 8473)
@@ -42,6 +42,7 @@
 import java.util.UUID;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Future;
+import java.util.concurrent.atomic.AtomicInteger;
 import java.util.zip.GZIPInputStream;
 import java.util.zip.GZIPOutputStream;

@@ -2971,11 +2972,15 @@
             super(name);
         }

-        protected IRawStore getStore() {
+        protected IRawStore getStore(String storeFile) {

             final Properties properties = getProperties();

-            properties.setProperty(Options.CREATE_TEMP_FILE, "true");
+            if (storeFile == null) {
+                properties.setProperty(Options.CREATE_TEMP_FILE, "true");
+            } else {
+                properties.setProperty(Options.FILE, storeFile);
+            }

             properties.setProperty(Options.DELETE_ON_EXIT, "true");

@@ -2987,6 +2992,86 @@

         }

+        protected IRawStore getStore() {
+
+            return getStore(null); // no file provided by default
+
+        }
+
+        static long getLongArg(final String[] args, final String arg, final long def) {
+            final String sv = getArg(args, arg, null);
+
+            return sv == null ? def : Long.parseLong(sv);
+        }
+
+        static String getArg(final String[] args, final String arg, final String def) {
+            for (int p = 0; p < args.length; p+=2) {
+                if (arg.equals(args[p]))
+                    return args[p+1];
+            }
+
+            return def;
+        }
+
+        /**
+         * Stress variant to support multiple parameterised runs
+         *
+         * Arguments
+         *
+         * -file - optional explicit file path
+         * -clients - reader threads
+         * -nwrites - number of records written
+         * -reclen - size of record written
+         * -ntrials - number of readers
+         * -nreads - number of reads made by each reader
+         * -nruns - number of times to repeat process with reopen each time
+         */
+        public static void main(final String[] args) throws Exception {
+            final TestMROW test = new TestMROW("main");
+
+            final String storeFile = getArg(args, "-file", null);
+
+            Journal store = (Journal) test.getStore(storeFile);
+            try {
+
+                final long timeout = 20;
+
+                final int nclients = (int) getLongArg(args, "-clients", 20); // 20
+
+                final long nwrites = getLongArg(args, "-nwrites", 100000); //1000000;
+
+                final int writeDelayMillis = 1;
+
+                final long ntrials = getLongArg(args, "-ntrials", 100000); // 100000;
+
+                final int reclen = (int) getLongArg(args, "-reclen", 128); // 128;
+
+                final long nreads = getLongArg(args, "-nreads", 1000); // 1000;
+
+                final long nruns = getLongArg(args, "-nruns", 1); // 1000;
+
+                final AtomicInteger nerr = new AtomicInteger();
+
+                for (int i = 0; i < nruns; i++) {
+                    doMROWTest(store, nwrites, writeDelayMillis, timeout, nclients,
+                            ntrials, reclen, nreads, nerr, true /*readAll*/);
+
+                    store.commit();
+
+                    store = (new TestRWJournal()).reopenStore(store);
+
+                    System.out.println("Completed run: " + i);
+                }

+            } finally {
+
+                if (storeFile == null)
+                    store.destroy();
+
+            }
+
+        }
+
     }

     /**
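The new readAll option changes the shutdown protocol: join the writer future first, then join every reader future before measuring total elapsed time. A compact, self-contained sketch of that join pattern (the task bodies are hypothetical stand-ins for the real writer/reader tasks):

{{{
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Sketch of the r8473 join protocol: wait on the writer first, then, when
// readAll is set, drain every reader future so the elapsed time covers all
// reads rather than just the timeout window.
public class JoinProtocolSketch {

    public static void main(final String[] args) throws Exception {

        final ExecutorService pool = Executors.newFixedThreadPool(4);

        final long begin = System.currentTimeMillis();

        final Future<?> writer = pool.submit(() -> { /* write records */ });

        final List<Future<Long>> readers = new ArrayList<Future<Long>>();
        for (int i = 0; i < 8; i++) {
            readers.add(pool.submit(() -> 0L /* read and verify records */));
        }

        writer.get(); // wait for the writer.

        final boolean readAll = true; // mirrors the new readAll argument.
        if (readAll) {
            for (Future<Long> rdr : readers) {
                try {
                    rdr.get(); // wait for each reader; errors counted elsewhere.
                } catch (Exception ignored) {
                }
            }
        }

        System.err.println("elapsed=" + (System.currentTimeMillis() - begin) + "ms");
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
}}}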
From: <tho...@us...> - 2014-06-11 16:35:58

Revision: 8472  http://sourceforge.net/p/bigdata/code/8472
Author:   thompsonbry
Date:     2014-06-11 16:35:53 +0000 (Wed, 11 Jun 2014)

Log Message:
-----------
1. Additional bug fixes for the REST API connection try/finally and launder throwable patterns.

2. Merged in a refactoring to support group commit at the NSS based on hierarchical locking (using the Name2Addr prefix scan) and the ConcurrencyManager + AbstractTask mechanism. This refactoring is not complete, but much of the NSS test suite passes when group commit is enabled.

See #566 (NSS group commit)
See #966 (Failed to get namespace list under concurrent update)

Patched files:

- LocalTripleStore:
  - getIndexManager() returns IJournal (was Journal)
- QueryServlet - innocuous changes and FIXME comment block for SPARQL UPDATE to support group commit.
- RestApiTask - new
- RestApiTaskForIndexManager - new
- RestApiTaskForJournal - new
- UpdateServlet - adds fix to connection try/finally and launder throwable pattern.
- AbstractTestNanoSparqlServerClient : conditional tripleStore.destroy() with FIXME for group commit.
- AbstractTask - includes comments about how to create a hierarchical locking system using N2A scans.

Unpatched files:

- BigdataRDFServlet - no interesting changes.
- BigdataServlet - pulled in submitApiTask(), getKBLocks(), and OLD_EXECUTION_MODEL = true.
- DeleteServlet - reconciled. captures REST API task pattern. adds fixes to connection try/finally that were somehow overlooked.
- InsertServlet - reconciled. captures REST API task pattern and fixes to connection try/finally and launderThrowable patterns.
- MultiTenancyServlet - change is incorrect (deals with ITx.READ_COMMITTED). Need modify this class to use the new pattern.

Other files:

- BigdataStatics - added a global boolean that will allow us to enable the NSS group commit feature from a system property (com.bigdata.nssGroupCommit).
- BigdataRDFContext - modified call() to use the try/finally pattern for SPARQL QUERY and UPDATE.
- BlueprintsServlet - added the try/finally/launder pattern.
- WorkbenchServlet - modified to no longer access the AbstractTripleStore and to use a ValueFactoryImpl instead.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/BigdataStatics.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/store/LocalTripleStore.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataServlet.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BlueprintsServlet.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/StatusServlet.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/WorkbenchServlet.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/AbstractTestNanoSparqlClient.java

Added Paths:
-----------
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/RestApiTask.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/RestApiTaskForIndexManager.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/RestApiTaskForJournal.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/BigdataStatics.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/BigdataStatics.java	2014-06-11 15:52:14 UTC (rev 8471)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/BigdataStatics.java	2014-06-11 16:35:53 UTC (rev 8472)
@@ -27,12 +27,14 @@

 package com.bigdata;

+import com.bigdata.journal.IIndexManager;
+import com.bigdata.relation.AbstractRelation;
+
 /**
  * A class for those few statics that it makes sense to reference from other
  * places.
  *
  * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- * @version $Id$
 */
 public class BigdataStatics {

@@ -109,4 +111,21 @@

     }

+    /**
+     * FIXME GROUP COMMIT : Disable/Enable group commit on the Journal from the
+     * NSS API. Some global flag should control this and also disable the
+     * journal's semaphore and should disable the wrapping of BTree as an
+     * UnisolatedReadWriteIndex (
+     * {@link AbstractRelation#getIndex(IIndexManager, String, long)}, and
+     * should disable the calls to commit() or abort() from the LocalTripleStore
+     * to the Journal.
+     *
+     * @see <a href="http://sourceforge.net/apps/trac/bigdata/ticket/753" > HA
+     *      doLocalAbort() should interrupt NSS requests and AbstractTasks </a>
+     * @see <a href="- http://sourceforge.net/apps/trac/bigdata/ticket/566" >
+     *      Concurrent unisolated operations against multiple KBs </a>
+     */
+    public static final boolean NSS_GROUP_COMMIT = Boolean
+            .getBoolean("com.bigdata.nssGroupCommit");
+
 }

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java	2014-06-11 15:52:14 UTC (rev 8471)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java	2014-06-11 16:35:53 UTC (rev 8472)
@@ -1249,7 +1249,7 @@
      * Flag is cleared if the task is aborted. This is used to refuse
      * access to resources for tasks that ignore interrupts.
      */
-    boolean aborted = false;
+    volatile boolean aborted = false;

    /**
     * The {@link AbstractTask} increments various counters of interest to the

@@ -1557,7 +1557,7 @@
    /**
     * Return <code>true</code> iff the task declared this as a resource.
     *
-     * @param name
+     * @param theRequestedResource
     *            The name of a resource.
     *
     * @return <code>true</code> iff <i>name</i> is a declared resource.
     *
     * @throws IllegalArgumentException
     *             if <i>name</i> is <code>null</code>.
     */
-    public boolean isResource(String name) {
-
-        if (name == null)
+    public boolean isResource(final String theRequestedResource) {
+
+        if (theRequestedResource == null)
             throw new IllegalArgumentException();
-
-        for(String s : resource) {
-
-            if(s.equals(name)) return true;
-
+
+        for (String theDeclaredResource : resource) {
+
+            if (theDeclaredResource.equals(theRequestedResource)) {
+                /*
+                 * Exact match. This resource was declared.
+                 */
+                return true;
+            }
+
+            /**
+             * FIXME GROUP_COMMIT: Supporting this requires us to support
+             * efficient scans of the indices in Name2Addr having the prefix
+             * values declared by [resources] since getIndex(name) will fail if
+             * the Name2Addr entry has not been buffered within the [n2a] cache.
+             *
+             * @see <a
+             *      href="http://sourceforge.net/apps/trac/bigdata/ticket/753" >
+             *      HA doLocalAbort() should interrupt NSS requests and
+             *      AbstractTasks </a>
+             * @see <a
+             *      href="- http://sourceforge.net/apps/trac/bigdata/ticket/566"
+             *      > Concurrent unisolated operations against multiple KBs </a>
+             */
+//            if (theRequestedResource.startsWith(theDeclaredResource)) {
+//
+//                // Possible prefix match.
+//
+//                if (theRequestedResource.charAt(theDeclaredResource.length()) == '.') {
+//
+//                    /*
+//                     * Prefix match.
+//                     *
+//                     * E.g., name:="kb.spo.osp" and the task declared the
+//                     * resource "kb". In this case, "kb" is a PREFIX of the
+//                     * declared resource and the next character is the separator
+//                     * character for the resource names (this last point is
+//                     * important to avoid unintended contention between
+//                     * namespaces such as "kb" and "kb1").
+// */ +// return true; +// +// } +// +// } + } - + return false; } @@ -2085,46 +2126,53 @@ } + @Override public IResourceManager getResourceManager() { return delegate.getResourceManager(); } + @Override public IJournal getJournal() { return delegate.getJournal(); } + @Override public String[] getResource() { return delegate.getResource(); } + @Override public String getOnlyResource() { return delegate.getOnlyResource(); } + @Override public IIndex getIndex(String name) { return delegate.getIndex(name); } + @Override public TaskCounters getTaskCounters() { return delegate.getTaskCounters(); } + @Override public String toString() { - return getClass().getName()+"("+delegate.toString()+")"; - + return getClass().getName() + "(" + delegate.toString() + ")"; + } } @@ -2577,8 +2625,13 @@ } // read committed view IFF it exists otherwise [null] - return new GlobalRowStoreHelper(this).get(ITx.READ_COMMITTED); + // TODO Review. Make sure we have tx protection to avoid recycling of the view. + final long lastCommitTime = getLastCommitTime(); + return new GlobalRowStoreHelper(this).get(lastCommitTime); + + //return new GlobalRowStoreHelper(this).get(ITx.READ_COMMITTED); + } @Override @@ -2696,12 +2749,32 @@ * Disallowed methods (commit protocol and shutdown protocol). */ + /** + * {@inheritDoc} + * <p> + * Marks the task as aborted. The task will not commit. However, the + * task will continue to execute until control returns from its + * {@link AbstractTask#doTask()} method. + */ @Override public void abort() { - throw new UnsupportedOperationException(); + aborted = true; } + /** + * {@inheritDoc} + * <p> + * Overridden as NOP. Tasks do not directly invoke commit() on the + * Journal. + */ @Override + public long commit() { + if (aborted) + throw new IllegalStateException("aborted"); + return 0; + } + + @Override public void close() { throw new UnsupportedOperationException(); } @@ -2717,11 +2790,6 @@ } @Override - public long commit() { - throw new UnsupportedOperationException(); - } - - @Override public void setCommitter(int index, ICommitter committer) { throw new UnsupportedOperationException(); } Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/store/LocalTripleStore.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/store/LocalTripleStore.java 2014-06-11 15:52:14 UTC (rev 8471) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/store/LocalTripleStore.java 2014-06-11 16:35:53 UTC (rev 8472) @@ -33,6 +33,7 @@ import com.bigdata.btree.BTree; import com.bigdata.journal.IIndexManager; +import com.bigdata.journal.IJournal; import com.bigdata.journal.ITx; import com.bigdata.journal.Journal; import com.bigdata.relation.locator.DefaultResourceLocator; @@ -55,13 +56,13 @@ final static private Logger log = Logger.getLogger(LocalTripleStore.class); - private final Journal store; + private final IJournal store; /** * The backing embedded database. 
*/ @Override - public Journal getIndexManager() { + public IJournal getIndexManager() { return store; @@ -160,7 +161,7 @@ super(indexManager, namespace, timestamp, properties); - store = (Journal) indexManager; + store = (IJournal) indexManager; } Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java 2014-06-11 15:52:14 UTC (rev 8471) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java 2014-06-11 16:35:53 UTC (rev 8472) @@ -1135,63 +1135,46 @@ abstract protected void doQuery(BigdataSailRepositoryConnection cxn, OutputStream os) throws Exception; + @Override final public Void call() throws Exception { BigdataSailRepositoryConnection cxn = null; + boolean success = false; try { + // Note: Will be UPDATE connection if UPDATE request!!! cxn = getQueryConnection(namespace, timestamp); if(log.isTraceEnabled()) log.trace("Query running..."); beginNanos = System.nanoTime(); -// try { - if (explain && !update) { - /* - * The data goes to a bit bucket and we send an - * "explanation" of the query evaluation back to the caller. - * - * Note: The trick is how to get hold of the IRunningQuery - * object. It is created deep within the Sail when we - * finally submit a query plan to the query engine. We have - * the queryId (on queryId2), so we can look up the - * IRunningQuery in [m_queries] while it is running, but - * once it is terminated the IRunningQuery will have been - * cleared from the internal map maintained by the - * QueryEngine, at which point we can not longer find it. - * - * Note: We can't do this for UPDATE since it would have - * a side-effect anyway. The way to "EXPLAIN" an UPDATE - * is to break it down into the component QUERY bits and - * execute those. - */ - doQuery(cxn, new NullOutputStream()); - } else { - doQuery(cxn, os); - os.flush(); - os.close(); - } - if(log.isTraceEnabled()) - log.trace("Query done."); -// } catch(Throwable t) { -// /* -// * Log the query and the exception together. -// */ -// log.error(t.getLocalizedMessage() + ":\n" + queryStr, t); -// } - return null; - } catch (Throwable t) { - log.error("Will abort: " + t, t); - if (cxn != null && !cxn.isReadOnly()) { + if (explain && !update) { /* - * Force rollback of the connection. + * The data goes to a bit bucket and we send an + * "explanation" of the query evaluation back to the caller. * - * Note: It is possible that the commit has already been - * processed, in which case this rollback() will be a NOP. - * This can happen when there is an IO error when - * communicating with the client, but the database has - * already gone through a commit. + * Note: The trick is how to get hold of the IRunningQuery + * object. It is created deep within the Sail when we + * finally submit a query plan to the query engine. We have + * the queryId (on queryId2), so we can look up the + * IRunningQuery in [m_queries] while it is running, but + * once it is terminated the IRunningQuery will have been + * cleared from the internal map maintained by the + * QueryEngine, at which point we can not longer find it. + * + * Note: We can't do this for UPDATE since it would have a + * side-effect anyway. The way to "EXPLAIN" an UPDATE is to + * break it down into the component QUERY bits and execute + * those. 
*/ - cxn.rollback(); + doQuery(cxn, new NullOutputStream()); + success = true; + } else { + doQuery(cxn, os); + success = true; + os.flush(); + os.close(); } - throw new Exception(t); + if (log.isTraceEnabled()) + log.trace("Query done."); + return null; } finally { endNanos = System.nanoTime(); m_queries.remove(queryId); @@ -1204,11 +1187,26 @@ // } // } if (cxn != null) { + if (!success && !cxn.isReadOnly()) { + /* + * Force rollback of the connection. + * + * Note: It is possible that the commit has already been + * processed, in which case this rollback() will be a + * NOP. This can happen when there is an IO error when + * communicating with the client, but the database has + * already gone through a commit. + */ + try { + // Force rollback of the connection. + cxn.rollback(); + } catch (Throwable t) { + log.error(t, t); + } + } try { // Force close of the connection. cxn.close(); - if(log.isTraceEnabled()) - log.trace("Connection closed."); } catch (Throwable t) { log.error(t, t); } @@ -1432,6 +1430,7 @@ * <p> * This executes the SPARQL UPDATE and formats the HTTP response. */ + @Override protected void doQuery(final BigdataSailRepositoryConnection cxn, final OutputStream os) throws Exception { @@ -1439,24 +1438,31 @@ * Setup a change listener. It will notice the #of mutations. */ final CAT mutationCount = new CAT(); + cxn.addChangeLog(new IChangeLog(){ + @Override public void changeEvent(final IChangeRecord record) { mutationCount.increment(); } + @Override public void transactionBegin() { } + @Override public void transactionPrepare() { } + @Override public void transactionCommited(long commitTime) { } + @Override public void transactionAborted() { - }}); - + } + }); + // Prepare the UPDATE request. final BigdataSailUpdate update = setupUpdate(cxn); @@ -2106,10 +2112,11 @@ } /** - * Return a connection transaction. When the timestamp is associated with a - * historical commit point, this will be a read-only connection. When it is - * associated with the {@link ITx#UNISOLATED} view or a read-write - * transaction, this will be a mutable connection. + * Return a connection transaction, which may be read-only or support + * update. When the timestamp is associated with a historical commit point, + * this will be a read-only connection. When it is associated with the + * {@link ITx#UNISOLATED} view or a read-write transaction, this will be a + * mutable connection. * * @param namespace * The namespace. 
Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataServlet.java 2014-06-11 15:52:14 UTC (rev 8471) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataServlet.java 2014-06-11 16:35:53 UTC (rev 8472) @@ -29,8 +29,11 @@ import java.io.InputStreamReader; import java.io.OutputStream; import java.io.Writer; +import java.util.HashSet; import java.util.LinkedList; import java.util.List; +import java.util.Set; +import java.util.concurrent.Future; import javax.servlet.ServletContext; import javax.servlet.http.HttpServlet; @@ -39,12 +42,18 @@ import org.apache.log4j.Logger; +import com.bigdata.BigdataStatics; import com.bigdata.ha.HAStatusEnum; import com.bigdata.journal.AbstractJournal; +import com.bigdata.journal.IConcurrencyManager; import com.bigdata.journal.IIndexManager; +import com.bigdata.journal.Journal; +import com.bigdata.journal.TimestampUtility; import com.bigdata.quorum.AbstractQuorum; import com.bigdata.rdf.sail.webapp.client.IMimeTypes; import com.bigdata.rdf.sail.webapp.lbs.IHALoadBalancerPolicy; +import com.bigdata.rdf.store.AbstractTripleStore; +import com.bigdata.service.IBigdataFederation; /** * Useful glue for implementing service actions, but does not directly implement @@ -190,6 +199,149 @@ } + /** + * Submit a task and return a {@link Future} for that task. The task will be + * run on the appropriate executor service depending on the nature of the + * backing database and the view required by the task. + * + * @param task + * The task. + * + * @return The {@link Future} for that task. + * + * @throws DatasetNotFoundException + * + * @see <a href="http://sourceforge.net/apps/trac/bigdata/ticket/753" > HA + * doLocalAbort() should interrupt NSS requests and AbstractTasks </a> + * @see <a href="- http://sourceforge.net/apps/trac/bigdata/ticket/566" > + * Concurrent unisolated operations against multiple KBs </a> + */ + @SuppressWarnings({ "unchecked", "rawtypes" }) + protected <T> Future<T> submitApiTask(final RestApiTask<T> task) + throws DatasetNotFoundException { + + final String namespace = task.getNamespace(); + + final long timestamp = task.getTimestamp(); + + final IIndexManager indexManager = getIndexManager(); + + if (!BigdataStatics.NSS_GROUP_COMMIT || indexManager instanceof IBigdataFederation + || TimestampUtility.isReadOnly(timestamp) + ) { + + /* + * Run on a normal executor service. + * + * Note: For scale-out, the operation will be applied using + * client-side global views of the indices. + * + * Note: This can be used for operations on read-only views (even on + * a Journal). This is helpful since we can avoid some overhead + * associated the AbstractTask lock declarations. + */ + + return indexManager.getExecutorService().submit( + new RestApiTaskForIndexManager(indexManager, task)); + + } else { + + /** + * Run on the ConcurrencyManager of the Journal. + * + * Mutation operations will be scheduled based on the pre-declared + * locks and will have exclusive access to the resources guarded by + * those locks when they run. + * + * FIXME GROUP COMMIT: The {@link AbstractTask} was written to + * require the exact set of resource lock declarations. However, for + * the REST API, we want to operate on all indices associated with a + * KB instance. 
This requires either: + * <p> + * (a) pre-resolving the names of those indices and passing them all + * into the AbstractTask; or + * <P> + * (b) allowing the caller to only declare the namespace and then to + * be granted access to all indices whose names are in that + * namespace. + * + * (b) is now possible with the fix to the Name2Addr prefix scan. + */ + + // Obtain the necessary locks for R/w access to KB indices. + final String[] locks = getLocksForKB((Journal) indexManager, + namespace); + + final IConcurrencyManager cc = ((Journal) indexManager) + .getConcurrencyManager(); + + // Submit task to ConcurrencyManager. Will acquire locks and run. + return cc.submit(new RestApiTaskForJournal(cc, task.getTimestamp(), + locks, task)); + + } + + } + + /** + * Acquire the locks for the named indices associated with the specified KB. + * + * @param indexManager + * The {@link Journal}. + * @param namespace + * The namespace of the KB instance. + * + * @return The locks for the named indices associated with that KB instance. + * + * @throws DatasetNotFoundException + * + * FIXME GROUP COMMIT : [This should be replaced by the use of + * the namespace and hierarchical locking support in + * AbstractTask.] This could fail to discover a recently create + * KB between the time when the KB is created and when the group + * commit for that create becomes visible. This data race exists + * because we are using [lastCommitTime] rather than the + * UNISOLATED view of the GRS. + * <p> + * Note: This data race MIGHT be closed by the default locator + * cache. If it records the new KB properties when they are + * created, then they should be visible. If they are not + * visible, then we have a data race. (But if it records them + * before the group commit for the KB create, then the actual KB + * indices will not be durable until the that group commit...). + * <p> + * Note: The problem can obviously be resolved by using the + * UNISOLATED index to obtain the KB properties, but that would + * serialize ALL updates. What we need is a suitable caching + * mechanism that (a) ensures that newly create KB instances are + * visible; and (b) has high concurrency for read-only requests + * for the properties for those KB instances. + */ + private static String[] getLocksForKB(final Journal indexManager, + final String namespace) throws DatasetNotFoundException { + + final long timestamp = indexManager.getLastCommitTime(); + + final AbstractTripleStore tripleStore = (AbstractTripleStore) indexManager + .getResourceLocator().locate(namespace, timestamp); + + if (tripleStore == null) + throw new DatasetNotFoundException("Not found: namespace=" + + namespace + ", timestamp=" + + TimestampUtility.toString(timestamp)); + + final Set<String> lockSet = new HashSet<String>(); + + lockSet.addAll(tripleStore.getSPORelation().getIndexNames()); + + lockSet.addAll(tripleStore.getLexiconRelation().getIndexNames()); + + final String[] locks = lockSet.toArray(new String[lockSet.size()]); + + return locks; + + } + // /** // * Return the {@link Quorum} -or- <code>null</code> if the // * {@link IIndexManager} is not participating in an HA {@link Quorum}. 
Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BlueprintsServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BlueprintsServlet.java 2014-06-11 15:52:14 UTC (rev 8471) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BlueprintsServlet.java 2014-06-11 16:35:53 UTC (rev 8472) @@ -105,6 +105,7 @@ try { BigdataSailRepositoryConnection conn = null; + boolean success = false; try { conn = getBigdataRDFContext() @@ -116,6 +117,8 @@ graph.commit(); + success = true; + final long nmodified = graph.getMutationCountLastCommit(); final long elapsed = System.currentTimeMillis() - begin; @@ -124,17 +127,16 @@ return; - } catch(Throwable t) { - - if(conn != null) - conn.rollback(); - - throw new RuntimeException(t); - } finally { - if (conn != null) + if (conn != null) { + + if (!success) + conn.rollback(); + conn.close(); + + } } Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java 2014-06-11 15:52:14 UTC (rev 8471) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java 2014-06-11 16:35:53 UTC (rev 8472) @@ -48,6 +48,7 @@ import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; import com.bigdata.rdf.sail.webapp.BigdataRDFContext.AbstractQueryTask; +import com.bigdata.rdf.sail.webapp.RestApiTask.RestApiMutationTask; import com.bigdata.rdf.sail.webapp.client.EncodeDecodeValue; import com.bigdata.rdf.sail.webapp.client.MiniMime; @@ -105,6 +106,14 @@ * process deleting the statements. This is done while it is holding the * unisolated connection which prevents concurrent modifications. Therefore * the entire SELECT + DELETE operation is ACID. + * + * FIXME GROUP COMMIT : Again, a pattern where a query is run to produce + * solutions that are then deleted from the database. Can we rewrite this to + * be a SPARQL UPDATE? (DELETE WHERE). Note that the ACID semantics of this + * operation would be broken by group commit since other tasks could have + * updated the KB since the lastCommitTime and been checkpointed and hence + * be visible to an unisolated operation without there being an intervening + * commit point. */ private void doDeleteWithQuery(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { @@ -158,6 +167,7 @@ final AtomicLong nmodified = new AtomicLong(0L); BigdataSailRepositoryConnection conn = null; + boolean success = false; try { conn = getBigdataRDFContext().getUnisolatedConnection( @@ -196,22 +206,23 @@ // Commit the mutation. 
conn.commit(); + success = true; + final long elapsed = System.currentTimeMillis() - begin; reportModifiedCount(resp, nmodified.get(), elapsed); - } catch(Throwable t) { - - if(conn != null) - conn.rollback(); - - throw new RuntimeException(t); - } finally { - if (conn != null) + if (conn != null) { + + if (!success) + conn.rollback(); + conn.close(); + } + } } catch (Throwable t) { @@ -258,8 +269,6 @@ private void doDeleteWithBody(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { - final long begin = System.currentTimeMillis(); - final String baseURI = req.getRequestURL().toString(); final String namespace = getNamespace(req); @@ -325,16 +334,67 @@ } } - final RDFParser rdfParser = rdfParserFactory.getParser(); + submitApiTask( + new DeleteWithBodyTask(req, resp, namespace, + ITx.UNISOLATED, baseURI, defaultContext, + rdfParserFactory)).get(); + + } catch (Throwable t) { - final AtomicLong nmodified = new AtomicLong(0L); + throw BigdataRDFServlet.launderThrowable(t, resp, ""); + + } + } + + private static class DeleteWithBodyTask extends RestApiMutationTask<Void> { + + private final String baseURI; + private final Resource[] defaultContext; + private final RDFParserFactory rdfParserFactory; + + /** + * + * @param namespace + * The namespace of the target KB instance. + * @param timestamp + * The timestamp used to obtain a mutable connection. + * @param baseURI + * The base URI for the operation. + * @param defaultContext + * The context(s) for triples without an explicit named graph + * when the KB instance is operating in a quads mode. + * @param rdfParserFactory + * The factory for the {@link RDFParser}. This should have + * been chosen based on the caller's knowledge of the + * appropriate content type. + */ + public DeleteWithBodyTask(final HttpServletRequest req, + final HttpServletResponse resp, + final String namespace, final long timestamp, + final String baseURI, final Resource[] defaultContext, + final RDFParserFactory rdfParserFactory) { + super(req, resp, namespace, timestamp); + this.baseURI = baseURI; + this.defaultContext = defaultContext; + this.rdfParserFactory = rdfParserFactory; + } + + @Override + public Void call() throws Exception { + + final long begin = System.currentTimeMillis(); + BigdataSailRepositoryConnection conn = null; + boolean success = false; try { - conn = getBigdataRDFContext() - .getUnisolatedConnection(namespace); + conn = getUnisolatedConnection(); + final RDFParser rdfParser = rdfParserFactory.getParser(); + + final AtomicLong nmodified = new AtomicLong(0L); + rdfParser.setValueFactory(conn.getTripleStore() .getValueFactory()); @@ -356,32 +416,31 @@ // Commit the mutation. conn.commit(); + success = true; + final long elapsed = System.currentTimeMillis() - begin; - reportModifiedCount(resp, nmodified.get(), elapsed); + reportModifiedCount(nmodified.get(), elapsed); - } catch(Throwable t) { + return null; - if (conn != null) - conn.rollback(); - - throw new RuntimeException(t); - } finally { - if (conn != null) + if (conn != null) { + + if (!success) + conn.rollback(); + conn.close(); + } + } - } catch (Throwable t) { - - throw BigdataRDFServlet.launderThrowable(t, resp, ""); - } + + } - } - /** * Helper class removes statements from the sail as they are visited by a parser. 
*/ @@ -429,10 +488,10 @@ } if (c.length >= 2) { - // removed from more than one context - nmodified.addAndGet(c.length); + // removed from more than one context + nmodified.addAndGet(c.length); } else { - nmodified.incrementAndGet(); + nmodified.incrementAndGet(); } } @@ -445,8 +504,6 @@ private void doDeleteWithAccessPath(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { - final long begin = System.currentTimeMillis(); - final String namespace = getNamespace(req); final Resource s; @@ -471,68 +528,109 @@ try { - BigdataSailRepositoryConnection conn = null; - try { + submitApiTask( + new DeleteWithAccessPathTask(req, resp, namespace, + ITx.UNISOLATED, s, p, o, c)).get(); - conn = getBigdataRDFContext().getUnisolatedConnection( - namespace); + } catch (Throwable t) { - // Remove all statements matching that access path. -// final long nmodified = conn.getSailConnection() -// .getBigdataSail().getDatabase() -// .removeStatements(s, p, o, c); - - // Remove all statements matching that access path. - long nmodified = 0; - if (c != null && c.length > 0) { - for (Resource r : c) { - nmodified += conn.getSailConnection() - .getBigdataSail().getDatabase() - .removeStatements(s, p, o, r); - } - } else { - nmodified += conn.getSailConnection() - .getBigdataSail().getDatabase() - .removeStatements(s, p, o, null); + throw BigdataRDFServlet.launderThrowable(t, resp, "s=" + s + ",p=" + + p + ",o=" + o + ",c=" + c); + + } + + } + +// static private transient final Resource[] nullArray = new Resource[]{}; + + private static class DeleteWithAccessPathTask extends RestApiMutationTask<Void> { + + private Resource s; + private URI p; + private final Value o; + private final Resource[] c; + + /** + * + * @param namespace + * The namespace of the target KB instance. + * @param timestamp + * The timestamp used to obtain a mutable connection. + * @param baseURI + * The base URI for the operation. + * @param defaultContext + * The context(s) for triples without an explicit named graph + * when the KB instance is operating in a quads mode. + * @param rdfParserFactory + * The factory for the {@link RDFParser}. This should have + * been chosen based on the caller's knowledge of the + * appropriate content type. + */ + public DeleteWithAccessPathTask(final HttpServletRequest req, + final HttpServletResponse resp, // + final String namespace, final long timestamp,// + final Resource s, final URI p, final Value o, final Resource[] c) { + super(req, resp, namespace, timestamp); + this.s = s; + this.p = p; + this.o = o; + this.c = c; + } + + @Override + public Void call() throws Exception { + + final long begin = System.currentTimeMillis(); + + BigdataSailRepositoryConnection conn = null; + boolean success = false; + try { + + conn = getUnisolatedConnection(); + + // Remove all statements matching that access path. + // final long nmodified = conn.getSailConnection() + // .getBigdataSail().getDatabase() + // .removeStatements(s, p, o, c); + + // Remove all statements matching that access path. + long nmodified = 0; + if (c != null && c.length > 0) { + for (Resource r : c) { + nmodified += conn.getSailConnection().getBigdataSail() + .getDatabase().removeStatements(s, p, o, r); } - - // Commit the mutation. - conn.commit(); + } else { + nmodified += conn.getSailConnection().getBigdataSail() + .getDatabase().removeStatements(s, p, o, null); + } - final long elapsed = System.currentTimeMillis() - begin; - - reportModifiedCount(resp, nmodified, elapsed); + // Commit the mutation. 
+ conn.commit(); - } catch(Throwable t) { - - if(conn != null) + success = true; + + final long elapsed = System.currentTimeMillis() - begin; + + reportModifiedCount(nmodified, elapsed); + + return null; + + } finally { + + if (conn != null) { + + if (!success) conn.rollback(); - - throw new RuntimeException(t); - - } finally { - if (conn != null) - conn.close(); + conn.close(); } - } catch (Throwable t) { + } - throw BigdataRDFServlet.launderThrowable(t, resp, "s=" + s + ",p=" - + p + ",o=" + o + ",c=" + c); - } - -// } catch (Exception ex) { -// -// // Will be rendered as an INTERNAL_ERROR. -// throw new RuntimeException(ex); -// -// } - + } -// static private transient final Resource[] nullArray = new Resource[]{}; - } Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java 2014-06-11 15:52:14 UTC (rev 8471) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java 2014-06-11 16:35:53 UTC (rev 8472) @@ -45,8 +45,11 @@ import org.openrdf.rio.helpers.RDFHandlerBase; import org.openrdf.sail.SailException; +import com.bigdata.journal.ITx; +import com.bigdata.rdf.rio.IRDFParserOptions; import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; +import com.bigdata.rdf.sail.webapp.RestApiTask.RestApiMutationTask; import com.bigdata.rdf.sail.webapp.client.MiniMime; /** @@ -132,8 +135,6 @@ */ private void doPostWithBody(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { - - final long begin = System.currentTimeMillis(); final String baseURI = req.getRequestURL().toString(); @@ -221,14 +222,71 @@ try { + submitApiTask( + new InsertWithBodyTask(req, resp, namespace, ITx.UNISOLATED, + baseURI, defaultContext, rdfParserFactory)).get(); + + } catch (Throwable t) { + + throw BigdataRDFServlet.launderThrowable(t, resp, ""); + + } + + } + + /** + * + * @author <a href="mailto:tho...@us...">Bryan + * Thompson</a> + * + * TODO The {@link IRDFParserOptions} defaults should be coming from + * the KB instance, right? What does the REST API say about this? + */ + private static class InsertWithBodyTask extends RestApiMutationTask<Void> { + + private final String baseURI; + private final Resource[] defaultContext; + private final RDFParserFactory rdfParserFactory; + + /** + * + * @param namespace + * The namespace of the target KB instance. + * @param timestamp + * The timestamp used to obtain a mutable connection. + * @param baseURI + * The base URI for the operation. + * @param defaultContext + * The context(s) for triples without an explicit named graph + * when the KB instance is operating in a quads mode. + * @param rdfParserFactory + * The factory for the {@link RDFParser}. This should have + * been chosen based on the caller's knowledge of the + * appropriate content type. 
+ */ + public InsertWithBodyTask(final HttpServletRequest req, + final HttpServletResponse resp, + final String namespace, final long timestamp, + final String baseURI, final Resource[] defaultContext, + final RDFParserFactory rdfParserFactory) { + super(req, resp, namespace, timestamp); + this.baseURI = baseURI; + this.defaultContext = defaultContext; + this.rdfParserFactory = rdfParserFactory; + } + + @Override + public Void call() throws Exception { + + final long begin = System.currentTimeMillis(); + final AtomicLong nmodified = new AtomicLong(0L); BigdataSailRepositoryConnection conn = null; boolean success = false; try { - conn = getBigdataRDFContext() - .getUnisolatedConnection(namespace); + conn = getUnisolatedConnection(); /* * There is a request body, so let's try and parse it. @@ -258,35 +316,31 @@ conn.commit(); final long elapsed = System.currentTimeMillis() - begin; - - reportModifiedCount(resp, nmodified.get(), elapsed); - + + reportModifiedCount(nmodified.get(), elapsed); + success = true; - - return; + return (Void) null; + } finally { if (conn != null) { if (!success) conn.rollback(); - + conn.close(); } - + } - } catch (Throwable t) { - - throw BigdataRDFServlet.launderThrowable(t, resp, ""); - } - + } - /** + /** * POST with URIs of resources to be inserted (loads the referenced * resources). * @@ -371,25 +425,69 @@ try { - final AtomicLong nmodified = new AtomicLong(0L); + submitApiTask( + new InsertWithURLsTask(req, resp, namespace, + ITx.UNISOLATED, defaultContext, urls)).get(); + } catch (Throwable t) { + + throw launderThrowable(t, resp, "urls=" + urls); + + } + + } + + private static class InsertWithURLsTask extends RestApiMutationTask<Void> { + + private final Vector<URL> urls; + private final Resource[] defaultContext; + + /** + * + * @param namespace + * The namespace of the target KB instance. + * @param timestamp + * The timestamp used to obtain a mutable connection. + * @param baseURI + * The base URI for the operation. + * @param defaultContext + * The context(s) for triples without an explicit named graph + * when the KB instance is operating in a quads mode. + * @param urls + * The {@link URL}s whose contents will be parsed and loaded + * into the target KB. + */ + public InsertWithURLsTask(final HttpServletRequest req, + final HttpServletResponse resp, final String namespace, + final long timestamp, final Resource[] defaultContext, + final Vector<URL> urls) { + super(req, resp, namespace, timestamp); + this.urls = urls; + this.defaultContext = defaultContext; + } + + @Override + public Void call() throws Exception { + + final long begin = System.currentTimeMillis(); + BigdataSailRepositoryConnection conn = null; + boolean success = false; try { - conn = getBigdataRDFContext().getUnisolatedConnection( - namespace); + conn = getUnisolatedConnection(); + final AtomicLong nmodified = new AtomicLong(0L); + for (URL url : urls) { - // Use the default context if one was given and otherwise - // the URI from which the data are being read. -// final Resource defactoContext = defaultContext == null ? new URIImpl( -// url.toExternalForm()) : defaultContext; - final Resource[] defactoContext = - defaultContext.length == 0 - ? new Resource[] { new URIImpl(url.toExternalForm()) } - : defaultContext; - + // Use the default context if one was given and otherwise + // the URI from which the data are being read. +// final Resource defactoContext = defaultContext == null ? 
new URIImpl( +// url.toExternalForm()) : defaultContext; + final Resource[] defactoContext = defaultContext.length == 0 ? new Resource[] { new URIImpl( + url.toExternalForm()) } : defaultContext; + URLConnection hconn = null; try { @@ -411,7 +509,7 @@ */ final String contentType = hconn.getContentType(); - + RDFFormat format = RDFFormat.forMIMEType(new MiniMime( contentType).getMimeType()); @@ -420,10 +518,24 @@ /* * Try to get the RDFFormat from the URL's file * path. + * + * FIXME GROUP COMMIT: There is a potential issue + * where the existing code commits the response and + * returns, e.g., from the InsertServlet. Any task + * that does not fail (thrown exception) will + * commit. This means that mutations operations that + * fail will still attempt to join a commit point. + * This is inappropriate and could cause resource + * leaks (e.g., if the operation failed after + * writing on the Journal). We really should throw + * out a typed exception, but in launderThrowable() + * ignore that typed exception if the response has + * already been committed. That way the task will + * not join a commit point. */ - + format = RDFFormat.forFileName(url.getFile()); - + } if (format == null) { @@ -433,7 +545,7 @@ "Content-Type not recognized as RDF: " + contentType); - return; + return null; } @@ -441,12 +553,12 @@ .getInstance().get(format); if (rdfParserFactory == null) { - buildResponse(resp, HTTP_INTERNALERROR, + buildResponse(resp, HTTP_INTERNALERROR, MIME_TEXT_PLAIN, "Parser not found: Content-Type=" + contentType); - - return; + + return null; } final RDFParser rdfParser = rdfParserFactory @@ -462,66 +574,66 @@ rdfParser .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); - rdfParser.setRDFHandler(new AddStatementHandler(conn - .getSailConnection(), nmodified, defactoContext)); + rdfParser + .setRDFHandler(new AddStatementHandler(conn + .getSailConnection(), nmodified, + defactoContext)); /* * Run the parser, which will cause statements to be * inserted. */ - rdfParser.parse(hconn.getInputStream(), url - .toExternalForm()/* baseURL */); + rdfParser.parse(hconn.getInputStream(), + url.toExternalForm()/* baseURL */); } finally { if (hconn instanceof HttpURLConnection) { /* * Disconnect, but only after we have loaded all the - * URLs. Disconnect is optional for java.net. It is a - * hint that you will not be accessing more resources on - * the connected host. By disconnecting only after all - * resources have been loaded we are basically assuming - * that people are more likely to load from a single - * host. + * URLs. Disconnect is optional for java.net. It is + * a hint that you will not be accessing more + * resources on the connected host. By disconnecting + * only after all resources have been loaded we are + * basically assuming that people are more likely to + * load from a single host. */ ((HttpURLConnection) hconn).disconnect(); } } - - } // next URI. + } // next URI. + // Commit the mutation. conn.commit(); + success = true; + final long elapsed = System.currentTimeMillis() - begin; - reportModifiedCount(resp, nmodified.get(), elapsed); - - } catch(Throwable t) { + reportModifiedCount(nmodified.get(), elapsed); - if(conn != null) - conn.rollback(); + return null; - throw new RuntimeException(t); - } finally { - if (conn != null) + if (conn != null) { + + if (!success) + conn.rollback(); + conn.close(); + } + } - } catch (Exception ex) { - - // Will be rendered as an INTERNAL_ERROR. 
- throw new RuntimeException(ex); - } - + } - + /** * Helper class adds statements to the sail as they are visited by a parser. */ Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java 2014-06-11 15:52:14 UTC (rev 8471) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java 2014-06-11 16:35:53 UTC (rev 8472) @@ -343,7 +343,7 @@ } /* - * Setup task to execute the query. The task is executed on a thread + * Setup task to execute the request. The task is executed on a thread * pool. This bounds the possible concurrency of query execution (as * opposed to queries accepted for eventual execution). * @@ -353,13 +353,8 @@ */ try { - final OutputStream os = resp.getOutputStream(); - final BigdataRDFContext context = getBigdataRDFContext(); - // final boolean explain = - // req.getParameter(BigdataRDFContext.EXPLAIN) != null; - final UpdateTask updateTask; try { @@ -370,7 +365,7 @@ updateTask = (UpdateTask) context.getQueryTask(namespace, timestamp, updateStr, null/* acceptOverride */, req, - resp, os, true/* update */); + resp, resp.getOutputStream(), t... [truncated message content] |
From: <tho...@us...> - 2014-06-11 15:52:21
Revision: 8471
http://sourceforge.net/p/bigdata/code/8471
Author: thompsonbry
Date: 2014-06-11 15:52:14 +0000 (Wed, 11 Jun 2014)

Log Message:
-----------
@Overrides

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java	2014-06-11 15:05:27 UTC (rev 8470)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java	2014-06-11 15:52:14 UTC (rev 8471)
@@ -3049,6 +3049,7 @@
 //
 //        }

+    @Override
     public synchronized CloseableIteration<? extends Resource, SailException> getContextIDs()
             throws SailException {

@@ -3083,6 +3084,7 @@
             private Resource next = null;
             private boolean open = true;

+            @Override
             public void close() throws SailException {
                 if (open) {
                     open = false;
@@ -3091,6 +3093,7 @@
                 }
             }

+            @Override
             public boolean hasNext() throws SailException {
                 if(open && _hasNext())
                     return true;
@@ -3112,6 +3115,7 @@
                 return false;
             }

+            @Override
             public Resource next() throws SailException {
                 if (next == null)
                     throw new SailException();
@@ -3120,6 +3124,7 @@
                 return tmp;
             }

+            @Override
             public void remove() throws SailException {
                 /*
                  * Note: remove is not supported. The semantics would
@@ -3143,6 +3148,7 @@
      * Note: The semantics depend on the {@link Options#STORE_CLASS}. See
      * {@link ITripleStore#abort()}.
      */
+    @Override
     public synchronized void rollback() throws SailException {

         assertWritableConn();
@@ -3234,12 +3240,14 @@
      * Note: The semantics depend on the {@link Options#STORE_CLASS}. See
      * {@link AbstractTripleStore#commit()}.
      */
+    @Override
     final public synchronized void commit() throws SailException {

         commit2();

     }

+    @Override
     final public boolean isOpen() throws SailException {

         return openConn;
@@ -3269,6 +3277,7 @@
      * artifact arises because the {@link SailConnection} is using
      * unisolated writes on the database).
      */
+    @Override
     public synchronized void close() throws SailException {

 //        assertOpen();
@@ -3351,6 +3360,7 @@
     /**
     * Invoke close, which will be harmless if we are already closed.
     */
+    @Override
    protected void finalize() throws Throwable {

        /*
@@ -3488,6 +3498,7 @@
      * from each context in a quad store, including anything in the
      * {@link BigdataSail#NULL_GRAPH}.
      */
+    @Override
     @SuppressWarnings("unchecked")
     public CloseableIteration<? extends Statement, SailException> getStatements(
             final Resource s, final URI p, final Value o,
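For context on why this cleanup is worthwhile: since Java 6, @Override may be applied to methods that implement an interface method, so signature drift fails at compile time instead of silently becoming an overload. A self-contained illustration with invented types:

    interface Closer {
        void close();
    }

    class LoggingCloser implements Closer {
        @Override // fails to compile if close() stops matching Closer.close()
        public void close() {
            System.out.println("closed");
        }
    }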
From: <tho...@us...> - 2014-06-11 15:05:36
Revision: 8470
http://sourceforge.net/p/bigdata/code/8470
Author: thompsonbry
Date: 2014-06-11 15:05:27 +0000 (Wed, 11 Jun 2014)

Log Message:
-----------
Made some methods static in order to expose them within the refactored RestApi tasks.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java	2014-06-11 15:04:28 UTC (rev 8469)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java	2014-06-11 15:05:27 UTC (rev 8470)
@@ -359,7 +359,7 @@
      *
      * @throws IOException
      */
-    protected void reportModifiedCount(final HttpServletResponse resp,
+    static protected void reportModifiedCount(final HttpServletResponse resp,
             final long nmodified, final long elapsed) throws IOException {

         final StringWriter w = new StringWriter();
@@ -385,7 +385,7 @@
      *
      * @throws IOException
      */
-    protected void reportRangeCount(final HttpServletResponse resp,
+    static protected void reportRangeCount(final HttpServletResponse resp,
             final long rangeCount, final long elapsed) throws IOException {

         final StringWriter w = new StringWriter();
@@ -411,20 +411,22 @@
      *
      * @throws IOException
      */
-    protected void reportContexts(final HttpServletResponse resp,
-            final RepositoryResult<Resource> contexts, final long elapsed)
-            throws IOException, RepositoryException {
+    static protected void reportContexts(final HttpServletResponse resp,
+            final RepositoryResult<Resource> contexts, final long elapsed)
+            throws IOException, RepositoryException {

         final StringWriter w = new StringWriter();

         final XMLBuilder t = new XMLBuilder(w);

         final Node root = t.root("contexts");
+
         while (contexts.hasNext()) {
-            root.node("context")
-                .attr("uri", contexts.next())
-                .close();
+
+            root.node("context").attr("uri", contexts.next()).close();
+
         }
+
         root.close();

         buildResponse(resp, HTTP_OK, MIME_APPLICATION_XML, w.toString());
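The motivation: the RestApiTask implementations introduced in r8472 are static nested classes, so they have no enclosing servlet instance through which to call instance helpers, while static helpers remain callable. A schematic illustration (names and XML format invented for the example):

    class ServletSketch {

        // After this change: callable with no servlet instance in scope.
        static String formatModifiedCount(final long nmodified, final long elapsed) {
            return "<data modified=\"" + nmodified
                    + "\" milliseconds=\"" + elapsed + "\"/>";
        }

        static class TaskSketch { // no implicit outer-instance reference
            String report() {
                return formatModifiedCount(42, 17); // OK: static access
            }
        }
    }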
From: <tho...@us...> - 2014-06-11 15:04:32
Revision: 8469
http://sourceforge.net/p/bigdata/code/8469
Author: thompsonbry
Date: 2014-06-11 15:04:28 +0000 (Wed, 11 Jun 2014)

Log Message:
-----------
javadoc on configuration of the query service.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java	2014-06-11 14:48:00 UTC (rev 8468)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java	2014-06-11 15:04:28 UTC (rev 8469)
@@ -223,7 +223,10 @@

     /**
      * A thread pool for running accepted queries against the
-     * {@link QueryEngine}.
+     * {@link QueryEngine}. The number of queries that will be processed
+     * concurrently is determined by this thread pool.
+     *
+     * @see SparqlEndpointConfig#queryThreadPoolSize
      */
     /*package*/final ExecutorService queryService;
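A minimal sketch of the arrangement the new javadoc describes: a fixed-size pool bounds how many accepted queries run concurrently, while further submissions wait in the pool's queue. The size constant here is a stand-in for SparqlEndpointConfig#queryThreadPoolSize:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    class QueryPoolSketch {

        // Stand-in for the configured SparqlEndpointConfig#queryThreadPoolSize.
        private static final int QUERY_THREAD_POOL_SIZE = 16;

        // At most QUERY_THREAD_POOL_SIZE accepted queries execute concurrently;
        // additional accepted queries queue until a thread frees up.
        final ExecutorService queryService =
                Executors.newFixedThreadPool(QUERY_THREAD_POOL_SIZE);
    }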
From: <tho...@us...> - 2014-06-11 14:48:03
Revision: 8468
http://sourceforge.net/p/bigdata/code/8468
Author: thompsonbry
Date: 2014-06-11 14:48:00 +0000 (Wed, 11 Jun 2014)

Log Message:
-----------
removed versionId

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestAll.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestAll.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestAll.java	2014-06-11 14:47:13 UTC (rev 8467)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestAll.java	2014-06-11 14:48:00 UTC (rev 8468)
@@ -30,12 +30,10 @@

 import junit.framework.TestCase;
 import junit.framework.TestSuite;

-
 /**
  * Test suite.
  *
  * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- * @version $Id: TestAll.java 4908 2011-07-13 19:42:43Z thompsonbry $
  */
 public class TestAll extends TestCase {
From: <tho...@us...> - 2014-06-11 14:47:22
Revision: 8467
http://sourceforge.net/p/bigdata/code/8467
Author: thompsonbry
Date: 2014-06-11 14:47:13 +0000 (Wed, 11 Jun 2014)

Log Message:
-----------
Adds error checking for the namespace argument for the createRepository() method.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/RemoteRepository.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/RemoteRepositoryManager.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/RemoteRepository.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/RemoteRepository.java	2014-06-11 13:13:13 UTC (rev 8466)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/RemoteRepository.java	2014-06-11 14:47:13 UTC (rev 8467)
@@ -234,6 +234,16 @@
     private volatile String queryMethod;

     /**
+     * The name of the property whose value is the namespace of the KB to be
+     * created.
+     * <p>
+     * Note: This string is identical to one defined by the BigdataSail
+     * options, but the client API must not include a dependency on the Sail so
+     * it is given by value again here in a package local scope.
+     */
+    static final String OPTION_CREATE_KB_NAMESPACE = "com.bigdata.rdf.sail.namespace";
+
+    /**
      * Return the maximum requestURL length before the request is converted into
      * a POST using a <code>application/x-www-form-urlencoded</code> request
      * entity.

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/RemoteRepositoryManager.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/RemoteRepositoryManager.java	2014-06-11 13:13:13 UTC (rev 8466)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/RemoteRepositoryManager.java	2014-06-11 14:47:13 UTC (rev 8467)
@@ -247,6 +247,14 @@
     public void createRepository(final String namespace,
             final Properties properties) throws Exception {

+        if (namespace == null)
+            throw new IllegalArgumentException();
+        if (properties == null)
+            throw new IllegalArgumentException();
+        if (properties.getProperty(OPTION_CREATE_KB_NAMESPACE) == null)
+            throw new IllegalArgumentException("Property not defined: "
+                    + OPTION_CREATE_KB_NAMESPACE);
+
         final ConnectOptions opts = newConnectOptions(baseServiceURL
                 + "/namespace");
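A usage sketch against the stricter API. createRepository() and the property key are taken from the diff above; the wrapper method and the namespace value "kb2" are illustrative:

    import java.util.Properties;

    static void createKB(final RemoteRepositoryManager mgr) throws Exception {

        final Properties props = new Properties();

        // Required after this revision: omitting the namespace property now
        // fails fast with an IllegalArgumentException on the client side.
        props.setProperty("com.bigdata.rdf.sail.namespace", "kb2");

        mgr.createRepository("kb2", props);
    }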
From: <tho...@us...> - 2014-06-11 13:13:24
Revision: 8466 http://sourceforge.net/p/bigdata/code/8466 Author: thompsonbry Date: 2014-06-11 13:13:13 +0000 (Wed, 11 Jun 2014) Log Message: ----------- Fix for Name2Addr prefix scan and improved correctness for LexiconRelation.prefixScan(). Key changes are to: - IKeyBuilderFactory - defines getPrimaryKeyBuilder() - LexiconRelation - uses the getPrimaryKeyBuilder() method. - Name2Addr - uses the getPrimaryKeyBuilder() method. Javadoc updates to PrefixFilter. Added @Override and final attributes to several classes that were touched by this fix. I have run through the TestLocalTripleStore and TestRWJournal test suites and everything is good. I am currently running TestBigdataSailWithQuads but do not anticipate any issues. I have verified that the existing tests for Name2Addr and the LexiconRelation prefix scans fail if the code uses the default collation strength rather than PRIMARY so we know that we have regression tests in place for those behaviors. See #974 (Name2Addr.indexNameScan(prefix) uses scan + filter) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/DefaultTupleSerializer.java branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/IndexMetadata.java branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/filter/PrefixFilter.java branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/keys/ASCIIKeyBuilderFactory.java branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/keys/DefaultKeyBuilderFactory.java branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/keys/IKeyBuilderFactory.java branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/keys/ThreadLocalKeyBuilderFactory.java branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/Name2Addr.java branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/Term2IdTupleSerializer.java branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestCompletionScan.java branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestTCK.java branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/store/TestLocalTripleStore.java branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/DefaultTupleSerializer.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/DefaultTupleSerializer.java 2014-06-11 09:34:45 UTC (rev 8465) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/DefaultTupleSerializer.java 2014-06-11 13:13:13 UTC (rev 8466) @@ -102,12 +102,14 @@ private IRabaCoder leafKeysCoder; private IRabaCoder leafValsCoder; + @Override final public IRabaCoder getLeafKeysCoder() { return leafKeysCoder; } + @Override final public IRabaCoder getLeafValuesCoder() { return leafValsCoder; @@ -213,6 +215,7 @@ } + @Override public String toString() { final StringBuilder sb = new StringBuilder(); @@ -237,6 +240,7 @@ * that the specific configuration values are persisted, even when the * {@link DefaultTupleSerializer} is de-serialized on a different host. 
*/ + @Override final public IKeyBuilder getKeyBuilder() { if(threadLocalKeyBuilderFactory == null) { @@ -259,6 +263,30 @@ } + @Override + final public IKeyBuilder getPrimaryKeyBuilder() { + + if(threadLocalKeyBuilderFactory == null) { + + /* + * This can happen if you use the de-serialization ctor by mistake. + */ + + throw new IllegalStateException(); + + } + + /* + * TODO This should probably to a reset() before returning the object. + * However, we need to verify that no callers are assuming that it does + * NOT do a reset and implicitly relying on passing the intermediate key + * via the return value (which would be very bad style). + */ + return threadLocalKeyBuilderFactory.getPrimaryKeyBuilder(); + + } + + @Override public byte[] serializeKey(final Object obj) { if (obj == null) @@ -277,6 +305,7 @@ * @return The serialized representation of the object as a byte[] -or- * <code>null</code> if the reference is <code>null</code>. */ + @Override public byte[] serializeVal(final V obj) { return SerializerUtil.serialize(obj); @@ -287,6 +316,7 @@ * De-serializes an object from the {@link ITuple#getValue() value} stored * in the tuple (ignores the key stored in the tuple). */ + @Override public V deserialize(ITuple tuple) { if (tuple == null) @@ -308,6 +338,7 @@ * @throws UnsupportedOperationException * always. */ + @Override public K deserializeKey(ITuple tuple) { throw new UnsupportedOperationException(); @@ -327,6 +358,7 @@ */ private final static transient byte VERSION = VERSION0; + @Override public void readExternal(final ObjectInput in) throws IOException, ClassNotFoundException { @@ -346,6 +378,7 @@ } + @Override public void writeExternal(final ObjectOutput out) throws IOException { out.writeByte(VERSION); Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/IndexMetadata.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/IndexMetadata.java 2014-06-11 09:34:45 UTC (rev 8465) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/IndexMetadata.java 2014-06-11 13:13:13 UTC (rev 8466) @@ -2910,11 +2910,19 @@ * specified for <i>this</i> index. * </p> */ + @Override public IKeyBuilder getKeyBuilder() { return getTupleSerializer().getKeyBuilder(); } + + @Override + public IKeyBuilder getPrimaryKeyBuilder() { + + return getTupleSerializer().getPrimaryKeyBuilder(); + + } /** * @see Configuration#getProperty(IIndexManager, Properties, String, String, Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/filter/PrefixFilter.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/filter/PrefixFilter.java 2014-06-11 09:34:45 UTC (rev 8465) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/filter/PrefixFilter.java 2014-06-11 13:13:13 UTC (rev 8466) @@ -26,7 +26,7 @@ * </p> * <h4>WARNING</h4> * <p> - * <strong>The prefix keys MUST be formed with {@link StrengthEnum#Identical}. + * <strong>The prefix keys MUST be formed with {@link StrengthEnum#Primary}. * This is necessary in order to match all keys in the index since it causes the * secondary characteristics to NOT be included in the prefix key even if they * are present in the keys in the index.</strong> Using other @@ -55,20 +55,21 @@ * <p> * at IDENTICAL strength. 
The additional bytes for the IDENTICAL strength * reflect the Locale specific Unicode sort key encoding of secondary - * characteristics such as case. The successor of the PRIMARY strength byte[] is + * characteristics such as case. The successor of the IDENTICAL strength byte[] + * is * </p> * * <pre> - * [43, 75, 89, 41, 68] + * [43, 75, 89, 41, 67, 1, 9, 1, 143, 9] * </pre> * * <p> * (one was added to the last byte) which spans all keys of interest. However - * the successor of the IDENTICAL strength byte[] would + * the successor of the PRIMARY strength byte[] would * </p> * * <pre> - * [43, 75, 89, 41, 67, 1, 9, 1, 143, 9] + * [43, 75, 89, 41, 68] * </pre> * * <p> @@ -81,8 +82,8 @@ * <pre> * Properties properties = new Properties(); * - * properties.setProperty(KeyBuilder.Options.STRENGTH, StrengthEnum.Primary - * .toString()); + * properties.setProperty(KeyBuilder.Options.STRENGTH, + * StrengthEnum.Primary.toString()); * * prefixKeyBuilder = KeyBuilder.newUnicodeInstance(properties); * </pre> @@ -104,7 +105,9 @@ * partition.... * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ + * + * @see <a href="http://trac.bigdata.com/ticket/974" > + * Name2Addr.indexNameScan(prefix) uses scan + filter </a> */ public class PrefixFilter<E> extends FilterBase implements ITupleFilter<E> { Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/keys/ASCIIKeyBuilderFactory.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/keys/ASCIIKeyBuilderFactory.java 2014-06-11 09:34:45 UTC (rev 8465) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/keys/ASCIIKeyBuilderFactory.java 2014-06-11 13:13:13 UTC (rev 8466) @@ -39,7 +39,6 @@ * Factory for instances that do NOT support Unicode. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ */ public class ASCIIKeyBuilderFactory implements IKeyBuilderFactory, Externalizable { @@ -59,6 +58,7 @@ /** * Representation includes all aspects of the {@link Serializable} state. */ + @Override public String toString() { StringBuilder sb = new StringBuilder(getClass().getName()); @@ -87,19 +87,35 @@ } + @Override public IKeyBuilder getKeyBuilder() { return KeyBuilder.newInstance(initialCapacity); } - public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException { + /** + * {@inheritDoc} + * <p> + * Note: The PRIMARY is identical to the as-configured {@link IKeyBuilder} + * for ASCII. 
+ */ + @Override + public IKeyBuilder getPrimaryKeyBuilder() { + return getKeyBuilder(); + + } + + @Override + public void readExternal(final ObjectInput in) throws IOException, ClassNotFoundException { + initialCapacity = in.readInt(); } - public void writeExternal(ObjectOutput out) throws IOException { + @Override + public void writeExternal(final ObjectOutput out) throws IOException { out.writeInt(initialCapacity); Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/keys/DefaultKeyBuilderFactory.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/keys/DefaultKeyBuilderFactory.java 2014-06-11 09:34:45 UTC (rev 8465) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/keys/DefaultKeyBuilderFactory.java 2014-06-11 13:13:13 UTC (rev 8466) @@ -409,6 +409,7 @@ } + @Override public IKeyBuilder getKeyBuilder() { if(log.isDebugEnabled()) { @@ -422,6 +423,20 @@ } + @Override + public IKeyBuilder getPrimaryKeyBuilder() { + + if(log.isDebugEnabled()) { + + log.debug(toString()); + + } + + return KeyBuilder.newInstance(initialCapacity, collator, locale, + StrengthEnum.Primary, decompositionMode); + + } + /** * Text of the exception thrown when the ICU library is required but is not * available. Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/keys/IKeyBuilderFactory.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/keys/IKeyBuilderFactory.java 2014-06-11 09:34:45 UTC (rev 8465) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/keys/IKeyBuilderFactory.java 2014-06-11 13:13:13 UTC (rev 8466) @@ -32,7 +32,6 @@ * A factory for pre-configured {@link IKeyBuilder} instances. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ */ public interface IKeyBuilderFactory { @@ -41,4 +40,15 @@ */ public IKeyBuilder getKeyBuilder(); + /** + * Return an instance of the configured {@link IKeyBuilder} that has been + * overridden to have {@link StrengthEnum#Primary} collation strength. This + * may be used to form successors for Unicode prefix scans without having + * the secondary sort ordering characteristics mucking things up. + * + * @see <a href="http://trac.bigdata.com/ticket/974" > + * Name2Addr.indexNameScan(prefix) uses scan + filter </a> + */ + public IKeyBuilder getPrimaryKeyBuilder(); + } Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/keys/ThreadLocalKeyBuilderFactory.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/keys/ThreadLocalKeyBuilderFactory.java 2014-06-11 09:34:45 UTC (rev 8465) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/btree/keys/ThreadLocalKeyBuilderFactory.java 2014-06-11 13:13:13 UTC (rev 8466) @@ -31,8 +31,9 @@ import com.bigdata.btree.IIndex; /** + * A thread-local implementation. 
+ * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ */ public class ThreadLocalKeyBuilderFactory implements IKeyBuilderFactory { @@ -58,6 +59,7 @@ */ private ThreadLocal<IKeyBuilder> threadLocalKeyBuilder = new ThreadLocal<IKeyBuilder>() { + @Override protected synchronized IKeyBuilder initialValue() { return delegate.getKeyBuilder(); @@ -67,13 +69,41 @@ }; /** + * {@inheritDoc} + * <p> * Return a {@link ThreadLocal} {@link IKeyBuilder} instance configured * using the {@link IKeyBuilderFactory} specified to the ctor. */ + @Override public IKeyBuilder getKeyBuilder() { return threadLocalKeyBuilder.get(); } + private ThreadLocal<IKeyBuilder> threadLocalPrimaryKeyBuilder = new ThreadLocal<IKeyBuilder>() { + + @Override + protected synchronized IKeyBuilder initialValue() { + + return delegate.getPrimaryKeyBuilder(); + + } + + }; + + /** + * {@inheritDoc} + * <p> + * Return a {@link ThreadLocal} {@link IKeyBuilder} instance configured + * using the {@link IKeyBuilderFactory} specified to the ctor but with the + * {@link StrengthEnum} overriden as {@link StrengthEnum#Primary}. + */ + @Override + public IKeyBuilder getPrimaryKeyBuilder() { + + return threadLocalPrimaryKeyBuilder.get(); + + } + } Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/Name2Addr.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/Name2Addr.java 2014-06-11 09:34:45 UTC (rev 8465) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/Name2Addr.java 2014-06-11 13:13:13 UTC (rev 8466) @@ -62,7 +62,6 @@ import com.bigdata.btree.ITuple; import com.bigdata.btree.ITupleIterator; import com.bigdata.btree.IndexMetadata; -import com.bigdata.btree.keys.CollatorEnum; import com.bigdata.btree.keys.DefaultKeyBuilderFactory; import com.bigdata.btree.keys.IKeyBuilder; import com.bigdata.btree.keys.IKeyBuilderFactory; @@ -82,9 +81,7 @@ import com.bigdata.resources.IndexManager; import com.bigdata.resources.ResourceManager; import com.bigdata.util.concurrent.ExecutionExceptions; -import com.ibm.icu.text.Collator; -import cutthecrap.utils.striterators.Filter; import cutthecrap.utils.striterators.IStriterator; import cutthecrap.utils.striterators.Resolver; import cutthecrap.utils.striterators.Striterator; @@ -185,7 +182,6 @@ * reference to the index and we need both on hand to do the commit. 
* * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ */ private class DirtyListener implements IDirtyListener, Comparable<DirtyListener> { @@ -194,6 +190,7 @@ boolean needsCheckpoint; long checkpointAddr = 0L; + @Override public String toString() { return "DirtyListener{name=" @@ -204,7 +201,8 @@ } - private DirtyListener(String name, ICheckpointProtocol btree, boolean needsCheckpoint) { + private DirtyListener(final String name, + final ICheckpointProtocol btree, final boolean needsCheckpoint) { assert name!=null; @@ -253,6 +251,7 @@ * * @param btree */ + @Override public void dirtyEvent(final ICheckpointProtocol btree) { assert btree == this.btree; @@ -549,6 +548,7 @@ /** * @return <i>self</i> */ + @Override public CommitIndexTask call() throws Exception { if (log.isInfoEnabled()) @@ -666,6 +666,7 @@ * >Flush indices in parallel during checkpoint to reduce IO * latency</a> */ + @Override synchronized public long handleCommit(final long commitTime) { @@ -1394,6 +1395,7 @@ } + @Override public String toString() { return "Entry{name=" + name + ",checkpointAddr=" + checkpointAddr @@ -1558,6 +1560,7 @@ */ private final static transient byte VERSION = VERSION0; + @Override public void readExternal(final ObjectInput in) throws IOException, ClassNotFoundException { @@ -1575,6 +1578,7 @@ } + @Override public void writeExternal(final ObjectOutput out) throws IOException { super.writeExternal(out); @@ -1596,34 +1600,11 @@ * * @return The names of the indices spanned by that prefix in that index. * - * FIXME There is a problem with the prefix scan. It appears that we - * are not able to generate the key for a prefix correctly. This - * problem is being worked around by scanning the entire - * {@link Name2Addr} index and then filter for those entries that - * start with the specified prefix. This is not very scalable. - * <p> - * If you change {@link Name2Addr} to use {@link CollatorEnum#ASCII} - * then the prefix scan works correctly without that filter. The - * problem is related to how the {@link Collator} is encoding the - * keys. Neither the ICU nor the JDK collators work for this right - * now. At least the ICU collator winds up with some additional - * bytes after the "end" of the prefix that do not appear when you - * encode the entire index name. For example, compare "kb" and - * "kb.red". See TestName2Addr for more about this issue. - * <p> - * Fixing this problem MIGHT require a data migration. Or we might - * be able to handle this entirely by using an appropriate - * {@link Name2Addr#getKey(String)} and - * {@link Name2AddrTupleSerializer#serializeKey(Object)} - * implementation (depending on how the keys are being encoded). - * <p> - * Update: See <a - * href="https://sourceforge.net/apps/trac/bigdata/ticket/743"> - * AbstractTripleStore.destroy() does not filter for correct prefix - * </a> as well. Maybe the problem is just that we need to have the - * "." appended to the namespace. This could be something that is - * done automatically if the caller does not take care of it - * themselves. 
+ * @see <a href="http://trac.bigdata.com/ticket/974" > + * Name2Addr.indexNameScan(prefix) uses scan + filter </a> + * @see <a href="https://sourceforge.net/apps/trac/bigdata/ticket/743"> + * AbstractTripleStore.destroy() does not filter for correct prefix + * </a> */ public static final Iterator<String> indexNameScan(final String prefix, final IIndex n2a) { @@ -1631,27 +1612,37 @@ final byte[] fromKey; final byte[] toKey; final boolean hasPrefix = prefix != null && prefix.length() > 0; - final boolean restrictScan = false; +// final boolean restrictScan = true; - if (hasPrefix && restrictScan) { + if (hasPrefix ) //&& restrictScan) + { /* * When the namespace prefix was given, generate the toKey as the * fixed length successor of the fromKey. + * + * Note: We MUST use StrengthEnum:=PRIMARY for the prefix scan in + * order to avoid the secondary collation ordering effects. */ - log.error("prefix=" + prefix); +// final IKeyBuilder keyBuilder = n2a.getIndexMetadata() +// .getTupleSerializer().getKeyBuilder(); +// final Properties properties = new Properties(); +// +// properties.setProperty(KeyBuilder.Options.STRENGTH, +// StrengthEnum.Primary.toString()); +// +// final IKeyBuilder keyBuilder = new DefaultKeyBuilderFactory( +// properties).getKeyBuilder(); final IKeyBuilder keyBuilder = n2a.getIndexMetadata() - .getTupleSerializer().getKeyBuilder(); - + .getPrimaryKeyBuilder(); + fromKey = keyBuilder.reset().append(prefix).getKey(); - // toKey = - // keyBuilder.reset().append(prefix).appendNul().getKey(); toKey = SuccessorUtil.successor(fromKey.clone()); - if (true || log.isDebugEnabled()) { + if (log.isDebugEnabled()) { log.error("fromKey=" + BytesUtil.toString(fromKey)); @@ -1670,6 +1661,9 @@ @SuppressWarnings("unchecked") final ITupleIterator<Entry> itr = n2a.rangeIterator(fromKey, toKey); + /* + * Add resolver from the tuple to the name of the index. + */ IStriterator sitr = new Striterator(itr); sitr = sitr.addFilter(new Resolver() { @@ -1686,38 +1680,63 @@ }); - if (hasPrefix && !restrictScan) { +// if (hasPrefix && !restrictScan) { +// +// /* +// * Only report the names that match the prefix. +// * +// * Note: For the moment, the filter is hacked by examining the +// * de-serialized Entry objects and only reporting those that start +// * with the [prefix]. +// */ +// +// sitr = sitr.addFilter(new Filter() { +// +// private static final long serialVersionUID = 1L; +// +// @Override +// public boolean isValid(final Object obj) { +// +// final String name = (String) obj; +// +// if (name.startsWith(prefix)) { +// +// // acceptable. +// return true; +// } +// return false; +// } +// }); +// +// } - /* - * Only report the names that match the prefix. - * - * Note: For the moment, the filter is hacked by examining the - * de-serialized Entry objects and only reporting those that start - * with the [prefix]. - */ - - sitr = sitr.addFilter(new Filter() { - - private static final long serialVersionUID = 1L; - - @Override - public boolean isValid(final Object obj) { - - final String name = (String) obj; - - if (name.startsWith(prefix)) { - - // acceptable. - return true; - } - return false; - } - }); - - } - return sitr; } +// /** +// * The SuccessorUtil does not work with CollatedKeys since it bumps the "meta/control" data +// * at the end of the key, rather than the "value" data of the key. +// * +// * It has been observed that the key data is delimited with a 01 byte, followed by meta/control +// * data with the key itself delimited by a 00 byte. 
+// * +// * Note that this has only been analyzed for the ICU collator, the standard Java collator does include +// * 00 bytes in the key. However, it too appears to delimit the value key with a 01 byte so the +// * same method should work. +// * +// * @param src - original key +// * @return the next key +// */ +// private static byte[] successor(final byte[] src) { +// final byte[] nxt = src.clone(); +// for (int i = 1; i < nxt.length; i++) { +// if (nxt[i] == 01) { // end of data +// nxt[i-1]++; +// break; +// } +// } +// +// return nxt; +// } } Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java 2014-06-11 09:34:45 UTC (rev 8465) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java 2014-06-11 13:13:13 UTC (rev 8466) @@ -82,7 +82,6 @@ * Test suite for {@link BufferMode#DiskRW} journals. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ */ public class TestRWJournal extends AbstractJournalTestCase { Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java 2014-06-11 09:34:45 UTC (rev 8465) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java 2014-06-11 13:13:13 UTC (rev 8466) @@ -71,11 +71,8 @@ import com.bigdata.btree.IndexTypeEnum; import com.bigdata.btree.filter.PrefixFilter; import com.bigdata.btree.filter.TupleFilter; -import com.bigdata.btree.keys.DefaultKeyBuilderFactory; import com.bigdata.btree.keys.IKeyBuilder; import com.bigdata.btree.keys.KVO; -import com.bigdata.btree.keys.KeyBuilder; -import com.bigdata.btree.keys.StrengthEnum; import com.bigdata.cache.ConcurrentWeakValueCacheWithBatchedUpdates; import com.bigdata.journal.IIndexManager; import com.bigdata.journal.IResourceLock; @@ -105,7 +102,6 @@ import com.bigdata.rdf.model.BigdataValueSerializer; import com.bigdata.rdf.rio.StatementBuffer; import com.bigdata.rdf.spo.ISPO; -import com.bigdata.rdf.spo.SPO; import com.bigdata.rdf.store.AbstractTripleStore; import com.bigdata.rdf.vocab.NoVocabulary; import com.bigdata.rdf.vocab.Vocabulary; @@ -1421,27 +1417,32 @@ } - /* + /** * The KeyBuilder used to form the prefix keys. * - * Note: The prefix keys are formed with IDENTICAL strength. This is + * Note: The prefix keys are formed with PRIMARY strength. This is * necessary in order to match all keys in the index since it causes the * secondary characteristics to NOT be included in the prefix key even * if they are present in the keys in the index. 
+ * + * @see <a href="http://trac.bigdata.com/ticket/974" > + * Name2Addr.indexNameScan(prefix) uses scan + filter </a> */ - final LexiconKeyBuilder keyBuilder; - { + final LexiconKeyBuilder keyBuilder = ((Term2IdTupleSerializer) getTerm2IdIndex() + .getIndexMetadata().getTupleSerializer()) + .getLexiconPrimaryKeyBuilder(); +// { +// +// final Properties properties = new Properties(); +// +// properties.setProperty(KeyBuilder.Options.STRENGTH, +// StrengthEnum.Primary.toString()); +// +// keyBuilder = new Term2IdTupleSerializer( +// new DefaultKeyBuilderFactory(properties)).getLexiconKeyBuilder(); +// +// } - final Properties properties = new Properties(); - - properties.setProperty(KeyBuilder.Options.STRENGTH, - StrengthEnum.Primary.toString()); - - keyBuilder = new Term2IdTupleSerializer( - new DefaultKeyBuilderFactory(properties)).getLexiconKeyBuilder(); - - } - /* * Formulate the keys[]. * Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/Term2IdTupleSerializer.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/Term2IdTupleSerializer.java 2014-06-11 09:34:45 UTC (rev 8465) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/Term2IdTupleSerializer.java 2014-06-11 13:13:13 UTC (rev 8466) @@ -118,12 +118,30 @@ } /** + * Return a {@link LexiconKeyBuilder} that is setup with collation strength + * PRIMARY. + * + * @see <a href="http://trac.bigdata.com/ticket/974" > + * Name2Addr.indexNameScan(prefix) uses scan + filter </a> + */ + public LexiconKeyBuilder getLexiconPrimaryKeyBuilder() { + + /* + * FIXME We should save off a reference to this to reduce heap churn + * and then use that reference in this class. + */ + return new LexiconKeyBuilder(getPrimaryKeyBuilder()); + + } + + /** * You can not decode the term:id keys since they include Unicode sort keys * and that is a lossy transform. * * @throws UnsupportedOperationException * always */ + @Override public Object deserializeKey(ITuple tuple) { throw new UnsupportedOperationException(); @@ -136,6 +154,7 @@ * @param obj * The RDF {@link Value}. */ + @Override public byte[] serializeKey(Object obj) { return getLexiconKeyBuilder().value2Key((Value)obj); @@ -149,6 +168,7 @@ * @param obj * A term identifier expressed as a {@link TermId}. */ + @Override public byte[] serializeVal(final Object obj) { final IV<?,?> iv = (IV<?,?>) obj; @@ -169,6 +189,7 @@ * De-serializes the {@link ITuple} as a {@link IV} whose value is the * term identifier associated with the key. The key itself is not decodable. 
*/ + @Override public IV deserialize(final ITuple tuple) { final ByteArrayBuffer b = tuple.getValueBuffer(); @@ -187,6 +208,7 @@ */ private final static transient byte VERSION = VERSION0; + @Override public void readExternal(final ObjectInput in) throws IOException, ClassNotFoundException { @@ -204,6 +226,7 @@ } + @Override public void writeExternal(final ObjectOutput out) throws IOException { super.writeExternal(out); Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestCompletionScan.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestCompletionScan.java 2014-06-11 09:34:45 UTC (rev 8465) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestCompletionScan.java 2014-06-11 13:13:13 UTC (rev 8466) @@ -63,7 +63,6 @@ * {@link LexiconRelation#prefixScan(org.openrdf.model.Literal[])}. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ */ public class TestCompletionScan extends AbstractTripleStoreTestCase { @@ -85,7 +84,7 @@ */ public void test_completionScan() { - AbstractTripleStore store = getStore(); + final AbstractTripleStore store = getStore(); try { Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestTCK.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestTCK.java 2014-06-11 09:34:45 UTC (rev 8465) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestTCK.java 2014-06-11 13:13:13 UTC (rev 8466) @@ -38,7 +38,6 @@ * Test driver for debugging Sesame or DAWG manifest tests. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ */ public class TestTCK extends AbstractDataDrivenSPARQLTestCase { Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/store/TestLocalTripleStore.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/store/TestLocalTripleStore.java 2014-06-11 09:34:45 UTC (rev 8465) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/store/TestLocalTripleStore.java 2014-06-11 13:13:13 UTC (rev 8466) @@ -42,7 +42,6 @@ * various indices are NOT isolatable. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ */ public class TestLocalTripleStore extends AbstractTestCase { Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java 2014-06-11 09:34:45 UTC (rev 8465) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java 2014-06-11 13:13:13 UTC (rev 8466) @@ -46,7 +46,6 @@ * the pipeline join algorithm. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ */ public class TestBigdataSailWithQuads extends AbstractBigdataSailTestCase { This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
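To make the range construction in r8466 concrete: the scan now encodes the namespace prefix at PRIMARY collation strength (via getPrimaryKeyBuilder()) to form the fromKey, then takes the fixed-length successor of that byte[] as the toKey. The sketch below is a self-contained, plain-Java illustration of that successor step, reusing the example bytes for "bry" from the PrefixFilter javadoc; the carry handling is an assumption about the general pattern, not a copy of com.bigdata.btree.keys.SuccessorUtil.

import java.util.Arrays;

/**
 * Minimal sketch of the fixed-length successor used to turn a PRIMARY
 * strength prefix key into a half-open scan range [fromKey, toKey).
 * Illustration only; not the bigdata SuccessorUtil implementation.
 */
public class FixedLengthSuccessorSketch {

    /**
     * Return a copy of the key with one added at the last position,
     * propagating any carry toward the front (0xFF rolls over to 0x00).
     */
    static byte[] successor(final byte[] key) {
        final byte[] nxt = key.clone();
        for (int i = nxt.length - 1; i >= 0; i--) {
            if (nxt[i] != (byte) 0xFF) {
                nxt[i]++; // no carry: done.
                return nxt;
            }
            nxt[i] = 0; // 0xFF rolls over; carry into the preceding byte.
        }
        throw new IllegalArgumentException("no successor: key is all 0xFF");
    }

    public static void main(final String[] args) {
        // The PRIMARY strength sort key for "bry" from the PrefixFilter javadoc.
        final byte[] fromKey = { 43, 75, 89, 41, 67 };
        final byte[] toKey = successor(fromKey);
        // Prints [43, 75, 89, 41, 67] -> [43, 75, 89, 41, 68]: one is added to
        // the last byte, so [fromKey, toKey) spans every key with that prefix.
        System.out.println(Arrays.toString(fromKey) + " -> " + Arrays.toString(toKey));
    }
}

At IDENTICAL strength the same construction fails because the secondary and tertiary collation weights trail the prefix bytes, which is exactly why the fix takes the successor of a PRIMARY strength key and why both the scan + filter workaround and the hand-rolled successor from r8464 (below) could be retired.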
From: <mar...@us...> - 2014-06-11 09:34:49
|
Revision: 8465 http://sourceforge.net/p/bigdata/code/8465 Author: martyncutcher Date: 2014-06-11 09:34:45 +0000 (Wed, 11 Jun 2014) Log Message: ----------- Prior to implementing the constant store, I have added argument checking to confirm the assumption that all current addresses used are negative (since I want to use the sign bit to determine a constant allocation). Modified Paths: -------------- branches/RWSTORE_COMMITSTATE_973/bigdata/src/java/com/bigdata/rwstore/RWStore.java

Modified: branches/RWSTORE_COMMITSTATE_973/bigdata/src/java/com/bigdata/rwstore/RWStore.java
===================================================================
--- branches/RWSTORE_COMMITSTATE_973/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2014-06-11 09:32:50 UTC (rev 8464)
+++ branches/RWSTORE_COMMITSTATE_973/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2014-06-11 09:34:45 UTC (rev 8465)
@@ -4103,11 +4103,25 @@
         lock.lock();
         try {
+            if (addr == 0) {
+                return 0L;
+            } else if (addr > 0) {
-            if (addr >= 0) {
+                /*
+                 * It used to be that a positive address indicated an
+                 * absolute address. But this is no longer utilised
+                 * by the RWStore use cases.
+                 *
+                 * Instead we intend to use a positive address to reference
+                 * a ConstantAllocator. In such a scenario the 32 bit
+                 * address is not sufficient. We need the full 64 bit value
+                 * that includes the length in the low 16 bits, so that we
+                 * can access a 48 bit value to identify the correct
+                 * constant allocation.
+                 */
+
+                throw new IllegalArgumentException("Address cannot be positive");
-                return addr & 0xFFFFFFE0;
-            } else {
             // Find the allocator.

This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
|
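The new comment reserves positive addresses for a future ConstantAllocator keyed by the full 64 bit value: a 48 bit identifier with the allocation length in the low 16 bits. A minimal sketch of that packing follows; the class and method names are hypothetical, since the constant store is not yet implemented.

/**
 * Hypothetical sketch of the 64-bit encoding described in the RWStore
 * comment above: low 16 bits = allocation length, high 48 bits =
 * identifier. Nothing below exists in RWStore; it only illustrates the
 * arithmetic the comment alludes to.
 */
public class PackedAddrSketch {

    static long pack(final long id48, final int length16) {
        if ((id48 & ~0xFFFFFFFFFFFFL) != 0L)
            throw new IllegalArgumentException("identifier exceeds 48 bits");
        if ((length16 & ~0xFFFF) != 0)
            throw new IllegalArgumentException("length exceeds 16 bits");
        return (id48 << 16) | length16;
    }

    static long id(final long packed) {
        return packed >>> 16; // recover the high 48 bits.
    }

    static int length(final long packed) {
        return (int) (packed & 0xFFFFL); // recover the low 16 bits.
    }

    public static void main(final String[] args) {
        final long packed = pack(0x123456789ABCL, 512);
        // Prints: 123456789abc0200 123456789abc 512
        System.out.println(Long.toHexString(packed) + " "
                + Long.toHexString(id(packed)) + " " + length(packed));
    }
}

A real implementation would presumably also keep the top (sign) bit of the packed value clear, so that the sign of an address can still distinguish a constant allocation, which is the assumption the new argument check enforces.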
From: <mar...@us...> - 2014-06-11 09:32:55
|
Revision: 8464 http://sourceforge.net/p/bigdata/code/8464 Author: martyncutcher Date: 2014-06-11 09:32:50 +0000 (Wed, 11 Jun 2014) Log Message: ----------- Fix to the Name2Addr indexNameScan that had to perform a full index scan because the prefix comparison did not work correctly. The fix is provided by a "successor" method that identifies the end of the data component in the prefix key, which is not the same as the end of the key. Modified Paths: -------------- branches/RWSTORE_COMMITSTATE_973/bigdata/src/java/com/bigdata/journal/Name2Addr.java

Modified: branches/RWSTORE_COMMITSTATE_973/bigdata/src/java/com/bigdata/journal/Name2Addr.java
===================================================================
--- branches/RWSTORE_COMMITSTATE_973/bigdata/src/java/com/bigdata/journal/Name2Addr.java 2014-06-10 23:30:13 UTC (rev 8463)
+++ branches/RWSTORE_COMMITSTATE_973/bigdata/src/java/com/bigdata/journal/Name2Addr.java 2014-06-11 09:32:50 UTC (rev 8464)
@@ -1631,7 +1631,7 @@
         final byte[] fromKey;
         final byte[] toKey;
         final boolean hasPrefix = prefix != null && prefix.length() > 0;
-        final boolean restrictScan = false;
+        final boolean restrictScan = true;

         if (hasPrefix && restrictScan) {

@@ -1647,9 +1647,15 @@

             fromKey = keyBuilder.reset().append(prefix).getKey();

-            // toKey =
-            // keyBuilder.reset().append(prefix).appendNul().getKey();
-            toKey = SuccessorUtil.successor(fromKey.clone());
+            // toKey = SuccessorUtil.successor(fromKey.clone());
+
+            /*
+             * The naive successor code (above) did not work correctly.
+             * Rather than amend the SuccessorUtil (which may be fine in
+             * other scenarios), I have provided an alternate
+             * implementation to handle a collated key.
+             */
+            toKey = successor(fromKey);

             if (true || log.isDebugEnabled()) {

@@ -1720,4 +1726,29 @@

     }

+    /**
+     * The SuccessorUtil does not work with CollatedKeys since it bumps the "meta/control" data
+     * at the end of the key, rather than the "value" data of the key.
+     *
+     * It has been observed that the key data is delimited with a 01 byte, followed by meta/control
+     * data with the key itself delimited by a 00 byte.
+     *
+     * Note that this has only been analyzed for the ICU collator; the standard Java collator does include
+     * 00 bytes in the key. However, it too appears to delimit the value key with a 01 byte so the
+     * same method should work.
+     *
+     * @param src - original key
+     * @return the next key
+     */
+    private static byte[] successor(final byte[] src) {
+        final byte[] nxt = src.clone();
+        for (int i = 1; i < nxt.length; i++) {
+            if (nxt[i] == 01) { // end of data
+                nxt[i-1]++;
+                break;
+            }
+        }
+
+        return nxt;
+    }
 }

This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
|
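For illustration, the effect of the new successor method on the IDENTICAL strength key for "bry" given in the PrefixFilter javadoc: the data portion of the collated key ends at the first 01 byte, and the byte just before that delimiter is bumped, so the successor sorts above every key whose data portion extends the prefix while the trailing meta/control weights are left alone. The method body below is copied from the diff above; only the small demo harness around it is new.

import java.util.Arrays;

public class CollatedSuccessorDemo {

    // Copied from the Name2Addr change above: bump the byte immediately
    // before the 01 delimiter that ends the data portion of a collated key.
    private static byte[] successor(final byte[] src) {
        final byte[] nxt = src.clone();
        for (int i = 1; i < nxt.length; i++) {
            if (nxt[i] == 01) { // end of data
                nxt[i - 1]++;
                break;
            }
        }
        return nxt;
    }

    public static void main(final String[] args) {
        // IDENTICAL strength sort key for "bry" from the PrefixFilter javadoc.
        final byte[] fromKey = { 43, 75, 89, 41, 67, 1, 9, 1, (byte) 143, 9 };
        // Prints [43, 75, 89, 41, 68, 1, 9, 1, -113, 9]: 67 became 68 at the
        // position just before the first 01 byte. (143 prints as -113 because
        // Java bytes are signed.)
        System.out.println(Arrays.toString(successor(fromKey)));
    }
}

Note that r8466 above supersedes this approach: there the prefix key itself is formed at PRIMARY strength, so the plain SuccessorUtil.successor() applies and this hand-rolled variant is commented out.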
From: <tob...@us...> - 2014-06-10 23:30:23
|
Revision: 8463 http://sourceforge.net/p/bigdata/code/8463 Author: tobycraig Date: 2014-06-10 23:30:13 +0000 (Tue, 10 Jun 2014) Log Message: ----------- #968 - Added links to bigdata.com and wiki to bottom of page Modified Paths: -------------- branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/css/style.css branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/index.html Modified: branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/css/style.css =================================================================== --- branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/css/style.css 2014-06-10 22:43:25 UTC (rev 8462) +++ branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/css/style.css 2014-06-10 23:30:13 UTC (rev 8463) @@ -346,3 +346,8 @@ border: 1px solid #e1e1e1; box-sizing: border-box; } + +#links { + text-align: center; + margin-top: 20px; +} \ No newline at end of file Modified: branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/index.html =================================================================== --- branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/index.html 2014-06-10 22:43:25 UTC (rev 8462) +++ branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/index.html 2014-06-10 23:30:13 UTC (rev 8463) @@ -230,7 +230,7 @@ </div> - <div class="clear"> </div> + <div id="links"><a href="http://www.bigdata.com" target="_blank">Bigdata</a> - <a href="http://wiki.bigdata.com/" target="_blank">Wiki</a></div> </div> This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tob...@us...> - 2014-06-10 22:43:31
|
Revision: 8462 http://sourceforge.net/p/bigdata/code/8462 Author: tobycraig Date: 2014-06-10 22:43:25 +0000 (Tue, 10 Jun 2014) Log Message: ----------- Fixed CodeMirror showing through modal overlay mask Modified Paths: -------------- branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/css/style.css Modified: branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/css/style.css =================================================================== --- branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/css/style.css 2014-06-10 22:02:47 UTC (rev 8461) +++ branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/css/style.css 2014-06-10 22:43:25 UTC (rev 8462) @@ -167,6 +167,7 @@ margin-left: 25%; background-color: white; padding: 20px; + z-index: 4; } #overlay { @@ -178,6 +179,7 @@ height: 100%; background-color: grey; opacity: 0.5; + z-index: 3; } .modal-open #overlay { This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tob...@us...> - 2014-06-10 22:02:56
|
Revision: 8461 http://sourceforge.net/p/bigdata/code/8461 Author: tobycraig Date: 2014-06-10 22:02:47 +0000 (Tue, 10 Jun 2014) Log Message: ----------- Fixed namespace abbreviation of query results and added it to explore pane Modified Paths: -------------- branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/js/workbench.js Modified: branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/js/workbench.js =================================================================== --- branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/js/workbench.js 2014-06-10 21:38:02 UTC (rev 8460) +++ branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/js/workbench.js 2014-06-10 22:02:47 UTC (rev 8461) @@ -1176,7 +1176,7 @@ } else { var uri = col.value; if(col.type == 'uri') { - uri = '<' + uri + '>'; + uri = abbreviate(uri); } } var output = escapeHTML(uri).replace(/\n/g, '<br>'); @@ -1402,9 +1402,11 @@ } function abbreviate(uri) { - for(var ns in NAMESPACE_SHORTCUTS) { - if(uri.indexOf(NAMESPACE_SHORTCUTS[ns]) == 0) { - return uri.replace(NAMESPACE_SHORTCUTS[ns], ns + ':'); + for(var nsGroup in NAMESPACE_SHORTCUTS) { + for(var ns in NAMESPACE_SHORTCUTS[nsGroup]) { + if(uri.indexOf(NAMESPACE_SHORTCUTS[nsGroup][ns]) == 0) { + return uri.replace(NAMESPACE_SHORTCUTS[nsGroup][ns], ns + ':'); + } } } return '<' + uri + '>'; This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tob...@us...> - 2014-06-10 21:38:10
|
Revision: 8460 http://sourceforge.net/p/bigdata/code/8460 Author: tobycraig Date: 2014-06-10 21:38:02 +0000 (Tue, 10 Jun 2014) Log Message: ----------- Fixed a variable that was accidentally made global instead of local (missing var declaration) Modified Paths: -------------- branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/js/workbench.js

Modified: branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/js/workbench.js
===================================================================
--- branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/js/workbench.js 2014-06-10 20:59:43 UTC (rev 8459)
+++ branches/WORKBENCH_QUERY_HISTORY/bigdata-war/src/html/js/workbench.js 2014-06-10 21:38:02 UTC (rev 8460)
@@ -1179,7 +1179,7 @@
          uri = '<' + uri + '>';
       }
    }
-   output = escapeHTML(uri).replace(/\n/g, '<br>');
+   var output = escapeHTML(uri).replace(/\n/g, '<br>');
    if(col.type == 'uri' || col.type == 'sid') {
       output = '<a href="' + buildExploreHash(uri) + '">' + output + '</a>';
    }

This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
|