From: <tho...@us...> - 2012-08-14 20:28:29
Revision: 6445
http://bigdata.svn.sourceforge.net/bigdata/?rev=6445&view=rev
Author: thompsonbry
Date: 2012-08-14 20:28:23 +0000 (Tue, 14 Aug 2012)

Log Message:
-----------
Conditional logging in TestRDFXMLInterchangeWithStatementIdentifiers. VoID: made the top-level method to request a description public.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestRDFXMLInterchangeWithStatementIdentifiers.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/VoID.java

Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestRDFXMLInterchangeWithStatementIdentifiers.java
===================================================================
--- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestRDFXMLInterchangeWithStatementIdentifiers.java 2012-08-14 19:57:11 UTC (rev 6444)
+++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestRDFXMLInterchangeWithStatementIdentifiers.java 2012-08-14 20:28:23 UTC (rev 6445)
@@ -395,7 +395,7 @@
         final IAccessPath<ISPO> ap = store.getAccessPath(null, rdfType, Software);

-        log.info(store.dumpStatements(ap).toString());
+        if(log.isInfoEnabled()) log.info(store.dumpStatements(ap).toString());

         assertEquals("rangeCount", 3L, ap.rangeCount(true/* exact */));

Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/VoID.java
===================================================================
--- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/VoID.java 2012-08-14 19:57:11 UTC (rev 6444)
+++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/VoID.java 2012-08-14 20:28:23 UTC (rev 6445)
@@ -145,7 +145,8 @@
     }

     /**
-     * Describe the default data set.
+     * Describe the default data set (the one identified by the namespace
+     * associated with the {@link AbstractTripleStore}.
      *
      * @param describeStatistics
      *            When <code>true</code>, the VoID description will include the
@@ -157,7 +158,7 @@
      *            described in in the same level of detail as the default graph.
      *            Otherwise only the default graph will be described.
      */
-    protected void describeDataSet(final boolean describeStatistics,
+    public void describeDataSet(final boolean describeStatistics,
             final boolean describeNamedGraphs) {

         final String namespace = tripleStore.getNamespace();
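For context alongside the change above, the guarded-logging idiom introduced in this commit is sketched below. This is a minimal, illustrative Java example; the class and method names are not from the bigdata test class, and a log4j Logger is assumed:

import org.apache.log4j.Logger;

public class ConditionalLoggingExample {

    private static final Logger log = Logger
            .getLogger(ConditionalLoggingExample.class);

    public void dump(final Object expensiveToRender) {

        // Guard the call so the potentially costly toString() is only
        // evaluated when INFO logging is actually enabled.
        if (log.isInfoEnabled())
            log.info(expensiveToRender.toString());

    }

}

The guard matters when rendering the log message is expensive (here, dumping an entire access path), since the argument would otherwise be materialized even when the INFO level is disabled.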
From: <tho...@us...> - 2012-08-15 18:47:05
Revision: 6447
http://bigdata.svn.sourceforge.net/bigdata/?rev=6447&view=rev
Author: thompsonbry
Date: 2012-08-15 18:46:55 +0000 (Wed, 15 Aug 2012)

Log Message:
-----------
Working on MVCC view semantics for the SOLUTION SETS and DESCRIBE caches.

- Refactored BOpContext to take the PipelineOp and the lastInvocation flag as constructor arguments. This required touching a lot of the bop test suites. The PipelineOp gives us access to the chunk capacity in the BOpContext and should be useful in the long term. At present it is being used to locate the alternate solution set source.

- Encapsulated field references in NamedSolutionSetRef. Extracted an INamedSolutionSetRef interface. Created a utility class to encapsulate the creation of named solution set references. Moved the named solution set reference classes and unit tests into bigdata/com.bigdata.bop. Added the concept of the Fully Qualified Name (FQN) for a named solution set.

- Renamed ISparqlCache => ISolutionSetCache and SparqlCache => SolutionSetCache.

- Temporarily disabled the solution set cache in QueryHints. The solution set cache MUST use Name2Addr now or it will lose access to the solution sets when the [cacheMap] goes out of scope.

- Modified StaticAnalysis#getSolutionSetStats(name) to throw an exception if the named solution set could not be resolved (all callers were doing this, so I lifted the exception into the method that was being called).

- Refactored the SparqlCacheFactory and related interfaces to create an ICacheConnection abstraction. This is part of working on MVCC views for the IDescribeCache and the ISparqlCache; both need to be aware of the namespace and timestamp of the KB view.

- Delegated all resolution of pre-existing named solution sets to static methods on NamedSolutionSetRefUtility. This fixes a problem where static analysis was not looking in all the right places. (A usage sketch of the new reference factory appears at the end of this message.)
@see https://sourceforge.net/apps/trac/bigdata/ticket/531 (SPARQL Update for Solution Sets) @see https://sourceforge.net/apps/trac/bigdata/ticket/584 (DESCRIBE CACHE) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/BOpContext.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/controller/HTreeNamedSubqueryOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/controller/JVMNamedSubqueryOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/controller/NamedSetAnnotations.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/engine/ChunkedRunningQuery.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/engine/QueryLog.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/join/HTreeHashIndexOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/join/HTreeHashJoinOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/join/HTreeMergeJoin.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/join/HashIndexOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/join/HashJoinOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/join/JVMHashIndexOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/join/JVMHashJoinOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/join/JVMMergeJoin.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/join/NestedLoopJoinOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/join/SolutionSetHashJoinOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/solutions/HTreeDistinctBindingSetsOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/solutions/ISolutionSet.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/JournalDelegate.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Name2Addr.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/resources/ResourceEvents.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/IRWStrategy.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/TestAll.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/ap/TestPredicateAccessPath.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/bset/TestConditionalRoutingOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/bset/TestCopyBindingSets.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/fed/TestRemoteAccessPath.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/join/AbstractHashJoinOpTestCase.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/join/HashIndexOpTestCase.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/join/TestHTreeHashIndexOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/join/TestHTreeHashJoinOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/join/TestHTreeSolutionSetHashJoin.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/join/TestJVMHashIndexOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/join/TestJVMHashJoinOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/join/TestPipelineJoin.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/solutions/AbstractAggregationTestCase.java 
branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/solutions/AbstractDistinctSolutionsTestCase.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/solutions/TestHTreeDistinctBindingSets.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/solutions/TestMemorySortOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/solutions/TestSliceOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata-gom/src/java/com/bigdata/gom/om/ObjectManager.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/encoder/IVBindingSetEncoder.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/QueryHints.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/StaticAnalysis.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/StaticAnalysisBase.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/StaticAnalysis_CanJoin.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/DescribeCache.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/DescribeServiceFactory.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/IDescribeCache.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpContext.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpJoins.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUtility.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/IEvaluationContext.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/TestAll.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/cache/TestAll.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestAll.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestInclude.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DescribeCacheServlet.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/NamedSolutionSetRef.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/NamedSolutionSetRefUtility.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/controller/INamedSolutionSetRef.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/bop/TestNamedSolutionSetRef.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/CacheConnectionFactory.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/CacheConnectionImpl.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/ICacheConnection.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/ISolutionSetCache.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/SolutionSetCache.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/cache/TestCacheConnectionFactory.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/cache/TestSolutionSetCache.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/rio/ Removed Paths: ------------- 
branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/controller/NamedSolutionSetRef.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/ISparqlCache.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/SparqlCache.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/SparqlCacheFactory.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/TestNamedSolutionSetRef.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/cache/TestSparqlCacheFactory.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/cache/TestAll.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/cache/TestSolutionSetCache.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/BOpContext.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/BOpContext.java 2012-08-14 20:36:29 UTC (rev 6446) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/BOpContext.java 2012-08-15 18:46:55 UTC (rev 6447) @@ -31,29 +31,29 @@ import java.util.Map; import java.util.NoSuchElementException; import java.util.UUID; -import java.util.concurrent.atomic.AtomicBoolean; import org.apache.http.conn.ClientConnectionManager; import com.bigdata.bop.bindingSet.ListBindingSet; -import com.bigdata.bop.controller.NamedSolutionSetRef; +import com.bigdata.bop.controller.INamedSolutionSetRef; import com.bigdata.bop.engine.BOpStats; import com.bigdata.bop.engine.IChunkMessage; import com.bigdata.bop.engine.IQueryClient; import com.bigdata.bop.engine.IRunningQuery; -import com.bigdata.bop.engine.QueryEngine; import com.bigdata.bop.join.BaseJoinStats; import com.bigdata.bop.join.IHashJoinUtility; import com.bigdata.btree.ISimpleIndexAccess; import com.bigdata.journal.AbstractJournal; import com.bigdata.journal.IIndexManager; import com.bigdata.journal.ITx; +import com.bigdata.journal.TimestampUtility; import com.bigdata.rdf.internal.IV; import com.bigdata.rdf.internal.impl.bnode.SidIV; import com.bigdata.rdf.model.BigdataBNode; import com.bigdata.rdf.sparql.ast.QueryHints; -import com.bigdata.rdf.sparql.ast.cache.ISparqlCache; -import com.bigdata.rdf.sparql.ast.cache.SparqlCacheFactory; +import com.bigdata.rdf.sparql.ast.cache.CacheConnectionFactory; +import com.bigdata.rdf.sparql.ast.cache.ICacheConnection; +import com.bigdata.rdf.sparql.ast.cache.ISolutionSetCache; import com.bigdata.rdf.spo.ISPO; import com.bigdata.rdf.spo.SPO; import com.bigdata.rdf.spo.SPOPredicate; @@ -91,17 +91,14 @@ private final IBlockingBuffer<E[]> sink2; - private final AtomicBoolean lastInvocation = new AtomicBoolean(false); + /** + * The operator that is being executed. + */ + private final PipelineOp op; + + private final boolean lastInvocation; /** - * Set by the {@link QueryEngine} when the criteria specified by - * {@link #isLastInvocation()} are satisfied. - */ - public void setLastInvocation() { - lastInvocation.set(true); - } - - /** * <code>true</code> iff this is the last invocation of the operator. The * property is only set to <code>true</code> for operators which: * <ol> @@ -117,16 +114,11 @@ * parallel. This is why the evaluation context is locked to the query * controller. 
In addition, the operator must declare that it is NOT thread * safe in order for the query engine to serialize its evaluation tasks. - * - * @todo This should be a ctor parameter. We just have to update the test - * suites for the changed method signature. */ -// * <li>{@link BOp.Annotations#EVALUATION_CONTEXT} is -// * {@link BOpEvaluationContext#CONTROLLER}</li> public boolean isLastInvocation() { - return lastInvocation.get(); + return lastInvocation; } - + /** * The interface for a running query. * <p> @@ -138,7 +130,7 @@ public IRunningQuery getRunningQuery() { return runningQuery; } - + /** * The index partition identifier -or- <code>-1</code> if the index is not * sharded. @@ -156,31 +148,19 @@ } /** + * Return the operator that is being executed. + */ + public PipelineOp getOperator() { + return op; + } + + /** * Where to read the data to be consumed by the operator. */ public final ICloseableIterator<E[]> getSource() { return source; } -// /** -// * Attach another source. The decision to attach the source is mutex with -// * respect to the decision that the source reported by {@link #getSource()} -// * is exhausted. -// * -// * @param source -// * The source. -// * -// * @return <code>true</code> iff the source was attached. -// */ -// public boolean addSource(IAsynchronousIterator<E[]> source) { -// -// if (source == null) -// throw new IllegalArgumentException(); -// -// return this.source.add(source); -// -// } - /** * Where to write the output of the operator. * @@ -202,54 +182,69 @@ return sink2; } - /** - * - * @param runningQuery - * The {@link IRunningQuery}. - * @param partitionId - * The index partition identifier -or- <code>-1</code> if the - * index is not sharded. - * @param stats - * The object used to collect statistics about the evaluation of - * this operator. - * @param source - * Where to read the data to be consumed by the operator. - * @param sink - * Where to write the output of the operator. - * @param sink2 - * Alternative sink for the output of the operator (optional). - * This is used by things like SPARQL optional joins to route - * failed joins outside of the join group. - * - * @throws IllegalArgumentException - * if the <i>stats</i> is <code>null</code> - * @throws IllegalArgumentException - * if the <i>source</i> is <code>null</code> (use an empty - * source if the source will be ignored). - * @throws IllegalArgumentException - * if the <i>sink</i> is <code>null</code> - * - * @todo Modify to accept {@link IChunkMessage} or an interface available - * from getChunk() on {@link IChunkMessage} which provides us with - * flexible mechanisms for accessing the chunk data. - * <p> - * When doing that, modify to automatically track the {@link BOpStats} - * as the <i>source</i> is consumed. - * <p> - * Note: The only call to this method outside of the test suite is - * from ChunkedRunningQuery. It always has a fully materialized - * chunk on hand and ready to be processed. - */ + /** + * + * @param runningQuery + * The {@link IRunningQuery}. + * @param partitionId + * The index partition identifier -or- <code>-1</code> if the + * index is not sharded. + * @param stats + * The object used to collect statistics about the evaluation of + * this operator. + * @param source + * Where to read the data to be consumed by the operator. + * @param op + * The operator that is being executed. + * @param lastInvocation + * <code>true</code> iff this is the last invocation pass for + * that operator. + * @param sink + * Where to write the output of the operator. 
+ * @param sink2 + * Alternative sink for the output of the operator (optional). + * This is used by things like SPARQL optional joins to route + * failed joins outside of the join group. + * + * @throws IllegalArgumentException + * if the <i>stats</i> is <code>null</code> + * @throws IllegalArgumentException + * if the <i>source</i> is <code>null</code> (use an empty + * source if the source will be ignored). + * @throws IllegalArgumentException + * if the <i>sink</i> is <code>null</code> + * + * @todo Modify to accept {@link IChunkMessage} or an interface available + * from getChunk() on {@link IChunkMessage} which provides us with + * flexible mechanisms for accessing the chunk data. + * <p> + * When doing that, modify to automatically track the {@link BOpStats} + * as the <i>source</i> is consumed. + * <p> + * Note: The only call to this method outside of the test suite is + * from ChunkedRunningQuery. It always has a fully materialized chunk + * on hand and ready to be processed. + */ @SuppressWarnings({ "unchecked", "rawtypes" }) - public BOpContext(final IRunningQuery runningQuery, final int partitionId, - final BOpStats stats, final ICloseableIterator<E[]> source, - final IBlockingBuffer<E[]> sink, final IBlockingBuffer<E[]> sink2) { + public BOpContext(// + final IRunningQuery runningQuery,// + final int partitionId,// + final BOpStats stats, // + final PipelineOp op,// + final boolean lastInvocation,// + final ICloseableIterator<E[]> source,// + final IBlockingBuffer<E[]> sink, // + final IBlockingBuffer<E[]> sink2// + ) { super(runningQuery.getFederation(), runningQuery.getLocalIndexManager()); if (stats == null) throw new IllegalArgumentException(); - + + if (op == null) + throw new IllegalArgumentException(); + if (source == null) throw new IllegalArgumentException(); @@ -259,6 +254,8 @@ this.runningQuery = runningQuery; this.partitionId = partitionId; this.stats = stats; + this.op = op; + this.lastInvocation = lastInvocation; /* * Wrap each IBindingSet to provide access to the BOpContext. * @@ -498,7 +495,7 @@ // throw new IllegalArgumentException(); // // /* -// * FIXME There are several cases here, one for each type of +// * TODO There are several cases here, one for each type of // * data structure we need to access and each means of identifying // * that data structure. The main case is Stream (for named solution sets). // * If we wind up always modeling a named solution set as a Stream, then @@ -562,22 +559,38 @@ */ @SuppressWarnings("unchecked") public ICloseableIterator<IBindingSet[]> getAlternateSource( - final PipelineOp op, final NamedSolutionSetRef namedSetRef) { + final INamedSolutionSetRef namedSetRef) { + // Iterator visiting the solution set. + final ICloseableIterator<IBindingSet> src; + + // The local (application) name of the solution set. + final String localName = namedSetRef.getLocalName(); + /* - * Lookup the attributes for the query that will be used to resolve the - * named solution set. + * When non-null, this identifies the IRunningQuery that we need to look + * at to find the named solution set. */ - final IQueryAttributes queryAttributes = getQueryAttributes(namedSetRef.queryId); + final UUID queryId = namedSetRef.getQueryId(); + + if (queryId != null) { - // Resolve the named solution set. - final Object tmp = queryAttributes.get(namedSetRef); + /* + * Lookup the attributes for the query that will be used to resolve + * the named solution set. 
+ */ + final IQueryAttributes queryAttributes = getQueryAttributes(queryId); - // Iterator visiting the solution set. - final ICloseableIterator<IBindingSet> src; + // Resolve the named solution set. + final Object tmp = queryAttributes.get(namedSetRef); - if (tmp != null) { + if (tmp == null) { + throw new RuntimeException("Not found: name=" + localName + + ", namedSetRef=" + namedSetRef); + + } + if (tmp instanceof IHashJoinUtility) { /* @@ -600,8 +613,8 @@ } else { /* - * We found something, but we do not know how to turn it into an - * iterator visiting solutions. + * We found something, but we do not know how to turn it + * into an iterator visiting solutions. */ throw new UnsupportedOperationException("namedSetRef=" @@ -609,101 +622,39 @@ } + return new Chunkerator<IBindingSet>(src, op.getChunkCapacity(), + IBindingSet.class); + } else { - /* - * There is no query attribute for that NamedSolutionSetRef. - * - * The query attributes are the first level of resolution. Since - * nothing was found there, we will now look for an index (BTree, - * HTree, Stream, etc) having the specified name. - * - * The search order is CACHE, local Journal, federation. - * - * TODO The name of the desired solution set might need to be paired - * with the NamedSolutionSetRef or modeled by a different type of - * object for the NAMED_SET_REF annotation, otherwise we might not - * be able to clearly specify the name of a stream and the name of - * an index over that stream. - * - * TODO We might need/want to explicitly identify the conceptual - * location of the named solution set (cache, local index manager, - * federation) when the query is compiled so we only look in the - * right place at when the operator is executing. That could - * decrease latency for operators which execute multiple times, - * report errors early if something can not be resolved, and - * eliminate some overhead with testing remote services during - * operator evaluation (if the cache is non-local). - */ - - // The name of a solution set. - final String name = namedSetRef.namedSet; - // Resolve the object which will give us access to the named // solution set. - final ISparqlCache sparqlCache = SparqlCacheFactory - .getExistingSparqlCache(getRunningQuery().getQueryEngine()); + final ICacheConnection cacheConn = CacheConnectionFactory + .getExistingCacheConnection(getRunningQuery() + .getQueryEngine()); - if (sparqlCache != null && sparqlCache.existsSolutions(name)) { + final String namespace = namedSetRef.getNamespace(); - return sparqlCache.getSolutions(name); + final long timestamp = namedSetRef.getTimestamp(); - } + final ISolutionSetCache sparqlCache = cacheConn == null ? null + : cacheConn.getSparqlCache(namespace, timestamp); - /* - * FIXME Consider specifying the index using NT to specify both the - * name and the timestamp. Or put the timestamp into the - * NamedSolutionSetRef. - */ - final long timestamp = ITx.READ_COMMITTED; + // TODO ClassCastException is possible? 
+ final AbstractJournal localIndexManager = (AbstractJournal) getIndexManager(); - final IIndexManager localIndexManager = getIndexManager(); + return NamedSolutionSetRefUtility.getSolutionSet(// + sparqlCache,// + localIndexManager,// + namespace,// + timestamp,// + localName,// + namedSetRef.getJoinVars(),// + op.getChunkCapacity()// + ); - if (localIndexManager instanceof AbstractJournal) { + } - final ISimpleIndexAccess index = ((AbstractJournal) localIndexManager) - .getIndexLocal(name, timestamp); - - if (index != null) { - - src = (ICloseableIterator<IBindingSet>) index.scan(); - - } else { - - /* - * TODO Provide federation-wide access to a durable named - * index, returning the index scan. - */ - - final IBigdataFederation<?> fed = getFederation(); - - // resolve remote index, obtaining "scan" of solutions. - - src = null; - - } - - } else { - - /* - * This is an odd code path. It is possible that we could hit it - * if the local index manager were null (in unit tests) or some - * wrapped object (such as an IJournal delegate). - */ - - throw new AssertionError(); - - } - - if (src == null) - throw new RuntimeException("Not found: name=" + name - + ", namedSetRef=" + namedSetRef); - - } - - return new Chunkerator<IBindingSet>(src, op.getChunkCapacity(), - IBindingSet.class); - } /** Copied: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/NamedSolutionSetRef.java (from rev 6425, branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/controller/NamedSolutionSetRef.java) =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/NamedSolutionSetRef.java (rev 0) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/NamedSolutionSetRef.java 2012-08-15 18:46:55 UTC (rev 6447) @@ -0,0 +1,313 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2011. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +/* + * Created on Aug 31, 2011 + */ + +package com.bigdata.bop; + +import java.util.Arrays; +import java.util.UUID; + +import com.bigdata.bop.controller.INamedSolutionSetRef; +import com.bigdata.bop.engine.IRunningQuery; +import com.bigdata.journal.ITx; + +/** + * Class models the information which uniquely describes a named solution set. + * The "name" is comprised of the following components: + * <dl> + * <dt>queryId</dt> + * <dd>The {@link UUID} of the query which generated the named solution set. + * This provides the scope for the named solution set. It is used to (a) locate + * the data; and (b) release the data when the query goes out of scope.</dd> + * <dt>namedSet</dt> + * <dd>The "name" of the solution set as given in the query. 
The name is not a + * sufficient identifier since the same solution set name may be used in + * different queries and with different join variables.</dd> + * <dt>joinVars[]</dt> + * <dd>The ordered array of the join variable. This serves to differentiate + * among named solution sets having the same data but different join variables.</dd> + * </dl> + * Together, these components provide for a name that is unique within the scope + * of a query. + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + * @version $Id: NamedSolutionSetRef.java 5716 2011-11-21 20:47:10Z thompsonbry + * $ + */ +public class NamedSolutionSetRef implements INamedSolutionSetRef { + + /** + * + */ + private static final long serialVersionUID = 1L; + + /** + * The {@link UUID} of the {@link IRunningQuery} which generated the named + * solution set. This is where you need to look to find the data. + */ + private final UUID queryId; + + /** + * The namespace associated with the KB view -or- <code>null</code> if the + * named solution set is attached to an {@link IRunningQuery}. + */ + private final String namespace; + + /** + * The timestamp associated with the KB view. + * <p> + * Note: This MUST be ignored if {@link #namespace} is <code>null</code>. + */ + private final long timestamp; + + /** + * The application level name for the named solution set. + */ + private final String localName; + + /** + * The ordered set of variables that specifies the ordered set of components + * in the key for the desired index over the named solution set (required, + * but may be an empty array). + */ + @SuppressWarnings("rawtypes") + private final IVariable[] joinVars; + + @Override + final public UUID getQueryId() { + + return queryId; + + } + + @Override + public String getNamespace() { + + return namespace; + + } + + @Override + public long getTimestamp() { + + return timestamp; + + } + + @Override + final public String getLocalName() { + + return localName; + + } + + @Override + final public IVariable[] getJoinVars() { + + // TODO return clone of the array to avoid possible modification? + return joinVars; + + } + + /** + * + * @param queryId + * The {@link UUID} of the {@link IRunningQuery} where you need + * to look to find the data (required). + * @param namedSet + * The application level name for the named solution set + * (required). + * @param joinVars + * The join variables (required, but may be an empty array). + */ + @SuppressWarnings("rawtypes") + NamedSolutionSetRef(// + final UUID queryId, // + final String namedSet,// + final IVariable[] joinVars// + ) { + + if (queryId == null) + throw new IllegalArgumentException(); + + if (namedSet == null) + throw new IllegalArgumentException(); + + if (joinVars == null) + throw new IllegalArgumentException(); + + this.queryId = queryId; + + this.namespace = null; + + // Note: This should be IGNORED since the [namespace] is null. + this.timestamp = ITx.READ_COMMITTED; + + this.localName = namedSet; + + this.joinVars = joinVars; + + } + + /** + * + * @param namespace + * The namespace of the KB view. + * @param timestamp + * The timestamp associated with the KB view. + * @param localName + * The application level name for the named solution set + * (required). + * @param joinVars + * The join variables (required, but may be an empty array). 
+ */ + @SuppressWarnings("rawtypes") + NamedSolutionSetRef(// + final String namespace, // + final long timestamp,// + final String localName,// + final IVariable[] joinVars// + ) { + + if (namespace == null) + throw new IllegalArgumentException(); + + if (localName == null) + throw new IllegalArgumentException(); + + if (joinVars == null) + throw new IllegalArgumentException(); + + this.queryId = null; + + this.namespace = namespace; + + this.timestamp = timestamp; + + this.localName = localName; + + this.joinVars = joinVars; + + } + + private transient volatile String fqn; + + public String getFQN() { + + if (fqn == null) { + + synchronized (localName) { + + if (namespace == null) { + + fqn = localName; + + } else { + + fqn = NamedSolutionSetRefUtility.getFQN( + namespace, localName, joinVars); + + } + + } + + } + + return fqn; + + } + + @Override + public int hashCode() { + if (h == 0) { + // TODO Review this for effectiveness. + h = (queryId == null ? namespace.hashCode() + (int) timestamp + : queryId.hashCode()) + + localName.hashCode() + + Arrays.hashCode(joinVars); + } + return h; + } + + private transient int h; + + @Override + public boolean equals(final Object o) { + + if (this == o) + return true; + + if (!(o instanceof NamedSolutionSetRef)) + return false; + + final NamedSolutionSetRef t = (NamedSolutionSetRef) o; + + if (queryId != null) { + + if (!queryId.equals(t.queryId)) + return false; + + } else { + + if (!namespace.equals(t.namespace)) + return false; + + if (timestamp != t.timestamp) + return false; + + } + + if (!localName.equals(t.localName)) + return false; + + if (!Arrays.equals(joinVars, t.joinVars)) + return false; + + return true; + + } + + @Override + public String toString() { + + final StringBuilder sb = new StringBuilder(); + + sb.append(getClass().getSimpleName()); + sb.append("{localName=").append(localName); + if (queryId == null) { + sb.append(",namespace=").append(namespace); + sb.append(",timestamp=").append(timestamp); + } else { + sb.append(",queryId=").append(queryId); + } + sb.append(",joinVars=").append(Arrays.toString(joinVars)); + sb.append("}"); + + return sb.toString(); + + } + +} Added: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/NamedSolutionSetRefUtility.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/NamedSolutionSetRefUtility.java (rev 0) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/NamedSolutionSetRefUtility.java 2012-08-15 18:46:55 UTC (rev 6447) @@ -0,0 +1,561 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2011. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. 
+ +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +/* + * Created on Aug 15, 2012 + */ +package com.bigdata.bop; + +import java.util.Arrays; +import java.util.UUID; + +import com.bigdata.bop.controller.INamedSolutionSetRef; +import com.bigdata.bop.engine.IRunningQuery; +import com.bigdata.bop.solutions.ISolutionSet; +import com.bigdata.btree.IIndex; +import com.bigdata.btree.ISimpleIndexAccess; +import com.bigdata.journal.AbstractJournal; +import com.bigdata.journal.ITx; +import com.bigdata.journal.TimestampUtility; +import com.bigdata.rdf.sparql.ast.ISolutionSetStats; +import com.bigdata.rdf.sparql.ast.cache.ISolutionSetCache; +import com.bigdata.rdf.store.AbstractTripleStore; +import com.bigdata.striterator.Chunkerator; +import com.bigdata.striterator.ICloseableIterator; + +/** + * Utility class for {@link INamedSolutionSetRef}s. + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + */ +public class NamedSolutionSetRefUtility { + + /** + * Factory for {@link INamedSolutionSetRef}s that will be resolved against + * the {@link IRunningQuery} identified by the specified <i>queryId</i>. + * + * @param queryId + * The {@link UUID} of the {@link IRunningQuery} where you need + * to look to find the data (required). + * @param namedSet + * The application level name for the named solution set + * (required). + * @param joinVars + * The join variables (required, but may be an empty array). + */ + @SuppressWarnings("rawtypes") + public static INamedSolutionSetRef newInstance(// + final UUID queryId, // + final String namedSet,// + final IVariable[] joinVars// + ) { + + // Note: checked by the constructor. + +// if (queryId == null) +// throw new IllegalArgumentException(); +// +// if (namedSet == null) +// throw new IllegalArgumentException(); +// +// if (joinVars == null) +// throw new IllegalArgumentException(); + + return new NamedSolutionSetRef(queryId, namedSet, joinVars); + + } + + /** + * Factory for {@link INamedSolutionSetRef}s that will be resolved against a + * KB view identified by a <i>namespace</i> and <i>timestamp</i>. + * + * @param namespace + * The bigdata namespace of the {@link AbstractTripleStore} where + * you need to look to find the data (required). + * @param timestamp + * The timestamp of the view. + * @param localName + * The application level name for the named solution set + * (required). + * @param joinVars + * The join variables (required, but may be an empty array). + */ + @SuppressWarnings("rawtypes") + public static INamedSolutionSetRef newInstance(// + final String namespace, // + final long timestamp,// + final String localName,// + final IVariable[] joinVars// + ) { + + // Note: checked by the constructor. + +// if (namespace == null) +// throw new IllegalArgumentException(); +// +// if (namedSet == null) +// throw new IllegalArgumentException(); +// +// if (joinVars == null) +// throw new IllegalArgumentException(); + + return new NamedSolutionSetRef(namespace, timestamp, localName, + joinVars); + + } + + /** + * Parses the {@link INamedSolutionSetRef#toString()} representation, + * returning an instance of that interface. 
+ * + * @see NamedSolutionSetRef#toString() + */ + public static INamedSolutionSetRef valueOf(final String s) { + + final String namedSet; + { + + final int posNamedSet = assertIndex(s, s.indexOf("localName=")); + + final int posNamedSetEnd = assertIndex(s, + s.indexOf(",", posNamedSet)); + + namedSet = s.substring(posNamedSet + 10, posNamedSetEnd); + + } + + final IVariable[] joinVars; + { + + final int posJoinVars = assertIndex(s, s.indexOf("joinVars=[")); + + final int posJoinVarsEnd = assertIndex(s, + s.indexOf("]", posJoinVars)); + + final String joinVarsStr = s.substring(posJoinVars + 10, + posJoinVarsEnd); + + final String[] a = joinVarsStr.split(", "); + + joinVars = new IVariable[a.length]; + + for (int i = 0; i < a.length; i++) { + + joinVars[i] = Var.var(a[i]); + + } + } + + if (s.indexOf("queryId") != -1) { + + final int posQueryId = assertIndex(s, s.indexOf("queryId=")); + final int posQueryIdEnd = assertIndex(s, s.indexOf(",", posQueryId)); + final String queryIdStr = s.substring(posQueryId + 8, posQueryIdEnd); + final UUID queryId = UUID.fromString(queryIdStr); + + return NamedSolutionSetRefUtility.newInstance(queryId, namedSet, joinVars); + + } else { + + final String namespace; + { + final int posNamespace = assertIndex(s, s.indexOf("namespace=")); + final int posNamespaceEnd = assertIndex(s, + s.indexOf(",", posNamespace)); + namespace = s.substring(posNamespace + 10, posNamespaceEnd); + } + + final long timestamp; + { + final int posTimestamp = assertIndex(s, s.indexOf("timestamp=")); + final int posTimestampEnd = assertIndex(s, + s.indexOf(",", posTimestamp)); + final String timestampStr = s.substring(posTimestamp + 10, + posTimestampEnd); + timestamp = Long.valueOf(timestampStr); + } + + return NamedSolutionSetRefUtility.newInstance(namespace, timestamp, + namedSet, joinVars); + + } + + } + + static private int assertIndex(final String s, final int index) { + + if (index >= 0) + return index; + + throw new IllegalArgumentException(s); + + } + + /** + * Return the fully qualified name for a named solution set NOT attached to + * a query. + * <p> + * Note: this includes the namespace (to keep the named solution sets + * distinct for different KB instances) and the ordered list of key + * components (so we can identify different index orders for the same + * solution set). + * <p> + * Note: This does not allow duplicate indices of different types (BTree + * versus HTree) for the same key orders as their FQNs would collide. + * <p> + * Note: All index orders for the same "namedSet" will share a common + * prefix. + * <P> + * Note: All named solution set for the same KB will share a common prefix, + * and that prefix will be distinct from any other index. + * + * @param namespace + * The KB namespace. + * @param localName + * The local (aka application) name for the named solution set. + * @param joinVars + * The ordered set of key components (differentiates among + * different indices for the same named solution set). + * + * @return The fully qualified name. 
+ */ + public static String getFQN(// + final String namespace,// + final String localName, // + final IVariable[] joinVars// + ) { + + if (namespace == null) + throw new IllegalArgumentException(); + + if (localName == null) + throw new IllegalArgumentException(); + + if (joinVars == null) + throw new IllegalArgumentException(); + + final StringBuilder sb = getPrefix(namespace, localName); + + if (joinVars.length != 0) + sb.append("."); + + boolean first = true; + + for (IVariable<?> v : joinVars) { + + if (first) { + + first = false; + + } else { + + sb.append("-"); + + } + + sb.append(v.getName()); + } + + return sb.toString(); + + } + + /** + * The prefix that may be used to identify all named solution sets belonging + * to the specified KB namespace. + * + * @param namespace + * The KB namespace. + * + * @return The prefix shared by all solution sets for that KB namespace. + */ + public static StringBuilder getPrefix(final String namespace) { + + final StringBuilder sb = new StringBuilder(96); + + sb.append(namespace); + + sb.append(".solutionSets"); + + return sb; + + } + + /** + * The prefix that may be used to identify all named solution sets belonging + * to the specified KB namespace and having the specified localName. This + * may be used to find the different indices over the same named solution + * set when there is more than one index order for that named solution set. + * + * @param namespace + * The KB namespace. + * @param localName + * The application name for the named solution set. + * + * @return The prefix shared by all solution sets for that KB namespace and + * localName. + */ + public static StringBuilder getPrefix(final String namespace, + final String localName) { + + final StringBuilder sb = getPrefix(namespace); + + sb.append("."); + + sb.append(localName); + + return sb; + + } + +// /** +// * Resolve the pre-existing named solution set returning its +// * {@link ISolutionSetStats}. +// * +// * @param sparqlCache +// * @param localIndexManager +// * @return The {@link ISolutionSetStats} +// * +// * @throws RuntimeException +// * if the named solution set can not be found. +// */ +// public static ISolutionSetStats getSolutionSetStats( +// final ISparqlCache sparqlCache,// +// final AbstractJournal localIndexManager, // +// final INamedSolutionSetRef namedRef) { +// +// return getSolutionSetStats(sparqlCache, localIndexManager, +// namedRef.getNamespace(), namedRef.getTimestamp(), +// namedRef.getLocalName(), namedRef.getJoinVars()); +// +// } +// +// /** +// * Resolve the pre-existing named solution set returning an iterator that +// * will visit the solutions (access path scan). +// * +// * @return An iterator that will visit the solutions in the named solution +// * set. +// * @throws RuntimeException +// * if the named solution set can not be found. +// */ +// public static ICloseableIterator<IBindingSet[]> getSolutionSet( +// final ISparqlCache sparqlCache,// +// final AbstractJournal localIndexManager,// +// final INamedSolutionSetRef namedRef,// +// final int chunkCapacity// +// ) { +// +// return getSolutionSet(sparqlCache, localIndexManager, +// namedRef.getNamespace(), namedRef.getTimestamp(), +// namedRef.getLocalName(), namedRef.getJoinVars(), chunkCapacity); +// +// } + + /** + * Resolve the pre-existing named solution set returning its + * {@link ISolutionSetStats}. 
+ * + * @param sparqlCache + * @param localIndexManager + * @param namespace + * @param timestamp + * @param localName + * @param joinVars + * @return The {@link ISolutionSetStats} + * + * @throws RuntimeException + * if the named solution set can not be found. + * + * FIXME Drop joinVars here and just do a Name2Addr scan on the + * value returned by {@link #getPrefix(String, String)} to see + * if we can locate an index (regardless of the join variables). + * It does not matter *which* index we find, as long as it is + * the same data. + */ + public static ISolutionSetStats getSolutionSetStats(// + final ISolutionSetCache sparqlCache,// + final AbstractJournal localIndexManager, // + final String namespace,// + final long timestamp,// + final String localName,// + final IVariable[] joinVars// + ) { + + if (localName == null) + throw new IllegalArgumentException(); + + if (sparqlCache != null) { + + final ISolutionSetStats stats = sparqlCache + .getSolutionSetStats(localName); + + if (stats != null) { + + return stats; + + } + + } + + final String fqn = getFQN(namespace, localName, joinVars); + + final AbstractJournal localJournal = (AbstractJournal) localIndexManager; + + final ISimpleIndexAccess index; + + if (timestamp == ITx.UNISOLATED) { + + /* + * FIXME We may need to wrap this with the lock provided by + * UnisolatedReadWriteIndex. + * + * TODO A read-committed view would be Ok here (as long as + * the data were committed and not written on by the current + * SPARQL UPDATE request). + */ + index = localJournal.getUnisolatedIndex(fqn); + + } else if (TimestampUtility.isReadOnly(timestamp)) { + + index = localJournal.getIndexLocal(fqn, timestamp); + + } else { + + index = null; + + } + + if (index == null) + throw new RuntimeException("Unresolved solution set: namespace=" + + namespace + ", timestamp=" + timestamp + ", localName=" + + localName + ", joinVars=" + Arrays.toString(joinVars)); + + return ((ISolutionSet)index).getStats(); + + } + + /** + * Resolve the pre-existing named solution set returning an iterator that + * will visit the solutions (access path scan). + * <p> + * This method MUST NOT be used if the named solution set is hung off of an + * {@link IRunningQuery}. In that case, you need to resolve the + * {@link IRunningQuery} using {@link INamedSolutionSetRef#getQueryId()} and + * then resolve the solution set on the {@link IQueryAttributes} associated + * with that {@link IRunningQuery}. + * + * @return An iterator that will visit the solutions in the named solution + * set. + * @throws RuntimeException + * if the named solution set can not be found. + * + * FIXME Drop joinVars here and just do a Name2Addr scan on the + * value returned by {@link #getPrefix(String, String)} to see + * if we can locate an index (regardless of the join variables). + * It does not matter *which* index we find, as long as it is + * the same data. + * + * TODO Provide federation-wide access to a durable named index? + * The concept would need to be developed further. Would this be + * a local index exposed to other nodes in the federation? A + * hash partitioned index? An remote view of a global + * {@link IIndex}? 
+ */ + public static ICloseableIterator<IBindingSet[]> getSolutionSet( + final ISolutionSetCache sparqlCache,// + final AbstractJournal localIndexManager,// + final String namespace,// + final long timestamp,// + final String localName,// + final IVariable[] joinVars,// + final int chunkCapacity// + ) { + + /* + * We will now look for an index (BTree, HTree, Stream, etc) having + * Fully Qualified Name associated with this reference. + * + * The search order is CACHE, local Journal, federation. + * + * TODO We might need/want to explicitly identify the conceptual + * location of the named solution set (cache, local index manager, + * federation) when the query is compiled so we only look in the right + * place at when the operator is executing. That could decrease latency + * for operators which execute multiple times, report errors early if + * something can not be resolved, and eliminate some overhead with + * testing remote services during operator evaluation (if the cache is + * non-local). + */ + + if (sparqlCache != null && sparqlCache.existsSolutions(localName)) { + + return sparqlCache.getSolutions(localName); + + } + + final String fqn = getFQN(namespace, localName, joinVars); + + final AbstractJournal localJournal = (AbstractJournal) localIndexManager; + + final ISimpleIndexAccess index; + + if (timestamp == ITx.UNISOLATED) { + + /* + * FIXME We may need to wrap this with the lock provided by + * UnisolatedReadWriteIndex. + */ + index = localJournal.getUnisolatedIndex(fqn); + + } else if (TimestampUtility.isReadOnly(timestamp)) { + + index = localJournal.getIndexLocal(fqn, timestamp); + + } else { + + /* + * Note: This is here to catch assumptions about the timestamp. For + * example, we might see read/write txIds here. That could be Ok, + * but it needs to be handled correctly. + */ + + throw new AssertionError("localName=" + localName); + + } + + if (index == null) + throw new RuntimeException("Unresolved solution set: namespace=" + + namespace + ", timestamp=" + timestamp + ", localName=" + + localName + ", joinVars=" + Arrays.toString(joinVars)); + + // Iterator visiting the solution set. + final ICloseableIterator<IBindingSet> src = (ICloseableIterator<IBindingSet>) index + .scan(); + + return new Chunkerator<IBindingSet>(src, chunkCapacity, + IBindingSet.class); + + } + +} Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/controller/HTreeNamedSubqueryOp.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/controller/HTreeNamedSubqueryOp.java 2012-08-14 20:36:29 UTC (rev 6446) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/controller/HTreeNamedSubqueryOp.java 2012-08-15 18:46:55 UTC (rev 6447) @@ -185,7 +185,7 @@ private final PipelineOp subquery; /** Metadata to identify the named solution set. */ - private final NamedSolutionSetRef namedSetRef; + private final INamedSolutionSetRef namedSetRef; /** * The {@link IQueryAttributes} for the {@link IRunningQuery} off which @@ -218,7 +218,7 @@ this.subquery = (PipelineOp) op .getRequiredProperty(Annotations.SUBQUERY); - this.namedSetRef = (NamedSolutionSetRef) op + this.namedSetRef = (INamedSolutionSetRef) op .getRequiredProperty(Annotations.NAMED_SET_REF); { @@ -232,7 +232,7 @@ // Lookup the attributes for the query on which we will hang the // solution set. 
- attrs = context.getQueryAttributes(namedSetRef.queryId); + attrs = context.getQueryAttributes(namedSetRef.getQueryId()); HTreeHashJoinUtility state = (HTreeHashJoinUtility) attrs .get(namedSetRef); @@ -243,8 +243,8 @@ * Note: This operator does not support optional semantics. */ state = new HTreeHashJoinUtility( - context.getMemoryManager(namedSetRef.queryId), op, - JoinTypeEnum.Normal); + context.getMemoryManager(namedSetRef.getQueryId()), + op, JoinTypeEnum.Normal); if (attrs.putIfAbsent(namedSetRef, state) != null) throw new AssertionError(); Added: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/controller/INamedSolutionSetRef.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/controller/INamedSolutionSetRef.java (rev 0) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/controller/INamedSolutionSetRef.java 2012-08-15 18:46:55 UTC (rev 6447) @@ -0,0 +1,110 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2011. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +/* + * Created on Aug 15, 2012 + */ +package com.bigdata.bop.controller; + +import java.io.Serializable; +import java.util.UUID; + +import com.bigdata.bop.IQueryAttributes; +import com.bigdata.bop.IVariable; +import com.bigdata.bop.NamedSolutionSetRefUtility; +import com.bigdata.bop.engine.IRunningQuery; + +/** + * An interface specifying the information required to locate a named solution + * set. + * <p> + * Note: There are two basic ways to locate named solution sets. Either they are + * attached to the {@link IQueryAttributes} of an {@link IRunningQuery} (query + * local) -or- they are located using the <em>namespace</em> and + * <em>timestamp</em> of an MVCC view (this works for both cached and durable + * named solution sets). Either {@link #getQueryId()} will be non- + * <code>null</code> or {@link #getNamespace()} will be non-<code>null</code> , + * but not both. + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a>... [truncated message content] |
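To make the new naming scheme from r6447 concrete, here is a brief usage sketch of the factory methods added in this revision (NamedSolutionSetRefUtility.newInstance). The KB namespace "kb", the solution set name, and the join variable below are illustrative values, not taken from the code base:

import java.util.UUID;

import com.bigdata.bop.IVariable;
import com.bigdata.bop.NamedSolutionSetRefUtility;
import com.bigdata.bop.Var;
import com.bigdata.bop.controller.INamedSolutionSetRef;
import com.bigdata.journal.ITx;

public class NamedSolutionSetRefExample {

    @SuppressWarnings("rawtypes")
    public static void main(final String[] args) {

        final IVariable[] joinVars = new IVariable[] { Var.var("x") };

        // Query-scoped reference: resolved against the IQueryAttributes of
        // the IRunningQuery identified by the queryId.
        final INamedSolutionSetRef queryScoped = NamedSolutionSetRefUtility
                .newInstance(UUID.randomUUID(), "solutionSet1", joinVars);

        // KB-scoped reference: resolved against the MVCC view identified by
        // the KB namespace and timestamp (cache or durable index).
        final INamedSolutionSetRef kbScoped = NamedSolutionSetRefUtility
                .newInstance("kb", ITx.READ_COMMITTED, "solutionSet1",
                        joinVars);

        System.out.println(queryScoped);
        System.out.println(kbScoped);

    }

}

Per the getPrefix()/getFQN() code in this revision, the fully qualified name of the KB-scoped reference would take the form "kb.solutionSets.solutionSet1.x", which keeps solution sets for different KB instances and different index orders distinct.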
From: <tho...@us...> - 2012-08-17 21:28:06
Revision: 6452
http://bigdata.svn.sourceforge.net/bigdata/?rev=6452&view=rev
Author: thompsonbry
Date: 2012-08-17 21:27:56 +0000 (Fri, 17 Aug 2012)

Log Message:
-----------
- Added indexNameScan(prefix:String) to IIndexStore. This allows us to dynamically discover all named indices spanned by some prefix. The IIndexStore interface is extended by the IBigdataFederation, so this capability also extends to scale-out. The requirement for enumerating the named indices spanned by a namespace exists for the named SOLUTION SET cache. If those solution sets are durable, then it also exists for AbstractTripleStore#destroy() since there may be named indices that are not explicitly part of either the SPORelation or the LexiconRelation. Modified ListIndexPartitions (DumpFederation), IndexManager.listIndexPartitions(), DumpJournal, and CompactTask to use indexNameScan(). (A usage sketch of the prefix scan appears at the end of this message.)

- Modified the SolutionSetCache to use resolution through Name2Addr.

- Modified AbstractRelation#destroy() to destroy all named indices spanned by the namespace of the relation.

- Modified LexiconRelation to invoke super.destroy() after it destroys its own indices. This change was necessary since those indices are now destroyed automatically by the base class when the relation is destroyed.

- Exposed the readsOnCommitTime on ITx. This was already available on the Tx class.

- Fixed a bug where getIndexWithCheckpointAddr(long) could allow the unisolated view of an index to be exposed. This issue was specific to the RWStore and was an error in immediateFree(). See https://sourceforge.net/apps/trac/bigdata/ticket/586 (RWStore immedateFree() not removing Checkpoint addresses from the historical index cache.)

- Modified some methods in AbstractJournal that returned a ReadOnlyIndex to internally return a read-only BTree object instead. Specifically, this pattern was changed in getName2Addr(commitTime) and getName2Addr(). The resulting BTree is guaranteed to be read-only by getIndexWithCheckpointAddr(), which was already being used by both methods.

- Deprecated AbstractJournal#registerIndex(final String name, final IndexMetadata metadata) in favor of register(name,metadata).

- Moved the initialization of the DescribeServiceFactory into the ServiceRegistry. It is still conditional on the query hint. However, the service is actually registered, so it now notices updates and uses them to invalidate the cache.

- Added a unit test to verify that the DESCRIBE cache is invalidated based on IChangeLog notices.
See https://sourceforge.net/apps/trac/bigdata/ticket/584 (DESCRIBE CACHE) See https://sourceforge.net/apps/trac/bigdata/ticket/531 (SOLUTION SET CACHE) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/NamedSolutionSetRefUtility.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/engine/QueryEngine.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/fed/DelegateIndexManager.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/solutions/SolutionSetStream.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/btree/Checkpoint.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/CompactTask.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/DumpJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IBTreeManager.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IIndexManager.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IIndexStore.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/ITx.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/JournalDelegate.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Name2Addr.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/TemporaryStore.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Tx.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/relation/AbstractResource.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/relation/rule/eval/pipeline/JoinTaskFactoryTask.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/resources/IndexManager.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/sector/MemoryManager.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/service/AbstractFederation.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/service/AbstractScaleOutFederation.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/service/ListIndicesTask.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/stream/Stream.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/journal/TestAll.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/journal/TestJournalBasics.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/journal/TestName2Addr.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/journal/TestNamedIndices.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/resources/AbstractResourceManagerTestCase.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/service/TestBasicIndexStuff.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/service/TestEventReceiver.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/service/TestMove.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/service/TestOverflow.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/service/jini/util/DumpFederation.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java 
branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/QueryHints.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/CacheConnectionFactory.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/CacheConnectionImpl.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/DescribeCache.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/DescribeServiceFactory.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/ICacheConnection.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/IDescribeCache.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/ISolutionSetCache.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/cache/SolutionSetCache.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpContext.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdate.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/ASTEvalHelper.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/service/CustomServiceFactory.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/service/ServiceRegistry.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/store/AbstractTripleStore.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/store/BD.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/AbstractASTEvaluationTestCase.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/cache/TestAll.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/cache/TestSolutionSetCache.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestDescribe.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/journal/TestDumpJournal.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/NamedSolutionSetRefUtility.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/NamedSolutionSetRefUtility.java 2012-08-17 21:07:52 UTC (rev 6451) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/NamedSolutionSetRefUtility.java 2012-08-17 21:27:56 UTC (rev 6452) @@ -436,6 +436,14 @@ */ index = localJournal.getUnisolatedIndex(fqn); + } else if(TimestampUtility.isReadWriteTx(timestamp)) { + + final long readsOnCommitTime = localJournal + .getLocalTransactionManager().getTx(timestamp) + .getReadsOnCommitTime(); + + index = localJournal.getIndexLocal(fqn, readsOnCommitTime); + } else if (TimestampUtility.isReadOnly(timestamp)) { index = localJournal.getIndexLocal(fqn, timestamp); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/engine/QueryEngine.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/engine/QueryEngine.java 2012-08-17 21:07:52 UTC (rev 6451) +++ 
branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/engine/QueryEngine.java 2012-08-17 21:27:56 UTC (rev 6452) @@ -62,7 +62,9 @@ import com.bigdata.concurrent.FutureTaskMon; import com.bigdata.counters.CounterSet; import com.bigdata.counters.ICounterSetAccess; +import com.bigdata.journal.ConcurrencyManager; import com.bigdata.journal.IIndexManager; +import com.bigdata.journal.Journal; import com.bigdata.rawstore.IRawStore; import com.bigdata.rdf.sail.webapp.client.DefaultClientConnectionManagerFactory; import com.bigdata.resources.IndexManager; @@ -444,15 +446,15 @@ } -// /** -// * Return the {@link ConcurrencyManager} for the {@link #getIndexManager() -// * local index manager}. -// */ -// public ConcurrencyManager getConcurrencyManager() { -// -// return ((Journal) localIndexManager).getConcurrencyManager(); -// -// } + /** + * Return the {@link ConcurrencyManager} for the {@link #getIndexManager() + * local index manager}. + */ + public ConcurrencyManager getConcurrencyManager() { + + return ((Journal) localIndexManager).getConcurrencyManager(); + + } /** * The RMI proxy for this {@link QueryEngine} when used as a query controller. Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/fed/DelegateIndexManager.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/fed/DelegateIndexManager.java 2012-08-17 21:07:52 UTC (rev 6451) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/fed/DelegateIndexManager.java 2012-08-17 21:27:56 UTC (rev 6452) @@ -1,5 +1,6 @@ package com.bigdata.bop.fed; +import java.util.Iterator; import java.util.concurrent.ExecutorService; import java.util.concurrent.ScheduledFuture; import java.util.concurrent.TimeUnit; @@ -8,13 +9,13 @@ import com.bigdata.btree.BTree; import com.bigdata.btree.IIndex; import com.bigdata.btree.IndexMetadata; -import com.bigdata.journal.ConcurrencyManager; import com.bigdata.journal.IIndexManager; import com.bigdata.journal.IIndexStore; import com.bigdata.journal.IResourceLockService; import com.bigdata.journal.TemporaryStore; import com.bigdata.relation.locator.IResourceLocator; import com.bigdata.relation.rule.eval.pipeline.DistributedJoinTask; +import com.bigdata.relation.rule.eval.pipeline.JoinTaskFactoryTask; import com.bigdata.resources.IndexManager; import com.bigdata.resources.StoreManager.ManagedJournal; import com.bigdata.service.DataService; @@ -43,6 +44,9 @@ * and disallows {@link #dropIndex(String)} and * {@link #registerIndex(IndexMetadata)} in an attempt to stay out of * trouble. That may be enough reason to keep it private. + * + * TODO Is this an exact functional duplicate of the class by the same + * name in the {@link JoinTaskFactoryTask}? */ class DelegateIndexManager implements IIndexManager { @@ -168,4 +172,15 @@ } + /** + * {@inheritDoc} + * + * TODO Implement. Probably delegate to the local DS n2a index so this + * does a DS local n2a scan. 
+ */ + @Override + public Iterator<String> indexNameScan(String prefix, long timestamp) { + throw new UnsupportedOperationException(); + } + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/solutions/SolutionSetStream.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/solutions/SolutionSetStream.java 2012-08-17 21:07:52 UTC (rev 6451) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/bop/solutions/SolutionSetStream.java 2012-08-17 21:27:56 UTC (rev 6452) @@ -124,6 +124,9 @@ * and reporting fields which are specific to derived classes, not just * BTree, HTree, and Stream. Or we need to add a general concept of a * "summary statistics object" for a persistent data structure. + * + * @see <a href="https://sourceforge.net/apps/trac/bigdata/ticket/585"> GIST + * </a> */ private MySolutionSetStats solutionSetStats; @@ -160,6 +163,12 @@ * Create a stream for an ordered solution set. * <p> * {@inheritDoc} + * + * FIXME This is not setting the SolutionSetStream class when invoked by + * {@link Checkpoint#create(IRawStore, IndexMetadata)} since Stream.create() + * is being invoked rather than SolutionSetStream.create(). + * + * @see https://sourceforge.net/apps/trac/bigdata/ticket/585 (GIST) */ public static SolutionSetStream create(final IRawStore store, final StreamIndexMetadata metadata) { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/btree/Checkpoint.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/btree/Checkpoint.java 2012-08-17 21:07:52 UTC (rev 6451) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/btree/Checkpoint.java 2012-08-17 21:27:56 UTC (rev 6452) @@ -35,6 +35,7 @@ import com.bigdata.journal.Name2Addr; import com.bigdata.rawstore.IRawStore; import com.bigdata.stream.Stream; +import com.bigdata.stream.Stream.StreamIndexMetadata; /** * A checkpoint record is written each time the btree is flushed to the @@ -862,6 +863,16 @@ case HTree: ndx = HTree.create(store, (HTreeIndexMetadata) metadata); break; + case Stream: + /* + * FIXME GIST : This is not setting the SolutionSetStream class + * since Stream.create() is being invoked rather than + * SolutionSetStream.create() + * + * @see https://sourceforge.net/apps/trac/bigdata/ticket/585 (GIST) + */ + ndx = Stream.create(store, (StreamIndexMetadata) metadata); + break; default: throw new AssertionError("Unknown: " + metadata.getIndexType()); } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-08-17 21:07:52 UTC (rev 6451) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-08-17 21:27:56 UTC (rev 6452) @@ -38,6 +38,8 @@ import java.nio.channels.Channel; import java.nio.channels.FileChannel; import java.util.Iterator; +import java.util.LinkedList; +import java.util.List; import java.util.Properties; import java.util.UUID; import java.util.concurrent.Callable; @@ -526,12 +528,37 @@ * the current name2Addr object. */ final BTree btree = (BTree) getIndexWithCheckpointAddr(checkpointAddr); + + // References must be distinct. 
+ if (_name2Addr == btree) + throw new AssertionError(); - /* - * Wrap up in a read-only view since writes MUST NOT be allowed. - */ - return new ReadOnlyIndex(btree); +// /* +// * Wrap up in a read-only view since writes MUST NOT be allowed. +// */ +// return new ReadOnlyIndex(btree); + /* + * Set the last commit time on the Name2Addr view. + * + * TODO We can not reliably set the lastCommitTime on Name2Addr in + * this method. It will typically be the commitTime associated last + * commit point on the store, but it *is* possible to have a commit + * on the database where no named indices were dirty and hence no + * update was made to Name2Addr. In this (rare, but real) case, the + * factual Name2Addr lastCommitTime would be some commit time + * earlier than the lastCommitTime reported by the Journal. + */ + final long lastCommitTime = getLastCommitTime(); + + if (lastCommitTime != 0L) { + + btree.setLastCommitTime(lastCommitTime); + + } + + return btree; + } finally { lock.unlock(); @@ -564,11 +591,31 @@ final long checkpointAddr = commitRecord.getRootAddr(ROOT_NAME2ADDR); - return new ReadOnlyIndex( - (IIndex) getIndexWithCheckpointAddr(checkpointAddr)); + final Name2Addr n2a = + (Name2Addr) getIndexWithCheckpointAddr(checkpointAddr); - } +// return new ReadOnlyIndex(n2a); + /* + * Set the last commit time on the Name2Addr index view. + * + * TODO We can not reliably set the lastCommitTime on Name2Addr in this + * method. It will typically be the commitTime associated with the + * [commitRecord] that we resolved above, but it *is* possible to have a + * commit on the database where no named indices were dirty and hence no + * update was made to Name2Addr. In this (rare, but real) case, the + * factual Name2Addr lastCommitTime would be some commit time earlier + * than the commitTime reported by the [commitRecord]. + */ + + final long commitTime2 = commitRecord.getTimestamp(); + + n2a.setLastCommitTime(commitTime2); + + return n2a; + + } + /** * Return the root block view associated with the commitRecord for the * provided commit time. This requires accessing the next commit record @@ -1154,10 +1201,15 @@ * allow the removal of cached data when it is available for recycling. */ if (_bufferStrategy instanceof IRWStrategy) { - ((IRWStrategy) _bufferStrategy).registerExternalCache(historicalIndexCache, - getByteCount(_commitRecordIndex.getCheckpoint().getCheckpointAddr())); - } + final int checkpointRecordSize = getByteCount(_commitRecordIndex + .getCheckpoint().getCheckpointAddr()); + + ((IRWStrategy) _bufferStrategy).registerExternalCache( + historicalIndexCache, checkpointRecordSize); + + } + // new or re-load from the store. 
this._icuVersionRecord = _getICUVersionRecord(); @@ -2115,10 +2167,12 @@ for (int i = 0; i < _committers.length; i++) { - if (_committers[i] == null) + final ICommitter committer = _committers[i]; + + if (committer == null) continue; - final long addr = _committers[i].handleCommit(commitTime); + final long addr = committer.handleCommit(commitTime); rootAddrs[i] = addr; @@ -3617,7 +3671,8 @@ if (commitTime == ITx.UNISOLATED || commitTime == ITx.READ_COMMITTED || TimestampUtility.isReadWriteTx(commitTime)) { - throw new UnsupportedOperationException(); + throw new UnsupportedOperationException("name=" + name + + ",commitTime=" + TimestampUtility.toString(commitTime)); } @@ -3768,13 +3823,26 @@ * {@link ICheckpointProtocol#getLastCommitTime(long)} will report the value * associated with {@link Entry#commitTime} for the historical * {@link Name2Addr} instance for that {@link ICommitRecord}. + * <p> + * Note: This method should be preferred to + * {@link #getIndexWithCheckpointAddr(long)} for read-historical indices + * since it will explicitly mark the index as read-only and specifies the + * <i>lastCommitTime</i> on the returned index based on + * {@link Name2Addr.Entry#commitTime}, which is the actual commit time for + * the last update to the index. * * @return The named index -or- <code>null</code> iff the named index did * not exist as of that commit record. */ - public ICheckpointProtocol getIndexWithCommitRecord(final String name, - final ICommitRecord commitRecord) { + final public ICheckpointProtocol getIndexWithCommitRecord( + final String name, final ICommitRecord commitRecord) { + if (name == null) + throw new IllegalArgumentException(); + + if (commitRecord == null) + throw new IllegalArgumentException(); + final ReadLock lock = _fieldReadWriteLock.readLock(); lock.lock(); @@ -3783,12 +3851,6 @@ assertOpen(); - if (name == null) - throw new IllegalArgumentException(); - - if (commitRecord == null) - throw new IllegalArgumentException(); - /* * The address of an historical Name2Addr mapping used to resolve * named indices for the historical state associated with this @@ -3834,14 +3896,14 @@ * the address on which it was written in the store. */ - final ICheckpointProtocol btree = getIndexWithCheckpointAddr(entry.checkpointAddr); + final ICheckpointProtocol index = getIndexWithCheckpointAddr(entry.checkpointAddr); assert entry.commitTime != 0L : "Entry=" + entry; // Set the last commit time on the btree. - btree.setLastCommitTime(entry.commitTime); + index.setLastCommitTime(entry.commitTime); - return btree; + return index; } finally { @@ -3931,8 +3993,8 @@ * @param checkpointAddr * The address of the {@link Checkpoint} record. * - * @return The persistence capable data structure associated with that - * {@link Checkpoint}. + * @return The read-only persistence capable data structure associated with + * that {@link Checkpoint}. * * @see Options#HISTORICAL_INDEX_CACHE_CAPACITY */ @@ -4039,6 +4101,8 @@ * <p> * Note: You MUST {@link #commit()} before the registered index will be * either restart-safe or visible to new transactions. 
+ * + * @deprecated by {@link #register(String, IndexMetadata)} */ final public BTree registerIndex(final String name, final IndexMetadata metadata) { @@ -4183,6 +4247,72 @@ } + public Iterator<String> indexNameScan(final String prefix, + final long timestamp) { + + if (timestamp == ITx.UNISOLATED) { + + /* + * For the live Name2Addr index, we get the necessary locks to avoid + * concurrent modifications, fully materialize the iterator into a + * collection, and then return an iterator over that collection. + * This is safe, but not as scaleable. + */ + + final ReadLock lock = _fieldReadWriteLock.readLock(); + + lock.lock(); + + try { + + final List<String> names = new LinkedList<String>(); + + synchronized (_name2Addr) { + + final Iterator<String> itr = Name2Addr.indexNameScan( + prefix, _name2Addr); + + while (itr.hasNext()) { + + names.add(itr.next()); + + } + + } + + return names.iterator(); + + } finally { + + lock.unlock(); + + } + + } + + final IIndex n2a; + + if (timestamp == ITx.READ_COMMITTED) { + + n2a = getName2Addr(); + + } else if (TimestampUtility.isReadWriteTx(timestamp)) { + + final long readsOnCommitTime = getLocalTransactionManager().getTx( + timestamp).getReadsOnCommitTime(); + + n2a = getName2Addr(readsOnCommitTime); + + } else { + + n2a = getName2Addr(timestamp); + + } + + return Name2Addr.indexNameScan(prefix, n2a); + + } + /** * Return the mutable view of the named index (aka the "live" or * {@link ITx#UNISOLATED} index). This object is NOT thread-safe. You MUST Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java 2012-08-17 21:07:52 UTC (rev 6451) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java 2012-08-17 21:27:56 UTC (rev 6452) @@ -2701,6 +2701,11 @@ return delegate.getHttpdPort(); } + @Override + public Iterator<String> indexNameScan(String prefix, long timestamp) { + throw new UnsupportedOperationException(); + } + } // class IsolatatedActionJournal /** @@ -2752,9 +2757,11 @@ */ /** + * {@inheritDoc} + * <p> * Note: Does not allow access to {@link ITx#UNISOLATED} indices. */ - public IIndex getIndex(String name, long timestamp) { + public IIndex getIndex(final String name, final long timestamp) { if (timestamp == ITx.UNISOLATED) throw new UnsupportedOperationException(); @@ -2781,6 +2788,23 @@ } /** + * {@inheritDoc} + * <p> + * Note: Does not allow access to {@link ITx#UNISOLATED} indices. + */ + @Override + public Iterator<String> indexNameScan(final String prefix, + final long timestamp) { + + if (timestamp == ITx.UNISOLATED) + throw new UnsupportedOperationException(); + + // to the backing journal. + return delegate.indexNameScan(prefix, timestamp); + + } + + /** * Note: Not supported since this method returns the * {@link ITx#UNISOLATED} index. 
*/ @@ -3208,6 +3232,11 @@ public int getHttpdPort() { return delegate.getHttpdPort(); } + + @Override + public Iterator<String> indexNameScan(String prefix, long timestamp) { + throw new UnsupportedOperationException(); + } } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/CompactTask.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/CompactTask.java 2012-08-17 21:07:52 UTC (rev 6451) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/CompactTask.java 2012-08-17 21:27:56 UTC (rev 6452) @@ -24,9 +24,9 @@ package com.bigdata.journal; import java.io.File; +import java.util.Iterator; import java.util.Properties; import java.util.concurrent.Callable; -import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; @@ -39,14 +39,9 @@ import com.bigdata.btree.BTree; import com.bigdata.btree.Checkpoint; import com.bigdata.btree.IOverflowHandler; -import com.bigdata.btree.ITuple; -import com.bigdata.btree.ITupleIterator; import com.bigdata.btree.IndexMetadata; import com.bigdata.btree.IndexSegmentBuilder; -import com.bigdata.io.DataInputBuffer; import com.bigdata.journal.Journal.Options; -import com.bigdata.journal.Name2Addr.Entry; -import com.bigdata.journal.Name2Addr.EntrySerializer; import com.bigdata.resources.OverflowManager; import com.bigdata.util.concurrent.DaemonThreadFactory; import com.bigdata.util.concurrent.ShutdownHelper; @@ -104,6 +99,12 @@ /** The caller specified commit time. */ final protected long commitTime; + /** + * The {@link ICommitRecord} corresponding to the caller specified commit + * time. + */ + final protected ICommitRecord commitRecord; + // cause from first task to error. final protected AtomicReference<Throwable> firstCause = new AtomicReference<Throwable>(); @@ -155,6 +156,8 @@ this.outFile = outFile; this.commitTime = commitTime; + + this.commitRecord = src.getCommitRecord(commitTime); } @@ -259,14 +262,17 @@ final long begin = System.currentTimeMillis(); - // using read-committed view of Name2Addr + // using snapshot isolation view of Name2Addr final int nindices = (int) oldJournal.getName2Addr(commitTime) .rangeCount(null, null); - // using read-committed view of Name2Addr - final ITupleIterator itr = oldJournal.getName2Addr(commitTime) - .rangeIterator(null, null); + final Iterator<String> nitr = oldJournal.indexNameScan( + null/* prefix */, commitTime); +// // using read-committed view of Name2Addr +// final ITupleIterator itr = oldJournal.getName2Addr(commitTime) +// .rangeIterator(null, null); + /* * This service will limit the #of indices that we process in parallel. * @@ -286,15 +292,17 @@ final ThreadPoolExecutor service = (ThreadPoolExecutor)Executors.newFixedThreadPool( 3/* maxParallel */, DaemonThreadFactory.defaultThreadFactory()); - while (itr.hasNext()) { + while (nitr.hasNext()) { - final ITuple tuple = itr.next(); +// final ITuple tuple = itr.next(); +// +// final Entry entry = EntrySerializer.INSTANCE +// .deserialize(new DataInputBuffer(tuple.getValue())); - final Entry entry = EntrySerializer.INSTANCE - .deserialize(new DataInputBuffer(tuple.getValue())); - + final String name = nitr.next(); + // Submit task to copy the index to the new journal. 
- service.submit(new CopyIndexTask(newJournal, entry)); + service.submit(new CopyIndexTask(newJournal, name)); } @@ -345,29 +353,30 @@ /** The new journal. */ protected final Journal newJournal; - /** - * An {@link Entry} from the {@link Name2Addr} index for an index - * defined on the {@link #oldJournal}. - */ - protected final Entry entry; +// /** +// * An {@link Entry} from the {@link Name2Addr} index for an index +// * defined on the {@link #oldJournal}. +// */ +// protected final Entry entry; + private final String name; + /** * @param newJournal * The new journal. - * @param entry - * An {@link Entry} from the {@link Name2Addr} index for an - * index defined on the {@link #oldJournal}. + * @param name The name of an index to be copied. */ - public CopyIndexTask(final Journal newJournal, final Entry entry) { + public CopyIndexTask(final Journal newJournal, final String name) { if (newJournal == null) throw new IllegalArgumentException(); - if (entry == null) + + if (name == null) throw new IllegalArgumentException(); this.newJournal = newJournal; - this.entry = entry; + this.name = name; } @@ -383,12 +392,21 @@ startCount.incrementAndGet(); if (INFO) - log.info("Start: name=" + entry.name); + log.info("Start: name=" + name); // source index. +// final BTree oldBTree = (BTree) oldJournal +// .getIndexWithCheckpointAddr(entry.checkpointAddr); + + /* + * This only supports the BTree class. + * + * @see https://sourceforge.net/apps/trac/bigdata/ticket/585 + * (GIST) + */ final BTree oldBTree = (BTree) oldJournal - .getIndexWithCheckpointAddr(entry.checkpointAddr); - + .getIndexWithCommitRecord(name, commitRecord); + // #of index entries on the old index. final long entryCount = oldBTree.rangeCount(); @@ -414,7 +432,7 @@ final long oldCounter = oldBTree.getCounter().get(); if (INFO) - log.info("name=" + entry.name // + log.info("name=" + name // + ", entryCount=" + entryCount// + ", checkpoint=" + oldBTree.getCheckpoint()// ); @@ -452,7 +470,7 @@ */ if (DEBUG) - log.debug("Copying data to new journal: name=" + entry.name + log.debug("Copying data to new journal: name=" + name + ", entryCount=" + entryCount); newBTree.rangeCopy(oldBTree, null, null, true/* overflow */); @@ -460,10 +478,10 @@ /* * Register the new B+Tree on the new journal. 
*/ - newJournal.registerIndex(entry.name, newBTree); + newJournal.registerIndex(name, newBTree); if (DEBUG) - log.debug("Done with index: name=" + entry.name); + log.debug("Done with index: name=" + name); doneCount.incrementAndGet(); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/DumpJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/DumpJournal.java 2012-08-17 21:07:52 UTC (rev 6451) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/DumpJournal.java 2012-08-17 21:27:56 UTC (rev 6452) @@ -27,28 +27,26 @@ package com.bigdata.journal; -import java.io.ByteArrayInputStream; -import java.io.DataInputStream; import java.io.File; +import java.nio.ByteBuffer; import java.util.Date; +import java.util.Iterator; import java.util.Map; import java.util.Properties; import java.util.TreeMap; import org.apache.log4j.Logger; +import com.bigdata.btree.AbstractBTree; import com.bigdata.btree.BTree; -import com.bigdata.btree.Checkpoint; import com.bigdata.btree.DumpIndex; +import com.bigdata.btree.DumpIndex.PageStats; import com.bigdata.btree.ICheckpointProtocol; import com.bigdata.btree.IIndex; import com.bigdata.btree.ITupleIterator; -import com.bigdata.btree.DumpIndex.PageStats; -import com.bigdata.io.SerializerUtil; -import com.bigdata.journal.Name2Addr.Entry; -import com.bigdata.journal.Name2Addr.EntrySerializer; import com.bigdata.rawstore.Bytes; import com.bigdata.rwstore.RWStore; +import com.bigdata.util.ChecksumUtility; import com.bigdata.util.InnerCause; /** @@ -66,7 +64,8 @@ * @todo add an option to copy off data from one or more indices as of a * specified commit time? * - * @todo add an option to restrict the names of the indices to be dumped (-name=<regex>). + * @todo add an option to restrict the names of the indices to be dumped + * (-name=<regex>). * * @todo allow dump even on a journal that is open (e.g., only request a read * lock or do not request a lock). An error is reported when you actually @@ -75,6 +74,11 @@ * root blocks must be consistent when they are read, a reader would have * to have a lock at the moment that it read the root blocks... * + * TODO GIST : Support all types of indices. + * + * @see <a href="https://sourceforge.net/apps/trac/bigdata/ticket/585"> GIST + * </a> + * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ @@ -82,9 +86,9 @@ private static final Logger log = Logger.getLogger(DumpJournal.class); - public DumpJournal() { - - } +// public DumpJournal() { +// +// } /** * Dump one or more journal files. @@ -170,8 +174,65 @@ try { - dumpJournal(file,dumpHistory,dumpPages,dumpIndices,showTuples); + /* + * Stat the file and report on its size, etc. 
+ */ + { + + System.out.println("File: "+file); + + if(!file.exists()) { + + System.err.println("No such file"); + + System.exit(1); + + } + + if(!file.isFile()) { + + System.err.println("Not a regular file"); + + System.exit(1); + + } + + System.out.println("Length: "+file.length()); + + System.out.println("Last Modified: "+new Date(file.lastModified())); + + } + final Properties properties = new Properties(); + + { + + properties.setProperty(Options.FILE, file.toString()); + + properties.setProperty(Options.READ_ONLY, "" + true); + + properties.setProperty(Options.BUFFER_MODE, + BufferMode.Disk.toString()); + + } + + System.out.println("Opening (read-only): " + file); + + final Journal journal = new Journal(properties); + + try { + + final DumpJournal dumpJournal = new DumpJournal(journal); + + dumpJournal.dumpJournal(dumpHistory, dumpPages, + dumpIndices, showTuples); + + } finally { + + journal.close(); + + } + } catch( RuntimeException ex) { ex.printStackTrace(); @@ -186,97 +247,120 @@ } - public static void dumpJournal(File file,boolean dumpHistory,boolean dumpPages,boolean dumpIndices,boolean showTuples) { - - /* - * Stat the file and report on its size, etc. - */ - { + /** + * + * @param dumpHistory + * Dump metadata for indices in all commit records (default only + * dumps the metadata for the indices as of the most current + * committed state). + * @param dumpPages + * Dump the pages of the indices and reports some information on + * the page size. + * @param dumpIndices + * Dump the indices (does not show the tuples by default). + * @param showTuples + * Dump the records in the indices. + */ + public void dumpJournal(final boolean dumpHistory, final boolean dumpPages, + final boolean dumpIndices, final boolean showTuples) { + + try { - System.out.println("File: "+file); + final FileMetadata fmd = journal.getFileMetadata(); - if(!file.exists()) { + if (fmd != null) { + + /* + * Note: The FileMetadata is only available on a re-open of an + * existing Journal. + */ - System.err.println("No such file"); - - System.exit(1); - - } - - if(!file.isFile()) { - - System.err.println("Not a regular file"); - - System.exit(1); - - } - - System.out.println("Length: "+file.length()); + // dump the MAGIC and VERSION. + System.out.println("magic=" + Integer.toHexString(fmd.magic)); + System.out.println("version=" + + Integer.toHexString(fmd.version)); - System.out.println("Last Modified: "+new Date(file.lastModified())); - - } - - final Properties properties = new Properties(); + /* + * Report on: + * + * - the length of the journal. - the #of bytes available for + * user data in the journal. - the offset at which the next + * record would be written. - the #of bytes remaining in the + * user extent. + */ - { - - properties.setProperty(Options.FILE, file.toString()); - - properties.setProperty(Options.READ_ONLY, "" + true); - - properties.setProperty(Options.BUFFER_MODE,BufferMode.Disk.toString()); - - } - - System.out.println("Opening (read-only): "+file); - - final Journal journal = new Journal(properties); + final long bytesAvailable = (fmd.userExtent - fmd.nextOffset); - try { - - final FileMetadata fmd = journal.getFileMetadata(); + System.out.println("extent=" + fmd.extent + "(" + fmd.extent + / Bytes.megabyte + "M)" + ", userExtent=" + + fmd.userExtent + "(" + fmd.userExtent + / Bytes.megabyte + "M)" + ", bytesAvailable=" + + bytesAvailable + "(" + bytesAvailable + / Bytes.megabyte + "M)" + ", nextOffset=" + + fmd.nextOffset); - // dump the MAGIC and VERSION. 
- System.out.println("magic="+Integer.toHexString(fmd.magic)); - System.out.println("version="+Integer.toHexString(fmd.version)); - - // dump the root blocks. - System.out.println(fmd.rootBlock0.toString()); - System.out.println(fmd.rootBlock1.toString()); + } - // report on which root block is the current root block. - System.out.println("The current root block is #" - + (journal.getRootBlockView().isRootBlock0() ? 0 : 1)); - - /* - * Report on: - * - * - the length of the journal. - * - the #of bytes available for user data in the journal. - * - the offset at which the next record would be written. - * - the #of bytes remaining in the user extent. - */ + { - final long bytesAvailable = (fmd.userExtent - fmd.nextOffset); - - System.out.println("extent="+fmd.extent+"("+fmd.extent/Bytes.megabyte+"M)"+ - ", userExtent="+fmd.userExtent+"("+fmd.userExtent/Bytes.megabyte+"M)"+ - ", bytesAvailable="+bytesAvailable+"("+bytesAvailable/Bytes.megabyte+"M)"+ - ", nextOffset="+fmd.nextOffset); + /* + * Dump the root blocks. + * + * Note: This uses the IBufferStrategy to access the root + * blocks. The code used to use the FileMetadata, but that was + * only available for a re-opened journal. This approach works + * for a new Journal as well. + */ + { + final ByteBuffer rootBlock0 = journal.getBufferStrategy() + .readRootBlock(true/* rootBlock0 */); + + if (rootBlock0 != null) { + + System.out.println(new RootBlockView( + true/* rootBlock0 */, rootBlock0, + new ChecksumUtility()).toString()); + + } + + } + + { + + final ByteBuffer rootBlock1 = journal.getBufferStrategy() + .readRootBlock(false/* rootBlock0 */); + + if (rootBlock1 != null) { + + System.out.println(new RootBlockView( + false/* rootBlock0 */, rootBlock1, + new ChecksumUtility()).toString()); + + } + + } + // System.out.println(fmd.rootBlock0.toString()); + // System.out.println(fmd.rootBlock1.toString()); + + // report on which root block is the current root block. + System.out.println("The current root block is #" + + (journal.getRootBlockView().isRootBlock0() ? 0 : 1)); + + } + final IBufferStrategy strategy = journal.getBufferStrategy(); - + if (strategy instanceof RWStrategy) { - + final RWStore store = ((RWStrategy) strategy).getStore(); - + final StringBuilder sb = new StringBuilder(); - + store.showAllocators(sb); - + System.out.println(sb); - + } final CommitRecordIndex commitRecordIndex = journal @@ -322,8 +406,9 @@ System.out.println(commitRecord.toString()); - dumpNamedIndicesMetadata(journal,commitRecord,dumpPages,dumpIndices,showTuples); - + dumpNamedIndicesMetadata(commitRecord, dumpPages, + dumpIndices, showTuples); + } } else { @@ -336,7 +421,8 @@ System.out.println(commitRecord.toString()); - dumpNamedIndicesMetadata(journal,commitRecord,dumpPages,dumpIndices,showTuples); + dumpNamedIndicesMetadata(commitRecord, dumpPages, dumpIndices, + showTuples); } @@ -347,32 +433,42 @@ } } + + private final Journal journal; + + public DumpJournal(final Journal journal) { + + if (journal == null) + throw new IllegalArgumentException(); + + this.journal = journal; + + } - /** - * Get the {@link Checkpoint} record for a named index. - * - * @param journal - * The journal. - * @param name2Addr - * Some {@link Name2Addr} instance (might not implement - * {@link Name2Addr}). - * @param name - * The index name. - * @return The {@link Checkpoint} record. 
- */ - private static Checkpoint getCheckpoint(final Journal journal, - final IIndex name2Addr, final String name) { - final byte[] val = name2Addr.lookup(name2Addr.getIndexMetadata() - .getTupleSerializer().getKeyBuilder().reset().append(name) - .getKey()); - assert val != null; - final Entry e = EntrySerializer.INSTANCE - .deserialize(new DataInputStream(new ByteArrayInputStream(val))); - assert e.name.equals(name); - final Checkpoint checkpoint = (Checkpoint) SerializerUtil - .deserialize(journal.read(e.checkpointAddr)); - return checkpoint; - } +// /** +// * Get the {@link Checkpoint} record for a named index. +// * +// * @param journal +// * The journal. +// * @param name2Addr +// * Some {@link Name2Addr} instance (might not implement +// * {@link Name2Addr}). +// * @param name +// * The index name. +// * @return The {@link Checkpoint} record. +// */ +// private Checkpoint getCheckpoint(final IIndex name2Addr, final String name) { +// final byte[] val = name2Addr.lookup(name2Addr.getIndexMetadata() +// .getTupleSerializer().getKeyBuilder().reset().append(name) +// .getKey()); +// assert val != null; +// final Entry e = EntrySerializer.INSTANCE +// .deserialize(new DataInputStream(new ByteArrayInputStream(val))); +// assert e.name.equals(name); +// final Checkpoint checkpoint = (Checkpoint) SerializerUtil +// .deserialize(journal.read(e.checkpointAddr)); +// return checkpoint; +// } /** * Dump metadata about each named index as of the specified commit record. @@ -380,33 +476,38 @@ * @param journal * @param commitRecord */ - private static void dumpNamedIndicesMetadata(AbstractJournal journal, - ICommitRecord commitRecord, boolean dumpPages, boolean dumpIndices, boolean showTuples) { + private void dumpNamedIndicesMetadata(final ICommitRecord commitRecord, + final boolean dumpPages, final boolean dumpIndices, + final boolean showTuples) { // view as of that commit record. - final IIndex name2Addr = journal.getName2Addr(commitRecord.getTimestamp()); + final IIndex name2Addr = journal.getName2Addr(commitRecord + .getTimestamp()); - final ITupleIterator itr = name2Addr.rangeIterator(null,null); - - final Map<String, PageStats> pageStats = dumpPages ? new TreeMap<String, PageStats>() - : null; + final Iterator<String> nitr = journal.indexNameScan(null/* prefix */, + commitRecord.getTimestamp()); + +// final ITupleIterator<?> itr = name2Addr.rangeIterator(null, null); + + final Map<String, PageStats> pageStats = dumpPages ? new TreeMap<String, PageStats>() + : null; - while (itr.hasNext()) { + while (nitr.hasNext()) { - // a registered index. - final Name2Addr.Entry entry = Name2Addr.EntrySerializer.INSTANCE - .deserialize(itr.next().getValueStream()); +// // a registered index. +// final Name2Addr.Entry entry = Name2Addr.EntrySerializer.INSTANCE + // .deserialize(itr.next().getValueStream()); - System.out.println("name=" + entry.name + ", addr=" - + journal.toString(entry.checkpointAddr)); + final String name = nitr.next(); + + System.out.println("name=" + name); - // load B+Tree from its checkpoint record. - final BTree ndx; + // load index from its checkpoint record. + final ICheckpointProtocol ndx; try { - - ndx = (BTree) journal - .getIndexWithCheckpointAddr(entry.checkpointAddr); + ndx = journal.getIndexWithCommitRecord(name, commitRecord); + } catch (Throwable t) { if (InnerCause.isInnerCause(t, ClassNotFoundException.class)) { @@ -435,20 +536,24 @@ // show metadata record. 
System.out.println("\t" + ndx.getIndexMetadata()); - if (pageStats != null) { - - final PageStats stats = DumpIndex - .dumpPages(ndx, false/* dumpNodeState */); + if (ndx instanceof AbstractBTree) { - System.out.println("\t" + stats); + if (pageStats != null) { - pageStats.put(entry.name, stats); + final PageStats stats = DumpIndex.dumpPages( + (AbstractBTree) ndx, false/* dumpNodeState */); - } + System.out.println("\t" + stats); - if (dumpIndices) - DumpIndex.dumpIndex(ndx, showTuples); + pageStats.put(name, stats); + } + + if (dumpIndices) + DumpIndex.dumpIndex((AbstractBTree) ndx, showTuples); + + } + } if (pageStats != null) { @@ -502,10 +607,13 @@ if (!(tmp instanceof BTree)) { /* - * FIXME Handle other type of named indices here in a more - * graceful manner. They probably need to be grouped by type - * since each type will require a different output format - * for the metadata that we are writing out. + * FIXME GIST : Handle other type of named indices here in a + * more graceful manner. They probably need to be grouped by + * type since each type will require a different output + * format for the metadata that we are writing out. + * + * @see https://sourceforge.net/apps/trac/bigdata/ticket/585 + * (GIST) */ System.out.println("name: " + name + ", class=" Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IBTreeManager.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IBTreeManager.java 2012-08-17 21:07:52 UTC (rev 6451) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IBTreeManager.java 2012-08-17 21:27:56 UTC (rev 6452) @@ -44,7 +44,9 @@ * @todo change registerIndex() methods to return void and have people use * {@link #getIndex(String)} to obtain the view after they have registered * the index. This will make it somewhat easier to handle things like the - * registration of an index partition reading from multiple resources. + * registration of an index partition reading from multiple resources or + * the registration of indices that are not {@link BTree}s (HTree, Stream, + * etc). * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IIndexManager.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IIndexManager.java 2012-08-17 21:07:52 UTC (rev 6451) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IIndexManager.java 2012-08-17 21:27:56 UTC (rev 6452) @@ -75,5 +75,5 @@ * if <i>name</i> does not identify a registered index. */ public void dropIndex(String name); - + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IIndexStore.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IIndexStore.java 2012-08-17 21:07:52 UTC (rev 6451) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IIndexStore.java 2012-08-17 21:27:56 UTC (rev 6452) @@ -23,6 +23,7 @@ */ package com.bigdata.journal; +import java.util.Iterator; import java.util.concurrent.ExecutorService; import java.util.concurrent.ScheduledFuture; import java.util.concurrent.TimeUnit; @@ -58,6 +59,19 @@ public IIndex getIndex(String name, long timestamp); /** + * Iterator visits the names of all indices spanned by the given prefix. 
+ * + * @param prefix + * The prefix (optional). + * @param timestamp + * A timestamp which represents either a possible commit time on + * the store or a read-only transaction identifier. + * + * @return An iterator visiting those index names. + */ + public Iterator<String> indexNameScan(String prefix, long timestamp); + + /** * Return an unisolated view of the global {@link SparseRowStore} used to * store named property sets. * Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/ITx.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/ITx.java 2012-08-17 21:07:52 UTC (rev 6451) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/ITx.java 2012-08-17 21:27:56 UTC (rev 6452) @@ -96,6 +96,24 @@ */ public long getStartTimestamp(); + /** + * The timestamp of the commit point against which this transaction is + * reading. + * <p> + * Note: This is not currently available on a cluster. In that context, we + * wind up with the same timestamp for {@link #startTime} and + * {@link #readsOnCommitTime} which causes cache pollution for things which + * cache based on {@link #readsOnCommitTime}. + * + * @see <a href="https://sourceforge.net/apps/trac/bigdata/ticket/266"> + * Refactor native long tx id to thin object</a> + * + * @see <a href="http://sourceforge.net/apps/trac/bigdata/ticket/546" > Add + * cache for access to historical index views on the Journal by name + * and commitTime. </a> + */ + public long getReadsOnCommitTime(); + // /** // * Return the timestamp assigned to this transaction by a centralized // * transaction manager service during its prepare+commit protocol. This Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/JournalDelegate.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/JournalDelegate.java 2012-08-17 21:07:52 UTC (rev 6451) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/JournalDelegate.java 2012-08-17 21:27:56 UTC (rev 6452) @@ -264,4 +264,9 @@ public int getHttpdPort() { return delegate.getHttpdPort(); } + + @Override + public Iterator<String> indexNameScan(String prefix, long timestamp) { + return delegate.indexNameScan(prefix, timestamp); + } } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Name2Addr.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Name2Addr.java 2012-08-17 21:07:52 UTC (rev 6451) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Name2Addr.java 2012-08-17 21:27:56 UTC (rev 6452) @@ -44,14 +44,22 @@ import org.apache.log4j.Logger; import com.bigdata.btree.BTree; +import com.bigdata.btree.BytesUtil; import com.bigdata.btree.Checkpoint; import com.bigdata.btree.DefaultTupleSerializer; import com.bigdata.btree.ICheckpointProtocol; import com.bigdata.btree.IDirtyListener; +import com.bigdata.btree.IIndex; import com.bigdata.btree.ITuple; +import com.bigdata.btree.ITupleIte... [truncated message content] |
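To make the readsOnCommitTime changes in revision 6452 above more concrete: a read/write transaction identifier cannot be used directly to resolve a historical index view, so it is first mapped to the commit point against which the transaction reads. The sketch below shows only that resolution step; the class and method names are illustrative assumptions, not code from the commit.

import com.bigdata.journal.ITx;
import com.bigdata.journal.Journal;
import com.bigdata.journal.TimestampUtility;

public class ReadsOnCommitTimeExample {

    /**
     * Map a caller supplied timestamp to the commit time that should be used
     * to resolve a historical view (an index or a cache entry).
     */
    public static long resolveCommitTime(final Journal journal,
            final long timestamp) {

        if (TimestampUtility.isReadWriteTx(timestamp)) {

            // A read/write tx reads against a specific commit point.
            final ITx tx = journal.getLocalTransactionManager().getTx(
                    timestamp);

            return tx.getReadsOnCommitTime();

        }

        // Otherwise the timestamp already identifies the view to be read.
        return timestamp;

    }

}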
From: <tho...@us...> - 2012-08-20 14:32:08
|
Revision: 6457 http://bigdata.svn.sourceforge.net/bigdata/?rev=6457&view=rev Author: thompsonbry Date: 2012-08-20 14:31:58 +0000 (Mon, 20 Aug 2012) Log Message: ----------- Modified the object manager implementations to close the query connection (when one exists) and to ensure that the query result is closed. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/striterator/CloseableIteratorWrapper.java branches/BIGDATA_RELEASE_1_2_0/bigdata-gom/src/java/com/bigdata/gom/om/NanoSparqlObjectManager.java branches/BIGDATA_RELEASE_1_2_0/bigdata-gom/src/java/com/bigdata/gom/om/ObjectManager.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/striterator/CloseableIteratorWrapper.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/striterator/CloseableIteratorWrapper.java 2012-08-20 14:03:40 UTC (rev 6456) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/striterator/CloseableIteratorWrapper.java 2012-08-20 14:31:58 UTC (rev 6457) @@ -48,9 +48,16 @@ this.src = src; } - - /** NOP. */ + + /** Delegate to the source iff the source implements {@link ICloseable}. */ public void close() { + + if (src instanceof ICloseable) { + + ((ICloseable) src).close(); + + } + } public boolean hasNext() { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-gom/src/java/com/bigdata/gom/om/NanoSparqlObjectManager.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-gom/src/java/com/bigdata/gom/om/NanoSparqlObjectManager.java 2012-08-20 14:03:40 UTC (rev 6456) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-gom/src/java/com/bigdata/gom/om/NanoSparqlObjectManager.java 2012-08-20 14:31:58 UTC (rev 6457) @@ -36,12 +36,16 @@ import org.openrdf.query.GraphQueryResult; import org.openrdf.query.MalformedQueryException; import org.openrdf.query.QueryEvaluationException; +import org.openrdf.query.QueryLanguage; +import org.openrdf.query.TupleQuery; import org.openrdf.query.TupleQueryResult; import org.openrdf.repository.RepositoryException; import com.bigdata.gom.gpo.GPO; import com.bigdata.gom.gpo.IGPO; import com.bigdata.rdf.model.BigdataValueFactoryImpl; +import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; +import com.bigdata.rdf.sail.Sesame2BigdataIterator; import com.bigdata.rdf.sail.webapp.client.IPreparedGraphQuery; import com.bigdata.rdf.sail.webapp.client.IPreparedTupleQuery; import com.bigdata.rdf.sail.webapp.client.RemoteRepository; @@ -79,48 +83,27 @@ // // m_repo.close(); // } - @Override - public ICloseableIterator<BindingSet> evaluate(final String query) { - try { - final IPreparedTupleQuery q = m_repo.prepareTupleQuery(query); - final TupleQueryResult res = q.evaluate(); - return new CloseableIteratorWrapper<BindingSet>(new Iterator<BindingSet>() { + @Override + public ICloseableIterator<BindingSet> evaluate(final String query) { - @Override - public boolean hasNext() { - try { - return res.hasNext(); - } catch (QueryEvaluationException e) { - throw new RuntimeException(e); - } - } + try { - @Override - public BindingSet next() { - try { - return res.next(); - } catch (QueryEvaluationException e) { - throw new RuntimeException(e); - } - } + // Setup the query. 
+ final IPreparedTupleQuery q = m_repo.prepareTupleQuery(query); - @Override - public void remove() { - throw new UnsupportedOperationException(); - } - - }); - } catch (RepositoryException e1) { - e1.printStackTrace(); - } catch (MalformedQueryException e1) { - e1.printStackTrace(); - } catch (QueryEvaluationException e) { - e.printStackTrace(); - } catch (Exception e) { - e.printStackTrace(); - } - - return null; + // Note: evaluate() runs asynchronously and must be closed(). + final TupleQueryResult res = q.evaluate(); + + // Will close the TupleQueryResult. + return new Sesame2BigdataIterator<BindingSet, QueryEvaluationException>( + res); + + } catch (Exception ex) { + + throw new RuntimeException(ex); + + } + } @Override @@ -163,47 +146,26 @@ } @Override - public ICloseableIterator<Statement> evaluateGraph(final String query) { - try { - final IPreparedGraphQuery q = m_repo.prepareGraphQuery(query); - final GraphQueryResult res = q.evaluate(); - return new CloseableIteratorWrapper<Statement>(new Iterator<Statement>() { + public ICloseableIterator<Statement> evaluateGraph(final String query) { - @Override - public boolean hasNext() { - try { - return res.hasNext(); - } catch (QueryEvaluationException e) { - throw new RuntimeException(e); - } - } + try { - @Override - public Statement next() { - try { - return res.next(); - } catch (QueryEvaluationException e) { - throw new RuntimeException(e); - } - } + // Setup the query. + final IPreparedGraphQuery q = m_repo.prepareGraphQuery(query); - @Override - public void remove() { - throw new UnsupportedOperationException(); - } - - }); - } catch (RepositoryException e1) { - e1.printStackTrace(); - } catch (MalformedQueryException e1) { - e1.printStackTrace(); - } catch (QueryEvaluationException e) { - e.printStackTrace(); - } catch (Exception e) { - e.printStackTrace(); - } - - return null; + // Note: evaluate() runs asynchronously and must be closed(). + final GraphQueryResult res = q.evaluate(); + + // Will close the GraphQueryResult. 
+ return new Sesame2BigdataIterator<Statement, QueryEvaluationException>( + res); + + } catch (Exception ex) { + + throw new RuntimeException(ex); + + } + } @Override Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-gom/src/java/com/bigdata/gom/om/ObjectManager.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-gom/src/java/com/bigdata/gom/om/ObjectManager.java 2012-08-20 14:03:40 UTC (rev 6456) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-gom/src/java/com/bigdata/gom/om/ObjectManager.java 2012-08-20 14:31:58 UTC (rev 6457) @@ -55,6 +55,7 @@ import com.bigdata.rdf.model.BigdataValueFactory; import com.bigdata.rdf.sail.BigdataSailRepository; import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; +import com.bigdata.rdf.sail.Sesame2BigdataIterator; import com.bigdata.rdf.sparql.ast.cache.CacheConnectionFactory; import com.bigdata.rdf.sparql.ast.cache.ICacheConnection; import com.bigdata.rdf.sparql.ast.cache.IDescribeCache; @@ -75,7 +76,7 @@ final private BigdataSailRepository m_repo; final private boolean readOnly; - private IDescribeCache m_describeCache; + final private IDescribeCache m_describeCache; /** * @@ -94,21 +95,34 @@ final AbstractTripleStore tripleStore = cxn.getDatabase(); this.readOnly = tripleStore.isReadOnly(); - - final QueryEngine queryEngine = QueryEngineFactory.getStandaloneQueryController((Journal) m_repo.getDatabase().getIndexManager()); - final ICacheConnection cacheConn = CacheConnectionFactory - .getExistingCacheConnection(queryEngine); + /* + * FIXME The DESCRIBE cache feature is not yet finished. This code will + * not obtain a connection to the DESCRIBE cache unless an unisolated + * query or update operation has already run against the query engine. + * This is a known bug and will be resolved as we work through the MVCC + * cache coherence for the DESCRIBE cache. + */ + { - if (cacheConn != null) { + final QueryEngine queryEngine = QueryEngineFactory + .getStandaloneQueryController((Journal) m_repo + .getDatabase().getIndexManager()); - m_describeCache = cacheConn.getDescribeCache( - tripleStore.getNamespace(), 0 /*tripleStore.getTimestamp()*/); + final ICacheConnection cacheConn = CacheConnectionFactory + .getExistingCacheConnection(queryEngine); - } else { + if (cacheConn != null) { - m_describeCache = null; + m_describeCache = cacheConn.getDescribeCache( + tripleStore.getNamespace(), tripleStore.getTimestamp()); + } else { + + m_describeCache = null; + + } + } } @@ -133,119 +147,100 @@ } @Override - public ICloseableIterator<BindingSet> evaluate(final String query) { + public ICloseableIterator<BindingSet> evaluate(final String query) { - BigdataSailRepositoryConnection cxn = null; - - try { + final BigdataSailRepositoryConnection cxn; + try { + cxn = getQueryConnection(); + } catch (RepositoryException e1) { + throw new RuntimeException(e1); + } - cxn = getQueryConnection(); - - final TupleQuery q = cxn.prepareTupleQuery(QueryLanguage.SPARQL, + try { + + // Setup the query. + final TupleQuery q = cxn.prepareTupleQuery(QueryLanguage.SPARQL, query); - - final TupleQueryResult res = q.evaluate(); - - return new CloseableIteratorWrapper<BindingSet>( - new Iterator<BindingSet>() { - @Override - public boolean hasNext() { - try { - return res.hasNext(); - } catch (QueryEvaluationException e) { - throw new RuntimeException(e); - } - } + // Note: evaluate() runs asynchronously and must be closed(). 
+ final TupleQueryResult res = q.evaluate(); - @Override - public BindingSet next() { - try { - return res.next(); - } catch (QueryEvaluationException e) { - throw new RuntimeException(e); - } - } + // Will close the TupleQueryResult. + return new Sesame2BigdataIterator<BindingSet, QueryEvaluationException>( + res) { + public void close() { + // Close the TupleQueryResult. + super.close(); + try { + // Close the connection. + cxn.close(); + } catch (RepositoryException e) { + throw new RuntimeException(e); + } + } + }; - @Override - public void remove() { - throw new UnsupportedOperationException(); - } - - }); + } catch (Throwable t) { - } catch (Exception ex) { - - throw new RuntimeException(ex); - - } finally { - - if (cxn != null) { - try { - cxn.close(); - } catch (RepositoryException e) { - log.error(e, e); - } + // Error preparing the query. + try { + // Close the connection + cxn.close(); + } catch (RepositoryException e) { + log.error(e, e); } - + + throw new RuntimeException(t); + } - + } public ICloseableIterator<Statement> evaluateGraph(final String query) { - BigdataSailRepositoryConnection cxn = null; - - try { - - cxn = getQueryConnection(); - - final GraphQuery q = cxn.prepareGraphQuery(QueryLanguage.SPARQL, - query); - - final GraphQueryResult res = q.evaluate(); - - return new CloseableIteratorWrapper<Statement>(new Iterator<Statement>() { + final BigdataSailRepositoryConnection cxn; + try { + cxn = getQueryConnection(); + } catch (RepositoryException e1) { + throw new RuntimeException(e1); + } - @Override - public boolean hasNext() { - try { - return res.hasNext(); - } catch (QueryEvaluationException e) { - throw new RuntimeException(e); - } - } + try { - @Override - public Statement next() { - try { - return res.next(); - } catch (QueryEvaluationException e) { - throw new RuntimeException(e); - } - } + // Setup the query. + final GraphQuery q = cxn.prepareGraphQuery(QueryLanguage.SPARQL, + query); - @Override - public void remove() { - throw new UnsupportedOperationException(); - } - - }); + // Note: evaluate() runs asynchronously and must be closed(). + final GraphQueryResult res = q.evaluate(); - } catch (Exception t) { - - throw new RuntimeException(t); - - } finally { - - if (cxn != null) { - try { - cxn.close(); - } catch (RepositoryException e) { - log.error(e, e); + // Will close the TupleQueryResult. + return new Sesame2BigdataIterator<Statement, QueryEvaluationException>( + res) { + public void close() { + // Close the TupleQueryResult. + super.close(); + try { + // Close the connection. + cxn.close(); + } catch (RepositoryException e) { + throw new RuntimeException(e); + } } + }; + + } catch (Throwable t) { + + // Error preparing the query. 
+ try { + // Close the connection + cxn.close(); + } catch (RepositoryException e) { + log.error(e, e); } - + + throw new RuntimeException(t); + } } @@ -271,21 +266,21 @@ ((GPO) gpo).dematerialize(); - if (m_describeCache == null) { - AbstractTripleStore store = m_repo.getDatabase(); - final QueryEngine queryEngine = QueryEngineFactory.getStandaloneQueryController((Journal) store.getIndexManager()); - - final ICacheConnection cacheConn = CacheConnectionFactory - .getExistingCacheConnection(queryEngine); - - if (cacheConn != null) { - - m_describeCache = cacheConn.getDescribeCache( - store.getNamespace(), store.getTimestamp()); - - } - - } +// if (m_describeCache == null) { +// AbstractTripleStore store = m_repo.getDatabase(); +// final QueryEngine queryEngine = QueryEngineFactory.getStandaloneQueryController((Journal) store.getIndexManager()); +// +// final ICacheConnection cacheConn = CacheConnectionFactory +// .getExistingCacheConnection(queryEngine); +// +// if (cacheConn != null) { +// +// m_describeCache = cacheConn.getDescribeCache( +// store.getNamespace(), store.getTimestamp()); +// +// } +// +// } /* * At present the DESCRIBE query will simply return a set of statements This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
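To summarize the resource-management pattern adopted in revision 6457 above: the iterator handed back to the caller must close both the query result (which evaluates asynchronously) and, where one was opened for the query, the repository connection. A minimal, self-contained sketch of such a wrapper follows; the class name is an illustrative assumption and this is not code from the commit.

import java.util.NoSuchElementException;

import org.openrdf.query.BindingSet;
import org.openrdf.query.QueryEvaluationException;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.RepositoryException;

public class ClosingResultIterator {

    private final RepositoryConnection cxn;
    private final TupleQueryResult res;
    private boolean open = true;

    public ClosingResultIterator(final RepositoryConnection cxn,
            final TupleQueryResult res) {
        this.cxn = cxn;
        this.res = res;
    }

    public boolean hasNext() {
        try {
            if (open && res.hasNext())
                return true;
        } catch (QueryEvaluationException e) {
            throw new RuntimeException(e);
        }
        close(); // release resources eagerly once the result is exhausted.
        return false;
    }

    public BindingSet next() {
        if (!hasNext())
            throw new NoSuchElementException();
        try {
            return res.next();
        } catch (QueryEvaluationException e) {
            throw new RuntimeException(e);
        }
    }

    public void close() {
        if (!open)
            return;
        open = false;
        try {
            res.close(); // close the asynchronous query result first ...
        } catch (QueryEvaluationException e) {
            throw new RuntimeException(e);
        } finally {
            try {
                cxn.close(); // ... then release the connection.
            } catch (RepositoryException e) {
                throw new RuntimeException(e);
            }
        }
    }

}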
From: <mrp...@us...> - 2012-08-21 09:23:59
Revision: 6467 http://bigdata.svn.sourceforge.net/bigdata/?rev=6467&view=rev Author: mrpersonick Date: 2012-08-21 09:23:49 +0000 (Tue, 21 Aug 2012) Log Message: ----------- added a "search in search" service that let's you use the full text index as a filtering mechanism Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/relation/accesspath/ThickCloseableIterator.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/service/ServiceRegistry.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/store/BD.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/RemoteRepository.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/SearchInSearchServiceFactory.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/relation/accesspath/ThickCloseableIterator.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/relation/accesspath/ThickCloseableIterator.java 2012-08-21 09:14:56 UTC (rev 6466) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/relation/accesspath/ThickCloseableIterator.java 2012-08-21 09:23:49 UTC (rev 6467) @@ -64,6 +64,11 @@ private final E[] a; /** + * The number of elements to be visited by the iterator. + */ + private final int len; + + /** * Create a thick iterator. * * @param a @@ -79,14 +84,43 @@ throw new IllegalArgumentException(); this.a = a; + this.len = a.length; lastIndex = -1; } + /** + * Create a thick iterator. + * + * @param a + * The array of elements to be visited by the iterator (may be + * empty, but may not be <code>null</code>). + * @param len + * The number of elements to be visited by the iterator. Must be + * less than the length of the array. + * + * @throws IllegalArgumentException + * if <i>a</i> is <code>null</code>. + */ + public ThickCloseableIterator(final E[] a, final int len) { + + if (a == null) + throw new IllegalArgumentException(); + + if (len > a.length) + throw new IllegalArgumentException(); + + this.a = a; + this.len = len; + + lastIndex = -1; + + } + public boolean hasNext() { - if(open && lastIndex + 1 < a.length) + if(open && lastIndex + 1 < len) return true; close(); Added: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/SearchInSearchServiceFactory.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/SearchInSearchServiceFactory.java (rev 0) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/SearchInSearchServiceFactory.java 2012-08-21 09:23:49 UTC (rev 6467) @@ -0,0 +1,717 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2011. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. 
+ +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +/* + * Created on Sep 9, 2011 + */ + +package com.bigdata.rdf.sparql.ast.eval; + +import java.io.Serializable; +import java.util.Arrays; +import java.util.Iterator; +import java.util.LinkedHashMap; +import java.util.LinkedHashSet; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.TimeUnit; + +import org.apache.log4j.Logger; +import org.openrdf.model.Literal; +import org.openrdf.model.URI; + +import com.bigdata.bop.BOp; +import com.bigdata.bop.IBindingSet; +import com.bigdata.bop.IVariable; +import com.bigdata.bop.Var; +import com.bigdata.btree.IIndex; +import com.bigdata.btree.keys.IKeyBuilder; +import com.bigdata.btree.keys.KeyBuilder; +import com.bigdata.btree.keys.SuccessorUtil; +import com.bigdata.cache.ConcurrentWeakValueCacheWithTimeout; +import com.bigdata.rdf.internal.IV; +import com.bigdata.rdf.lexicon.ITextIndexer; +import com.bigdata.rdf.sparql.ast.ConstantNode; +import com.bigdata.rdf.sparql.ast.GroupNodeBase; +import com.bigdata.rdf.sparql.ast.IGroupMemberNode; +import com.bigdata.rdf.sparql.ast.StatementPatternNode; +import com.bigdata.rdf.sparql.ast.TermNode; +import com.bigdata.rdf.sparql.ast.VarNode; +import com.bigdata.rdf.sparql.ast.service.BigdataNativeServiceOptions; +import com.bigdata.rdf.sparql.ast.service.BigdataServiceCall; +import com.bigdata.rdf.sparql.ast.service.IServiceOptions; +import com.bigdata.rdf.sparql.ast.service.ServiceCallCreateParams; +import com.bigdata.rdf.sparql.ast.service.ServiceFactory; +import com.bigdata.rdf.sparql.ast.service.ServiceNode; +import com.bigdata.rdf.spo.ISPO; +import com.bigdata.rdf.spo.SPOKeyOrder; +import com.bigdata.rdf.store.AbstractTripleStore; +import com.bigdata.rdf.store.BD; +import com.bigdata.relation.accesspath.EmptyCloseableIterator; +import com.bigdata.relation.accesspath.ThickCloseableIterator; +import com.bigdata.search.Hiterator; +import com.bigdata.search.IHit; +import com.bigdata.striterator.ICloseableIterator; + +/** + * A factory for a "search in search" service. + * It accepts a group that have a single triple pattern in it: + * + * service bd:searchInSearch { + * ?s bd:searchInSearch "search" . + * } + * + * This service will then use the full text index to filter out incoming + * bindings for ?s that do not link to a Literal that is found via the full + * text index with the supplied search string. If there are no incoming + * bindings (or none that have ?s bound), this service will produce no output. + */ +public class SearchInSearchServiceFactory implements ServiceFactory { + + private static final Logger log = Logger + .getLogger(SearchInSearchServiceFactory.class); + + /* + * Note: This could extend the base class to allow for search service + * configuration options. 
+ */ + private final BigdataNativeServiceOptions serviceOptions; + + public SearchInSearchServiceFactory() { + + serviceOptions = new BigdataNativeServiceOptions(); + +// serviceOptions.setRunFirst(true); + + } + + @Override + public BigdataNativeServiceOptions getServiceOptions() { + + return serviceOptions; + + } + + public BigdataServiceCall create(final ServiceCallCreateParams params) { + + if (params == null) + throw new IllegalArgumentException(); + + final AbstractTripleStore store = params.getTripleStore(); + + if (store == null) + throw new IllegalArgumentException(); + + final ServiceNode serviceNode = params.getServiceNode(); + + if (serviceNode == null) + throw new IllegalArgumentException(); + + /* + * Validate the search predicates for a given search variable. + */ + + final Map<IVariable<?>, Map<URI, StatementPatternNode>> map = verifyGraphPattern( + store, serviceNode.getGraphPattern()); + + if (map == null) + throw new RuntimeException("Not a search request."); + + if (map.size() != 1) + throw new RuntimeException( + "Multiple search requests may not be combined."); + + final Map.Entry<IVariable<?>, Map<URI, StatementPatternNode>> e = map + .entrySet().iterator().next(); + + final IVariable<?> searchVar = e.getKey(); + + final Map<URI, StatementPatternNode> statementPatterns = e.getValue(); + + validateSearch(searchVar, statementPatterns); + + /* + * Create and return the ServiceCall object which will execute this + * query. + */ + + return new SearchCall(store, searchVar, statementPatterns, + getServiceOptions()); + + } + + /** + * Validate the search request. This looks for search magic predicates and + * returns them all. It is an error if anything else is found in the group. + * All such search patterns are reported back by this method, but the + * service can only be invoked for one a single search variable at a time. + * The caller will detect both the absence of any search and the presence of + * more than one search and throw an exception. + */ + private Map<IVariable<?>, Map<URI, StatementPatternNode>> verifyGraphPattern( + final AbstractTripleStore database, + final GroupNodeBase<IGroupMemberNode> group) { + + // lazily allocate iff we find some search predicates in this group. + Map<IVariable<?>, Map<URI, StatementPatternNode>> tmp = null; + + final int arity = group.arity(); + + for (int i = 0; i < arity; i++) { + + final BOp child = group.get(i); + + if (child instanceof GroupNodeBase<?>) { + + throw new RuntimeException("Nested groups are not allowed."); + + } + + if (child instanceof StatementPatternNode) { + + final StatementPatternNode sp = (StatementPatternNode) child; + + final TermNode p = sp.p(); + + if (!p.isConstant()) + throw new RuntimeException("Expecting search predicate: " + + sp); + + final URI uri = (URI) ((ConstantNode) p).getValue(); + + if (!uri.stringValue().startsWith(BD.SEARCH_NAMESPACE)) + throw new RuntimeException("Expecting search predicate: " + + sp); + + /* + * Some search predicate. + */ + + if (!ASTSearchOptimizer.searchUris.contains(uri) && + !BD.SEARCH_IN_SEARCH.equals(uri)) { + throw new RuntimeException("Unknown search predicate: " + + uri); + } + + final TermNode s = sp.s(); + + if (!s.isVariable()) + throw new RuntimeException( + "Subject of search predicate is constant: " + sp); + + final IVariable<?> searchVar = ((VarNode) s) + .getValueExpression(); + + // Lazily allocate map. 
+ if (tmp == null) { + + tmp = new LinkedHashMap<IVariable<?>, Map<URI, StatementPatternNode>>(); + + } + + // Lazily allocate set for that searchVar. + Map<URI, StatementPatternNode> statementPatterns = tmp + .get(searchVar); + + if (statementPatterns == null) { + + tmp.put(searchVar, + statementPatterns = new LinkedHashMap<URI, StatementPatternNode>()); + + } + + // Add search predicate to set for that searchVar. + statementPatterns.put(uri, sp); + + } + + } + + return tmp; + + } + + /** + * Validate the search. There must be exactly one {@link BD#SEARCH} + * predicate. There should not be duplicates of any of the search predicates + * for a given searchVar. + */ + private void validateSearch(final IVariable<?> searchVar, + final Map<URI, StatementPatternNode> statementPatterns) { + + final Set<URI> uris = new LinkedHashSet<URI>(); + + for(StatementPatternNode sp : statementPatterns.values()) { + + final URI uri = (URI)(sp.p()).getValue(); + + if (!uris.add(uri)) + throw new RuntimeException( + "Search predicate appears multiple times for same search variable: predicate=" + + uri + ", searchVar=" + searchVar); + + if (uri.equals(BD.SEARCH_IN_SEARCH)) { + + assertObjectIsLiteral(sp); + + } else if (uri.equals(BD.RELEVANCE) || uri.equals(BD.RANK)) { + + assertObjectIsVariable(sp); + + } else if(uri.equals(BD.MIN_RANK) || uri.equals(BD.MAX_RANK)) { + + assertObjectIsLiteral(sp); + + } else if (uri.equals(BD.MIN_RELEVANCE) || uri.equals(BD.MAX_RELEVANCE)) { + + assertObjectIsLiteral(sp); + + } else if(uri.equals(BD.MATCH_ALL_TERMS)) { + + assertObjectIsLiteral(sp); + + } else if(uri.equals(BD.MATCH_EXACT)) { + + assertObjectIsLiteral(sp); + + } else if(uri.equals(BD.SEARCH_TIMEOUT)) { + + assertObjectIsLiteral(sp); + + } else if(uri.equals(BD.MATCH_REGEX)) { + + // a variable for the object is equivalent to regex = null +// assertObjectIsLiteral(sp); + + } else { + + throw new AssertionError("Unverified search predicate: " + sp); + + } + + } + + if (!uris.contains(BD.SEARCH_IN_SEARCH)) { + throw new RuntimeException("Required search predicate not found: " + + BD.SUBJECT_SEARCH + " for searchVar=" + searchVar); + } + + } + + private void assertObjectIsLiteral(final StatementPatternNode sp) { + + final TermNode o = sp.o(); + + if (!o.isConstant() + || !(((ConstantNode) o).getValue() instanceof Literal)) { + + throw new IllegalArgumentException("Object is not literal: " + sp); + + } + + } + + private void assertObjectIsVariable(final StatementPatternNode sp) { + + final TermNode o = sp.o(); + + if (!o.isVariable()) { + + throw new IllegalArgumentException("Object must be variable: " + sp); + + } + + } + + /** + * + * Note: This has the {@link AbstractTripleStore} reference attached. This + * is not a {@link Serializable} object. It MUST run on the query + * controller. 
+ */ + private static class SearchCall implements BigdataServiceCall { + + private final AbstractTripleStore store; + private final IIndex osp; + private final IServiceOptions serviceOptions; + private final Literal query; + private final IVariable<?>[] vars; + private final Literal minRank; + private final Literal maxRank; + private final Literal minRelevance; + private final Literal maxRelevance; + private final boolean matchAllTerms; + private final boolean matchExact; + private final Literal searchTimeout; + private final Literal matchRegex; + + public SearchCall( + final AbstractTripleStore store, + final IVariable<?> searchVar, + final Map<URI, StatementPatternNode> statementPatterns, + final IServiceOptions serviceOptions) { + + if(store == null) + throw new IllegalArgumentException(); + + if(searchVar == null) + throw new IllegalArgumentException(); + + if(statementPatterns == null) + throw new IllegalArgumentException(); + + if(serviceOptions == null) + throw new IllegalArgumentException(); + + this.store = store; + this.osp = store.getSPORelation().getIndex(SPOKeyOrder.OSP); + + this.serviceOptions = serviceOptions; + + /* + * Unpack the "search" magic predicate: + * + * [?searchVar bd:search objValue] + */ + final StatementPatternNode sp = statementPatterns.get(BD.SEARCH_IN_SEARCH); + + query = (Literal) sp.o().getValue(); + + /* + * Unpack the search service request parameters. + */ + + IVariable<?> relVar = null; + IVariable<?> rankVar = null; + Literal minRank = null; + Literal maxRank = null; + Literal minRelevance = null; + Literal maxRelevance = null; + boolean matchAllTerms = false; + boolean matchExact = false; + Literal searchTimeout = null; + Literal matchRegex = null; + + for (StatementPatternNode meta : statementPatterns.values()) { + + final URI p = (URI) meta.p().getValue(); + + final Literal oVal = meta.o().isConstant() ? (Literal) meta.o() + .getValue() : null; + + final IVariable<?> oVar = meta.o().isVariable() ? (IVariable<?>) meta + .o().getValueExpression() : null; + + if (BD.RELEVANCE.equals(p)) { + relVar = oVar; + } else if (BD.RANK.equals(p)) { + rankVar = oVar; + } else if (BD.MIN_RANK.equals(p)) { + minRank = (Literal) oVal; + } else if (BD.MAX_RANK.equals(p)) { + maxRank = (Literal) oVal; + } else if (BD.MIN_RELEVANCE.equals(p)) { + minRelevance = (Literal) oVal; + } else if (BD.MAX_RELEVANCE.equals(p)) { + maxRelevance = (Literal) oVal; + } else if (BD.MATCH_ALL_TERMS.equals(p)) { + matchAllTerms = ((Literal) oVal).booleanValue(); + } else if (BD.MATCH_EXACT.equals(p)) { + matchExact = ((Literal) oVal).booleanValue(); + } else if (BD.SEARCH_TIMEOUT.equals(p)) { + searchTimeout = (Literal) oVal; + } else if (BD.MATCH_REGEX.equals(p)) { + matchRegex = (Literal) oVal; + } + } + + this.vars = new IVariable[] {// + searchVar,// + relVar == null ? Var.var() : relVar,// must be non-null. + rankVar == null ? Var.var() : rankVar // must be non-null. 
+ }; + + this.minRank = minRank; + this.maxRank = maxRank; + this.minRelevance = minRelevance; + this.maxRelevance = maxRelevance; + this.matchAllTerms = matchAllTerms; + this.matchExact = matchExact; + this.searchTimeout = searchTimeout; + this.matchRegex = matchRegex; + + } + + @SuppressWarnings({ "rawtypes", "unchecked" }) + private Hiterator<IHit<?>> getHiterator() { + +// final IValueCentricTextIndexer<IHit> textIndex = (IValueCentricTextIndexer) store +// .getLexiconRelation().getSearchEngine(); + + final ITextIndexer<IHit> textIndex = (ITextIndexer) + store.getLexiconRelation().getSearchEngine(); + + if (textIndex == null) + throw new UnsupportedOperationException("No free text index?"); + + String s = query.getLabel(); + final boolean prefixMatch; + if (s.indexOf('*') >= 0) { + prefixMatch = true; + s = s.replaceAll("\\*", ""); + } else { + prefixMatch = false; + } + + return (Hiterator) textIndex.search(s,// + query.getLanguage(),// + prefixMatch,// + minRelevance == null ? BD.DEFAULT_MIN_RELEVANCE : minRelevance.doubleValue()/* minCosine */, + maxRelevance == null ? BD.DEFAULT_MAX_RELEVANCE : maxRelevance.doubleValue()/* maxCosine */, + minRank == null ? BD.DEFAULT_MIN_RANK/*1*/ : minRank.intValue()/* minRank */, + maxRank == null ? BD.DEFAULT_MAX_RANK/*Integer.MAX_VALUE*/ : maxRank.intValue()/* maxRank */, + matchAllTerms, + matchExact, + searchTimeout == null ? BD.DEFAULT_TIMEOUT/*0L*/ : searchTimeout.longValue()/* timeout */, + TimeUnit.MILLISECONDS, + matchRegex == null ? null : matchRegex.stringValue()); + + } + + private static final ConcurrentWeakValueCacheWithTimeout<String, Set<IV>> cache = + new ConcurrentWeakValueCacheWithTimeout<String, Set<IV>>( + 10, 1000*60); + + private Set<IV> getSubjects() { + + final String s = query.getLabel(); + + if (cache.containsKey(s)) { + + return cache.get(s); + + } + + if (log.isInfoEnabled()) { + log.info("entering full text search..."); + } + + // query the full text index + final Hiterator<IHit<?>> src = getHiterator(); + + if (log.isInfoEnabled()) { + log.info("done with full text search."); + } + + if (log.isInfoEnabled()) { + log.info("starting subject collection..."); + } + + final Set<IV> subjects = new LinkedHashSet<IV>(); + + while (src.hasNext()) { + + final IV o = (IV) src.next().getDocId(); + + final Iterator<ISPO> it = + store.getAccessPath(null, null, o).iterator(); + + while (it.hasNext()) { + + subjects.add(it.next().s()); + + } + + } + + if (log.isInfoEnabled()) { + log.info("done with subject collection: " + subjects.size()); + } + + cache.put(s, subjects); + + return subjects; + + } + + /** + * {@inheritDoc} + * + * Iterate the incoming binding set. If it does not contain a binding + * for the searchVar, prune it. Then iterate the full text search + * results. For each result (O binding), test the incoming binding sets + * to see if there is a link between the binding for the searchVar and + * the O. If there is, add the binding set to the output and remove it + * from the set to be tested against subsequent O bindings. 
+ */ + @Override + public ICloseableIterator<IBindingSet> call( + final IBindingSet[] bindingsClause) { + + if (log.isInfoEnabled()) { + log.info(bindingsClause.length); + log.info(Arrays.toString(bindingsClause)); + } + + final IVariable<?> searchVar = vars[0]; + +// final IBindingSet[] tmp = new IBindingSet[bindingsClause.length]; +// System.arraycopy(bindingsClause, 0, tmp, 0, bindingsClause.length); + +// final boolean[] tmp = new boolean[bindingsClause.length]; + + boolean foundOne = false; + + /* + * We are filtering out incoming binding sets that don't have a + * binding for the search var + */ + for (int i = 0; i < bindingsClause.length; i++) { + + final IBindingSet bs = bindingsClause[i]; + + if (bs.isBound(searchVar)) { + + // we need to test this binding set +// tmp[i] = true; + + // we have at least one binding set to test + foundOne = true; + + } + + } + + // filtered everything out + if (!foundOne) { + + return new EmptyCloseableIterator<IBindingSet>(); + + } + + final IBindingSet[] out = new IBindingSet[bindingsClause.length]; + + int numAccepted = 0; + +// if (log.isInfoEnabled()) { +// log.info("entering full text search..."); +// } +// +// // query the full text index +// final Hiterator<IHit<?>> src = getHiterator(); +// +// if (log.isInfoEnabled()) { +// log.info("done with full text search."); +// } +// +// while (src.hasNext()) { +// +// final IV o = (IV) src.next().getDocId(); +// +// for (int i = 0; i < bindingsClause.length; i++) { +// +// /* +// * The binding set has either been filtered out or already +// * accepted. +// */ +// if (!tmp[i]) +// continue; +// +// /* +// * We know it's bound. If it weren't it would have been +// * filtered out above. +// */ +// final IV s = (IV) bindingsClause[i].get(searchVar).get(); +// +// final IKeyBuilder kb = KeyBuilder.newInstance(); +// o.encode(kb); +// s.encode(kb); +// +// final byte[] fromKey = kb.getKey(); +// final byte[] toKey = SuccessorUtil.successor(fromKey.clone()); +// +// if (log.isInfoEnabled()) { +// log.info("starting range count..."); +// } +// +// final long rangeCount = osp.rangeCount(fromKey, toKey); +// +// if (log.isInfoEnabled()) { +// log.info("done with range count: " + rangeCount); +// } +// +// /* +// * Test the OSP index to see if we have a link. +// */ +//// if (!store.getAccessPath(s, null, o).isEmpty()) { +// if (rangeCount > 0) { +// +// // add the binding set to the output +// out[numAccepted++] = bindingsClause[i]; +// +// // don't need to test this binding set again +// tmp[i] = false; +// +// } +// +// } +// +// } + + final Set<IV> subjects = getSubjects(); + + for (int i = 0; i < bindingsClause.length; i++) { + + /* + * We know it's bound. If it weren't it would have been + * filtered out above. 
+ */ + final IV s = (IV) bindingsClause[i].get(searchVar).get(); + + if (subjects.contains(s)) { + + // add the binding set to the output + out[numAccepted++] = bindingsClause[i]; + + } + + } + + if (log.isInfoEnabled()) { + log.info("finished search in search."); + } + + return new ThickCloseableIterator<IBindingSet>(out, numAccepted); + + } + + @Override + public IServiceOptions getServiceOptions() { + + return serviceOptions; + + } + + } + +} Property changes on: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/SearchInSearchServiceFactory.java ___________________________________________________________________ Added: svn:mime-type + text/plain Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/service/ServiceRegistry.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/service/ServiceRegistry.java 2012-08-21 09:14:56 UTC (rev 6466) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/service/ServiceRegistry.java 2012-08-21 09:23:49 UTC (rev 6467) @@ -10,6 +10,7 @@ import org.openrdf.model.URI; import org.openrdf.model.impl.URIImpl; +import com.bigdata.rdf.sparql.ast.eval.SearchInSearchServiceFactory; import com.bigdata.rdf.sparql.ast.QueryHints; import com.bigdata.rdf.sparql.ast.cache.DescribeServiceFactory; import com.bigdata.rdf.sparql.ast.eval.SearchServiceFactory; @@ -74,6 +75,9 @@ // Add the Bigdata search service. add(BD.SEARCH, new SearchServiceFactory()); + // Add the Bigdata search in search service. + add(BD.SEARCH_IN_SEARCH, new SearchInSearchServiceFactory()); + if (QueryHints.DEFAULT_DESCRIBE_CACHE) { add(new URIImpl(BD.NAMESPACE + "describe"), Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/store/BD.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/store/BD.java 2012-08-21 09:14:56 UTC (rev 6466) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/store/BD.java 2012-08-21 09:23:49 UTC (rev 6467) @@ -358,6 +358,12 @@ final boolean DEFAULT_SUBJECT_SEARCH = false; /** + * Magic predicate used for the "search in search" service. Also serves + * as the identifier for the service itself. + */ + final URI SEARCH_IN_SEARCH = new URIImpl(SEARCH_NAMESPACE+"searchInSearch"); + + /** * Magic predicate used to query for free text search metadata. Use * in conjunction with {@link #SEARCH} as follows: * <p> Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/RemoteRepository.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/RemoteRepository.java 2012-08-21 09:14:56 UTC (rev 6466) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/RemoteRepository.java 2012-08-21 09:23:49 UTC (rev 6467) @@ -495,6 +495,12 @@ } } + + public long size() throws Exception { + + return rangeCount(null, null, null, null); + + } /** * Adds RDF data to the remote repository. This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
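
A hedged usage sketch for the new service (not part of the commit): the bd: prefix is assumed to be the bigdata search namespace behind BD.SEARCH_IN_SEARCH, and the surrounding rdfs:label pattern is needed because the service only filters ?s bindings that arrive from the rest of the query; with no incoming ?s bindings it produces no output. The Java wrapper mirrors the Sesame API usage that appears elsewhere in these diffs.

{{{
import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.RepositoryConnection;

public class SearchInSearchExample {

    /**
     * Keeps only those ?s bindings (produced by the rdfs:label pattern) that
     * link to a literal matched by the full text index for the string "mike".
     */
    public static void run(final RepositoryConnection con) throws Exception {

        final String query =
            "PREFIX bd: <http://www.bigdata.com/rdf/search#>\n" +   // assumed search namespace
            "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n" +
            "SELECT ?s ?label\n" +
            "WHERE {\n" +
            "  ?s rdfs:label ?label .\n" +                           // incoming ?s bindings
            "  SERVICE bd:searchInSearch {\n" +
            "    ?s bd:searchInSearch \"mike\" .\n" +                // filter via full text index
            "  }\n" +
            "}";

        final TupleQueryResult res = con.prepareTupleQuery(
                QueryLanguage.SPARQL, query).evaluate();
        try {
            while (res.hasNext()) {
                System.out.println(res.next());
            }
        } finally {
            res.close();
        }
    }
}
}}}

Note that the service resolves the subjects once per search string and caches the result set (ConcurrentWeakValueCacheWithTimeout in the diff), so repeated invocations with the same literal do not re-run the full text search.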
From: <tho...@us...> - 2012-08-26 14:02:56
Revision: 6481 http://bigdata.svn.sourceforge.net/bigdata/?rev=6481&view=rev Author: thompsonbry Date: 2012-08-26 14:02:49 +0000 (Sun, 26 Aug 2012) Log Message: ----------- Added support for gzip (.gz) files to SPARQL UPDATE's "LOAD" command and unit tests for this feature. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdate.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestSparqlUpdate.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/test/org/openrdf/query/parser/sparql/SPARQLUpdateTest.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/small.rdf.gz Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdate.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdate.java 2012-08-25 17:45:42 UTC (rev 6480) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdate.java 2012-08-26 14:02:49 UTC (rev 6481) @@ -30,6 +30,7 @@ import info.aduna.iteration.CloseableIteration; import java.io.IOException; +import java.io.InputStream; import java.net.HttpURLConnection; import java.net.URL; import java.net.URLConnection; @@ -41,6 +42,7 @@ import java.util.Set; import java.util.UUID; import java.util.concurrent.atomic.AtomicLong; +import java.util.zip.GZIPInputStream; import org.apache.log4j.Logger; import org.openrdf.model.Resource; @@ -1356,15 +1358,40 @@ final String contentType = hconn.getContentType(); + // The baseURL (passed to the parser). + final String baseURL = sourceURL.toExternalForm(); + + // The file path. + final String n = sourceURL.getFile(); + + // Attempt to obtain the format from the Content-Type. RDFFormat format = RDFFormat.forMIMEType(contentType); if (format == null) { - // Try to get the RDFFormat from the URL's file path. - format = RDFFormat.forFileName(sourceURL.getFile(), - RDFFormat.RDFXML// fallback - ); + + /* + * Try to get the RDFFormat from the URL's file path. + */ + + RDFFormat fmt = RDFFormat.forFileName(n); + + if (fmt == null && n.endsWith(".zip")) { + fmt = RDFFormat.forFileName(n.substring(0, n.length() - 4)); + } + + if (fmt == null && n.endsWith(".gz")) { + fmt = RDFFormat.forFileName(n.substring(0, n.length() - 3)); + } + + if (fmt == null) { + // Default format. + fmt = RDFFormat.RDFXML; + } + + format = fmt; + } - + if (format == null) throw new UnknownContentTypeException(contentType); @@ -1391,12 +1418,52 @@ defactoContext)); /* + * Setup the input stream. + */ + InputStream is = hconn.getInputStream(); + + try { + + /* + * Setup decompression. + */ + + if (n.endsWith(".gz")) { + + is = new GZIPInputStream(is); + +// } else if (n.endsWith(".zip")) { +// +// /* +// * TODO This will not process all entries in a zip input +// * stream, just the first. +// */ +// is = new ZipInputStream(is); + + } + + } catch (Throwable t) { + + if (is != null) { + + try { + is.close(); + } catch (Throwable t2) { + log.warn(t2, t2); + } + + throw new RuntimeException(t); + + } + + } + + /* * Run the parser, which will cause statements to be * inserted. 
*/ - rdfParser.parse(hconn.getInputStream(), sourceURL - .toExternalForm()/* baseURL */); + rdfParser.parse(is, baseURL); } finally { Added: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/small.rdf.gz =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/test/com/bigdata/rdf/rio/small.rdf.gz ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestSparqlUpdate.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestSparqlUpdate.java 2012-08-25 17:45:42 UTC (rev 6480) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestSparqlUpdate.java 2012-08-26 14:02:49 UTC (rev 6481) @@ -1715,6 +1715,40 @@ } + /** + * Verify ability to load data from a gzip resource. + */ + public void testLoadGZip() + throws Exception + { + final String update = "LOAD <file:bigdata-rdf/src/test/com/bigdata/rdf/rio/small.rdf.gz>"; + + final String ns = "http://bigdata.com/test/data#"; + + m_repo.prepareUpdate(update).evaluate(); + + assertTrue(hasStatement(f.createURI(ns, "mike"), RDFS.LABEL, + f.createLiteral("Michael Personick"), true)); + + } + +// /** +// * Verify ability to load data from a gzip resource. +// */ +// public void testLoadZip() +// throws Exception +// { +// final String update = "LOAD <file:bigdata-rdf/src/test/com/bigdata/rdf/rio/small.rdf.zip>"; +// +// final String ns = "http://bigdata.com/test/data#"; +// +// m_repo.prepareUpdate(update).evaluate(); +// +// assertTrue(hasStatement(f.createURI(ns, "mike"), RDFS.LABEL, +// f.createLiteral("Michael Personick"), true)); +// +// } + // //@Test // public void testUpdateSequenceInsertDeleteExample9() // throws Exception Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/test/org/openrdf/query/parser/sparql/SPARQLUpdateTest.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/test/org/openrdf/query/parser/sparql/SPARQLUpdateTest.java 2012-08-25 17:45:42 UTC (rev 6480) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/test/org/openrdf/query/parser/sparql/SPARQLUpdateTest.java 2012-08-26 14:02:49 UTC (rev 6481) @@ -1770,6 +1770,40 @@ } + /** + * Verify ability to load data from a gzip resource. + */ + public void testLoadGZip() + throws Exception + { + final String update = "LOAD <file:bigdata-rdf/src/test/com/bigdata/rdf/rio/small.rdf.gz>"; + + final String ns = "http://bigdata.com/test/data#"; + + con.prepareUpdate(QueryLanguage.SPARQL, update).execute(); + + assertTrue(con.hasStatement(f.createURI(ns, "mike"), RDFS.LABEL, + f.createLiteral("Michael Personick"), true)); + + } + +// /** +// * Verify ability to load data from a zip resource. 
+// */ +// public void testLoadZip() +// throws Exception +// { +// final String update = "LOAD <file:bigdata-rdf/src/test/com/bigdata/rdf/rio/small.rdf.zip>"; +// +// final String ns = "http://bigdata.com/test/data#"; +// +// con.prepareUpdate(QueryLanguage.SPARQL, update).execute(); +// +// assertTrue(con.hasStatement(f.createURI(ns, "mike"), RDFS.LABEL, +// f.createLiteral("Michael Personick"), true)); +// +// } + /* protected methods */ protected void loadDataset(final String datasetFile) This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
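
A short usage sketch of the new capability (illustrative only; the file URL and connection variable are hypothetical, while the prepareUpdate/execute calls mirror the tests added in this commit). The RDF format is resolved by stripping the ".gz" suffix and looking up the format for the remaining file name, and the stream is wrapped in a GZIPInputStream before parsing.

{{{
import org.openrdf.query.QueryLanguage;
import org.openrdf.repository.RepositoryConnection;

public class LoadGZipExample {

    /**
     * Loads a gzip-compressed RDF/XML file via SPARQL UPDATE. The format is
     * taken from the file name once ".gz" is stripped (data.rdf => RDF/XML),
     * and the input stream is decompressed transparently.
     */
    public static void load(final RepositoryConnection con) throws Exception {

        // Hypothetical path; the committed test uses the bundled
        // bigdata-rdf/src/test/com/bigdata/rdf/rio/small.rdf.gz resource.
        final String update = "LOAD <file:///tmp/data.rdf.gz>";

        con.prepareUpdate(QueryLanguage.SPARQL, update).execute();
    }
}
}}}

The ".zip" case remains commented out in the commit because a ZipInputStream would only expose the first entry of the archive, as noted in the TODO in the diff.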
From: <tho...@us...> - 2012-08-26 20:32:50
Revision: 6483 http://bigdata.svn.sourceforge.net/bigdata/?rev=6483&view=rev Author: thompsonbry Date: 2012-08-26 20:32:41 +0000 (Sun, 26 Aug 2012) Log Message: ----------- The IRDFParserOptions interface was lifted out of RDFParserOptions. The SPARQL grammar was modified to accept options that can be specified in the same position as the SILENT keyword and a test suite was written to verify the correct interpretation of this extension to the SPARQL UPDATE grammar. For example, you can now specify: {{{ LOAD verifyData=true ... LOAD stopAtFirstError=false ... etc. }}} The historical value for verifyData when the zero argument version of the RDFParserOptions construct was used was TRUE when it should have been the same as the value specified for RDFParserOptions.Options.DEFAULT_VERIFY_DATA. This has been fixed. It now defaults to FALSE. This has the same semantics as the default for RDFParserBase. Note: The default for stopAtFirstError is NOT the same as the default for RDFParserBase. We should probably change this. @see https://sourceforge.net/apps/trac/bigdata/ticket/591 (SPARQL UPDATE "LOAD" extensions) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/bop/rdf/update/ParseOp.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/RDFParserOptions.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/LoadGraph.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdate.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/util/Splitter.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/util/VocabBuilder.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ASTVisitorBase.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/UpdateExprBuilder.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/ASTLoad.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/SyntaxTreeBuilder.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/SyntaxTreeBuilderConstants.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/SyntaxTreeBuilderTokenManager.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/sparql.jj branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/sparql.jjt branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/test/com/bigdata/rdf/sail/sparql/TestUpdateExprBuilder.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/test/com/bigdata/rdf/stress/LoadClosureAndQueryTest.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/test/org/openrdf/query/parser/sparql/SPARQLUpdateTest.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/IRDFParserOptions.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/bop/rdf/update/ParseOp.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/bop/rdf/update/ParseOp.java 2012-08-26 14:06:13 UTC (rev 6482) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/bop/rdf/update/ParseOp.java 2012-08-26 20:32:41 UTC (rev 6483) @@ -71,6 +71,7 @@ import com.bigdata.rdf.internal.impl.TermId; import com.bigdata.rdf.lexicon.LexiconRelation; import 
com.bigdata.rdf.model.BigdataValue; +import com.bigdata.rdf.rio.IRDFParserOptions; import com.bigdata.rdf.rio.PresortRioLoader; import com.bigdata.rdf.rio.RDFParserOptions; import com.bigdata.rdf.rio.StatementBuffer; @@ -340,7 +341,7 @@ * TODO Javadoc for annotations (which need to be defined) and * interaction with the triple store properties. */ - private final RDFParserOptions parserOptions; + private final IRDFParserOptions parserOptions; /** * The {@link AbstractTripleStore} on which the statements will Added: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/IRDFParserOptions.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/IRDFParserOptions.java (rev 0) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/IRDFParserOptions.java 2012-08-26 20:32:41 UTC (rev 6483) @@ -0,0 +1,59 @@ +package com.bigdata.rdf.rio; + +import org.openrdf.rio.RDFParser; +import org.openrdf.rio.RDFParser.DatatypeHandling; +import org.openrdf.rio.helpers.RDFParserBase; + +/** + * Instances of this interface may be used to configure options on an + * {@link RDFParser}. The options all have the defaults specified by + * {@link RDFParserBase}. + */ +public interface IRDFParserOptions { + + /** + * Return <code>true</code> if the parser should verify the data. + */ + boolean getVerifyData(); + + /** + * Return <code>true</code> if the parser should stop at the first error and + * <code>false</code> if it should continue processing. + */ + boolean getStopAtFirstError(); + + /** + * Return <code>true</code> if the parser should preserve blank node IDs. + */ + boolean getPreserveBNodeIDs(); + + /** + * Return the {@link DatatypeHandling} mode for the parser. + */ + DatatypeHandling getDatatypeHandling(); + + /** + * Sets the datatype handling mode (default is + * {@link DatatypeHandling#VERIFY}). + */ + void setDatatypeHandling(final DatatypeHandling datatypeHandling); + + /** + * Set whether the parser should preserve bnode identifiers specified in the + * source (default is <code>false</code>). + */ + void setPreserveBNodeIDs(final boolean preserveBNodeIDs); + + /** + * Sets whether the parser should stop immediately if it finds an error in + * the data (default value is <code>true</code>). + */ + void setStopAtFirstError(final boolean stopAtFirstError); + + /** + * Sets whether the parser should verify the data it parses (default value + * is <code>true</code>). + */ + void setVerifyData(final boolean verifyData); + +} \ No newline at end of file Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/RDFParserOptions.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/RDFParserOptions.java 2012-08-26 14:06:13 UTC (rev 6482) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/rio/RDFParserOptions.java 2012-08-26 20:32:41 UTC (rev 6483) @@ -46,7 +46,7 @@ * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ -public class RDFParserOptions implements Serializable { +public class RDFParserOptions implements Serializable, IRDFParserOptions { private static final Logger log = Logger.getLogger(RDFParserOptions.class); @@ -59,7 +59,7 @@ /** * Optional boolean property may be used to turn on data verification in - * the RIO parser (default is {@value #DEFAULT_PARSER_VERIFY_DATA}). 
+ * the RIO parser (default is {@value #DEFAULT_VERIFY_DATA}). */ String VERIFY_DATA = RDFParserOptions.class.getName() + ".verifyData"; @@ -100,24 +100,40 @@ private DatatypeHandling datatypeHandling = DatatypeHandling.VERIFY; - private boolean preserveBNodeIDs = false; + private boolean preserveBNodeIDs = Boolean.valueOf(Options.DEFAULT_PRESERVE_BNODE_IDS); - private boolean stopAtFirstError = true; + private boolean stopAtFirstError = Boolean.valueOf(Options.DEFAULT_STOP_AT_FIRST_ERROR); - private boolean verifyData = true; + private boolean verifyData = Boolean.valueOf(Options.DEFAULT_VERIFY_DATA); + /* (non-Javadoc) + * @see com.bigdata.rdf.rio.IRDFParserOptions#getVerifyData() + */ + @Override synchronized public boolean getVerifyData() { return verifyData; } + /* (non-Javadoc) + * @see com.bigdata.rdf.rio.IRDFParserOptions#getStopAtFirstError() + */ + @Override synchronized public boolean getStopAtFirstError() { return stopAtFirstError; } + /* (non-Javadoc) + * @see com.bigdata.rdf.rio.IRDFParserOptions#getPreserveBNodeIDs() + */ + @Override synchronized public boolean getPreserveBNodeIDs() { return preserveBNodeIDs; } + /* (non-Javadoc) + * @see com.bigdata.rdf.rio.IRDFParserOptions#getDatatypeHandling() + */ + @Override synchronized public DatatypeHandling getDatatypeHandling() { return datatypeHandling; } @@ -182,16 +198,16 @@ public synchronized String toString() { return super.toString() + // "{verifyData=" + verifyData + // - ",preserveBNodeIDS=" + preserveBNodeIDs + // + ",preserveBNodeIDs=" + preserveBNodeIDs + // ",stopAtFirstError=" + stopAtFirstError + // ",datatypeHandling=" + datatypeHandling + // "}"; } - /** - * Sets the datatype handling mode (default is - * {@link DatatypeHandling#VERIFY}). + /* (non-Javadoc) + * @see com.bigdata.rdf.rio.IRDFParserOptions#setDatatypeHandling(org.openrdf.rio.RDFParser.DatatypeHandling) */ + @Override synchronized public void setDatatypeHandling( final DatatypeHandling datatypeHandling) { @@ -202,26 +218,26 @@ } - /** - * Set whether the parser should preserve bnode identifiers specified in the - * source (default is <code>false</code>). + /* (non-Javadoc) + * @see com.bigdata.rdf.rio.IRDFParserOptions#setPreserveBNodeIDs(boolean) */ + @Override synchronized public void setPreserveBNodeIDs(final boolean preserveBNodeIDs) { this.preserveBNodeIDs = preserveBNodeIDs; } - /** - * Sets whether the parser should stop immediately if it finds an error in - * the data (default value is <code>true</code>). + /* (non-Javadoc) + * @see com.bigdata.rdf.rio.IRDFParserOptions#setStopAtFirstError(boolean) */ + @Override synchronized public void setStopAtFirstError(final boolean stopAtFirstError) { this.stopAtFirstError = stopAtFirstError; } - /** - * Sets whether the parser should verify the data it parses (default value - * is <code>true</code>). 
+ /* (non-Javadoc) + * @see com.bigdata.rdf.rio.IRDFParserOptions#setVerifyData(boolean) */ + @Override synchronized public void setVerifyData(final boolean verifyData) { this.verifyData = verifyData; } @@ -239,4 +255,30 @@ p.setVerifyData(verifyData); } + public boolean equals(final Object o) { + + if (this == o) + return true; + + if (!(o instanceof IRDFParserOptions)) + return false; + + final IRDFParserOptions t = (IRDFParserOptions) o; + + if (verifyData != t.getVerifyData()) + return false; + + if (preserveBNodeIDs != t.getPreserveBNodeIDs()) + return false; + + if (stopAtFirstError != t.getStopAtFirstError()) + return false; + + if (!datatypeHandling.equals(getDatatypeHandling())) + return false; + + return true; + + } + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/LoadGraph.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/LoadGraph.java 2012-08-26 14:06:13 UTC (rev 6482) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/LoadGraph.java 2012-08-26 20:32:41 UTC (rev 6483) @@ -30,8 +30,12 @@ import java.util.Map; import org.openrdf.model.URI; +import org.openrdf.rio.RDFParser; +import org.openrdf.rio.helpers.RDFParserBase; import com.bigdata.bop.BOp; +import com.bigdata.rdf.rio.IRDFParserOptions; +import com.bigdata.rdf.rio.RDFParserOptions; /** * The LOAD operation reads an RDF document from a IRI and inserts its triples @@ -52,6 +56,22 @@ */ private static final long serialVersionUID = 1L; + /** + * Adds options to control the behavior of the {@link RDFParser}. + * + * @see RDFParserOptions + * @see RDFParserBase + * @see RDFParser + */ + public interface Annotations extends GraphUpdate.Annotations{ + + /** + * {@link RDFParserOptions} (optional). + */ + String OPTIONS = "options"; + + } + public LoadGraph() { super(UpdateType.Load); @@ -131,6 +151,30 @@ } + /** + * Return the {@link RDFParserOptions}. + * + * @return The {@link RDFParserOptions} -or- <code>null</code> if the + * options were not configured. + */ + public IRDFParserOptions getRDFParserOptions() { + + return (IRDFParserOptions) getProperty(Annotations.OPTIONS); + + } + + /** + * Set the {@link RDFParserOptions}. + * + * @param options + * The options (may be <code>null</code>). + */ + public void setRDFParserOptions(final IRDFParserOptions options) { + + setProperty(Annotations.OPTIONS, options); + + } + // LOAD ( SILENT )? IRIref_from ( INTO GRAPH IRIref_to )? 
public String toString(final int indent) { @@ -140,13 +184,21 @@ sb.append(getUpdateType()); - if (isSilent()) - sb.append(" SILENT"); - + final boolean silent = isSilent(); + final ConstantNode sourceGraph = getSourceGraph(); final ConstantNode targetGraph = getTargetGraph(); + final IRDFParserOptions rdfParserOptions = getRDFParserOptions(); + + if(silent) + sb.append(" SILENT"); + + if (rdfParserOptions != null) { + sb.append(" OPTIONS=" + rdfParserOptions); + } + if (sourceGraph != null) { sb.append("\n"); sb.append(indent(indent + 1)); @@ -158,7 +210,7 @@ sb.append(indent(indent + 1)); sb.append("target=" + targetGraph); } - + sb.append("\n"); return sb.toString(); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdate.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdate.java 2012-08-26 14:06:13 UTC (rev 6482) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdate.java 2012-08-26 20:32:41 UTC (rev 6483) @@ -91,6 +91,7 @@ import com.bigdata.rdf.lexicon.LexiconRelation; import com.bigdata.rdf.model.BigdataStatement; import com.bigdata.rdf.model.BigdataURI; +import com.bigdata.rdf.rio.IRDFParserOptions; import com.bigdata.rdf.sail.BigdataSail; import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; import com.bigdata.rdf.sail.Sesame2BigdataIterator; @@ -1233,7 +1234,7 @@ + defaultContext); doLoad(context.conn.getSailConnection(), sourceURL, - defaultContext, nmodified); + defaultContext, op.getRDFParserOptions(), nmodified); } catch (Throwable t) { @@ -1333,6 +1334,7 @@ */ private static void doLoad(final BigdataSailConnection conn, final URL sourceURL, final URI defaultContext, + final IRDFParserOptions parserOptions, final AtomicLong nmodified) throws IOException, RDFParseException, RDFHandlerException { @@ -1404,16 +1406,20 @@ final RDFParser rdfParser = rdfParserFactory .getParser(); - rdfParser.setValueFactory(conn.getTripleStore() - .getValueFactory()); + rdfParser.setValueFactory(conn.getTripleStore().getValueFactory()); - rdfParser.setVerifyData(true); + /* + * Apply the RDF parser options. 
+ */ - rdfParser.setStopAtFirstError(true); + rdfParser.setVerifyData(parserOptions.getVerifyData()); - rdfParser - .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); + rdfParser.setPreserveBNodeIDs(parserOptions.getPreserveBNodeIDs()); + rdfParser.setStopAtFirstError(parserOptions.getStopAtFirstError()); + + rdfParser.setDatatypeHandling(parserOptions.getDatatypeHandling()); + rdfParser.setRDFHandler(new AddStatementHandler(conn, nmodified, defactoContext)); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/util/Splitter.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/util/Splitter.java 2012-08-26 14:06:13 UTC (rev 6482) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/util/Splitter.java 2012-08-26 20:32:41 UTC (rev 6483) @@ -81,6 +81,7 @@ import com.bigdata.rdf.model.StatementEnum; import com.bigdata.rdf.rio.AsynchronousStatementBufferFactory; import com.bigdata.rdf.rio.BasicRioLoader; +import com.bigdata.rdf.rio.IRDFParserOptions; import com.bigdata.rdf.rio.IStatementBuffer; import com.bigdata.rdf.rio.RDFParserOptions; import com.bigdata.rdf.rio.StatementBuffer; @@ -298,7 +299,7 @@ /* * The defaults here are intended to facilitate splitting. */ - final RDFParserOptions defaultParserOptions = new RDFParserOptions(); + final IRDFParserOptions defaultParserOptions = new RDFParserOptions(); // Blank node IDs should be preserved. defaultParserOptions.setPreserveBNodeIDs(true); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/util/VocabBuilder.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/util/VocabBuilder.java 2012-08-26 14:06:13 UTC (rev 6482) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/util/VocabBuilder.java 2012-08-26 20:32:41 UTC (rev 6483) @@ -52,6 +52,7 @@ import org.openrdf.rio.helpers.RDFHandlerBase; import com.bigdata.rdf.ServiceProviderHook; +import com.bigdata.rdf.rio.IRDFParserOptions; import com.bigdata.rdf.rio.RDFParserOptions; import com.bigdata.rdf.vocab.VocabularyDecl; @@ -69,7 +70,7 @@ private static final Logger log = Logger.getLogger(VocabBuilder.class); - private final RDFParserOptions parserOptions; + private final IRDFParserOptions parserOptions; private final Map<URI, P> preds = new LinkedHashMap<URI, P>(); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ASTVisitorBase.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ASTVisitorBase.java 2012-08-26 14:06:13 UTC (rev 6482) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ASTVisitorBase.java 2012-08-26 20:32:41 UTC (rev 6483) @@ -134,12 +134,12 @@ return node.childrenAccept(this, data); } - public Object visit(ASTLoad node, Object data) - throws VisitorException - { - return node.childrenAccept(this, data); - } - + public Object visit(ASTLoad node, Object data) + throws VisitorException + { + return node.childrenAccept(this, data); + } + public Object visit(ASTModify node, Object data) throws VisitorException { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/UpdateExprBuilder.java =================================================================== --- 
branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/UpdateExprBuilder.java 2012-08-26 14:06:13 UTC (rev 6482) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/UpdateExprBuilder.java 2012-08-26 20:32:41 UTC (rev 6483) @@ -44,6 +44,7 @@ import com.bigdata.rdf.model.BigdataURI; import com.bigdata.rdf.model.BigdataValue; import com.bigdata.rdf.model.StatementEnum; +import com.bigdata.rdf.rio.RDFParserOptions; import com.bigdata.rdf.sail.sparql.ast.ASTAdd; import com.bigdata.rdf.sail.sparql.ast.ASTClear; import com.bigdata.rdf.sail.sparql.ast.ASTCopy; @@ -255,6 +256,35 @@ if (node.isSilent()) op.setSilent(true); + final RDFParserOptions options = new RDFParserOptions( + context.tripleStore.getProperties()); + + if (node.verifyData != null) { + + options.setVerifyData(node.verifyData); + + } + + if (node.stopAtFirstError != null) { + + options.setStopAtFirstError(node.stopAtFirstError); + + } + + if (node.preserveBNodeIDs != null) { + + options.setPreserveBNodeIDs(node.preserveBNodeIDs); + + } + + if (node.datatypeHandling != null) { + + options.setDatatypeHandling(node.datatypeHandling); + + } + + op.setRDFParserOptions(options); + if (node.jjtGetNumChildren() > 1) { final ConstantNode targetGraph = (ConstantNode) node.jjtGetChild(1) Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/ASTLoad.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/ASTLoad.java 2012-08-26 14:06:13 UTC (rev 6482) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/ASTLoad.java 2012-08-26 20:32:41 UTC (rev 6483) @@ -2,35 +2,53 @@ /* JavaCCOptions:MULTI=true,NODE_USES_PARSER=false,VISITOR=true,TRACK_TOKENS=false,NODE_PREFIX=AST,NODE_EXTENDS=,NODE_FACTORY=,SUPPORT_CLASS_VISIBILITY_PUBLIC=true */ package com.bigdata.rdf.sail.sparql.ast; -import com.bigdata.rdf.sail.sparql.ast.ASTUpdate; -import com.bigdata.rdf.sail.sparql.ast.SyntaxTreeBuilder; -import com.bigdata.rdf.sail.sparql.ast.SyntaxTreeBuilderVisitor; -import com.bigdata.rdf.sail.sparql.ast.VisitorException; +import org.openrdf.rio.RDFParser.DatatypeHandling; +import com.bigdata.rdf.rio.RDFParserOptions; + public class ASTLoad extends ASTUpdate { - private boolean silent; -public ASTLoad(int id) { - super(id); - } + private boolean silent; + + /* + * Note: These values default to [null] so we can inherit the default + * behavior as configured for the KB when an option is not explicitly + * specified. + */ + + public DatatypeHandling datatypeHandling = null; - public ASTLoad(SyntaxTreeBuilder p, int id) { - super(p, id); - } + public Boolean preserveBNodeIDs = null; + public Boolean stopAtFirstError = null; - /** Accept the visitor. **/ - public Object jjtAccept(SyntaxTreeBuilderVisitor visitor, Object data) throws VisitorException { - return visitor.visit(this, data); - } - - public void setSilent(boolean silent) { - this.silent = silent; - } + public Boolean verifyData = null; - public boolean isSilent() { - return this.silent; - } + public ASTLoad(int id) { + super(id); + } + + public ASTLoad(SyntaxTreeBuilder p, int id) { + super(p, id); + } + + /** Accept the visitor. 
**/ + public Object jjtAccept(SyntaxTreeBuilderVisitor visitor, Object data) + throws VisitorException { + return visitor.visit(this, data); + } + + public void setSilent(final boolean silent) { + this.silent = silent; + } + + public boolean isSilent() { + return this.silent; + } + } -/* JavaCC - OriginalChecksum=b83ece3152041c4178153a0f76debe55 (do not edit this line) */ +/* + * JavaCC - OriginalChecksum=b83ece3152041c4178153a0f76debe55 (do not edit this + * line) + */ Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/SyntaxTreeBuilder.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/SyntaxTreeBuilder.java 2012-08-26 14:06:13 UTC (rev 6482) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/sparql/ast/SyntaxTreeBuilder.java 2012-08-26 20:32:41 UTC (rev 6483) @@ -8,6 +8,7 @@ import org.openrdf.model.vocabulary.XMLSchema; import org.openrdf.query.algebra.Compare.CompareOp; import org.openrdf.query.algebra.MathExpr.MathOp; +import org.openrdf.rio.RDFParser.DatatypeHandling; public class SyntaxTreeBuilder/*@bgen(jjtree)*/implements SyntaxTreeBuilderTreeConstants, SyntaxTreeBuilderConstants {/*@bgen(jjtree)*/ protected JJTSyntaxTreeBuilderState jjtree = new JJTSyntaxTreeBuilderState(); @@ -7468,19 +7469,106 @@ final public void Load() throws ParseException { /*@bgen(jjtree) Load */ - ASTLoad jjtn000 = new ASTLoad(JJTLOAD); - boolean jjtc000 = true; - jjtree.openNodeScope(jjtn000); + ASTLoad jjtn000 = new ASTLoad(JJTLOAD); + boolean jjtc000 = true; + jjtree.openNodeScope(jjtn000);Token t = null; try { jj_consume_token(LOAD); - switch ((jj_ntk==-1)?jj_ntk():jj_ntk) { - case SILENT: - jj_consume_token(SILENT); - jjtn000.setSilent(true); - break; - default: - jj_la1[160] = jj_gen; - ; + label_34: + while (true) { + switch ((jj_ntk==-1)?jj_ntk():jj_ntk) { + case VERIFY_DATA: + case PRESERVE_BNODE_IDS: + case STOP_AT_FIRST_ERROR: + case DATATYPE_HANDLING: + case SILENT: + ; + break; + default: + jj_la1[160] = jj_gen; + break label_34; + } + switch ((jj_ntk==-1)?jj_ntk():jj_ntk) { + case SILENT: + jj_consume_token(SILENT); + jjtn000.setSilent(true); + break; + case VERIFY_DATA: + jj_consume_token(VERIFY_DATA); + jj_consume_token(EQ); + switch ((jj_ntk==-1)?jj_ntk():jj_ntk) { + case TRUE: + t = jj_consume_token(TRUE); + break; + case FALSE: + t = jj_consume_token(FALSE); + break; + default: + jj_la1[161] = jj_gen; + jj_consume_token(-1); + throw new ParseException(); + } + jjtn000.verifyData=Boolean.valueOf(t.image); + break; + case PRESERVE_BNODE_IDS: + jj_consume_token(PRESERVE_BNODE_IDS); + jj_consume_token(EQ); + switch ((jj_ntk==-1)?jj_ntk():jj_ntk) { + case TRUE: + t = jj_consume_token(TRUE); + break; + case FALSE: + t = jj_consume_token(FALSE); + break; + default: + jj_la1[162] = jj_gen; + jj_consume_token(-1); + throw new ParseException(); + } + jjtn000.preserveBNodeIDs=Boolean.valueOf(t.image); + break; + case STOP_AT_FIRST_ERROR: + jj_consume_token(STOP_AT_FIRST_ERROR); + jj_consume_token(EQ); + switch ((jj_ntk==-1)?jj_ntk():jj_ntk) { + case TRUE: + t = jj_consume_token(TRUE); + break; + case FALSE: + t = jj_consume_token(FALSE); + break; + default: + jj_la1[163] = jj_gen; + jj_consume_token(-1); + throw new ParseException(); + } + jjtn000.stopAtFirstError=Boolean.valueOf(t.image); + break; + case DATATYPE_HANDLING: + jj_consume_token(DATATYPE_HANDLING); + jj_consume_token(EQ); + switch 
((jj_ntk==-1)?jj_ntk():jj_ntk) { + case IGNORE: + t = jj_consume_token(IGNORE); + break; + case VERIFY: + t = jj_consume_token(VERIFY); + break; + case NORMALIZE: + t = jj_consume_token(NORMALIZE); + break; + default: + jj_la1[164] = jj_gen; + jj_consume_token(-1); + throw new ParseException(); + } + jjtn000.datatypeHandling=DatatypeHandling.valueOf(t.image.toUpperCase()); + break; + default: + jj_la1[165] = jj_gen; + jj_consume_token(-1); + throw new ParseException(); + } } IRIref(); switch ((jj_ntk==-1)?jj_ntk():jj_ntk) { @@ -7489,7 +7577,7 @@ GraphRef(); break; default: - jj_la1[161] = jj_gen; + jj_la1[166] = jj_gen; ; } } catch (Throwable jjte000) { @@ -7526,7 +7614,7 @@ jjtn000.setSilent(true); break; default: - jj_la1[162] = jj_gen; + jj_la1[167] = jj_gen; ; } GraphRefAll(); @@ -7564,7 +7652,7 @@ jjtn000.setSilent(true); break; default: - jj_la1[163] = jj_gen; + jj_la1[168] = jj_gen; ; } GraphRefAll(); @@ -7602,7 +7690,7 @@ jjtn000.setSilent(true); break; default: - jj_la1[164] = jj_gen; + jj_la1[169] = jj_gen; ; } GraphOrDefault(); @@ -7642,7 +7730,7 @@ jjtn000.setSilent(true); break; default: - jj_la1[165] = jj_gen; + jj_la1[170] = jj_gen; ; } GraphOrDefault(); @@ -7682,7 +7770,7 @@ jjtn000.setSilent(true); break; default: - jj_la1[166] = jj_gen; + jj_la1[171] = jj_gen; ; } GraphOrDefault(); @@ -7723,7 +7811,7 @@ jjtn000.setSilent(true); break; default: - jj_la1[167] = jj_gen; + jj_la1[172] = jj_gen; ; } switch ((jj_ntk==-1)?jj_ntk():jj_ntk) { @@ -7737,12 +7825,12 @@ QuadData(); break; default: - jj_la1[168] = jj_gen; + jj_la1[173] = jj_gen; ; } break; default: - jj_la1[169] = jj_gen; + jj_la1[174] = jj_gen; jj_consume_token(-1); throw new ParseException(); } @@ -7877,7 +7965,7 @@ QuadData(); break; default: - jj_la1[170] = jj_gen; + jj_la1[175] = jj_gen; jj_consume_token(-1); throw new ParseException(); } @@ -7922,7 +8010,7 @@ QuadData(); break; default: - jj_la1[171] = jj_gen; + jj_la1[176] = jj_gen; jj_consume_token(-1); throw new ParseException(); } @@ -7960,7 +8048,7 @@ jjtn000.setNamed(true); break; default: - jj_la1[172] = jj_gen; + jj_la1[177] = jj_gen; ; } IRIref(); @@ -7997,7 +8085,7 @@ IRIref(); break; default: - jj_la1[173] = jj_gen; + jj_la1[178] = jj_gen; ; } switch ((jj_ntk==-1)?jj_ntk():jj_ntk) { @@ -8008,7 +8096,7 @@ InsertClause(); break; default: - jj_la1[174] = jj_gen; + jj_la1[179] = jj_gen; ; } break; @@ -8016,19 +8104,19 @@ InsertClause(); break; default: - jj_la1[175] = jj_gen; + jj_la1[180] = jj_gen; jj_consume_token(-1); throw new ParseException(); } - label_34: + label_35: while (true) { switch ((jj_ntk==-1)?jj_ntk():jj_ntk) { case USING: ; break; default: - jj_la1[176] = jj_gen; - break label_34; + jj_la1[181] = jj_gen; + break label_35; } UsingClause(); } @@ -8083,7 +8171,7 @@ } break; default: - jj_la1[177] = jj_gen; + jj_la1[182] = jj_gen; jj_consume_token(-1); throw new ParseException(); } @@ -8123,7 +8211,7 @@ TRefPattern(); break; default: - jj_la1[178] = jj_gen; + jj_la1[183] = jj_gen; jj_consume_token(-1); throw new ParseException(); } @@ -8232,113 +8320,93 @@ finally { jj_save(9, xla); } } - private boolean jj_3R_73() { - if (jj_3R_83()) return true; + private boolean jj_3R_56() { + if (jj_3R_67()) return true; return false; } - private boolean jj_3R_59() { - Token xsp; - xsp = jj_scanpos; - if (jj_3R_72()) { - jj_scanpos = xsp; - if (jj_3R_73()) return true; - } + private boolean jj_3R_86() { + if (jj_scan_token(LPAREN)) return true; return false; } - private boolean jj_3R_72() { - if (jj_3R_82()) return true; - return false; - } - private 
boolean jj_3R_85() { - if (jj_scan_token(LPAREN)) return true; + if (jj_scan_token(TREF_OPEN)) return true; return false; } - private boolean jj_3R_55() { - if (jj_3R_66()) return true; - return false; - } - - private boolean jj_3R_86() { + private boolean jj_3R_87() { if (jj_scan_token(LBRACK)) return true; return false; } private boolean jj_3_6() { - if (jj_3R_40()) return true; + if (jj_3R_41()) return true; return false; } - private boolean jj_3R_84() { - if (jj_scan_token(TREF_OPEN)) return true; + private boolean jj_3R_70() { + if (jj_3R_85()) return true; return false; } - private boolean jj_3R_71() { - if (jj_3R_86()) return true; + private boolean jj_3R_69() { + if (jj_3R_84()) return true; return false; } private boolean jj_3R_58() { Token xsp; xsp = jj_scanpos; - if (jj_3R_70()) { + if (jj_3R_68()) { jj_scanpos = xsp; - if (jj_3R_71()) return true; + if (jj_3R_69()) { + jj_scanpos = xsp; + if (jj_3R_70()) return true; } + } return false; } - private boolean jj_3R_70() { - if (jj_3R_85()) return true; + private boolean jj_3R_68() { + if (jj_3R_83()) return true; return false; } - private boolean jj_3R_69() { - if (jj_3R_84()) return true; + private boolean jj_3R_72() { + if (jj_3R_87()) return true; return false; } - private boolean jj_3R_76() { - if (jj_scan_token(Q_IRI_REF)) return true; - return false; - } - - private boolean jj_3R_68() { - if (jj_3R_83()) return true; - return false; - } - - private boolean jj_3R_57() { + private boolean jj_3R_59() { Token xsp; xsp = jj_scanpos; - if (jj_3R_67()) { + if (jj_3R_71()) { jj_scanpos = xsp; - if (jj_3R_68()) { - jj_scanpos = xsp; - if (jj_3R_69()) return true; + if (jj_3R_72()) return true; } - } return false; } - private boolean jj_3R_67() { - if (jj_3R_82()) return true; + private boolean jj_3R_71() { + if (jj_3R_86()) return true; return false; } - private boolean jj_3R_107() { + private boolean jj_3R_77() { + if (jj_scan_token(Q_IRI_REF)) return true; + return false; + } + + private boolean jj_3R_108() { if (jj_scan_token(BLANK_NODE_LABEL)) return true; return false; } - private boolean jj_3R_99() { + private boolean jj_3R_100() { Token xsp; xsp = jj_scanpos; - if (jj_3R_107()) { + if (jj_3R_108()) { jj_scanpos = xsp; if (jj_scan_token(31)) return true; } @@ -8347,262 +8415,262 @@ private boolean jj_3_3() { if (jj_scan_token(DOT)) return true; - if (jj_3R_37()) return true; + if (jj_3R_38()) return true; return false; } + private boolean jj_3R_55() { + if (jj_3R_66()) return true; + return false; + } + + private boolean jj_3R_54() { + if (jj_scan_token(WITH)) return true; + if (jj_3R_52()) return true; + return false; + } + private boolean jj_3_2() { if (jj_scan_token(DOT)) return true; - if (jj_3R_36()) return true; + if (jj_3R_37()) return true; return false; } - private boolean jj_3R_77() { + private boolean jj_3R_78() { Token xsp; xsp = jj_scanpos; - if (jj_scan_token(153)) { + if (jj_scan_token(160)) { jj_scanpos = xsp; - if (jj_scan_token(152)) return true; + if (jj_scan_token(159)) return true; } return false; } - private boolean jj_3R_63() { - if (jj_3R_77()) return true; + private boolean jj_3R_45() { + Token xsp; + xsp = jj_scanpos; + if (jj_3R_54()) jj_scanpos = xsp; + xsp = jj_scanpos; + if (jj_3R_55()) { + jj_scanpos = xsp; + if (jj_3R_56()) return true; + } return false; } - private boolean jj_3R_54() { - if (jj_3R_65()) return true; + private boolean jj_3R_81() { + if (jj_scan_token(INTO)) return true; return false; } - private boolean jj_3R_75() { - if (jj_3R_82()) return true; + private boolean jj_3R_64() { + 
if (jj_3R_78()) return true; return false; } - private boolean jj_3R_51() { + private boolean jj_3R_76() { + if (jj_3R_83()) return true; + return false; + } + + private boolean jj_3R_52() { Token xsp; xsp = jj_scanpos; - if (jj_3R_62()) { + if (jj_3R_63()) { jj_scanpos = xsp; - if (jj_3R_63()) return true; + if (jj_3R_64()) return true; } return false; } - private boolean jj_3R_62() { - if (jj_3R_76()) return true; + private boolean jj_3R_63() { + if (jj_3R_77()) return true; return false; } - private boolean jj_3R_53() { - if (jj_scan_token(WITH)) return true; - if (jj_3R_51()) return true; + private boolean jj_3R_82() { + if (jj_3R_89()) return true; return false; } - private boolean jj_3R_44() { + private boolean jj_3_5() { + if (jj_3R_40()) return true; + return false; + } + + private boolean jj_3R_67() { + if (jj_scan_token(INSERT)) return true; Token xsp; xsp = jj_scanpos; - if (jj_3R_53()) jj_scanpos = xsp; - xsp = jj_scanpos; - if (jj_3R_54()) { + if (jj_3R_81()) { jj_scanpos = xsp; - if (jj_3R_55()) return true; + if (jj_3R_82()) return true; } return false; } - private boolean jj_3_5() { - if (jj_3R_39()) return true; - return false; - } - - private boolean jj_3R_80() { - if (jj_scan_token(INTO)) return true; - return false; - } - - private boolean jj_3R_110() { + private boolean jj_3R_111() { Token xsp; xsp = jj_scanpos; - if (jj_scan_token(175)) { + if (jj_scan_token(182)) { jj_scanpos = xsp; - if (jj_scan_token(176)) return true; + if (jj_scan_token(183)) return true; } return false; } - private boolean jj_3R_109() { + private boolean jj_3R_110() { Token xsp; xsp = jj_scanpos; - if (jj_scan_token(173)) { + if (jj_scan_token(180)) { jj_scanpos = xsp; - if (jj_scan_token(174)) return true; + if (jj_scan_token(181)) return true; } return false; } - private boolean jj_3R_101() { + private boolean jj_3R_102() { Token xsp; xsp = jj_scanpos; - if (jj_3R_109()) { + if (jj_3R_110()) { jj_scanpos = xsp; - if (jj_3R_110()) return true; + if (jj_3R_111()) return true; } return false; } - private boolean jj_3R_81() { - if (jj_3R_88()) return true; + private boolean jj_3R_80() { + if (jj_3R_89()) return true; return false; } + private boolean jj_3R_79() { + if (jj_scan_token(FROM)) return true; + return false; + } + + private boolean jj_3R_107() { + if (jj_scan_token(FALSE)) return true; + return false; + } + private boolean jj_3R_66() { - if (jj_scan_token(INSERT)) return true; + if (jj_scan_token(DELETE)) return true; Token xsp; xsp = jj_scanpos; - if (jj_3R_80()) { + if (jj_3R_79()) { jj_scanpos = xsp; - if (jj_3R_81()) return true; + if (jj_3R_80()) return true; } return false; } private boolean jj_3R_106() { - if (jj_scan_token(FALSE)) return true; - return false; - } - - private boolean jj_3R_105() { if (jj_scan_token(TRUE)) return true; return false; } - private boolean jj_3R_98() { + private boolean jj_3R_99() { Token xsp; xsp = jj_scanpos; - if (jj_3R_105()) { + if (jj_3R_106()) { jj_scanpos = xsp; - if (jj_3R_106()) return true; + if (jj_3R_107()) return true; } return false; } - private boolean jj_3R_56() { - if (jj_3R_36()) return true; + private boolean jj_3R_57() { + if (jj_3R_37()) return true; return false; } - private boolean jj_3R_79() { - if (jj_3R_88()) return true; + private boolean jj_3R_125() { + if (jj_scan_token(DOUBLE_NEGATIVE)) return true; return false; } - private boolean jj_3R_78() { - if (jj_scan_token(FROM)) return true; + private boolean jj_3R_44() { + if (jj_scan_token(DELETE)) return true; + if (jj_scan_token(WHERE)) return true; return false; } private 
boolean jj_3R_124() { - if (jj_scan_token(DOUBLE_NEGATIVE)) return true; - return false; - } - - private boolean jj_3R_123() { if (jj_scan_token(DECIMAL_NEGATIVE)) return true; return false; } - private boolean jj_3R_38() { - if (jj_3R_50()) return true; + private boolean jj_3R_39() { + if (jj_3R_51()) return true; return false; } - private boolean jj_3R_122() { + private boolean jj_3R_123() { if (jj_scan_token(INTEGER_NEGATIVE)) return true; return false; } - private boolean jj_3R_65() { - if (jj_scan_token(DELETE)) return true; - Token xsp; - xsp = jj_scanpos; - if (jj_3R_78()) { - jj_scanpos = xsp; - if (jj_3R_79()) return true; - } - return false; - } - private boolean jj_3R_43() { if (jj_scan_token(DELETE)) return true; - if (jj_scan_token(WHERE)) return true; + if (jj_scan_token(DATA)) return true; return false; } - private boolean jj_3R_113() { + private boolean jj_3R_114() { Token xsp; xsp = jj_scanpos; - if (jj_3R_122()) { - jj_scanpos = xsp; if (jj_3R_123()) { jj_scanpos = xsp; - if (jj_3R_124()) return true; + if (jj_3R_124()) { + jj_scanpos = xsp; + if (jj_3R_125()) return true; } } return false; } - private boolean jj_3R_121() { - if (jj_scan_token(DOUBLE_POSITIVE)) return true; + private boolean jj_3R_42() { + if (jj_scan_token(INSERT)) return true; + if (jj_scan_token(DATA)) return true; return false; } - private boolean jj_3R_42() { - if (jj_scan_token(DELETE)) return true; - if (jj_scan_token(DATA)) return true; + private boolean jj_3R_122() { + if (jj_scan_token(DOUBLE_POSITIVE)) return true; return false; } - private boolean jj_3R_120() { + private boolean jj_3R_121() { if (jj_scan_token(DECIMAL_POSITIVE)) return true; return false; } - private boolean jj_3R_119() { + private boolean jj_3R_120() { if (jj_scan_token(INTEGER_POSITIVE)) return true; return false; } - private boolean jj_3R_41() { - if (jj_scan_token(INSERT)) return true; - if (jj_scan_token(DATA)) return true; - return false; - } - - private boolean jj_3R_112() { + private boolean jj_3R_113() { Token xsp; xsp = jj_scanpos; - if (jj_3R_119()) { - jj_scanpos = xsp; if (jj_3R_120()) { jj_scanpos = xsp; - if (jj_3R_121()) return true; + if (jj_3R_121()) { + jj_scanpos = xsp; + if (jj_3R_122()) return true; } } return false; } - private boolean jj_3R_128() { + private boolean jj_3R_129() { if (jj_scan_token(LPAREN)) return true; return false; } - private boolean jj_3R_45() { - if (jj_3R_56()) return true; + private boolean jj_3R_46() { + if (jj_3R_57()) return true; return false; } @@ -8610,214 +8678,219 @@ if (jj_scan_token(SEMICOLON)) return true; Token xsp; xsp = jj_scanpos; - if (jj_3R_38()) jj_scanpos = xsp; + if (jj_3R_39()) jj_scanpos = xsp; return false; } private boolean jj_3_1() { - if (jj_3R_35()) return true; + if (jj_3R_36()) return true; return false; } - private boolean jj_3R_127() { + private boolean jj_3R_128() { if (jj_scan_token(NOT)) return true; return false; } - private boolean jj_3R_118() { + private boolean jj_3R_119() { if (jj_scan_token(DOUBLE)) return true; return false; } - private boolean jj_3R_126() { + private boolean jj_3R_127() { if (jj_scan_token(IS_A)) return true; return false; } - private boolean jj_3R_117() { + private boolean jj_3R_118() { if (jj_scan_token(DECIMAL)) return true; return false; } - private boolean jj_3R_116() { + private boolean jj_3R_117() { if (jj_scan_token(INTEGER)) return true; return false; } - private boolean jj_3R_125() { - if (jj_3R_51()) return true; + private boolean jj_3R_126() { + if (jj_3R_52()) return true; return false; } - private boolean 
jj_3R_115() { + private boolean jj_3R_116() { Token xsp; xsp = jj_scanpos; - if (jj_3R_125()) { - jj_scanpos = xsp; if (jj_3R_126()) { jj_scanpos = xsp; if (jj_3R_127()) { jj_scanpos = xsp; - if (jj_3R_128()) return true; + if (jj_3R_128()) { + jj_scanpos = xsp; + if (jj_3R_129()) return true; } } } return false; } - private boolean jj_3R_35() { + private boolean jj_3R_36() { if (jj_scan_token(LBRACE)) return true; Token xsp; xsp = jj_scanpos; - if (jj_3R_45()) jj_scanpos = xsp; + if (jj_3R_46()) jj_scanpos = xsp; if (jj_scan_token(RBRACE)) return true; return false; } - private boolean jj_3R_114() { + private boolean jj_3R_115() { if (jj_scan_token(INVERSE)) return true; return false; } - private boolean jj_3R_108() { + private boolean jj_3R_109() { Token xsp; xsp = jj_scanpos; - if (jj_3R_114()) jj_scanpos = xsp; - if (jj_3R_115()) return true; + if (jj_3R_115()) jj_scanpos = xsp; + if (jj_3R_116()) return true; return false; } - private boolean jj_3R_104() { - if (jj_3R_113()) return true; + private boolean jj_3R_105() { + if (jj_3R_114()) return true; return false; } - private boolean jj_3R_111() { + private boolean jj_3R_112() { Token xsp; xsp = jj_scanpos; - if (jj_3R_116()) { - jj_scanpos = xsp; if (jj_3R_117()) { jj_scanpos = xsp; - if (jj_3R_118()) return true; + if (jj_3R_118()) { + jj_scanpos = xsp; + if (jj_3R_119()) return true; } } return false; } - private boolean jj_3R_103() { - if (jj_3R_112()) return true; + private boolean jj_3R_104() { + if (jj_3R_113()) return true; return false; } - private boolean jj_3R_102() { - if (jj_3R_111()) return true; + private boolean jj_3R_103() { + if (jj_3R_112()) return true; return false; } - private boolean jj_3R_100() { - if (jj_3R_108()) return true; + private boolean jj_3R_101() { + if (jj_3R_109()) return true; return false; } - private boolean jj_3R_97() { + private boolean jj_3R_98() { Token xsp; xsp = jj_scanpos; - if (jj_3R_102()) { - jj_scanpos = xsp; if (jj_3R_103()) { jj_scanpos = xsp; - if (jj_3R_104()) return true; + if (jj_3R_104()) { + jj_scanpos = xsp; + if (jj_3R_105()) return true; } } return false; } - private boolean jj_3R_95() { - if (jj_3R_100()) return true; + private boolean jj_3R_96() { + if (jj_3R_101()) return true; return false; } - private boolean jj_3R_61() { - if (jj_3R_75()) return true; + private boolean jj_3R_62() { + if (jj_3R_76()) return true; return false; } - private boolean jj_3R_87() { - if (jj_3R_95()) return true; + private boolean jj_3R_88() { + if (jj_3R_96()) return true; return false; } - private boolean jj_3R_96() { - if (jj_3R_101()) return true; + private boolean jj_3R_97() { + if (jj_3R_102()) return true; return false; } - private boolean jj_3R_74() { - if (jj_3R_87()) return true; + private boolean jj_3R_75() { + if (jj_3R_88()) return true; return false; } - private boolean jj_3R_60() { - if (jj_3R_74()) return true; + private boolean jj_3R_61() { + if (jj_3R_75()) return true; return false; } - private boolean jj_3R_50() { + private boolean jj_3R_51() { Token xsp; xsp = jj_scanpos; - if (jj_3R_60()) { + if (jj_3R_61()) { jj_scanpos = xsp; - if (jj_3R_61()) return true; + if (jj_3R_62()) return true; } return false; } private boolean jj_3_10() { - if (jj_3R_44()) return true; + if (jj_3R_45()) return true; return false; } private boolean jj_3_9() { - if (jj_3R_43()) return true; + if (jj_3R_44()) return true; return false; } private boolean jj_3_8() { - if (jj_3R_42()) return true; + if (jj_3R_43()) return true; return false; } private boolean jj_3_7() { - if (jj_3R_41()) return 
true; + if (jj_3R_42()) return true; return false; } - private boolean jj_3R_47() { - if (jj_3R_58()) return true; + private boolean jj_3R_48() { + if (jj_3R_59()) return true; return false; } - private boolean jj_3R_46() { - if (jj_3R_57()) return true; + private boolean jj_3R_47() { + if (jj_3R_58()) return true; return false; } - private boolean jj_3R_36() { + private boolean jj_3R_37() { Token xsp; xsp = jj_scanpos; - if (jj_3R_46()) { + if (jj_3R_47()) { jj_scanpos = xsp; - if (jj_3R_47()) return true; + if (jj_3R_48()) return true; } return false; } - private boolean jj_3R_94() { + private boolean jj_3R_95() { if (jj_scan_token(NIL)) return true; return false; } + private boolean jj_3R_94() { + if (jj_3R_100()) return true; + return false; + } + private boolean jj_3R_93() { if (jj_3R_99()) return true; return false; @@ -8833,16 +8906,9 @@ return false; } - private boolean jj_3R_90() { - if (jj_3R_96()) return true; - return false; - } - - private boolean jj_3R_83() { + private boolean jj_3R_84() { Token xsp; xsp = jj_scanpos; - if (jj_3R_89()) { - jj_scanpos = xsp; if (jj_3R_90()) { jj_scanpos = xsp; if (jj_3R_91()) { @@ -8851,7 +8917,9 @@ jj_scanpos = xsp; if (jj_3R_93()) { jj_scanpos = xsp; - if (jj_3R_94()) return true; + if (jj_3R_94()) { + jj_scanpos = xsp; + if (jj_3R_95()) return true; } } } @@ -8860,73 +8928,93 @@ return false; } - private boolean jj_3R_89() { - if (jj_3R_51()) return true; + private boolean jj_3R_90() { + if (jj_3R_52()) return true; return false; } - private boolean jj_3R_49() { - if (jj_3R_58()) return true; + private boolean jj_3R_50() { + if (jj_3R_59()) return true; return false; } - private boolean jj_3R_48() { - if (jj_3R_59()) return true; + private boolean jj_3R_49() { + if (jj_3R_60()) return true; return false; } - private boolean jj_3R_37() { + private boolean jj_3R_38() { Token xsp; xsp = jj_scanpos; - if (jj_3R_48()) { + if (jj_3R_49()) { jj_scanpos = xsp; - if (jj_3R_49()) return true; + if (jj_3R_50()) return true; } return false; } - private boolean jj_3R_64() { + private boolean jj_3R_65() { if (jj_scan_token(LPAREN)) return true; return false; } - private boolean jj_3R_82() { + private boolean jj_3R_83() { Token xsp; xsp = jj_scanpos; - if (jj_scan_token(155)) { + if (jj_scan_token(162)) { jj_scanpos = xsp; - if (jj_scan_token(156)) return true; + if (jj_scan_token(163)) return true; } return false; } - private boolean jj_3R_88() { + private boolean jj_3R_89() { if (jj_scan_token(LBRACE)) return true; return false; } - private boolean jj_3R_52() { + private boolean jj_3R_53() { Token xsp; xsp = jj_scanpos; if (jj_scan_token(30)) { jj_scanpos = xsp; - if (jj_3R_64()) return true; + if (jj_3R_65()) return true; } return false; } - private boolean jj_3R_40() { + private boolean jj_3R_41() { if (jj_scan_token(SOLUTIONS)) return true; if (jj_scan_token(VAR3)) return true; return false; } - private boolean jj_3R_39() { - if (jj_3R_51()) return true; + private boolean jj_3R_40() { if (jj_3R_52()) return true; + if (jj_3R_53()) return true; return false; } + private boolean jj_3R_74() { + if (jj_3R_84()) return true; + return false; + } + + private boolean jj_3R_60() { + Token xsp; + xsp = jj_scanpos; + if (jj_3R_73()) { + jj_scanpos = xsp; + if (jj_3R_74()) return true; + } + return false; + } + + private boolean jj_3R_73() { + if (jj_3R_83()) return true; + return false; + } + /** Generated Token Manager. 
*/ public SyntaxTreeBuilderTokenManager token_source; JavaCharStream jj_input_stream; @@ -8938,7 +9026,7 @@ private Token jj_scanpos, jj_lastpos; private int jj_la; private int jj_gen; - final private int[] jj_la1 = new int[179]; + final private int[] jj_la1 = new int[184]; static private int[] jj_la1_0; static private int[] jj_la1_1; static private int[] jj_la1_2; @@ -8956,25 +9044,25 @@ jj_la1_init_6(); } private static void jj_la1_init_0() { - jj_la1_0 = new int[] {0x400,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x10,0x1000010,0x10,0x0,0x0,0x0,0xc0000110,0x0,0x0,0x40,0x0,0x0,0x1000000,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x40000010,0x0,0x40000010,0x0,0x0,0x0,0x0,0x0,0x10,0x10,0x0,0x10,0x0,0x0,0x10,0x0,0x0,0x0,0x0,0xc0000110,0x1000,0x40,0x0,0x1000,0xc0000110,0x1000,0xc0000110,0x0,0xc0000110,0x0,0x800,0x40000010,0x1000,0x1000,0x40,0x0,0x0,0x0,0x10,0x800,0x40000010,0x0,0xc0000110,0x0,0x400,0x800,0x10080010,0xc0000110,0x10080010,0x10080010,0x8000000,0x4000000,0x10000000,0x3400040,0x80010,0x8000000,0x10000000,0x10000010,0x0,0x10000000,0x80,0x880,0x800,0x3400040,0xc0000110,0x0,0x110,0xc0000110,0xc0000110,0xc0000000,0x0,0x0,0xc0000000,0x100000,0x200000,0x7e000,0x7e000,0xc00000,0xc00000,0x5000000,0x5000000,0x400000,0xc80010,0x10,0x0,0x0,0x0,0x1c80010,0x0,0x0,0x0,0x0,0x0,0x0,0x400,0x0,0x0,0x0,0x0,0x0,0x0,0x800,0x800,0x800,0x40000010,0xc80010,0x800,0x20000000,0x20000000,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x80000000,0x0,0x0,0x0,0x0,0xc0000110,0x0,0x1000,0xc0000110,0xc0000110,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x40,0x0,0x40,0x40,0x0,0x0,0x0,0x0,0x0,0x0,0xc0000000,}; + jj_la1_0 = new int[] {0x400,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x10,0x1000010,0x10,0x0,0x0,0x0,0xc0000110,0x0,0x0,0x40,0x0,0x0,0x1000000,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x40000010,0x0,0x40000010,0x0,0x0,0x0,0x0,0x0,0x10,0x10,0x0,0x10,0x0,0x0,0x10,0x0,0x0,0x0,0x0,0xc0000110,0x1000,0x40,0x0,0x1000,0xc0000110,0x1000,0xc0000110,0x0,0xc0000110,0x0,0x800,0x40000010,0x1000,0x1000,0x40,0x0,0x0,0x0,0x10,0x800,0x40000010,0x0,0xc0000110,0x0,0x400,0x800,0x10080010,0xc0000110,0x10080010,0x10080010,0x8000000,0x4000000,0x10000000,0x3400040,0x80010,0x8000000,0x10000000,0x10000010,0x0,0x10000000,0x80,0x880,0x800,0x3400040,0xc0000110,0x0,0x110,0xc0000110,0xc0000110,0xc0000000,0x0,0x0,0xc0000000,0x100000,0x200000,0x7e000,0x7e000,0xc00000,0xc00000,0x5000000,0x5000000,0x400000,0xc80010,0x10,0x0,0x0,0x0,0x1c80010,0x0,0x0,0x0,0x0,0x0,0x0,0x400,0x0,0x0,0x0,0x0,0x0,0x0,0x800,0x800,0x800,0x40000010,0xc80010,0x800,0x20000000,0x20000000,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x80000000,0x0,0x0,0x0,0x0,0xc0000110,0x0,0x1000,0xc0000110,0xc0000110,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x40,0x0,0x40,0x40,0x0,0x0,0x0,0x0,0x0,0x0,0xc0000000,}; } private static void jj_la1_init_1() { - jj_la1_1 = new int[] 
{0x0,0x6,0x6,0x6,0x78,0x400,0x0,0x0,0x180,0x180,0x0,0x0,0x0,0x400,0x0,0x0,0x0,0x400,0x0,0x1000,0x0,0x0,0x0,0x400,0x0,0x0,0x800,0x0,0x1000,0x0,0x0,0x0,0x0,0x0,0x4000,0x2000000,0x2000,0xc0000,0xfc000000,0xfc030000,0x200,0xfc000000,0x30000,0x30000,0xfc030000,0x80000,0x40000,0xc0000,0x8,0x1000000,0x0,0xb00000,0x1000000,0x0,0x0,0x0,0x0,0x1000000,0x1000000,0x1000000,0x0,0x0,0x0,0x0,0xb00000,0x8,0x400000,0x0,0xfc000000,0x0,0x0,0x1,0x0,0x1,0x0,0x0,0x1,0x0,0x1,0x1,0x0,0x0,0x0,0x0,0x1,0x0,0x1,0x1,0x1,0x1,0x0,0x0,0x0,0x0,0x0,0x1,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0xfc000000,0xfc000000,0x0,0x0,0x80,0xfc000000,0x80,0x80,0x80,0x80,0x80,0x80,0x0,0xfc000000,0x40000000,0x0,0xb0000000,0x0,0x0,0x0,0x0,0x0,0x0,0xfc000000,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x200800,0x0,0x200000,0x200000,0x0,0x200000,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x200000,0x400,0x0,0x800,0x0,0x0,0x0,0x0,0x1,0x0,}; + jj_la1_1 = new int[] {0x0,0x6,0x6,0x6,0x78,0x400,0x0,0x0,0x180,0x180,0x0,0x0,0x0,0x400,0x0,0x0,0x0,0x400,0x0,0x1000,0x0,0x0,0x0,0x400,0x0,0x0,0x800,0x0,0x1000,0x0,0x0,0x0,0x0,0x0,0x4000,0x2000000,0x2000,0xc0000,0xfc000000,0xfc030000,0x200,0xfc000000,0x30000,0x30000,0xfc030000,0x80000,0x40000,0xc0000,0x8,0x1000000,0x0,0xb00000,0x1000000,0x0,0x0,0x0,0x0,0x1000000,0x1000000,0x1000000,0x0,0x0,0x0,0x0,0xb00000,0x8,0x400000,0x0,0xfc000000,0x0,0x0,0x1,0x0,0x1,0x0,0x0,0x1,0x0,0x1,0x1,0x0,0x0,0x0,0x0,0x1,0x0,0x1,0x1,0x1,0x1,0x0,0x0,0x0,0x0,0x0,0x1,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0xfc000000,0xfc000000,0x0,0x0,0x80,0xfc000000,0x80,0x80,0x80,0x80,0x80,0x80,0x0,0xfc000000,0x40000000,0x0,0xb0000000,0x0,0x0,0x0,0x0,0x0,0x0,0xfc000000,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x200800,0x0,0x200000,0x200000,0x0,0x200000,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x200000,0x400,0x0,0x800,0x0,0x0,0x0,0x0,0x1,0x0,}; } private static void jj_la1_init_2() { - jj_la1_2 = new int[] {0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x8000000,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x8000000,0x1800000,0x0,0x0,0x0,0x8000000,0x0,0x0,0x0,0x0,0x8000000,0x0,0x0,0x0,0x0,0x0,0x11800000,0x0,0x11800000,0x0,0x0,0x0,0x0,0xe0400fff,0xe0400fff,0x0,0xe0400fff,0x0,0x0,0xe0400fff,0x0,0x0,0x0,0x0,0x3800000,0x0,0x4000000,0x2000000,0x0,0x1800000,0x0,0x1800000,0x2000000,0x3800000,0x2000000,0x0,0x0,0x0,0x0,0x4000000,0x0,0x0,0x0,0xe0400fff,0x0,0x0,0x0,0x1800000,0x0,0x0,0x0,0x0,0x1800000,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x1800000,0x0,0x0,0x1800000,0x1800000,0x1800000,0x0,0x0,0x1800000,0x0,0x0,0x3000,0x3000,0x0,0x0,0x0,0x0,0x0,0xe1dfcfff,0xe0400fff,0x19fc000,0x1fc000,0x0,0xe1dfcfff,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0xe0400fff,0xe0400000,0x0,0x7bc,0x0,0x0,0x0,0x0,0x0,0x0,0xe1dfcfff,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x1800000,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x1800000,0x0,0x0,0x1800000,0x1800000,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x1800000,}; + jj_la1_2 = new int[] 
{0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x8000000,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x8000000,0x1800000,0x0,0x0,0x0,0x8000000,0x0,0x0,0x0,0x0,0x8000000,0x0,0x0,0x0,0x0,0x0,0x11800000,0x0,0x11800000,0x0,0x0,0x0,0x0,0xe0400fff,0xe0400fff,0x0,0xe0400fff,0x0,0x0,0xe0400fff,0x0,0x0,0x0,0x0,0x3800000,0x0,0x4000000,0x2000000,0x0,0x1800000,0x0,0x1800000,0x2000000,0x3800000,0x2000000,0x0,0x0,0x0,0x0,0x4000000,0x0,0x0,0x0,0xe0400fff,0x0,0x0,0x0,0x1800000,0x0,0x0,0x0,0x0,0x1800000,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x1800000,0x0,0x0,0x1800000,0x1800000,0x1800000,0x0,0x0,0x1800000,0x0,0x0,0x3000,0x3000,0x0,0x0,0x0,0x0,0x0,0xe1dfcfff,0xe0400fff,0x19fc000,0x1fc000,0x0,0xe1dfcfff,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0xe0400fff,0xe0400000,0x0,0x7bc... [truncated message content] |
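Taken together, the grammar and UpdateExprBuilder changes above let a single SPARQL UPDATE LOAD request override the RDF parser behavior (verifyData, preserveBNodeIDs, stopAtFirstError, datatypeHandling) on a per-request basis, while any option that is not given explicitly keeps the default configured for the KB instance (which is why the new ASTLoad fields default to null). Below is a minimal sketch of that resolution logic, using only the RDFParserOptions calls that appear in the diff; the helper class name and parameter names are illustrative and are not part of the commit.

import java.util.Properties;

import org.openrdf.rio.RDFParser.DatatypeHandling;

import com.bigdata.rdf.rio.RDFParserOptions;

/*
 * Sketch only: resolve per-request LOAD options against the KB defaults.
 * [kbProperties] stands in for tripleStore.getProperties(); a null argument
 * means the corresponding option was not specified in the LOAD request.
 */
public class LoadParserOptionsSketch {

    public static RDFParserOptions resolve(final Properties kbProperties,
            final Boolean verifyData, final Boolean preserveBNodeIDs,
            final Boolean stopAtFirstError, final DatatypeHandling datatypeHandling) {

        // Start from the defaults configured for the triple store.
        final RDFParserOptions options = new RDFParserOptions(kbProperties);

        // Only options given explicitly in the request override a default.
        if (verifyData != null)
            options.setVerifyData(verifyData);

        if (preserveBNodeIDs != null)
            options.setPreserveBNodeIDs(preserveBNodeIDs);

        if (stopAtFirstError != null)
            options.setStopAtFirstError(stopAtFirstError);

        if (datatypeHandling != null) // IGNORE, VERIFY, or NORMALIZE
            options.setDatatypeHandling(datatypeHandling);

        return options;
    }
}

The resulting options object is then attached to the LOAD operation exactly as in the diff, via op.setRDFParserOptions(options), so the requested parser behavior travels with that one update operation rather than changing the KB-wide defaults.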
From: <tho...@us...> - 2012-08-31 20:34:37
|
Revision: 6502 http://bigdata.svn.sourceforge.net/bigdata/?rev=6502&view=rev Author: thompsonbry Date: 2012-08-31 20:34:28 +0000 (Fri, 31 Aug 2012) Log Message: ----------- Setup HAJournal and HAJournalServer classes. Created a simplified Jini/River configuration file. @see https://sourceforge.net/apps/trac/bigdata/ticket/589 (2-Phase Commit) @see https://sourceforge.net/apps/trac/bigdata/ticket/530 (Journal HA) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlue.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlueBase.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAPipelineGlue.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumPipelineImpl.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/lookup/entry/ServiceUUID.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlueDelegate.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/AbstractServer.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/GenUUID.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/StaticQuorum.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlue.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlue.java 2012-08-31 15:08:31 UTC (rev 6501) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlue.java 2012-08-31 20:34:28 UTC (rev 6502) @@ -42,7 +42,7 @@ * * @see http://wiki.apache.org/hadoop/ZooKeeper/FAQ#A4 */ - public Future<Void> bounceZookeeperConnection(); + public Future<Void> bounceZookeeperConnection() throws IOException; /* * Synchronization. Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlueBase.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlueBase.java 2012-08-31 15:08:31 UTC (rev 6501) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlueBase.java 2012-08-31 20:34:28 UTC (rev 6502) @@ -27,6 +27,7 @@ package com.bigdata.ha; +import java.io.IOException; import java.rmi.Remote; import java.util.UUID; @@ -47,6 +48,6 @@ * @todo This should be handled as a smart proxy so this method does not * actually perform RMI. 
*/ - UUID getServiceId(); + UUID getServiceId() throws IOException; } Added: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlueDelegate.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlueDelegate.java (rev 0) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlueDelegate.java 2012-08-31 20:34:28 UTC (rev 6502) @@ -0,0 +1,95 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2007. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +package com.bigdata.ha; + +import java.io.IOException; +import java.net.InetSocketAddress; +import java.util.UUID; +import java.util.concurrent.Future; +import java.util.concurrent.TimeUnit; + +import com.bigdata.journal.ha.HAWriteMessage; + +/** + * Delegation pattern. + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + */ +public class HAGlueDelegate implements HAGlue { + + private final HAGlue delegate; + + public HAGlueDelegate(final HAGlue delegate) { + + if(delegate == null) + throw new IllegalArgumentException(); + + this.delegate = delegate; + + } + + public Future<Void> bounceZookeeperConnection() throws IOException { + return delegate.bounceZookeeperConnection(); + } + + public UUID getServiceId() throws IOException { + return delegate.getServiceId(); + } + + public Future<Boolean> prepare2Phase(boolean isRootBlock0, + byte[] rootBlock, long timeout, TimeUnit unit) throws IOException { + return delegate.prepare2Phase(isRootBlock0, rootBlock, timeout, unit); + } + + public Future<byte[]> readFromDisk(long token, UUID storeId, long addr) + throws IOException { + return delegate.readFromDisk(token, storeId, addr); + } + + public InetSocketAddress getWritePipelineAddr() throws IOException { + return delegate.getWritePipelineAddr(); + } + + public byte[] getRootBlock(UUID storeId) throws IOException { + return delegate.getRootBlock(storeId); + } + + public Future<Void> moveToEndOfPipeline() throws IOException { + return delegate.moveToEndOfPipeline(); + } + + public Future<Void> commit2Phase(long commitTime) throws IOException { + return delegate.commit2Phase(commitTime); + } + + public Future<Void> abort2Phase(long token) throws IOException { + return delegate.abort2Phase(token); + } + + public Future<Void> receiveAndReplicate(HAWriteMessage msg) + throws IOException { + return delegate.receiveAndReplicate(msg); + } + +} Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAPipelineGlue.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAPipelineGlue.java 2012-08-31 15:08:31 UTC (rev 6501) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAPipelineGlue.java 2012-08-31 20:34:28 UTC (rev 6502) @@ -55,7 +55,7 @@ * 
@todo This should be handled as a smart proxy so this method does not * actually perform RMI. */ - InetSocketAddress getWritePipelineAddr(); + InetSocketAddress getWritePipelineAddr() throws IOException; /** * Instruct the service to move to the end of the write pipeline. The leader Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumPipelineImpl.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumPipelineImpl.java 2012-08-31 15:08:31 UTC (rev 6501) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumPipelineImpl.java 2012-08-31 20:34:28 UTC (rev 6502) @@ -307,7 +307,7 @@ try { // The address of the next service in the pipeline. final InetSocketAddress addrNext = newDownStreamId == null ? null - : member.getService(newDownStreamId).getWritePipelineAddr(); + : getAddrNext(newDownStreamId); if (sendService != null) { // Terminate the existing connection. sendService.terminate(); @@ -327,6 +327,27 @@ } } + private InetSocketAddress getAddrNext(final UUID downStreamId) { + + if (downStreamId == null) + return null; + + final S service = member.getService(downStreamId); + + try { + + final InetSocketAddress addrNext = service.getWritePipelineAddr(); + + return addrNext; + + } catch (IOException e) { + + throw new RuntimeException(e); + + } + + } + /** * Tear down any state associated with the {@link QuorumPipelineImpl}. This * implementation tears down the send/receive service and releases the @@ -402,7 +423,15 @@ final PipelineState<S> pipelineState = new PipelineState<S>(); - pipelineState.addr = nextService.getWritePipelineAddr(); + try { + + pipelineState.addr = nextService.getWritePipelineAddr(); + + } catch (IOException e) { + + throw new RuntimeException(e); + + } pipelineState.service = nextService; Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/lookup/entry/ServiceUUID.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/lookup/entry/ServiceUUID.java 2012-08-31 15:08:31 UTC (rev 6501) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/lookup/entry/ServiceUUID.java 2012-08-31 20:34:28 UTC (rev 6502) @@ -64,9 +64,18 @@ public UUID serviceUUID; + /* + * De-Serialization constructor. + */ public ServiceUUID() { } + static public ServiceUUID fromString(final String name) { + + return new ServiceUUID(UUID.fromString(name)); + + } + public ServiceUUID(final UUID serviceUUID) { if (serviceUUID == null) Added: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/AbstractServer.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/AbstractServer.java (rev 0) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/AbstractServer.java 2012-08-31 20:34:28 UTC (rev 6502) @@ -0,0 +1,1897 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2007. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. 
+ +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +/* + * Created on Mar 18, 2007 + */ + +package com.bigdata.journal.jini.ha; + +import java.io.DataInputStream; +import java.io.DataOutputStream; +import java.io.File; +import java.io.FileFilter; +import java.io.FileInputStream; +import java.io.FileOutputStream; +import java.io.IOException; +import java.io.RandomAccessFile; +import java.nio.channels.FileChannel; +import java.nio.channels.FileLock; +import java.rmi.Remote; +import java.rmi.server.ExportException; +import java.util.Arrays; +import java.util.Iterator; +import java.util.LinkedList; +import java.util.List; +import java.util.UUID; +import java.util.concurrent.locks.Condition; +import java.util.concurrent.locks.ReentrantLock; + +import net.jini.admin.JoinAdmin; +import net.jini.config.Configuration; +import net.jini.config.ConfigurationException; +import net.jini.config.ConfigurationProvider; +import net.jini.core.discovery.LookupLocator; +import net.jini.core.entry.Entry; +import net.jini.core.lookup.ServiceID; +import net.jini.core.lookup.ServiceRegistrar; +import net.jini.discovery.DiscoveryEvent; +import net.jini.discovery.DiscoveryListener; +import net.jini.discovery.LookupDiscovery; +import net.jini.discovery.LookupDiscoveryManager; +import net.jini.export.Exporter; +import net.jini.jeri.BasicILFactory; +import net.jini.jeri.BasicJeriExporter; +import net.jini.jeri.tcp.TcpServerEndpoint; +import net.jini.lease.LeaseListener; +import net.jini.lease.LeaseRenewalEvent; +import net.jini.lease.LeaseRenewalManager; +import net.jini.lookup.JoinManager; +import net.jini.lookup.ServiceDiscoveryManager; +import net.jini.lookup.ServiceIDListener; +import net.jini.lookup.entry.Name; + +import org.apache.log4j.Logger; + +import com.bigdata.Banner; +import com.bigdata.counters.AbstractStatisticsCollector; +import com.bigdata.counters.PIDUtil; +import com.bigdata.jini.lookup.entry.Hostname; +import com.bigdata.jini.lookup.entry.ServiceUUID; +import com.bigdata.jini.util.JiniUtil; +import com.bigdata.service.AbstractService; +import com.bigdata.service.IService; +import com.bigdata.service.IServiceShutdown; +import com.bigdata.service.jini.DataServer.AdministrableDataService; +import com.bigdata.service.jini.FakeLifeCycle; +import com.sun.jini.admin.DestroyAdmin; +import com.sun.jini.start.LifeCycle; +import com.sun.jini.start.NonActivatableServiceDescriptor; +import com.sun.jini.start.ServiceDescriptor; +import com.sun.jini.start.ServiceStarter; + +/** + * <p> + * Abstract base class for configurable services discoverable using JINI. The + * recommended way to start a server is using the {@link ServiceStarter}. + * However, they may also be started from the command line using main(String[]). + * You must specify a policy file granting sufficient permissions for the server + * to start. + * + * <pre> + * java -Djava.security.policy=policy.all .... + * </pre> + * + * You must specify the JVM property + * <code>-Dcom.sun.jini.jeri.tcp.useNIO=true</code> to enable NIO. 
+ * <p> + * + * Other command line options MAY be recommended depending on the JVM and the + * service that you are starting, e.g., <code>-server</code>. + * <p> + * The service may be <em>terminated</em> by terminating the server process. + * Termination implies that the server stops execution but that it MAY be + * restarted. A {@link Runtime#addShutdownHook(Thread) shutdown hook} is + * installed by the server so that you can also stop the server using ^C + * (Windows) and <code>kill</code> <i>pid</i> (Un*x). You can record the PID of + * the process running the server when you start it under Un*x using a shell + * script. Note that if you are starting multiple services at once with the + * {@link ServiceStarter} then these methods (^C or kill <i>pid</i>) will take + * down all servers running in the same VM. + * <p> + * Services may be <em>destroyed</em> using {@link DestroyAdmin}, e.g., through + * the Jini service browser. Note that all persistent data associated with that + * service is also destroyed! + * <p> + * Note: This class was cloned from the com.bigdata.service.jini package. + * Zookeeper support was stripped out and the class was made to align with a + * write replication pipeline for {@link HAJournal} rather than with a + * federation of bigdata services. + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + * @version $Id$ + */ +abstract public class AbstractServer implements Runnable, LeaseListener, + ServiceIDListener, DiscoveryListener { + + final static private Logger log = Logger.getLogger(AbstractServer.class); + + public interface ConfigurationOptions { + + /** + * The pathname of the service directory as a {@link File} (required). + */ + String SERVICE_DIR = "serviceDir"; + + /** + * A {@link String}[] whose values are the group(s) to be used for + * discovery (no default). Note that multicast discovery is always used + * if {@link LookupDiscovery#ALL_GROUPS} (a <code>null</code>) is + * specified. {@link LookupDiscovery#NO_GROUPS} is the symbolic constant + * for an empty String[]. + */ + String GROUPS = "groups"; + + /** + * An array of one or more {@link LookupLocator}s specifying unicast + * URIs of the form <code>jini://host/</code> or + * <code>jini://host:port/</code> (no default) -or- an empty array if + * you want to use multicast discovery <strong>and</strong> you have + * specified {@link #GROUPS} as {@link LookupDiscovery#ALL_GROUPS} (a + * <code>null</code>). + */ + String LOCATORS = "locators"; + + /** + * {@link Entry}[] attributes used to describe the client or service. + */ + String ENTRIES = "entries"; + + /** + * This object is used to export the service proxy. The choice here + * effects the protocol that will be used for communications between the + * clients and the service. The default {@link Exporter} if none is + * specified is a {@link BasicJeriExporter} using a + * {@link TcpServerEndpoint}. + */ + String EXPORTER = "exporter"; + + } + + /** + * The {@link ServiceID} for this server is either read from a local file, + * assigned by the registrar (if this is a new service instance), or given + * in a {@link ServiceUUID} entry in the {@link Configuration} (for either a + * new service or the restart of a persistent service). If the + * {@link ServiceID} is assigned by jini, then it is assigned the + * asynchronously after the service has discovered a + * {@link ServiceRegistrar}. + */ + private ServiceID serviceID; + + /** + * The directory for the service. 
This is the directory within which the + * {@link #serviceIdFile} exists. A service MAY have its own concept of a + * data directory, log directory, etc. which can be somewhere else. + */ + private File serviceDir; + + /** + * The file where the {@link ServiceID} will be written / read. + */ + private File serviceIdFile; + + /** + * The file on which the PID was written. + */ + private File pidFile; + + /** + * An attempt is made to obtain an exclusive lock on a file in the same + * directory as the {@link #serviceIdFile}. If the {@link FileLock} can be + * obtained then the reference for that {@link RandomAccessFile} is set on + * this field. If the lock is already held by another process then the + * server will refuse to start. Since some platforms (NFS volumes, etc.) do + * not support {@link FileLock} and the server WILL start anyway in those + * cases. The {@link FileLock} is automatically released if the JVM dies or + * if the {@link FileChannel} is closed. It is automatically released by + * {@link #run()} before the server exits or if the ctor fails. + */ + private RandomAccessFile lockFileRAF = null; + private FileLock fileLock; + private File lockFile; + + private LookupDiscoveryManager lookupDiscoveryManager; + + private ServiceDiscoveryManager serviceDiscoveryManager; + + /** + * Used to manage the join/leave of the service hosted by this server with + * Jini service registrar(s). + */ + private JoinManager joinManager; + + /** + * The {@link Configuration} read based on the args[] provided when the + * server is started. + */ + protected Configuration config; + + /** + * A configured name for the service -or- a default value if no {@link Name} + * was found in the {@link Configuration}. + */ + private String serviceName; + + /** + * The configured name for the service a generated service name if no + * {@link Name} was found in the {@link Configuration}. + * <p> + * Note: Concrete implementations MUST prefer to report this name in the + * {@link AbstractService#getServiceName()} of their service implementation + * class. E.g., {@link AdministrableDataService#getServiceName()}. + */ + final public String getServiceName() { + + return serviceName; + + } + + /** + * Responsible for exporting a proxy for the service. Note that the + * {@link Exporter} is paired to a single service instance. It CAN NOT be + * used to export more than one object at a time! Therefore the + * {@link Configuration} entry for the <code>exporter</code> only effects + * how <em>this</em> server exports its service. + */ + private Exporter exporter; + + /** + * The service implementation object. + */ + protected Remote impl; + + /** + * The exported proxy for the service implementation object. + */ + protected Remote proxy; + + /** + * The name of the host on which the server is running. + */ + protected String getHostName() { + + return hostname; + + } + private String hostname; + + /** + * The object used to inform the hosting environment that the server is + * unregistering (terminating). A fake object is used when the server is run + * from the command line, otherwise the object is supplied by the + * {@link NonActivatableServiceDescriptor}. + */ + private LifeCycle lifeCycle; + + /** + * The exported proxy for the service implementation object. + */ + public Remote getProxy() { + + return proxy; + + } + + /** + * Return the assigned {@link ServiceID}. 
If this is a new service and the + * {@link ServiceUUID} was not specified in the {@link Configuration} then + * the {@link ServiceID} will be <code>null</code> until it has been + * assigned by a {@link ServiceRegistrar}. + */ + public ServiceID getServiceID() { + + return serviceID; + + } + + /** + * <code>true</code> iff this is a persistent service (one that you can + * shutdown and restart). + */ + protected boolean isPersistent() { + + return true; + + } + + protected JoinManager getJoinManager() { + + return joinManager; + + } + + /** + * An object used to manage jini service registrar discovery. + */ + public LookupDiscoveryManager getDiscoveryManagement() { + + return lookupDiscoveryManager; + + } + + /** + * An object used to lookup services using the discovered service registars. + */ + public ServiceDiscoveryManager getServiceDiscoveryManager() { + + return serviceDiscoveryManager; + + } + + /** + * Lock controlling access to the {@link #discoveryEvent} {@link Condition}. + */ + protected final ReentrantLock discoveryEventLock = new ReentrantLock(); + + /** + * Condition signaled any time there is a {@link DiscoveryEvent} delivered to + * our {@link DiscoveryListener}. + */ + protected final Condition discoveryEvent = discoveryEventLock + .newCondition(); + + /** + * Signals anyone waiting on {@link #discoveryEvent}. + */ + public void discarded(final DiscoveryEvent e) { + + try { + + discoveryEventLock.lockInterruptibly(); + + try { + + discoveryEvent.signalAll(); + + } finally { + + discoveryEventLock.unlock(); + + } + + } catch (InterruptedException ex) { + + return; + + } + + } + + /** + * Signals anyone waiting on {@link #discoveryEvent}. + */ + public void discovered(final DiscoveryEvent e) { + + try { + + discoveryEventLock.lockInterruptibly(); + + try { + + discoveryEvent.signalAll(); + + } finally { + + discoveryEventLock.unlock(); + + } + + } catch (InterruptedException ex) { + + return; + + } + + } + + /** + * Conditionally install a suitable security manager if there is none in + * place. This is required before the server can download code. The code + * will be downloaded from the HTTP server identified by the + * <code>java.rmi.server.codebase</code> property specified for the VM + * running the service. + */ + final static public void setSecurityManager() { + + final SecurityManager sm = System.getSecurityManager(); + + if (sm == null) { + + System.setSecurityManager(new SecurityManager()); + + if (log.isInfoEnabled()) + log.info("Set security manager"); + + } else { + + if (log.isInfoEnabled()) + log.info("Security manager already in place: " + sm.getClass()); + + } + + } + + /** + * This method handles fatal exceptions for the server. + * <p> + * The default implementation logs the throwable, invokes + * {@link #shutdownNow()} to terminate any processing and release all + * resources, wraps the throwable as a runtime exception and rethrows the + * wrapped exception. + * <p> + * This implementation MAY be overridden to invoke {@link System#exit(int)} + * IFF it is known that the server is being invoked from a command line + * context. However in no case should execution be allowed to return to the + * caller. 
+ */ + protected void fatal(final String msg, final Throwable t) { + + log.fatal(msg, t); + + try { + + shutdownNow(false/* destroy */); + + } catch (Throwable t2) { + + log.error(this, t2); + + } + + throw new RuntimeException( msg, t ); + + } + + /** + * Note: AbstractServer(String[]) is private to ensure that the ctor + * hierarchy always passes down the variant which accepts the {@link LifeCycle} + * as well. This simplifies additional initialization in subclasses. + */ + @SuppressWarnings("unused") + private AbstractServer(final String[] args) { + + throw new UnsupportedOperationException(); + + } + + /** + * Server startup reads {@link Configuration} data from the file or URL + * named by <i>args</i> and applies any optional overrides, starts the + * service, and advertises the service for discovery. Aside from the server + * class to start, the behavior is more or less entirely parameterized by + * the {@link Configuration}. + * + * @param args + * Either the command line arguments or the arguments from the + * {@link ServiceDescriptor}. Either way they identify the jini + * {@link Configuration} (you may specify either a file or URL) + * and optional overrides for that {@link Configuration}. + * @param lifeCycle + * The life cycle object. This is used if the server is started + * by the jini {@link ServiceStarter}. Otherwise specify a + * {@link FakeLifeCycle}. + * + * @see NonActivatableServiceDescriptor + */ + protected AbstractServer(final String[] args, final LifeCycle lifeCycle) { + + // Show the copyright banner during startup. + Banner.banner(); + + if (lifeCycle == null) + throw new IllegalArgumentException(); + + this.lifeCycle = lifeCycle; + + setSecurityManager(); + + Thread.setDefaultUncaughtExceptionHandler( + new Thread.UncaughtExceptionHandler() { + public void uncaughtException(Thread t, Throwable e) { + log.warn("Uncaught exception in thread", e); + } + }); + + /* + * Read jini configuration & service properties + */ + + List<Entry> entries = null; + + final String COMPONENT = getClass().getName(); + final String[] groups; + final LookupLocator[] locators; + try { + + config = ConfigurationProvider.getInstance(args); + + groups = (String[]) config.getEntry( + COMPONENT, ConfigurationOptions.GROUPS, String[].class); + + locators = (LookupLocator[]) config.getEntry( + COMPONENT, ConfigurationOptions.LOCATORS, LookupLocator[].class); + + // convert Entry[] to a mutable list. + entries = new LinkedList<Entry>(Arrays.asList((Entry[]) config + .getEntry(COMPONENT, ConfigurationOptions.ENTRIES, + Entry[].class, new Entry[0]))); + + if (log.isInfoEnabled()) { + log.info(ConfigurationOptions.GROUPS + "=" + Arrays.toString(groups)); + log.info(ConfigurationOptions.LOCATORS + "=" + Arrays.toString(locators)); + log.info(ConfigurationOptions.ENTRIES+ "=" + entries); + } + + /* + * Make sure that the parent directory exists. + * + * Note: the parentDir will be null if the serviceIdFile is in the + * root directory or if it is specified as a filename without any + * parents in the path expression. Note that the file names a file + * in the current working directory in the latter case and the root + * always exists in the former - and in both of those cases we do + * not have to create the parent directory. 
+ */ + serviceDir = (File) config.getEntry(COMPONENT, + ConfigurationOptions.SERVICE_DIR, File.class); + + if (serviceDir != null && !serviceDir.exists()) { + + log.warn("Creating: " + serviceDir); + + serviceDir.mkdirs(); + + } + + // The file on which the ServiceID will be written. + serviceIdFile = new File(serviceDir, "service.id"); + + // The lock file. + lockFile = new File(serviceDir, ".lock"); + + /* + * Attempt to acquire an exclusive lock on a file in the same + * directory as the serviceIdFile. + */ + acquireFileLock(); + + writePIDFile(pidFile = new File(serviceDir, "pid")); + + /* + * Make sure that there is a Name and Hostname associated with the + * service. If a ServiceID was pre-assigned in the Configuration + * then we will extract that also. + */ + { + + String serviceName = null; + + String hostname = null; + + UUID serviceUUID = null; + + for (Entry e : entries) { + + if (e instanceof Name && serviceName == null) { + + // found a name. + serviceName = ((Name) e).name; + + } + + if (e instanceof Hostname && hostname == null) { + + hostname = ((Hostname) e).hostname; + + } + + if (e instanceof ServiceUUID && serviceUUID == null) { + + serviceUUID = ((ServiceUUID) e).serviceUUID; + + } + + } + + // if serviceName not given then set it now. + if (serviceName == null) { + + // assign a default service name. + + final String defaultName = getClass().getName() + + "@" + + AbstractStatisticsCollector.fullyQualifiedHostName + + "#" + hashCode(); + + serviceName = defaultName; + + // add to the Entry[]. + entries.add(new Name(serviceName)); + + } + + // set the field on the class. + this.serviceName = serviceName; + + // if hostname not given then set it now. + if (hostname == null) { + + /* + * @todo This is a best effort during startup and unchanging + * thereafter. We should probably report all names for the + * host and report the current names for the host. However + * there are a number of issues where similar data are not + * being updated which could lead to problems if host name + * assignments were changed. + */ + + hostname = AbstractStatisticsCollector.fullyQualifiedHostName; + + // add to the Entry[]. + entries.add(new Hostname(hostname)); + + } + + // set the field on the class. + this.hostname = hostname; + + // if serviceUUID assigned then set ServiceID from it now. + if (serviceUUID != null) { + + // set serviceID. + this.serviceID = JiniUtil.uuid2ServiceID(serviceUUID); + + if (!serviceIdFile.exists()) { + + // write the file iff it does not exist. + writeServiceIDOnFile(this.serviceID); + + } else { + /* + * The file exists, so verify that it agrees with the + * assigned ServiceID. + */ + try { + final ServiceID tmp = readServiceId(serviceIdFile); + if (!this.serviceID.equals(tmp)) { + /* + * The assigned ServiceID and ServiceID written + * on the file do not agree. + */ + throw new RuntimeException( + "Entry has ServiceID=" + this.serviceID + + ", but file as ServiceID=" + + tmp); + } + } catch (IOException e1) { + throw new RuntimeException(e1); + } + } + + } else if (!serviceIdFile.exists()) { + + /* + * Since nobody assigned us a ServiceID and since there is + * none on record in the [serviceIdFile], we assign one now + * ourselves. + */ + + // set serviceID. + this.serviceID = JiniUtil.uuid2ServiceID(UUID.randomUUID()); + + // write the file iff it does not exist. + writeServiceIDOnFile(this.serviceID); + + } + + } + + /* + * Extract how the service will provision itself from the + * Configuration. 
+ */ + + // The exporter used to expose the service proxy. + exporter = (Exporter) config.getEntry(// + getClass().getName(), // component + ConfigurationOptions.EXPORTER, // name + Exporter.class, // type (of the return object) + /* + * The default exporter is a BasicJeriExporter using a + * TcpServerEnpoint. + */ + new BasicJeriExporter(TcpServerEndpoint.getInstance(0), + new BasicILFactory()) + ); + + if (serviceIdFile.exists()) { + + try { + + final ServiceID serviceIDFromFile = readServiceId(serviceIdFile); + + if (this.serviceID == null) { + + // set field on class. + this.serviceID = serviceIDFromFile; + + } else if (!this.serviceID.equals(serviceIDFromFile)) { + + /* + * This is a paranoia check on the Configuration and the + * serviceIdFile. The ServiceID should never change so + * these values should remain in agreement. + */ + + throw new ConfigurationException( + "ServiceID in Configuration does not agree with the value in " + + serviceIdFile + " : Configuration=" + + this.serviceID + ", serviceIdFile=" + + serviceIDFromFile); + + } + + } catch (IOException ex) { + + fatal("Could not read serviceID from existing file: " + + serviceIdFile + ": " + this, ex); + throw new AssertionError();// keeps compiler happy. + + } + + } else { + + if (log.isInfoEnabled()) + log.info("New service instance."); + + } + + } catch (ConfigurationException ex) { + + fatal("Configuration error: " + this, ex); + throw new AssertionError();// keeps compiler happy. + } + + /* + * The runtime shutdown hook appears to be a robust way to handle ^C by + * providing a clean service termination. + * + * Note: This is setup before we start any async threads, including + * service discovery. + */ + Runtime.getRuntime().addShutdownHook(new ShutdownThread(this)); + + /* + * Create the service object. + */ + try { + + /* + * Note: By creating the service object here rather than outside of + * the constructor we potentially create problems for subclasses of + * AbstractServer since their own constructor will not have been + * executed yet. + * + * Some of those problems are worked around using a JiniClient to + * handle all aspects of service discovery (how this service locates + * the other services in the federation). + * + * Note: If you explicitly assign values to those clients when the + * fields are declared, e.g., [timestampServiceClient=null] then the + * ctor will overwrite the values set by [newService] since it is + * running before those initializations are performed. This is + * really crufty, may be JVM dependent, and needs to be refactored + * to avoid this subclass ctor init problem. + */ + + if (log.isInfoEnabled()) + log.info("Creating service impl..."); + + // init. + impl = newService(config); + + if (log.isInfoEnabled()) + log.info("Service impl is " + impl); + + } catch(Exception ex) { + + fatal("Could not start service: "+this, ex); + throw new AssertionError();// keeps compiler happy. + } + + try { + + /* + * Note: This class will perform multicast discovery if ALL_GROUPS + * is specified and otherwise requires you to specify one or more + * unicast locators (URIs of hosts running discovery services). As + * an alternative, you can use LookupDiscovery, which always does + * multicast discovery. + */ + lookupDiscoveryManager = new LookupDiscoveryManager(groups, + locators, this /* DiscoveryListener */, config); + + /* + * Setup a helper class that will be notified as services join or + * leave the various registrars to which the data server is + * listening. 
+ */ + try { + + serviceDiscoveryManager = new ServiceDiscoveryManager( + lookupDiscoveryManager, new LeaseRenewalManager(), + config); + + } catch (IOException ex) { + + throw new RuntimeException( + "Could not initiate service discovery manager", ex); + + } + + } catch (IOException ex) { + + fatal("Could not setup discovery", ex); + throw new AssertionError();// keep the compiler happy. + + } catch (ConfigurationException ex) { + + fatal("Could not setup discovery", ex); + throw new AssertionError();// keep the compiler happy. + + } + + /* + * Export a proxy object for this service instance. + * + * Note: This must be done before we start the join manager since the + * join manager will register the proxy. + */ + try { + + proxy = exporter.export(impl); + + if (log.isInfoEnabled()) + log.info("Proxy is " + proxy + "(" + proxy.getClass() + ")"); + + } catch (ExportException ex) { + + fatal("Export error: "+this, ex); + throw new AssertionError();// keeps compiler happy. + } + + /* + * Start the join manager. + */ + try { + + assert proxy != null : "No proxy?"; + + final Entry[] attributes = entries.toArray(new Entry[0]); + + if (this.serviceID != null) { + + /* + * We read the serviceID from local storage (either the + * serviceIDFile and/or the Configuration). + */ + + joinManager = new JoinManager(proxy, // service proxy + attributes, // attr sets + serviceID, // ServiceID + getDiscoveryManagement(), // DiscoveryManager + new LeaseRenewalManager(), // + config); + + } else { + + /* + * We are requesting a serviceID from the registrar. + */ + + joinManager = new JoinManager(proxy, // service proxy + attributes, // attr sets + this, // ServiceIDListener + getDiscoveryManagement(), // DiscoveryManager + new LeaseRenewalManager(), // + config); + + } + + } catch (Exception ex) { + + fatal("JoinManager: " + this, ex); + throw new AssertionError();// keeps compiler happy. + } + + /* + * Note: This is synchronized in case set via listener by the + * JoinManager, which would be rather fast action on its part. + */ + synchronized (this) { + + if (this.serviceID != null) { + + /* + * Notify the service that it's service UUID has been set. + * + * @todo Several things currently depend on this notification. + * In effect, it is being used as a proxy for the service + * registration event. + */ + + notifyServiceUUID(serviceID); + + } + + } + + } + + /** + * Simple representation of state (non-blocking, safe). Some fields reported + * in the representation may be <code>null</code> depending on the server + * state. + */ + public String toString() { + + // note: MAY be null. + final ServiceID serviceID = this.serviceID; + + return getClass().getName() + + "{serviceName=" + + serviceName + + ", hostname=" + + hostname + + ", serviceUUID=" + + (serviceID == null ? "null" : "" + + JiniUtil.serviceID2UUID(serviceID)) + "}"; + + } + + /** + * Attempt to acquire an exclusive lock on a file in the same directory as + * the {@link #serviceIdFile} (non-blocking). This is designed to prevent + * concurrent service starts and service restarts while the service is + * already running. + * <p> + * Note: The {@link FileLock} (if acquired) will be automatically released + * if the process dies. It is also explicitly closed by + * {@link #shutdownNow()}. DO NOT use advisory locks since they are not + * automatically removed if the service dies. + * + * @throws RuntimeException + * if the file is already locked by another process. 
+ */ + private void acquireFileLock() { + + try { + + lockFileRAF = new RandomAccessFile(lockFile, "rw"); + + } catch (IOException ex) { + + /* + * E.g., due to permissions, etc. + */ + + throw new RuntimeException("Could not open: file=" + lockFile, ex); + + } + + try { + + fileLock = lockFileRAF.getChannel().tryLock(); + + if (fileLock == null) { + + /* + * Note: A null return indicates that someone else holds the + * lock. + */ + + try { + lockFileRAF.close(); + } catch (Throwable t) { + // ignore. + } finally { + lockFileRAF = null; + } + + throw new RuntimeException("Service already running: file=" + + lockFile); + + } + + } catch (IOException ex) { + + /* + * Note: This is true of NFS volumes. + */ + + log.warn("FileLock not supported: file=" + lockFile, ex); + + } + + } + + /** + * Writes the PID on a file in the service directory (best attempt to obtain + * the PID). If the server is run from the command line, then the pid will + * be the pid of the server. If you are running multiple servers inside of + * the same JVM, then the pid will be the same for each of those servers. + */ + private void writePIDFile(final File file) { + + try { + + // best guess at the PID of the JVM. + final String pid = Integer.toString(PIDUtil.getPID()); + + // open the file. + final FileOutputStream os = new FileOutputStream(file); + + try { + + // discard anything already in the file. + os.getChannel().truncate(0L); + + // write on the PID using ASCII characters. + os.write(pid.getBytes("ASCII")); + + // flush buffers. + os.flush(); + + } finally { + + // and close the file. + os.close(); + + } + + } catch (IOException ex) { + + log.warn("Could not write pid: file=" + file, ex); + + } + + } + + /** + * Unexports the {@link #proxy} - this is a NOP if the proxy is + * <code>null</code>. + * + * @param force + * When true, the object is unexported even if there are pending + * or in progress service requests. + * + * @return true iff the object is (or was) unexported. + * + * @see Exporter#unexport(boolean) + */ + synchronized protected boolean unexport(boolean force) { + + if (log.isInfoEnabled()) + log.info("force=" + force + ", proxy=" + proxy); + + try { + + if (proxy != null) { + + if (exporter.unexport(force)) { + + return true; + + } else { + + log.warn("Proxy was not unexported? : "+this); + + } + + } + + return false; + + } finally { + + proxy = null; + + } + + } + + /** + * Read and return the {@link ServiceID} from an existing local file. + * + * @param file + * The file whose contents are the serialized {@link ServiceID}. + * + * @return The {@link ServiceID} read from that file. + * + * @exception IOException + * if the {@link ServiceID} could not be read from the file. + */ + static public ServiceID readServiceId(final File file) throws IOException { + + final FileInputStream is = new FileInputStream(file); + + try { + + final ServiceID serviceID = new ServiceID(new DataInputStream(is)); + + if (log.isInfoEnabled()) + log.info("Read ServiceID=" + serviceID + "(" + + JiniUtil.serviceID2UUID(serviceID) + ") from " + file); + + return serviceID; + + } finally { + + is.close(); + + } + + } + + /** + * This method is responsible for saving the {@link ServiceID} on stable + * storage when it is invoked. It will be invoked iff the {@link ServiceID} + * was not defined and one was therefore assigned. + * + * @param serviceID + * The assigned {@link ServiceID}. 
+ */ + synchronized public void serviceIDNotify(final ServiceID serviceID) { + + if (serviceID == null) + throw new IllegalArgumentException(); + + if (log.isInfoEnabled()) + log.info("serviceID=" + serviceID); + + if (this.serviceID != null && !this.serviceID.equals(serviceID)) { + + throw new IllegalStateException( + "ServiceID may not be changed: ServiceID=" + this.serviceID + + ", proposed=" + serviceID); + + } + + this.serviceID = serviceID; + + assert serviceIdFile != null : "serviceIdFile not defined?"; + + writeServiceIDOnFile(serviceID); + + notifyServiceUUID(serviceID); + + /* + * Update the Entry[] for the service registrars to reflect the assigned + * ServiceID. + */ + { + + final List<Entry> attributes = new LinkedList<Entry>(Arrays + .asList(joinManager.getAttributes())); + + final Iterator<Entry> itr = attributes.iterator(); + + while (itr.hasNext()) { + + final Entry e = itr.next(); + + if (e instanceof ServiceUUID) { + + itr.remove(); + + } + + } + + attributes.add(new ServiceUUID(JiniUtil.serviceID2UUID(serviceID))); + + joinManager.setAttributes(attributes.toArray(new Entry[0])); + + } + + } + + synchronized private void writeServiceIDOnFile(final ServiceID serviceID) { + + try { + + final DataOutputStream dout = new DataOutputStream( + new FileOutputStream(serviceIdFile)); + + try { + + serviceID.writeBytes(dout); + + dout.flush(); + + if (log.isInfoEnabled()) + log.info("ServiceID saved: file=" + serviceIdFile + + ", serviceID=" + serviceID); + + } finally { + + dout.close(); + + } + + } catch (Exception ex) { + + log.error("Could not save ServiceID : "+this, ex); + + } + + } + + /** + * Notify the {@link AbstractService} that it's service UUID has been set. + */ + synchronized protected void notifyServiceUUID(final ServiceID serviceID) { + + if (serviceID == null) + throw new IllegalArgumentException(); + + if (this.serviceID != null && !this.serviceID.equals(serviceID)) { + + throw new IllegalStateException( + "ServiceID may not be changed: ServiceID=" + this.serviceID + + ", proposed=" + serviceID); + + } + +// if(impl != null && impl instanceof AbstractService) { +// +// final UUID serviceUUID = JiniUtil.serviceID2UUID(serviceID); +// +// final AbstractService service = ((AbstractService) impl); +// +// service.setServiceUUID(serviceUUID); +// +// } + + this.serviceID = serviceID; + + } + + /** + * Logs a message. If the service is no longer registered with any + * {@link ServiceRegistrar}s then logs an error message. + * <p> + * Note: a service that is no longer registered with any + * {@link ServiceRegistrar}s is no longer discoverable but it remains + * accessible to clients which already have its proxy. If a new + * {@link ServiceRegistrar} accepts registration by the service then it will + * become discoverable again as well. + * <p> + * Note: This is only invoked if the automatic lease renewal by the lease + * manager is denied by the service registrar. + */ + public void notify(final LeaseRenewalEvent event) { + + log.warn("Lease could not be renewed: " + this + " : " + event); + + /* + * Note: Written defensively in case this.joinManager is asynchronously + * cleared or terminated. 
+ */ + try { + + final JoinManager joinManager = this.joinManager; + + if (joinManager != null) { + + final ServiceRegistrar[] a = joinManager.getJoinSet(); + + if (a.length == 0) { + + log + .error("Service not registered with any service registrars"); + + } else { + + if (log.isInfoEnabled()) + log.info("Service remains registered with " + a.length + + " service registrars"); + + } + + } + + } catch (Exception ex) { + + log.error("Problem obtaining joinSet? : " + this, ex); + + } + + } + + /** + * Shutdown the server, including the service and any jini processing. It + * SHOULD always be safe to invoke this method. The implementation SHOULD be + * synchronized and SHOULD conditionally handle each class of asynchronous + * processing or resource, terminating or releasing it iff it has not + * already been terminated or released. + * <p> + * This implementation: + * <ul> + * <li>unregisters the proxy, making the service unavailable for future + * requests and terminating any existing requests</li> + * <li>{@link IServiceShutdown#shutdownNow()} is invoke if the service + * implements {@link IServiceShutdown}</li> + * <li>terminates any asynchronous jini processing on behalf of the server, + * including service and join management</li> + * <li>Handles handshaking with the {@link NonActivatableServiceDescriptor}</li> + * </ul> + * <p> + * Note: All errors are trapped, logged, and ignored. + * <p> + * Note: Normally, extended shutdown behavior is handled by the service + * implementation, not the server. However, subclasses MAY extend this + * method to terminate any additional processing and release any additional + * resources, taking care to (a) declare the method as + * <strong>synchronized</strong>, conditionally halt any asynchonrous + * processing not already halted, conditionally release any resources not + * already released, and trap, log, and ignored all errors. + * <p> + * Note: This is run from within the {@link ShutdownThread} in response to a + * request to destroy the service. + * + * @param destroy + * When <code>true</code> the persistent state associated with + * the service is also destroyed. + */ + synchronized public void shutdownNow(final boolean destroy) { + + if (shuttingDown) { + + // break recursion. + return; + + } + + shuttingDown = true; + + /* + * Unexport the proxy, making the service no longer available. + * + * Note: If you do not do this then the client can still make requests + * even after you have terminated the join manager and the service is no + * longer visible in the service browser. + */ + try { + + if (log.isInfoEnabled()) + log.info("Unexporting the service proxy."); + + unexport(true/* force */); + + } catch (Throwable ex) { + + log.error("Problem unexporting service: " + this, ex); + + /* Ignore */ + + } + + if (destroy && impl != null && impl instanceof IService) { + + final IService tmp = (IService) impl; + + /* + * Delegate to the service to destroy its persistent state. + */ + + try { + + tmp.destroy(); + + } catch (Throwable ex) { + + log.error("Problem with service destroy: " + this, ex); + + // ignore. + + } + + } + + /* + * Invoke the service's own logic to shutdown its processing. + */ + if (impl != null && impl instanceof IServiceShutdown) { + + try { + + final IServiceShutdown tmp = (IServiceShutdown) impl; + + if (tmp != null && tmp.isOpen()) { + + /* + * Note: The test on isOpen() for the service is deliberate. 
+ * The service implementations invoke server.shutdownNow() + * from their shutdown() and shutdownNow() methods in order + * to terminate the jini facets of the service. Therefore we + * test in service.isOpen() here in order to avoid a + * recursive invocation of service.shutdownNow(). + */ + + tmp.shutdownNow(); + + } + + } catch(Throwable ex) { + + log.error("Problem with service shutdown: " + this, ex); + + // ignore. + + } + + } + + // discard reference to the service implementation object. + impl = null; + + /* + * Terminate manager threads. + */ + + try { + + terminate(); + + } catch (Throwable ex) { + + log.error("Could not terminate async threads (jini, zookeeper): " + + this, ex); + + // ignore. + + } + + /* + * Hand-shaking with the NonActivableServiceDescriptor. + */ + if (lifeCycle != null) { + + try { + + lifeCycle.unregister(this); + + } catch (Throwable ex) { + + log.error("Could not unregister lifeCycle: " + this, ex); + + // ignore. + + } finally { + + ... [truncated message content] |
From: <tho...@us...> - 2012-09-03 09:35:42
|
Revision: 6507
http://bigdata.svn.sourceforge.net/bigdata/?rev=6507&view=rev
Author: thompsonbry
Date: 2012-09-03 09:35:29 +0000 (Mon, 03 Sep 2012)

Log Message:
-----------
- Fixed bugs in awaitQuorum(), awaitBreak(), and awaitEnoughJoinedToMeet() where the logic for tracking the time remaining against a timeout was broken. Added unit tests for awaitQuorum() and awaitBreak() with timeouts to verify that we wait an appropriate amount of time, but not longer. (A sketch of the corrected time-remaining bookkeeping appears after this log message.)

- done: Set up the class server and registrar (lookup). This can be done using the examples below; you do not need the lookupstarter jar. This can be accomplished using the ClassServer and LookupStarter classes.

  startHttpd:
  [echo] java -jar /var/lib/jenkins/workspace/bigdata-release-1.2.0/BIGDATA_RELEASE_1_2_0/dist/bigdata/lib/classserver.jar -verbose -stoppable -port 23333 -dir /var/lib/jenkins/workspace/bigdata-release-1.2.0/BIGDATA_RELEASE_1_2_0/dist/bigdata/lib-dl

  startLookup:
  [echo] java -Dapp.home=/var/lib/jenkins/workspace/bigdata-release-1.2.0/BIGDATA_RELEASE_1_2_0 -Djini.lib=/var/lib/jenkins/workspace/bigdata-release-1.2.0/BIGDATA_RELEASE_1_2_0/dist/bigdata/lib -Djini.lib.dl=/var/lib/jenkins/workspace/bigdata-release-1.2.0/BIGDATA_RELEASE_1_2_0/dist/bigdata/lib-dl -Djava.security.policy=/var/lib/jenkins/workspace/bigdata-release-1.2.0/BIGDATA_RELEASE_1_2_0/dist/bigdata/var/config/policy/policy.all -Djava.security.debug=off -Djava.protocol.handler.pkgs=net.jini.url -Dlog4j.configuration=resources/logging/log4j.properties -Dcodebase.port=23333 -Djava.net.preferIPv4Stack=true -Dbigdata.fedname=bigdata.test.group-bigdata10 -Ddefault.nic=${default.nic} -jar /var/lib/jenkins/workspace/bigdata-release-1.2.0/BIGDATA_RELEASE_1_2_0/bigdata-test/lib/lookupstarter.jar

  [echo] test.zookeeper.installDir=/var/lib/jenkins/zookeeper-3.3.3
  [echo] bin/zkServer.(sh|cmd) start
  [exec] JMX enabled by default
  [exec] Using config: /var/lib/jenkins/zookeeper-3.3.3/bin/../conf/zoo.cfg
  [exec] Starting zookeeper ...
  [exec] STARTED

- Set up discovery for HAGlue services and use it to drive awaitQuorum() and awaitBreak(). [I've done this in StaticQuorum, but it does not implement enough of the Quorum semantics. In particular, there needs to be a QuorumMember and a QuorumActor. Perhaps the HAGlue and HAJournal?] I tried to integrate with the ZkQuorumImpl instead, but I was not observing quorum events; perhaps I was not initiating things through member.add() to have each node add itself to the quorum. Yep. Now it sees events.

- zkQuorum may transiently enter an unforeseen state due to non-ephemeral data and/or ephemeral data with a service restart cycle that is shorter than the timeout for the ephemeral heartbeat. I have added a method that is called from AbstractQuorum#start(C client) to obtain the then-current value of the lastValidToken (if any) from the remote quorum. There may be similar problems with jini if we do not unregister any old service for the same ServiceID before registering the new proxy (and there could be similar problems if we wind up with a temporary network partition of the jini service registrars).

- done: Error when we attempt to reorder the pipeline (A then B). This was due to an unexported Future.

- done: Journal.quorumToken was never set.

- done: Each service that we start should run the NSS (in its embedded mode).

This change was made in the development branch and will be present in release 1.2.2. Parameter names are now camel case with an initial lower-case letter. For example, {{{ property-file }}} is now {{{ propertyFile }}}. I have changed the default web.xml file and the ConfigParams file. People upgrading must either accept (and then, if necessary, edit) the new web.xml, or edit their existing web.xml during the upgrade so that parameter names are consistent with this change.

@see https://sourceforge.net/apps/trac/bigdata/ticket/596 (Change web.xml parameter names to be consistent with Jini/River)
@see https://sourceforge.net/apps/trac/bigdata/ticket/589 (2-Phase commit)
@see https://sourceforge.net/apps/trac/bigdata/ticket/530 (Journal HA)
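The first item above is easy to get wrong in general: when re-waiting on a condition with a timeout, the remaining time must be recomputed as (total budget - elapsed), not by repeatedly subtracting the elapsed time from an already-reduced running value. A minimal, self-contained sketch of the corrected bookkeeping follows; the class and method names are illustrative placeholders, not the bigdata API, and the pattern simply mirrors the corrected awaitQuorum() shown in the AbstractQuorum diff further down.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch only: correct "time remaining" tracking across repeated waits.
public class AwaitWithTimeoutSketch {

    private final ReentrantLock lock = new ReentrantLock();
    private final Condition changed = lock.newCondition();
    private volatile long token = -1L; // -1 plays the role of "no quorum" here.

    /** Assign the token and wake up any waiters. */
    public void assign(final long newToken) {
        lock.lock();
        try {
            token = newToken;
            changed.signalAll();
        } finally {
            lock.unlock();
        }
    }

    /**
     * Wait until a token is assigned, but no longer than the given timeout.
     * The remaining time is always recomputed from the original budget and
     * the elapsed time, which is the fix described in the log message.
     */
    public long awaitToken(final long timeout, final TimeUnit unit)
            throws InterruptedException, TimeoutException {
        final long begin = System.nanoTime();
        final long nanos = unit.toNanos(timeout); // total budget; never modified.
        long remaining = nanos;
        if (!lock.tryLock(remaining, TimeUnit.NANOSECONDS))
            throw new TimeoutException();
        try {
            // remaining = budget - elapsed.
            remaining = nanos - (System.nanoTime() - begin);
            while (token == -1L) {
                if (!changed.await(remaining, TimeUnit.NANOSECONDS))
                    throw new TimeoutException();
                remaining = nanos - (System.nanoTime() - begin);
            }
            return token;
        } finally {
            lock.unlock();
        }
    }
}

The same recompute-from-budget pattern is applied to awaitBreak() and awaitEnoughJoinedToMeet() in the AbstractQuorum diff below.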
Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/pipeline/HAReceiveService.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/QuorumListener.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/journal/ha/AbstractHAJournalTestCase.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/MockQuorumFixture.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestAll.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestSingletonQuorumSemantics.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/ServiceStarter.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/config/ServiceConfiguration.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/util/ConfigMath.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/AbstractServer.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config
    branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config
    branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/StaticQuorum.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/service/jini/JiniClient.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/service/jini/lookup/AbstractCachingServiceClient.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/service/jini/lookup/ServiceCache.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/test/com/bigdata/service/jini/util/JiniCoreServicesHelper.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/ConfigParams.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/NanoSparqlServer.java
    branches/BIGDATA_RELEASE_1_2_0/bigdata-war/src/resources/WEB-INF/web.xml

Added Paths:
-----------
    branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config
    branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalDiscoveryClient.java

Modified:
branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/pipeline/HAReceiveService.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/pipeline/HAReceiveService.java 2012-09-03 07:41:45 UTC (rev 6506) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/pipeline/HAReceiveService.java 2012-09-03 09:35:29 UTC (rev 6507) @@ -300,7 +300,7 @@ server.socket().bind(addrSelf); server.configureBlocking(false); if(log.isInfoEnabled()) - log.info("Listening on" + addrSelf); + log.info("Listening on: " + addrSelf); runNoBlock(server); } catch (InterruptedException e) { /* Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java 2012-09-03 07:41:45 UTC (rev 6506) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java 2012-09-03 09:35:29 UTC (rev 6507) @@ -584,9 +584,12 @@ b.flip(); assert b.remaining() > 0 : "Empty cache: " + cache; // send to 1st follower. - remoteWriteFuture = ((QuorumPipeline<HAPipelineGlue>) quorum - .getMember()).replicate(cache - .newHAWriteMessage(quorumToken), b); + @SuppressWarnings("unchecked") + final QuorumPipeline<HAPipelineGlue> quorumMember = (QuorumPipeline<HAPipelineGlue>) quorum + .getMember(); + assert quorumMember != null : "No quorum member?"; + remoteWriteFuture = quorumMember.replicate( + cache.newHAWriteMessage(quorumToken), b); counters.get().nsend++; } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-03 07:41:45 UTC (rev 6506) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-03 09:35:29 UTC (rev 6507) @@ -2607,8 +2607,6 @@ * the store. */ final long commitRecordIndexAddr = _commitRecordIndex.writeCheckpoint(); - - if (quorum != null) { /* @@ -4522,8 +4520,16 @@ * if (quorum.isHighlyAvailable() && quorum.isQuorumMet() && * quorum.getClient().isFollower(quorumToken)) { return true; } */ - private volatile long quorumToken; + private volatile long quorumToken = Quorum.NO_QUORUM; + protected final long getQuorumToken() { + return quorumToken; + } + protected void setQuorumToken(final long newValue) { + final long oldValue = quorumToken; + quorumToken = newValue; + } + /** * The current {@link Quorum} (if any). */ @@ -4606,6 +4612,21 @@ } + /** + * Return a proxy object for a {@link Future} suitable for use in an RMI + * environment (the default implementation returns its argument). + * + * @param future + * The future. + * + * @return The proxy for that future. 
+ */ + protected <E> Future<E> getProxy(final Future<E> future) { + + return future; + + } + /* * @todo if the leader is synchronized with the followers then they * should all agree on whether they should be writing rootBlock0 or @@ -4709,7 +4730,7 @@ } - return ft; + return getProxy(ft); } @@ -4763,7 +4784,7 @@ */ ft.run(); - return ft; + return getProxy(ft); } @@ -4790,7 +4811,7 @@ */ ft.run(); - return ft; + return getProxy(ft); } @@ -4842,7 +4863,7 @@ ft.run(); - return ft; + return getProxy(ft); } @@ -4863,8 +4884,11 @@ throw new RuntimeException(e); } - return getQuorum().getClient().receiveAndReplicate(msg); + final Future<Void> ft = getQuorum().getClient() + .receiveAndReplicate(msg); + return getProxy(ft); + } public byte[] getRootBlock(final UUID storeId) { @@ -4889,7 +4913,7 @@ } }, null); ft.run(); - return ft; + return getProxy(ft); } /** @@ -4904,7 +4928,7 @@ } }, null/* result */); getExecutorService().execute(ft); - return ft; + return getProxy(ft); } }; Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java 2012-09-03 07:41:45 UTC (rev 6506) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java 2012-09-03 09:35:29 UTC (rev 6507) @@ -48,6 +48,12 @@ .getLogger(AbstractQuorum.class); /** + * Dedicated logger for quorum state. + */ + static protected final transient Logger qlog = Logger + .getLogger("com.bigdata.quorum.quorumState"); + + /** * Text when an operation is not permitted because the service is not a * quorum member. */ @@ -83,7 +89,12 @@ static protected final transient String ERR_CAN_NOT_MEET = "Quorum can not meet : "; /** - * A timeout used to await some precondition to become true. + * Message when a quorum breaks.. + */ + static protected final transient String ERR_QUORUM_BREAK = "Quorum break"; + + /** + * A timeout (nanoseconds) used to await some precondition to become true. * * @see #awaitEnoughJoinedToMeet() * @@ -284,14 +295,21 @@ /** * When true, events are send synchronously in the thread of the watcher. * - * @toso This makes it easy to write the unit tests since we do not need to - * "wait" for events to arrive (and they are much faster without the - * eventService contending for the {@link #lock} all the time). Is - * this Ok as a long standing policy? [I think so. These events are - * only going out to local objects.] - * <p> - * Note that the events are not guaranteed to arrive in the same order - * that the internal state changes are made. + * FIXME This makes it easy to write the unit tests since we do not need to + * "wait" for events to arrive (and they are much faster without the + * eventService contending for the {@link #lock} all the time). Is this Ok + * as a long standing policy? [I think so. These events are only going out + * to local objects.] + * <p> + * Note that the events are not guaranteed to arrive in the same order that + * the internal state changes are made. + * <p> + * However, sending the event in the caller's thread means that the caller + * will be holding the {@link #lock} and thus the receiver MUST NOT execute + * any blocking code. Specifically, it must not cause a different thread to + * execute code that would content for the {@link #lock}. That means that + * the receiver must not wait on any other thread to perform an action that + * touches the {@link AbstractQuorum}. 
*/ private final boolean sendSynchronous = true; @@ -376,6 +394,9 @@ .newCachedThreadPool(new DaemonThreadFactory( "WatcherActionService")); } + // Reach out to the durable quorum and get the lastValidToken + this.lastValidToken = getLastValidTokenFromQuorumState(client); + // Setup the watcher. this.watcher = newWatcher(client.getLogicalServiceId()); this.eventService = (sendSynchronous ? null : Executors .newSingleThreadExecutor(new DaemonThreadFactory( @@ -402,6 +423,16 @@ } } + /** + * Initialization method must return the lastValidToken from the durable + * quorum state and {@link #NO_QUORUM} if there is no durable state. + */ + protected long getLastValidTokenFromQuorumState(final C client) { + + return NO_QUORUM; + + } + public void terminate() { boolean interrupted = false; lock.lock(); @@ -655,7 +686,7 @@ } } - public void addListener(final QuorumListener listener) { + final public void addListener(final QuorumListener listener) { if (listener == null) throw new IllegalArgumentException(); if (listener == client) @@ -663,7 +694,7 @@ listeners.add(listener); } - public void removeListener(final QuorumListener listener) { + final public void removeListener(final QuorumListener listener) { if (listener == null) throw new IllegalArgumentException(); if (listener == client) @@ -671,7 +702,7 @@ listeners.remove(listener); } - public int replicationFactor() { + final public int replicationFactor() { // Note: [k] is final. return k; } @@ -680,7 +711,7 @@ return replicationFactor() > 1; } - public long lastValidToken() { + final public long lastValidToken() { lock.lock(); try { return lastValidToken; @@ -689,7 +720,7 @@ } } - public UUID[] getMembers() { + final public UUID[] getMembers() { lock.lock(); try { return members.toArray(new UUID[0]); @@ -698,7 +729,7 @@ } } - public Map<Long, UUID[]> getVotes() { + final public Map<Long, UUID[]> getVotes() { lock.lock(); try { /* @@ -720,7 +751,7 @@ } } - public Long getCastVote(final UUID serviceId) { + final public Long getCastVote(final UUID serviceId) { lock.lock(); try { final Iterator<Map.Entry<Long, LinkedHashSet<UUID>>> itr = votes @@ -804,7 +835,7 @@ return -1; } - public UUID[] getJoined() { + final public UUID[] getJoined() { lock.lock(); try { return joined.toArray(new UUID[0]); @@ -813,7 +844,7 @@ } } - public UUID[] getPipeline() { + final public UUID[] getPipeline() { lock.lock(); try { return pipeline.toArray(new UUID[0]); @@ -822,7 +853,7 @@ } } - public UUID getLastInPipeline() { + final public UUID getLastInPipeline() { lock.lock(); try { final Iterator<UUID> itr = pipeline.iterator(); @@ -836,7 +867,7 @@ } } - public UUID[] getPipelinePriorAndNext(final UUID serviceId) { + final public UUID[] getPipelinePriorAndNext(final UUID serviceId) { if (serviceId == null) throw new IllegalArgumentException(); lock.lock(); @@ -863,7 +894,7 @@ } } - public UUID getLeaderId() { + final public UUID getLeaderId() { lock.lock(); try { if (!isQuorumMet()) { @@ -918,7 +949,7 @@ * This watches the current token and will return as soon as the token is * valid. 
*/ - public long awaitQuorum() throws InterruptedException, + final public long awaitQuorum() throws InterruptedException, AsynchronousQuorumCloseException { lock.lock(); try { @@ -933,20 +964,21 @@ } } - public long awaitQuorum(final long timeout, final TimeUnit units) + final public long awaitQuorum(final long timeout, final TimeUnit units) throws InterruptedException, TimeoutException, AsynchronousQuorumCloseException { final long begin = System.nanoTime(); - long nanos = units.toNanos(timeout); - if(!lock.tryLock(nanos, TimeUnit.NANOSECONDS)) + final long nanos = units.toNanos(timeout); + long remaining = nanos; + if(!lock.tryLock(remaining, TimeUnit.NANOSECONDS)) throw new TimeoutException(); try { - // remaining -= (now - begin) [aka elapsed] - nanos -= System.nanoTime() - begin; + // remaining = nanos - (now - begin) [aka elapsed] + remaining = nanos - (System.nanoTime() - begin); while (token == NO_QUORUM && client != null) { - if(!quorumChange.await(nanos,TimeUnit.NANOSECONDS)) + if(!quorumChange.await(remaining,TimeUnit.NANOSECONDS)) throw new TimeoutException(); - nanos -= System.nanoTime() - begin; + remaining = nanos - (System.nanoTime() - begin); } if (client == null) throw new AsynchronousQuorumCloseException(); @@ -956,18 +988,7 @@ } } - /** - * {@inheritDoc} - * - * @todo This triggers when it notices that the quorum is currently broken - * rather than when it notices that the quorum on entry had broken. It - * will fail to return if a quorum break is cured before it examines - * [token] again. Regardless, there is no guarantee that the quorum is - * still broken by the time the caller looks at the quorum again. - * <p> - * Are these the desired semantics for this public method? - */ - public void awaitBreak() throws InterruptedException, + final public void awaitBreak() throws InterruptedException, AsynchronousQuorumCloseException { lock.lock(); try { @@ -982,20 +1003,21 @@ } } - public void awaitBreak(final long timeout, final TimeUnit units) + final public void awaitBreak(final long timeout, final TimeUnit units) throws InterruptedException, TimeoutException, AsynchronousQuorumCloseException { final long begin = System.nanoTime(); - long nanos = units.toNanos(timeout); - if (!lock.tryLock(nanos, TimeUnit.NANOSECONDS)) + final long nanos = units.toNanos(timeout); + long remaining = nanos; + if (!lock.tryLock(remaining, TimeUnit.NANOSECONDS)) throw new TimeoutException(); try { - // remaining -= (now - begin) [aka elapsed] - nanos -= System.nanoTime() - begin; + // remaining = nanos - (now - begin) [aka elapsed] + remaining = nanos - (System.nanoTime() - begin); while (token != NO_QUORUM && client != null) { - if (!quorumChange.await(nanos, TimeUnit.NANOSECONDS)) + if (!quorumChange.await(remaining, TimeUnit.NANOSECONDS)) throw new TimeoutException(); - nanos -= System.nanoTime() - begin; + remaining = nanos - (System.nanoTime() - begin); } if (client == null) throw new AsynchronousQuorumCloseException(); @@ -1017,17 +1039,18 @@ throw new IllegalMonitorStateException(); try { final long begin = System.nanoTime(); - long nanos = timeout; + final long nanos = timeout; + long remaining = nanos; while (joined.size() < ((k + 1) / 2)) { if (client == null) throw new AsynchronousQuorumCloseException(); - if (!joinedChange.await(nanos, TimeUnit.NANOSECONDS)) + if (!joinedChange.await(remaining, TimeUnit.NANOSECONDS)) throw new QuorumException( "Not enough joined services: njoined=" + joined.size() + " : " + AbstractQuorum.this); - // remaining -= (now - begin) [aka elapsed] - 
nanos -= System.nanoTime() - begin; + // remaining = nanos - (now - begin) [aka elapsed] + remaining = nanos - (System.nanoTime() - begin); } return; } catch (InterruptedException e) { @@ -1577,7 +1600,7 @@ "Concurrent set of the token: old=" + oldValue + ", new=" + token); } - log.warn("Quorum break."); + log.warn(ERR_QUORUM_BREAK); } private void conditionalSetToken(final long newValue) @@ -1744,15 +1767,13 @@ } finally { lock.unlock(); } - // @todo trace should be [false]. - boolean trace = false; - if (trace) { -// System.err.println("lastCommitTimeConsensus = "+lastCommitTime); -// System.err.println("vote =" + Arrays.toString(voteOrder)); - System.err.println("pipeline=" + Arrays.toString(pipeline)); - System.err.println("joined =" + Arrays.toString(joined)); - System.err.println("leader = " + leaderId); - System.err.println("self = " + serviceId); + if (qlog.isInfoEnabled()) { +// log.info("lastCommitTimeConsensus = "+lastCommitTime); +// log.info("vote =" + Arrays.toString(voteOrder)); + qlog.info("pipeline=" + Arrays.toString(pipeline)); + qlog.info("joined =" + Arrays.toString(joined)); + qlog.info("leader = " + leaderId); + qlog.info("self = " + serviceId); } boolean modified = false; for (int i = 0; i < pipeline.length; i++) { @@ -1769,7 +1790,7 @@ + serviceId); } try { - // ask it to move itself to the end of the pipeline. + // ask it to move itself to the end of the pipeline (RMI) ((HAPipelineGlue) otherService).moveToEndOfPipeline().get(); } catch (IOException ex) { throw new QuorumException( @@ -1784,10 +1805,10 @@ "Could not move service to end of the pipeline: serviceId=" + serviceId + ", otherId=" + otherId, e); } - if(trace) { - System.err.println("moved ="+otherId); - System.err.println("pipeline="+Arrays.toString(getPipeline())); - System.err.println("joined ="+Arrays.toString(getJoined())); + if(qlog.isInfoEnabled()) { + qlog.info("moved ="+otherId); + qlog.info("pipeline="+Arrays.toString(getPipeline())); + qlog.info("joined ="+Arrays.toString(getJoined())); } modified = true; } @@ -2220,7 +2241,8 @@ * actor-watcher reflex arc unless it is run in * another thread. */ - log.warn("First service will join"); + if (log.isInfoEnabled()) + log.info("First service will join"); doAction(new Runnable() {public void run() {actor.serviceJoin();}}); } else { if (clientId.equals(serviceId)) { @@ -2241,14 +2263,15 @@ final UUID waitsFor = voteOrder[index - 1]; if (joined.contains(waitsFor)) { // our client can join immediately. - log.warn("Follower will join: " - + AbstractQuorum.this - .toString()); + if (log.isInfoEnabled()) + log.info("Follower will join: " + + AbstractQuorum.this + .toString()); doAction(new Runnable() { public void run() { actor.serviceJoin(); - System.err - .println("After join: " + if(qlog.isInfoEnabled()) + qlog.info("After join: " + AbstractQuorum.this); } }); @@ -2352,15 +2375,28 @@ /* * The quorum has met. */ - log.warn("leader=" + leaderId + ", newToken=" + token + if (log.isInfoEnabled()) + log.info("leader=" + leaderId + ", newToken=" + token // + " : " + AbstractQuorum.this ); final QuorumMember<S> client = getClientAsMember(); if (client != null) { client.quorumMeet(token, leaderId); } - sendEvent(new E(QuorumEventEnum.QUORUM_MEET, lastValidToken, - token, leaderId)); + /* + * Note: If we send out an event here then any code path that + * reenters the AbstractQuorum from another thread seeking a + * lock will deadlock. 
Either the event must be dispatched after + * we release the lock (which might be held at multiple points + * in the call stack) or the dispatch of the event must not + * cause this thread to block. That could either be accomplished + * by handing off the event to a dispatcher thread or by + * requiring the receiver to perform a non-blocking operation + * when they get the event. + */ + final E e = new E(QuorumEventEnum.QUORUM_MEET, lastValidToken, + token, leaderId); + sendEvent(e); } finally { lock.unlock(); } @@ -2471,7 +2507,7 @@ token = NO_QUORUM; if (willBreak) { quorumChange.signalAll(); - log.warn("Quorum break"); + log.warn(ERR_QUORUM_BREAK); final QuorumMember<S> client = getClientAsMember(); if (client != null) { // Notify the client that the quorum broke. @@ -2568,10 +2604,9 @@ /* * Elect the leader. */ - log - .warn("Ready to elect leader or reorganize pipeline: " - + AbstractQuorum.this - .toString()); + if (log.isInfoEnabled()) + log.info("Ready to elect leader or reorganize pipeline: " + + AbstractQuorum.this.toString()); doAction(new Runnable() { public void run() { if (actor.reorganizePipeline()) { @@ -2584,16 +2619,17 @@ * properly organized and then try * again. */ - log - .warn("Reorganized the pipeline: " - + AbstractQuorum.this - .toString()); + if (log.isInfoEnabled()) + log.info("Reorganized the pipeline: " + + AbstractQuorum.this + .toString()); } else { /* * The pipeline is well organized, * so elect the leader now. */ - log.warn("Electing leader: " + if (log.isInfoEnabled()) + log.info("Electing leader: " + AbstractQuorum.this .toString()); // actor @@ -2622,8 +2658,9 @@ /* * Elect a follower. */ - log.warn("Follower will join: " - + AbstractQuorum.this.toString()); + if (log.isInfoEnabled()) + log.info("Follower will join: " + + AbstractQuorum.this.toString()); doAction(new Runnable() {public void run() {actor.serviceJoin();}}); } } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/QuorumListener.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/QuorumListener.java 2012-09-03 07:41:45 UTC (rev 6506) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/QuorumListener.java 2012-09-03 09:35:29 UTC (rev 6507) @@ -39,6 +39,11 @@ /** * Notify the client of a quorum event. + * <p> + * The listener MUST NOT take any event that could block. In particular, it + * MUST NOT wait on another thread that will access the {@link Quorum} as + * that will cause a deadlock around the internal lock maintained by the + * {@link Quorum}. */ void notify(QuorumEvent e); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/journal/ha/AbstractHAJournalTestCase.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/journal/ha/AbstractHAJournalTestCase.java 2012-09-03 07:41:45 UTC (rev 6506) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/journal/ha/AbstractHAJournalTestCase.java 2012-09-03 09:35:29 UTC (rev 6507) @@ -120,7 +120,7 @@ // + ":BEGIN:===================="); fixture = new MockQuorumFixture(); - fixture.start(); +// fixture.start(); logicalServiceId = "logicalService_" + getName(); k = 3; @@ -172,6 +172,51 @@ } /* + * FIXME It appears that it is necessary to start the QuorumFixture and + * then each Quorum *BEFORE* any quorum member takes an action, e.g., + * by doing a memberAdd(). 
That is likely a flaw in the QuorumFixture + * or the Quorum code. + */ + fixture.start(); + + for (int i = 0; i < replicationCount; i++) { + final Quorum<HAGlue, QuorumService<HAGlue>> quorum = stores[i] + .getQuorum(); + final HAJournal jnl = (HAJournal) stores[i]; + final UUID serviceId = jnl.getUUID(); + quorum.start(newQuorumService(logicalServiceId, serviceId, + jnl.newHAGlue(serviceId), jnl)); + } + + for (int i = 0; i < replicationCount; i++) { + final Quorum<HAGlue, QuorumService<HAGlue>> quorum = stores[i] + .getQuorum(); + final HAJournal jnl = (HAJournal) stores[i]; + /* + * Tell the actor to try and join the quorum. It will join iff our + * current root block can form a simple majority with the other + * services in the quorum. + */ + final QuorumActor<?, ?> actor = quorum.getActor(); + try { + + actor.memberAdd(); + fixture.awaitDeque(); + + actor.pipelineAdd(); + fixture.awaitDeque(); + + actor.castVote(jnl.getLastCommitTime()); + fixture.awaitDeque(); + + } catch (InterruptedException ex) { + + throw new RuntimeException(ex); + + } + } + + /* * Initialize the master first. The followers will get their root blocks * from the master. */ @@ -199,7 +244,7 @@ assertEquals(k, q.getMembers().length); } catch (TimeoutException ex) { - +for(int i=0; i<3; i++)log.error("quorum["+i+"]:"+(stores[i].getQuorum()).toString()); throw new RuntimeException(ex); } catch (InterruptedException ex) { @@ -223,12 +268,12 @@ final HAJournal jnl = new HAJournal(properties, quorum); - /* - * FIXME This probably should be a constant across the life cycle of the - * service, in which case it needs to be elevated outside of this method - * which is used both to open and re-open the journal. - */ - final UUID serviceId = UUID.randomUUID(); +// /* +// * FIXME This probably should be a constant across the life cycle of the +// * service, in which case it needs to be elevated outside of this method +// * which is used both to open and re-open the journal. +// */ +// final UUID serviceId = UUID.randomUUID(); /* * Set the client on the quorum. @@ -236,8 +281,8 @@ * FIXME The client needs to manage the quorumToken and various other * things. */ - quorum.start(newQuorumService(logicalServiceId, serviceId, jnl - .newHAGlue(serviceId), jnl)); +// quorum.start(newQuorumService(logicalServiceId, serviceId, jnl +// .newHAGlue(serviceId), jnl)); // // discard the current write set. // abort(); @@ -248,29 +293,29 @@ // // save off the current token (typically NO_QUORUM unless standalone). // quorumToken = quorum.token(); - /* - * Tell the actor to try and join the quorum. It will join iff our - * current root block can form a simple majority with the other services - * in the quorum. - */ - final QuorumActor<?, ?> actor = quorum.getActor(); - try { - - actor.memberAdd(); - fixture.awaitDeque(); - - actor.pipelineAdd(); - fixture.awaitDeque(); - - actor.castVote(jnl.getLastCommitTime()); - fixture.awaitDeque(); - - } catch (InterruptedException ex) { +// /* +// * Tell the actor to try and join the quorum. It will join iff our +// * current root block can form a simple majority with the other services +// * in the quorum. 
+// */ +// final QuorumActor<?, ?> actor = quorum.getActor(); +// try { +// +// actor.memberAdd(); +// fixture.awaitDeque(); +// +// actor.pipelineAdd(); +// fixture.awaitDeque(); +// +// actor.castVote(jnl.getLastCommitTime()); +// fixture.awaitDeque(); +// +// } catch (InterruptedException ex) { +// +// throw new RuntimeException(ex); +// +// } - throw new RuntimeException(ex); - - } - return jnl; // return new Journal(properties) { @@ -479,4 +524,4 @@ // // } -} +} \ No newline at end of file Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/MockQuorumFixture.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/MockQuorumFixture.java 2012-09-03 07:41:45 UTC (rev 6506) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/MockQuorumFixture.java 2012-09-03 09:35:29 UTC (rev 6507) @@ -315,7 +315,7 @@ */ if ((e = deque.peek()) == null) throw new AssertionError(); - log.warn("Next event: " + e); + log.info("\n==> Next event: " + e); } finally { lock.unlock(); } @@ -759,7 +759,7 @@ } catch (Throwable t) { if (InnerCause.isInnerCause(t, InterruptedException.class)) - log.warn("Shutdown : " + t); + log.info("Shutdown : " + t); else log.error(t, t); break; @@ -1245,21 +1245,25 @@ if (isPipelineMember()) { - log.warn("Will remove self from the pipeline: " - + getServiceId()); + if (log.isDebugEnabled()) + log.debug("Will remove self from the pipeline: " + + getServiceId()); getActor().pipelineRemove(); - log.warn("Will add self back into the pipeline: " - + getServiceId()); + if (log.isDebugEnabled()) + log.debug("Will add self back into the pipeline: " + + getServiceId()); getActor().pipelineAdd(); if (lastCommitTime != null) { - log.warn("Will cast our vote again: lastCommitTime=" - + +lastCommitTime - + ", " + getServiceId()); + if (log.isDebugEnabled()) + log.debug("Will cast our vote again: lastCommitTime=" + + +lastCommitTime + + ", " + + getServiceId()); getActor().castVote(lastCommitTime); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestAll.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestAll.java 2012-09-03 07:41:45 UTC (rev 6506) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestAll.java 2012-09-03 09:35:29 UTC (rev 6507) @@ -27,10 +27,18 @@ package com.bigdata.quorum; +import java.util.concurrent.atomic.AtomicLong; + +import junit.framework.AssertionFailedError; import junit.framework.Test; import junit.framework.TestCase; +import junit.framework.TestListener; +import junit.framework.TestResult; import junit.framework.TestSuite; +import junit.textui.ResultPrinter; +import org.apache.log4j.Logger; + /** * Aggregates test suites in increasing dependency order. * @@ -39,8 +47,11 @@ */ public class TestAll extends TestCase { - public static boolean s_includeQuorum = false; - /** + private final static Logger log = Logger.getLogger(TestAll.class); + + final private static boolean s_includeQuorum = false; + + /** * */ public TestAll() { @@ -92,5 +103,61 @@ return suite; } - + + /** + * Run the test suite many times. + * + * @param args + * The #of times to run the test suite (defaults to 100). + */ + public static void main(final String[] args) { + + final int LIMIT = args.length == 0 ? 
100 : Integer.valueOf(args[0]); + + final AtomicLong nerrs = new AtomicLong(0); + final AtomicLong nfail = new AtomicLong(0); + + // Setup test result. + final TestResult result = new TestResult(); + + // Setup listener, which will write the result on System.out + result.addListener(new ResultPrinter(System.out)); + + result.addListener(new TestListener() { + + public void startTest(Test arg0) { + log.info(arg0); + } + + public void endTest(Test arg0) { + log.info(arg0); + } + + public void addFailure(Test arg0, AssertionFailedError arg1) { + nfail.incrementAndGet(); + log.error(arg0,arg1); + } + + public void addError(Test arg0, Throwable arg1) { + nerrs.incrementAndGet(); + log.error(arg0,arg1); + } + }); + + final Test suite = TestAll.suite(); + + int i = 0; + for (; i < LIMIT && nerrs.get() == 0 && nfail.get() == 0; i++) { + + System.out.println("Starting iteration: " + i); + + suite.run(result); + + } + + System.out + .println("Finished " + i + " out of " + LIMIT + " iterations"); + + } + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestSingletonQuorumSemantics.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestSingletonQuorumSemantics.java 2012-09-03 07:41:45 UTC (rev 6506) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestSingletonQuorumSemantics.java 2012-09-03 09:35:29 UTC (rev 6507) @@ -30,6 +30,7 @@ import java.util.UUID; import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeoutException; +import java.util.concurrent.atomic.AtomicLong; import junit.framework.AssertionFailedError; @@ -233,8 +234,15 @@ public void test_voting() throws InterruptedException, AsynchronousQuorumCloseException, TimeoutException { - // This is a debug flag. It should be [false] normally. - final boolean awaitMeetsAndBreaks = true; + /* + * When true, this test also exercises the awaitQuorum() and + * awaitBreak() methods that accept a timeout, but only for the case in + * which the condition should be true on entry. There is another unit + * test in this class that verifies that the TimeoutException is + * correctly thrown if the condition does not become true within the + * timeout. + */ + final boolean awaitMeetsAndBreaks = true; final Quorum<?, ?> quorum = quorums[0]; final MockQuorumMember<?> client = clients[0]; @@ -470,4 +478,160 @@ } + /** + * Unit test of timeout in {@link Quorum#awaitQuorum(long, TimeUnit)}. and + * {@link Quorum#awaitBreak(long, TimeUnit)}. + * + * @throws AsynchronousQuorumCloseException + * @throws InterruptedException + */ + public void test_awaitQuorum() throws AsynchronousQuorumCloseException, InterruptedException { + + final AbstractQuorum<?, ?> quorum = quorums[0]; + final MockQuorumMember<?> client = clients[0]; + final QuorumActor<?,?> actor = actors[0]; + final UUID serviceId = client.getServiceId(); + + final long lastCommitTime = 0L; + final long lastCommitTime2 = 2L; + + // declare the service as a quorum member. + actor.memberAdd(); + fixture.awaitDeque(); + + assertTrue(client.isMember()); + assertEquals(new UUID[]{serviceId},quorum.getMembers()); + + // add to the pipeline. + actor.pipelineAdd(); + fixture.awaitDeque(); + + assertTrue(client.isPipelineMember()); + assertEquals(new UUID[]{serviceId},quorum.getPipeline()); + + final long timeout = 1500;// ms + final long slop = 100;// margin of error. + { + /* + * Verify that a we timeout when awaiting a quorum meet that does + * not occur. 
+ */ + final AtomicLong didTimeout = new AtomicLong(-1L); + + final Thread t = new Thread() { + public void run() { + final long begin = System.currentTimeMillis(); + try { + // wait for a quorum (but will not meet). + log.info("Waiting for quorum meet."); + quorum.awaitQuorum(timeout, TimeUnit.MILLISECONDS); + } catch (TimeoutException e) { + // This is what we are looking for. + final long elapsed = System.currentTimeMillis() - begin; + didTimeout.set(elapsed); + if (log.isInfoEnabled()) + log.info("Timeout after " + elapsed + "ms"); + } catch (Exception e) { + log.error(e, e); + } + } + }; + t.run(); + Thread.sleep(timeout + 250/* ms */); + t.interrupt(); + final long elapsed = didTimeout.get(); + assertTrue("did not timeout", elapsed != -1); + assertTrue("Timeout occurred too soon: elapsed=" + elapsed + + ",timeout=" + timeout, elapsed >= timeout); + assertTrue("Timeout took too long: elapsed=" + elapsed + + ",timeout=" + timeout, elapsed < (timeout + slop)); + } + + // cast a vote for a lastCommitTime. + actor.castVote(lastCommitTime); + fixture.awaitDeque(); + + assertEquals(1,quorum.getVotes().size()); + assertEquals(new UUID[] { serviceId }, quorum.getVotes().get( + lastCommitTime)); + + // verify the consensus was updated. + assertEquals(lastCommitTime, client.lastConsensusValue); + + // wait for quorum meet. + final long token1 = quorum.awaitQuorum(); + + // verify service was joined. + assertTrue(client.isJoinedMember(quorum.token())); + assertEquals(new UUID[] { serviceId }, quorum.getJoined()); + + // validate the token was assigned. + fixture.awaitDeque(); + assertEquals(Quorum.NO_QUORUM + 1, quorum.lastValidToken()); + assertEquals(Quorum.NO_QUORUM + 1, quorum.token()); + assertTrue(quorum.isQuorumMet()); + + { + /* + * Verify that we timeout when awaiting a quorum break that does not + * occur. + */ + final AtomicLong didTimeout = new AtomicLong(-1L); + + final Thread t = new Thread() { + public void run() { + final long begin = System.currentTimeMillis(); + try { + // wait for a quorum break (but will not break). + log.info("Waiting for quorum break."); + quorum.awaitBreak(timeout, TimeUnit.MILLISECONDS); + } catch (TimeoutException e) { + // This is what we are looking for. + final long elapsed = System.currentTimeMillis() - begin; + didTimeout.set(elapsed); + if (log.isInfoEnabled()) + log.error("Timeout after " + elapsed + "ms"); + } catch (Exception e) { + log.error(e, e); + } + } + }; + t.run(); + Thread.sleep(timeout + 250/* ms */); + t.interrupt(); + final long elapsed = didTimeout.get(); + assertTrue("did not timeout", elapsed != -1); + assertTrue("Timeout occurred too soon: elapsed=" + elapsed + + ",timeout=" + timeout, elapsed >= timeout); + assertTrue("Timeout took too long: elapsed=" + elapsed + + ",timeout=" + timeout, elapsed < (timeout + slop)); + } + + try { + // Verify awaitBreak() does not return normally. + quorum.awaitBreak(1, TimeUnit.MILLISECONDS); + fail("Not expecting quorum break"); + } catch (TimeoutException e) { + if (log.isInfoEnabled()) + log.info("Ignoring expected excption: " + e); + } + + /* + * Do service leave, quorum should break. + */ + + actor.serviceLeave(); + fixture.awaitDeque(); + + quorum.awaitBreak(); + + try { + // Verify awaitBreak() returns normally. 
+ quorum.awaitBreak(1, TimeUnit.MILLISECONDS); + } catch (TimeoutException e) { + fail("Not expecting " + e, e); + } + + } + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/ServiceStarter.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/ServiceStarter.java 2012-09-03 07:41:45 UTC (rev 6506) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/ServiceStarter.java 2012-09-03 09:35:29 UTC (rev 6507) @@ -38,6 +38,7 @@ import com.bigdata.jini.start.config.ServiceConfiguration; import com.bigdata.jini.start.config.JavaServiceConfiguration.JavaServiceStarter; import com.bigdata.jini.start.process.ProcessHelper; +import com.bigdata.jini.util.ConfigMath; /** * Starts an unmanaged service using the specified configuration. @@ -169,7 +170,7 @@ className, config); // pass through any options. - serviceConfig.options = ServiceConfiguration.concat( + serviceConfig.options = ConfigMath.concat( serviceConfig.options, args2); // get the service starter. Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java 2012-09-03 07:41:45 UTC (rev 6506) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java 2012-09-03 09:35:29 UTC (rev 6507) @@ -478,7 +478,7 @@ final ServiceUUID serviceUUID = new ServiceUUID(this.serviceUUID); - return concat(new Entry[] { serviceName, hostName, serviceDir, + return ConfigMath.concat(new Entry[] { serviceName, hostName, serviceDir, serviceUUID }, entries); } @@ -926,7 +926,7 @@ public static String[] getJiniOptions(final String className, final Configuration config) throws ConfigurationException { - return concat( // for all services + return ConfigMath.concat( // for all services getStringArray(Options.JINI_OPTIONS, JiniClient.class.getName(), config, new String[0]), // for this service. 
Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/config/ServiceConfiguration.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/config/ServiceConfiguration.java 2012-09-03 07:41:45 UTC (rev 6506) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/config/ServiceConfiguration.java 2012-09-03 09:35:29 UTC (rev 6507) @@ -24,6 +24,7 @@ import com.bigdata.jini.start.ServicesManagerServer; import com.bigdata.jini.start.config.ManagedServiceConfiguration.ManagedServiceStarter; import com.bigdata.jini.start.process.ProcessHelper; +import com.bigdata.jini.util.ConfigMath; import com.bigdata.service.jini.IReplicatableService; import com.bigdata.service.jini.JiniFederation; import com.sun.jini.tool.ClassServer; @@ -778,8 +779,8 @@ final long timeout, final TimeUnit unit) throws Exception { try { - final int exitValue = processHelper.exitValue(timeout, unit); + throw new IOException("exitValue=" + exitValue); @@ -927,7 +928,7 @@ IServiceConstraint[].class, new IServiceConstraint[0]); if (a != null && b != null) - return concat(a, b); + return ConfigMath.concat(a, b); if (a != null) return a; @@ -947,7 +948,7 @@ String[].class, defaultValue); if (a != null && b != null) - return concat(a, b); + return ConfigMath.concat(a, b); if (a != null) return a; @@ -963,30 +964,12 @@ * @param a * @param b * @return + * @deprecated Use {@link ConfigMath#concat(T[],T[])} instead */ - @SuppressWarnings("unchecked") public static <T> T[] concat(final T[] a, final T[] b) { - if (a == null && b == null) - return a; - - if (a == null) - return b; - - if (b == null) - return a; - - final T[] c = (T[]) java.lang.reflect.Array.newInstance(a.getClass() - .getComponentType(), a.length + b.length); - - // final String[] c = new String[a.length + b.length]; - - System.arraycopy(a, 0, c, 0, a.length); - - System.arraycopy(b, 0, c, a.length, b.length); - - return c; - + return ConfigMath.concat(a, b); + } /** Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/util/ConfigMath.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/util/ConfigMath.java 2012-09-03 07:41:45 UTC (rev 6506) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/util/ConfigMath.java 2012-09-03 09:35:29 UTC (rev 6507) @@ -28,6 +28,7 @@ package com.bigdata.jini.util; import java.io.File; +import java.lang.reflect.Array; import java.util.concurrent.TimeUnit; import com.sun.jini.config.ConfigUtil; @@ -218,4 +219,37 @@ } + /** + * Combines the two arrays, appending the contents of the 2nd array to the + * contents of the first array. + * + * @param a + * @param b + * @return + */ + @SuppressWarnings("unchecked") + public static <T> T[] concat(final T[] a, final T[] b) { + + if (a == null && b == null) + return a; + + if (a == null) + return b; + + if (b == null) + return a; + + final T[] c = (T[]) java.lang.reflect.Array.newInstance(a.getClass() + .getComponentType(), a.length + b.length); + + // final String[] c = new String[a.length + b.length]; + + System.arraycopy(a, 0, c, 0, a.length); + + System.arraycopy(b, 0, c, a.length, b.length); + + return c; + + } + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/AbstractServer.java ===================================================... [truncated message content] |
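
The two timeout checks added by test_awaitQuorum() in the diff above share one pattern: invoke the blocking call with a timeout, require that TimeoutException is thrown, and require that the measured elapsed time falls inside the window [timeout, timeout + slop). A minimal sketch of that pattern follows; the helper name assertTimesOut and the Callable wrapper are illustrative only (they are not part of the patch), and it assumes the junit.framework.TestCase base class already used by this test suite.

    import java.util.concurrent.Callable;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    // Inside a junit.framework.TestCase subclass.
    static void assertTimesOut(final long timeoutMillis, final long slopMillis,
            final Callable<?> call) throws Exception {
        final long begin = System.currentTimeMillis();
        try {
            call.call();
            fail("Expected " + TimeoutException.class.getName());
        } catch (TimeoutException e) {
            // The expected outcome: verify the timeout landed in the window.
            final long elapsed = System.currentTimeMillis() - begin;
            assertTrue("Timeout occurred too soon: elapsed=" + elapsed
                    + ", timeout=" + timeoutMillis, elapsed >= timeoutMillis);
            assertTrue("Timeout took too long: elapsed=" + elapsed
                    + ", timeout=" + timeoutMillis,
                    elapsed < timeoutMillis + slopMillis);
        }
    }

    // Example use against the quorum under test (the quorum will not meet).
    assertTimesOut(1500/* ms */, 100/* ms */, new Callable<Void>() {
        public Void call() throws Exception {
            quorum.awaitQuorum(1500, TimeUnit.MILLISECONDS);
            return null;
        }
    });

The slop term only allows for scheduler and clock granularity; it is not part of the quorum API.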
From: <tho...@us...> - 2012-09-03 10:36:53
|
Revision: 6508 http://bigdata.svn.sourceforge.net/bigdata/?rev=6508&view=rev Author: thompsonbry Date: 2012-09-03 10:36:47 +0000 (Mon, 03 Sep 2012) Log Message: ----------- HAJournalServer: pass [token] rather than [NO_QUORUM] on meet. AbstractQuorum.assertLeader(token) : throw exception if caller's token is NO_QUORUM. AbstractJournal : lifted newHAGlue() and getPort() into AbstractHAJournalTestCase. Working on setQuorumToken() semantics. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/journal/ha/AbstractHAJournalTestCase.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-03 09:35:29 UTC (rev 6507) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-03 10:36:47 UTC (rev 6508) @@ -30,10 +30,7 @@ import java.io.File; import java.io.IOException; import java.lang.ref.WeakReference; -import java.net.BindException; import java.net.InetSocketAddress; -import java.net.ServerSocket; -import java.net.UnknownHostException; import java.nio.ByteBuffer; import java.nio.channels.Channel; import java.nio.channels.FileChannel; @@ -2610,7 +2607,7 @@ if (quorum != null) { /* - * Verify that the last negotiated quorum is still in valid. + * Verify that the last negotiated quorum is still valid. */ quorum.assertLeader(quorumToken); } @@ -4523,11 +4520,110 @@ private volatile long quorumToken = Quorum.NO_QUORUM; protected final long getQuorumToken() { + return quorumToken; + } + protected void setQuorumToken(final long newValue) { + + /* + * The token is [volatile]. Save it's state on entry. Figure out if this + * is a quorum meet or a quorum break. + */ + final long oldValue = quorumToken; - quorumToken = newValue; + + if (oldValue == newValue) { + + // No change. + return; + + } + + final boolean didBreak; + final boolean didMeet; + + if (newValue == Quorum.NO_QUORUM && oldValue != Quorum.NO_QUORUM) { + + /* + * Quorum break. + * + * Immediately invalidate the token. Do not wait for a lock. + */ + + this.quorumToken = newValue; + + didBreak = true; + didMeet = false; + + } else if (newValue != Quorum.NO_QUORUM && oldValue == Quorum.NO_QUORUM) { + + /* + * Quorum meet. + * + * We must wait for the lock to update the token. + */ + + didBreak = false; + didMeet = true; + + } else { + + /* + * Excluded middle. If there was no change, then we returned + * immediately up above. If there is a change, then it must be + * either a quorum break or a quorum meet, which were identified in + * the if-then-else above. + */ + + throw new AssertionError(); + + } + + /* + * Both a meet and a break require an exclusive write lock. + */ + final WriteLock lock = _fieldReadWriteLock.writeLock(); + + lock.lock(); + + try { + + if (didBreak) { + + /* + * We also need to discard any active read/write tx since there + * is no longer a quorum and a read/write tx was running on the + * old leader. + * + * We do not need to discard read-only tx since the committed + * state should remain valid even when a quorum is lost. 
+ */ + abort(); + + } else if (didMeet) { + + quorumToken = newValue; + + /* + * FIXME We need to re-open the backing store with the token for + * the new quorum. + */ +// _bufferStrategy.reopen(quorumToken); + + } else { + + throw new AssertionError(); + + } + + } finally { + + lock.unlock(); + + } + } /** @@ -4547,24 +4643,16 @@ /** * Factory for the {@link HADelegate} object for this - * {@link AbstractJournal}. This may be overridden to publish additional - * methods for the low-level HA API. The object returned by this factor is + * {@link AbstractJournal}. The object returned by this method will be made * available using {@link QuorumMember#getService()}. + * + * @throws UnsupportedOperationException + * always. */ protected HAGlue newHAGlue(final UUID serviceId) { - // FIXME This is defaulting to a random port on the loopback address. - final InetSocketAddress writePipelineAddr; - try { - writePipelineAddr = new InetSocketAddress(getPort(0)); - } catch (UnknownHostException e) { - throw new RuntimeException(e); - } catch (IOException e) { - throw new RuntimeException(e); - } - - return new BasicHA(serviceId, writePipelineAddr); - + throw new UnsupportedOperationException(); + } /** @@ -4934,30 +5022,6 @@ }; /** - * Return an unused port. - * - * @param suggestedPort - * The suggested port. - * - * @return The suggested port, unless it is zero or already in use, in which - * case an unused port is returned. - * - * @throws IOException - */ - static protected int getPort(int suggestedPort) throws IOException { - ServerSocket openSocket; - try { - openSocket = new ServerSocket(suggestedPort); - } catch (BindException ex) { - // the port is busy, so look for a random open port - openSocket = new ServerSocket(0); - } - final int port = openSocket.getLocalPort(); - openSocket.close(); - return port; - } - - /** * Remove all commit records between the two provided keys. * * This is called from the RWStore when it checks for deferredFrees against Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java 2012-09-03 09:35:29 UTC (rev 6507) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java 2012-09-03 10:36:47 UTC (rev 6508) @@ -921,9 +921,13 @@ } final public void assertLeader(final long token) { + if (token == NO_QUORUM) { + // The quorum was not met when the client obtained that token. + throw new QuorumException("Client token is invalid."); + } if (this.token == NO_QUORUM) { // The quorum is not met. 
- throw new QuorumException(); + throw new QuorumException("Quorum is not met."); } final UUID leaderId = getLeaderId(); final QuorumMember<S> client = getClientAsMember(); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/journal/ha/AbstractHAJournalTestCase.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/journal/ha/AbstractHAJournalTestCase.java 2012-09-03 09:35:29 UTC (rev 6507) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/journal/ha/AbstractHAJournalTestCase.java 2012-09-03 10:36:47 UTC (rev 6508) @@ -28,6 +28,11 @@ package com.bigdata.journal.ha; import java.io.File; +import java.io.IOException; +import java.net.BindException; +import java.net.InetSocketAddress; +import java.net.ServerSocket; +import java.net.UnknownHostException; import java.nio.ByteBuffer; import java.rmi.Remote; import java.util.Properties; @@ -340,12 +345,65 @@ super(properties, quorum); } + /** + * {@inheritDoc} + * <p> + * Note: This uses a random port on the loopback address. + */ + @Override public HAGlue newHAGlue(final UUID serviceId) { - return super.newHAGlue(serviceId); - + final InetSocketAddress writePipelineAddr; + try { + writePipelineAddr = new InetSocketAddress(getPort(0)); + } catch (UnknownHostException e) { + throw new RuntimeException(e); + } catch (IOException e) { + throw new RuntimeException(e); + } + + return new HAGlueService(serviceId, writePipelineAddr); + } + /** + * Return an unused port. + * + * @param suggestedPort + * The suggested port. + * + * @return The suggested port, unless it is zero or already in use, in which + * case an unused port is returned. + * + * @throws IOException + */ + static protected int getPort(int suggestedPort) throws IOException { + ServerSocket openSocket; + try { + openSocket = new ServerSocket(suggestedPort); + } catch (BindException ex) { + // the port is busy, so look for a random open port + openSocket = new ServerSocket(0); + } + final int port = openSocket.getLocalPort(); + openSocket.close(); + return port; + } + + /** + * Extended implementation supports RMI. + */ + protected class HAGlueService extends BasicHA { + + protected HAGlueService(final UUID serviceId, + final InetSocketAddress writePipelineAddr) { + + super(serviceId, writePipelineAddr); + + } + + } + } protected Quorum<HAGlue, QuorumService<HAGlue>> newQuorum() { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-03 09:35:29 UTC (rev 6507) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-03 10:36:47 UTC (rev 6508) @@ -498,7 +498,7 @@ super.quorumMeet(token, leaderId); // Inform the journal that there is a new quorum token. - journal.setQuorumToken(Quorum.NO_QUORUM); + journal.setQuorumToken(token); } This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
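
The one-line fix in HAJournalServer.quorumMeet() above is the caller side of the new AbstractJournal.setQuorumToken() logic: the met token must be forwarded as-is, while a break is signaled by passing Quorum.NO_QUORUM. A sketch of the intended listener wiring, using the names that appear in the diffs above; the quorumBreak() override is shown only for symmetry and is an assumption, not part of this revision.

    @Override
    public void quorumMeet(final long token, final UUID leaderId) {
        super.quorumMeet(token, leaderId);
        // Forward the real token (passing Quorum.NO_QUORUM here was the bug).
        journal.setQuorumToken(token);
    }

    // Assumed counterpart, not shown in r6508: on a break the journal is told
    // to clear its token, which causes a local abort of any unisolated writes.
    @Override
    public void quorumBreak() {
        super.quorumBreak();
        journal.setQuorumToken(Quorum.NO_QUORUM);
    }

With that split, setQuorumToken() can treat any other transition (a token change without an intervening NO_QUORUM) as the excluded middle and fail fast with an AssertionError, as in the AbstractJournal diff above.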
From: <tho...@us...> - 2012-09-03 15:37:34
|
Revision: 6511 http://bigdata.svn.sourceforge.net/bigdata/?rev=6511&view=rev Author: thompsonbry Date: 2012-09-03 15:37:26 +0000 (Mon, 03 Sep 2012) Log Message: ----------- Fixed problem where WriteCache was not being initialized with the then current file extent. I added fileExtent to the newWriteCache() abstract method. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/btree/IndexSegmentBuilder.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/pipeline/HAWriteMessageBase.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/WORMStrategy.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/ha/HAWriteMessage.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/RWWriteCacheService.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/io/writecache/TestWORMWriteCacheService.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/io/writecache/TestWriteCache.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/io/writecache/TestWriteCacheServiceLifetime.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/MockQuorumFixture.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/btree/IndexSegmentBuilder.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/btree/IndexSegmentBuilder.java 2012-09-03 14:24:09 UTC (rev 6510) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/btree/IndexSegmentBuilder.java 2012-09-03 15:37:26 UTC (rev 6511) @@ -1537,7 +1537,7 @@ : new WriteCache.FileChannelWriteCache( IndexSegmentCheckpoint.SIZE, null/* buf */, useChecksums, false/* isHighlyAvailable */, - false/* bufferHasData */, new NOPReopener(out)); + false/* bufferHasData */, new NOPReopener(out), 0L/* fileExtent */); /* * Open the node buffer. 
We only do this if there will be at least @@ -3018,7 +3018,7 @@ final WriteCache.FileChannelWriteCache writeCache = new WriteCache.FileChannelWriteCache( offsetNodes, null/* buf */, useChecksums, false/* isHighlyAvailable */, false/* bufferHasData */, - new NOPReopener(out)); + new NOPReopener(out), 0L/* fileExtent */); try { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/pipeline/HAWriteMessageBase.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/pipeline/HAWriteMessageBase.java 2012-09-03 14:24:09 UTC (rev 6510) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/pipeline/HAWriteMessageBase.java 2012-09-03 15:37:26 UTC (rev 6511) @@ -85,14 +85,21 @@ } - public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException { - sze = in.readInt(); - chk = in.readInt(); - } + public void readExternal(final ObjectInput in) throws IOException, + ClassNotFoundException { - public void writeExternal(ObjectOutput out) throws IOException { - out.writeInt(sze); - out.writeInt(chk); - } + sze = in.readInt(); + + chk = in.readInt(); + + } + public void writeExternal(final ObjectOutput out) throws IOException { + + out.writeInt(sze); + + out.writeInt(chk); + + } + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java 2012-09-03 14:24:09 UTC (rev 6510) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java 2012-09-03 15:37:26 UTC (rev 6511) @@ -381,11 +381,14 @@ * caller's buffer will be cleared. The code presumes that the * {@link WriteCache} instance will be used to lay down a single * buffer worth of data onto the backing file. - * + * @param fileExtent + * The then current extent of the backing file. + * * @throws InterruptedException */ public WriteCache(IBufferAccess buf, final boolean scatteredWrites, final boolean useChecksum, - final boolean isHighlyAvailable, final boolean bufferHasData) throws InterruptedException { + final boolean isHighlyAvailable, final boolean bufferHasData, + final long fileExtent) throws InterruptedException { if (bufferHasData && buf == null) throw new IllegalArgumentException(); @@ -423,6 +426,9 @@ // the capacity of the buffer in bytes. this.capacity = buf.buffer().capacity(); + // apply the then current file extent. + this.fileExtent.set(fileExtent); + /* * Discard anything in the buffer, resetting the position to zero, the * mark to zero, and the limit to the capacity. 
@@ -466,7 +472,7 @@ */ resetRecordMapFromBuffer(); } - + } /** @@ -1517,11 +1523,15 @@ * * @throws InterruptedException */ - public FileChannelWriteCache(final long baseOffset, final IBufferAccess buf, final boolean useChecksum, - final boolean isHighlyAvailable, final boolean bufferHasData, final IReopenChannel<FileChannel> opener) + public FileChannelWriteCache(final long baseOffset, + final IBufferAccess buf, final boolean useChecksum, + final boolean isHighlyAvailable, final boolean bufferHasData, + final IReopenChannel<FileChannel> opener, + final long fileExtent) throws InterruptedException { - super(buf, false/* scatteredWrites */, useChecksum, isHighlyAvailable, bufferHasData); + super(buf, false/* scatteredWrites */, useChecksum, + isHighlyAvailable, bufferHasData, fileExtent); if (baseOffset < 0) throw new IllegalArgumentException(); @@ -1536,8 +1546,10 @@ } @Override - protected boolean writeOnChannel(final ByteBuffer data, final long firstOffset, - final Map<Long, RecordMetadata> recordMap, final long nanos) throws InterruptedException, IOException { + protected boolean writeOnChannel(final ByteBuffer data, + final long firstOffset, + final Map<Long, RecordMetadata> recordMap, final long nanos) + throws InterruptedException, IOException { final long begin = System.nanoTime(); @@ -1604,12 +1616,15 @@ * * @throws InterruptedException */ - public FileChannelScatteredWriteCache(final IBufferAccess buf, final boolean useChecksum, - final boolean isHighlyAvailable, final boolean bufferHasData, final IReopenChannel<FileChannel> opener, - final BufferedWrite bufferedWrite) + public FileChannelScatteredWriteCache(final IBufferAccess buf, + final boolean useChecksum, final boolean isHighlyAvailable, + final boolean bufferHasData, + final IReopenChannel<FileChannel> opener, + final long fileExtent, final BufferedWrite bufferedWrite) throws InterruptedException { - super(buf, true/* scatteredWrites */, useChecksum, isHighlyAvailable, bufferHasData); + super(buf, true/* scatteredWrites */, useChecksum, + isHighlyAvailable, bufferHasData, fileExtent); if (opener == null) throw new IllegalArgumentException(); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java 2012-09-03 14:24:09 UTC (rev 6510) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java 2012-09-03 15:37:26 UTC (rev 6511) @@ -357,13 +357,9 @@ /** * The {@link Quorum} token under which this {@link WriteCacheService} - * instance is valid. - * - * @todo As long as a service is the leader, it could use the same - * {@link WriteCacheService} instance. For example, adding a new - * service to the pipeline in principle need not invalidate the - * {@link WriteCacheService}. However, if the leader changes then much - * more has to change in concert to setup the new {@link Quorum}. + * instance is valid. This is fixed for the life cycle of the + * {@link WriteCacheService}. This ensures that all writes are buffered + * under a consistent quorum meet. */ final private long quorumToken; @@ -441,13 +437,13 @@ // Add [current] WriteCache. current.set(buffers[0] = newWriteCache(null/* buf */, - useChecksum, false/* bufferHasData */, opener)); + useChecksum, false/* bufferHasData */, opener, fileExtent)); // add remaining buffers. 
for (int i = 1; i < nbuffers; i++) { final WriteCache tmp = newWriteCache(null/* buf */, useChecksum, - false/* bufferHasData */, opener); + false/* bufferHasData */, opener, fileExtent); buffers[i] = tmp; @@ -833,6 +829,8 @@ * @param opener * The object which knows how to re-open the backing channel * (required). + * @param fileExtent + * The then current extent of the backing file. * * @return A {@link WriteCache} wrapping that buffer and able to write on * that channel. @@ -841,7 +839,8 @@ */ abstract public WriteCache newWriteCache(IBufferAccess buf, boolean useChecksum, boolean bufferHasData, - IReopenChannel<? extends Channel> opener) throws InterruptedException; + IReopenChannel<? extends Channel> opener, final long fileExtent) + throws InterruptedException; /** * {@inheritDoc} Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-03 14:24:09 UTC (rev 6510) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-03 15:37:26 UTC (rev 6511) @@ -4607,10 +4607,16 @@ quorumToken = newValue; /* - * FIXME We need to re-open the backing store with the token for - * the new quorum. + * We need to reset the backing store with the token for the new + * quorum. There should not be any active writers since there + * was no quorum. Thus, this should just cause the backing store + * to become aware of the new quorum and enable writes. + * + * Note: This is done using a local abort, not a 2-phase abort. + * Each node in the quorum should handle this locally when it + * sees the quorum meet event. */ -// _bufferStrategy.reopen(quorumToken); + _abort(); } else { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/WORMStrategy.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/WORMStrategy.java 2012-09-03 14:24:09 UTC (rev 6510) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/WORMStrategy.java 2012-09-03 15:37:26 UTC (rev 6511) @@ -902,11 +902,13 @@ public WriteCache newWriteCache(final IBufferAccess buf, final boolean useChecksum, final boolean bufferHasData, - final IReopenChannel<? extends Channel> opener) + final IReopenChannel<? 
extends Channel> opener, + final long fileExtent) throws InterruptedException { return new WriteCacheImpl(0/* baseOffset */, buf, useChecksum, bufferHasData, - (IReopenChannel<FileChannel>) opener); + (IReopenChannel<FileChannel>) opener, + fileExtent); } }; this._checkbuf = null; @@ -939,11 +941,12 @@ public WriteCacheImpl(final long baseOffset, final IBufferAccess buf, final boolean useChecksum, final boolean bufferHasData, - final IReopenChannel<FileChannel> opener) + final IReopenChannel<FileChannel> opener, + final long fileExtent) throws InterruptedException { super(baseOffset, buf, useChecksum, isHighlyAvailable, - bufferHasData, opener); + bufferHasData, opener, fileExtent); } @@ -2251,7 +2254,8 @@ throws IOException, InterruptedException { writeCacheService.newWriteCache(b, useChecksums, - true/* bufferHasData */, opener).flush(false/* force */); + true/* bufferHasData */, opener, msg.getFileExtent()).flush( + false/* force */); } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/ha/HAWriteMessage.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/ha/HAWriteMessage.java 2012-09-03 14:24:09 UTC (rev 6510) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/ha/HAWriteMessage.java 2012-09-03 15:37:26 UTC (rev 6511) @@ -114,9 +114,9 @@ * The file offset at which the data will be written (WORM only). */ public HAWriteMessage(final int sze, final int chk, - StoreTypeEnum storeType, long quorumToken, long fileExtent, - long firstOffset) { - + final StoreTypeEnum storeType, final long quorumToken, + final long fileExtent, final long firstOffset) { + super(sze, chk); if (storeType == null) @@ -134,8 +134,9 @@ private static final byte VERSION0 = 0x0; - public void readExternal(ObjectInput in) throws IOException, + public void readExternal(final ObjectInput in) throws IOException, ClassNotFoundException { + super.readExternal(in); final byte version = in.readByte(); switch (version) { @@ -144,16 +145,16 @@ default: throw new IOException("Unknown version: " + version); } - storeType = (StoreTypeEnum) in.readObject(); + storeType = StoreTypeEnum.valueOf(in.readByte()); quorumToken = in.readLong(); fileExtent = in.readLong(); firstOffset = in.readLong(); } - public void writeExternal(ObjectOutput out) throws IOException { + public void writeExternal(final ObjectOutput out) throws IOException { super.writeExternal(out); out.write(VERSION0); - out.writeObject(storeType); + out.writeByte(storeType.getType()); out.writeLong(quorumToken); out.writeLong(fileExtent); out.writeLong(firstOffset); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-09-03 14:24:09 UTC (rev 6510) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-09-03 15:37:26 UTC (rev 6511) @@ -412,7 +412,17 @@ private final Quorum<?,?> m_quorum; - final RWWriteCacheService m_writeCache; + /** + * The #of buffers that will be used by the {@link WriteCacheService}. + */ + private final int m_writeCacheBufferCount; + + /** + * Note: This is not final because we replace the {@link WriteCacheService} + * during {@link #reset(long)} in order to propagate the then current quorum + * token to the {@link WriteCacheService}. 
+ */ + RWWriteCacheService m_writeCache; /** * The actual allocation sizes as read from the store. @@ -536,11 +546,13 @@ public WriteCacheImpl(final IBufferAccess buf, final boolean useChecksum, final boolean bufferHasData, - final IReopenChannel<FileChannel> opener) + final IReopenChannel<FileChannel> opener, + final long fileExtent) throws InterruptedException { super(buf, useChecksum, m_quorum != null && m_quorum.isHighlyAvailable(), bufferHasData, opener, + fileExtent, m_bufferedWrite); } @@ -661,31 +673,12 @@ m_bufferedWrite = null; } - final int buffers = fileMetadata.writeCacheBufferCount; + m_writeCacheBufferCount = fileMetadata.writeCacheBufferCount; if(log.isInfoEnabled()) - log.info("RWStore using writeCacheService with buffers: " + buffers); + log.info("RWStore using writeCacheService with buffers: " + m_writeCacheBufferCount); - try { - m_writeCache = new RWWriteCacheService(buffers, m_fd.length(), - m_reopener, m_quorum) { - - @SuppressWarnings("unchecked") - public WriteCache newWriteCache(final IBufferAccess buf, - final boolean useChecksum, - final boolean bufferHasData, - final IReopenChannel<? extends Channel> opener) - throws InterruptedException { - return new WriteCacheImpl(buf, - useChecksum, bufferHasData, - (IReopenChannel<FileChannel>) opener); - } - }; - } catch (InterruptedException e) { - throw new IllegalStateException(ERR_WRITE_CACHE_CREATE, e); - } catch (IOException e) { - throw new IllegalStateException(ERR_WRITE_CACHE_CREATE, e); - } + m_writeCache = newWriteCache(); try { if (m_rb.getNextOffset() == 0) { // if zero then new file @@ -749,6 +742,36 @@ } } + /** + * Create and return a new {@link RWWriteCacheService} instance. The caller + * is responsible for closing out the old one and must be holding the + * appropriate locks when it switches in the new instance. + */ + private RWWriteCacheService newWriteCache() { + try { + return new RWWriteCacheService(m_writeCacheBufferCount, + m_fd.length(), m_reopener, m_quorum) { + + @SuppressWarnings("unchecked") + public WriteCache newWriteCache(final IBufferAccess buf, + final boolean useChecksum, + final boolean bufferHasData, + final IReopenChannel<? extends Channel> opener, + final long fileExtent) + throws InterruptedException { + return new WriteCacheImpl(buf, + useChecksum, bufferHasData, + (IReopenChannel<FileChannel>) opener, + fileExtent); + } + }; + } catch (InterruptedException e) { + throw new IllegalStateException(ERR_WRITE_CACHE_CREATE, e); + } catch (IOException e) { + throw new IllegalStateException(ERR_WRITE_CACHE_CREATE, e); + } + } + private void setAllocations(final FileMetadata fileMetadata) throws IOException { @@ -2111,22 +2134,37 @@ * Unisolated writes must also be removed from the write cache. * * The AllocBlocks of the FixedAllocators maintain the state to determine - * the correct reset behaviour. + * the correct reset behavior. * * If the store is using DirectFixedAllocators then an IllegalStateException * is thrown */ - public void reset() { - assertOpen(); - - if (log.isInfoEnabled()) { - log.info("RWStore Reset"); - } + public void reset() { + + if (log.isInfoEnabled()) { + log.info("RWStore Reset"); + } m_allocationLock.lock(); try { + assertOpen(); for (FixedAllocator fa : m_allocs) { fa.reset(m_writeCache); } + if (m_quorum != null) { + /** + * When the RWStore is part of an HA quorum, we need to close + * out and then reopen the WriteCacheService every time the + * quorum token is changed. 
For convienence, this is handled by + * extending the semantics of abort() on the Journal and reset() + * on the RWStore. + * + * @see <a + * href="https://sourceforge.net/apps/trac/bigdata/ticket/530"> + * HA Journal </a> + */ + m_writeCache.close(); + m_writeCache = newWriteCache(); + } } catch (Exception e) { throw new IllegalStateException("Unable to reset the store", e); } finally { @@ -2440,6 +2478,9 @@ // volatile private long m_curHdrAddr = 0; // volatile private int m_rootAddr; + /** + * {@link #m_fileSize} is in units of -32K. + */ volatile private int m_fileSize; volatile private int m_nextAllocation; final private int m_maxFileSize; @@ -3315,8 +3356,13 @@ extendFile(convertFromAddr(extent - currentExtent)); } else if (extent < currentExtent) { - throw new IllegalArgumentException("Cannot shrink RWStore extent"); - } + + throw new IllegalArgumentException( + "Cannot shrink RWStore extent: currentExtent=" + + currentExtent + ", fileSize=" + m_fileSize + + ", newValue=" + extent); + + } } @@ -4538,7 +4584,8 @@ throws IOException, InterruptedException { m_writeCache.newWriteCache(b, true/* useChecksums */, - true/* bufferHasData */, m_reopener).flush(false/* force */); + true/* bufferHasData */, m_reopener, msg.getFileExtent()) + .flush(false/* force */); } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/RWWriteCacheService.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/RWWriteCacheService.java 2012-09-03 14:24:09 UTC (rev 6510) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/RWWriteCacheService.java 2012-09-03 15:37:26 UTC (rev 6511) @@ -63,7 +63,8 @@ public WriteCache newWriteCache(final IBufferAccess buf, final boolean useChecksum, final boolean bufferHasData, - final IReopenChannel<? extends Channel> opener) + final IReopenChannel<? extends Channel> opener, + final long fileExtent) throws InterruptedException { final boolean highlyAvailable = getQuorum() != null @@ -72,7 +73,7 @@ return new FileChannelScatteredWriteCache(buf, true/* useChecksum */, highlyAvailable, bufferHasData, - (IReopenChannel<FileChannel>) opener, null); + (IReopenChannel<FileChannel>) opener, fileExtent, null); } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/io/writecache/TestWORMWriteCacheService.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/io/writecache/TestWORMWriteCacheService.java 2012-09-03 14:24:09 UTC (rev 6510) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/io/writecache/TestWORMWriteCacheService.java 2012-09-03 15:37:26 UTC (rev 6511) @@ -1877,9 +1877,10 @@ fileExtent, opener, quorum) { @Override - public WriteCache newWriteCache(IBufferAccess buf, - boolean useChecksum, boolean bufferHasData, - IReopenChannel<? extends Channel> opener) + public WriteCache newWriteCache(final IBufferAccess buf, + final boolean useChecksum, final boolean bufferHasData, + final IReopenChannel<? 
extends Channel> opener, + final long fileExtent) throws InterruptedException { switch (storeType) { @@ -1887,11 +1888,13 @@ return new FileChannelWriteCache(0/* baseOffset */, buf, useChecksum, isHighlyAvailable, bufferHasData, - (IReopenChannel<FileChannel>) opener); + (IReopenChannel<FileChannel>) opener, + fileExtent); case RW: return new FileChannelScatteredWriteCache(buf, useChecksum, isHighlyAvailable, bufferHasData, - (IReopenChannel<FileChannel>) opener, null); + (IReopenChannel<FileChannel>) opener, fileExtent, + null); default: throw new UnsupportedOperationException(); } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/io/writecache/TestWriteCache.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/io/writecache/TestWriteCache.java 2012-09-03 14:24:09 UTC (rev 6510) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/io/writecache/TestWriteCache.java 2012-09-03 15:37:26 UTC (rev 6511) @@ -108,8 +108,8 @@ // The buffer size must be at least 1k for these tests. assertTrue(DirectBufferPool.INSTANCE.getBufferCapacity() >= Bytes.kilobyte32); - final WriteCache writeCache = new WriteCache.FileChannelWriteCache(0, buf, - true, isHighlyAvailable, false, opener); + final WriteCache writeCache = new WriteCache.FileChannelWriteCache( + 0, buf, true, isHighlyAvailable, false, opener, 0L/* fileExtent */); final long addr1 = 0; final long addr2 = 12800; @@ -207,7 +207,7 @@ // ctor correct rejection tests: baseOffset is negative. try { new WriteCache.FileChannelWriteCache(-1L, buf, useChecksum, - isHighlyAvailable, bufferHasData, opener); + isHighlyAvailable, bufferHasData, opener, 0L/* fileExtent */); fail("Expected: " + IllegalArgumentException.class); } catch (IllegalArgumentException ex) { if (log.isInfoEnabled()) @@ -217,7 +217,8 @@ // ctor correct rejection tests: opener is null. try { new WriteCache.FileChannelWriteCache(baseOffset, buf, - useChecksum, isHighlyAvailable, bufferHasData, null/* opener */); + useChecksum, isHighlyAvailable, bufferHasData, + null/* opener */, 0L/* fileExtent */); fail("Expected: " + IllegalArgumentException.class); } catch (IllegalArgumentException ex) { if (log.isInfoEnabled()) @@ -227,7 +228,7 @@ // allocate write cache using our buffer. final WriteCache writeCache = new WriteCache.FileChannelWriteCache( baseOffset, buf, useChecksum, isHighlyAvailable, - bufferHasData, opener); + bufferHasData, opener, 0L/*fileExtent*/); // verify the write cache self-reported capacity. assertEquals(DirectBufferPool.INSTANCE.getBufferCapacity(), @@ -588,7 +589,8 @@ // ctor correct rejection tests: opener is null. try { new WriteCache.FileChannelScatteredWriteCache(buf, - useChecksum, isHighlyAvailable, bufferHasData, null/* opener */, null); + useChecksum, isHighlyAvailable, bufferHasData, + null/* opener */, 0L/* fileExtent */, null); fail("Expected: " + IllegalArgumentException.class); } catch (IllegalArgumentException ex) { if (log.isInfoEnabled()) @@ -597,7 +599,8 @@ // allocate write cache using our buffer. final WriteCache writeCache = new WriteCache.FileChannelScatteredWriteCache( - buf, useChecksum, isHighlyAvailable, bufferHasData, opener, null); + buf, useChecksum, isHighlyAvailable, bufferHasData, + opener, 0L/* fileExtent */, null); // verify the write cache self-reported capacity. 
assertEquals(DirectBufferPool.INSTANCE.getBufferCapacity() @@ -925,14 +928,14 @@ final IBufferAccess buf2 = DirectBufferPool.INSTANCE.acquire(); try { long addr1 = 12800; - ByteBuffer data1 = getRandomData(20 * 1024); - int chk1 = ChecksumUtility.threadChk.get().checksum(data1, 0/* offset */, data1.limit()); - ByteBuffer data2 = getRandomData(20 * 1024); - int chk2 = ChecksumUtility.threadChk.get().checksum(data2, 0/* offset */, data2.limit()); - WriteCache cache1 = new WriteCache.FileChannelScatteredWriteCache(buf, true, true, - false, opener, null); - WriteCache cache2 = new WriteCache.FileChannelScatteredWriteCache(buf, true, true, - false, opener, null); + final ByteBuffer data1 = getRandomData(20 * 1024); + final int chk1 = ChecksumUtility.threadChk.get().checksum(data1, 0/* offset */, data1.limit()); + final ByteBuffer data2 = getRandomData(20 * 1024); + final int chk2 = ChecksumUtility.threadChk.get().checksum(data2, 0/* offset */, data2.limit()); + final WriteCache cache1 = new WriteCache.FileChannelScatteredWriteCache(buf, true, true, + false, opener, 0L/* fileExtent */, null); + final WriteCache cache2 = new WriteCache.FileChannelScatteredWriteCache(buf, true, true, + false, opener, 0L/* fileExtent */, null); // write first data buffer cache1.write(addr1, data1, chk1); @@ -1063,8 +1066,9 @@ try { // allocate write cache using our buffer. - final WriteCache writeCache = new WriteCache.FileChannelScatteredWriteCache( - buf, useChecksum, isHighlyAvailable, bufferHasData, opener, null); + final WriteCache writeCache = new WriteCache.FileChannelScatteredWriteCache( + buf, useChecksum, isHighlyAvailable, bufferHasData, + opener, 0L/* fileExtent */, null); /* * First write 500 records into the cache and confirm they can all be read okay Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/io/writecache/TestWriteCacheServiceLifetime.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/io/writecache/TestWriteCacheServiceLifetime.java 2012-09-03 14:24:09 UTC (rev 6510) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/io/writecache/TestWriteCacheServiceLifetime.java 2012-09-03 15:37:26 UTC (rev 6511) @@ -17,16 +17,15 @@ import com.bigdata.ha.HAPipelineGlue; import com.bigdata.io.FileChannelUtility; +import com.bigdata.io.IBufferAccess; import com.bigdata.io.IReopenChannel; -import com.bigdata.io.IBufferAccess; import com.bigdata.io.TestCase3; import com.bigdata.io.writecache.TestWORMWriteCacheService.MyMockQuorumMember; import com.bigdata.io.writecache.WriteCache.FileChannelScatteredWriteCache; import com.bigdata.io.writecache.WriteCache.FileChannelWriteCache; import com.bigdata.quorum.MockQuorumFixture; +import com.bigdata.quorum.MockQuorumFixture.MockQuorum; import com.bigdata.quorum.QuorumActor; -import com.bigdata.quorum.MockQuorumFixture.MockQuorum; -import com.bigdata.rwstore.RWWriteCacheService; import com.bigdata.util.ChecksumUtility; import com.bigdata.util.concurrent.DaemonThreadFactory; @@ -167,20 +166,23 @@ fileExtent, config.opener, config.quorum) { @Override - public WriteCache newWriteCache(IBufferAccess buf, - boolean useChecksum, boolean bufferHasData, - IReopenChannel<? extends Channel> opener) + public WriteCache newWriteCache(final IBufferAccess buf, + final boolean useChecksum, final boolean bufferHasData, + final IReopenChannel<? 
extends Channel> opener, + final long fileExtent) throws InterruptedException { if (!rw) { return new FileChannelWriteCache(0/* baseOffset */, buf, useChecksum, false, bufferHasData, - (IReopenChannel<FileChannel>) opener); + (IReopenChannel<FileChannel>) opener, + fileExtent); } else { - return new FileChannelScatteredWriteCache(buf, - useChecksum, false, bufferHasData, - (IReopenChannel<FileChannel>) opener, null); + return new FileChannelScatteredWriteCache(buf, useChecksum, + false, bufferHasData, + (IReopenChannel<FileChannel>) opener, fileExtent, + null); } } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/MockQuorumFixture.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/MockQuorumFixture.java 2012-09-03 14:24:09 UTC (rev 6510) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/MockQuorumFixture.java 2012-09-03 15:37:26 UTC (rev 6511) @@ -223,15 +223,19 @@ * * @return The service. * - * @throws IllegalArgumentException + * @throws QuorumException * if there is no known {@link QuorumMember} for that serviceId. */ public Object getService(final UUID serviceId) { final QuorumMember<?> member = getMember(serviceId); - if (member == null) - throw new IllegalArgumentException("Unknown: " + serviceId); + if (member == null) { + + // Per the API. + throw new QuorumException("Unknown: " + serviceId); + + } return member.getService(); @@ -315,7 +319,8 @@ */ if ((e = deque.peek()) == null) throw new AssertionError(); - log.info("\n==> Next event: " + e); + if (log.isInfoEnabled()) + log.info("\n==> Next event: " + e); } finally { lock.unlock(); } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java 2012-09-03 14:24:09 UTC (rev 6510) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java 2012-09-03 15:37:26 UTC (rev 6511) @@ -27,7 +27,6 @@ import java.io.Serializable; import java.net.InetSocketAddress; import java.rmi.Remote; -import java.rmi.server.ExportException; import java.util.Properties; import java.util.UUID; import java.util.concurrent.Future; @@ -43,11 +42,9 @@ import com.bigdata.ha.HAGlue; import com.bigdata.ha.QuorumService; +import com.bigdata.journal.BufferMode; import com.bigdata.journal.Journal; import com.bigdata.quorum.Quorum; -import com.bigdata.service.proxy.ClientFuture; -import com.bigdata.service.proxy.RemoteFuture; -import com.bigdata.service.proxy.RemoteFutureImpl; import com.bigdata.service.proxy.ThickFuture; import com.bigdata.zookeeper.ZooKeeperAccessor; @@ -104,7 +101,7 @@ public HAJournal(final Properties properties, final Quorum<HAGlue, QuorumService<HAGlue>> quorum) { - super(properties, quorum); + super(checkProperties(properties), quorum); /* * Note: We need this so pass it through to the HAGlue class below. @@ -115,12 +112,40 @@ writePipelineAddr = (InetSocketAddress) properties .get(Options.WRITE_PIPELINE_ADDR); - if (writePipelineAddr == null) + } + + /** + * Perform some checks on the {@link HAJournal} configuration properties. + * + * @param properties + * The configuration properties. + * + * @return The argument. 
+ */ + protected static Properties checkProperties(final Properties properties) { + + final BufferMode bufferMode = BufferMode.valueOf(properties + .getProperty(Options.BUFFER_MODE, Options.DEFAULT_BUFFER_MODE)); + + switch (bufferMode) { + case DiskRW: + break; + default: + throw new IllegalArgumentException(Options.BUFFER_MODE + "=" + + bufferMode + " : does not support HA"); + } + + if (properties.get(Options.WRITE_PIPELINE_ADDR) == null) { + throw new RuntimeException(Options.WRITE_PIPELINE_ADDR + " : required property not found."); + + } + return properties; + } - + @Override protected HAGlue newHAGlue(final UUID serviceId) { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-03 14:24:09 UTC (rev 6510) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-03 15:37:26 UTC (rev 6511) @@ -41,6 +41,7 @@ import com.bigdata.quorum.Quorum; import com.bigdata.quorum.QuorumActor; import com.bigdata.quorum.QuorumEvent; +import com.bigdata.quorum.QuorumException; import com.bigdata.quorum.QuorumListener; import com.bigdata.quorum.zk.ZKQuorumImpl; import com.bigdata.rdf.sail.webapp.ConfigParams; @@ -513,9 +514,10 @@ if (serviceItem == null) { - // Not found. - return null; - + // Not found (per the API). + throw new QuorumException("Service not found: uuid=" + + serviceId); + } return (HAGlue) serviceItem.service; This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
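
For reference, this is what the new trailing fileExtent argument looks like at a call site. A minimal sketch, not taken from the patch, of creating a scattered write cache against an open journal file: the names file, opener and quorum stand in for whatever the caller already has, and the extent passed in is simply the file's length at construction time so that it can travel with each HAWriteMessage to the downstream service.

    final IBufferAccess buf = DirectBufferPool.INSTANCE.acquire();

    // The then current extent of the backing file.
    final long fileExtent = file.length();

    final WriteCache cache = new WriteCache.FileChannelScatteredWriteCache(
            buf,                  // backing buffer
            true,                 // useChecksum
            quorum != null && quorum.isHighlyAvailable(), // isHighlyAvailable
            false,                // bufferHasData
            opener,               // IReopenChannel<FileChannel>
            fileExtent,           // NEW in r6511: then current file extent
            null                  // BufferedWrite (optional)
            );

On the receiving side the same value arrives as HAWriteMessage.getFileExtent(), which is what the writeRawBuffer() implementations above pass back into newWriteCache() when replaying a replicated buffer.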
From: <tho...@us...> - 2012-09-03 17:10:51
|
Revision: 6512 http://bigdata.svn.sourceforge.net/bigdata/?rev=6512&view=rev Author: thompsonbry Date: 2012-09-03 17:10:43 +0000 (Mon, 03 Sep 2012) Log Message: ----------- - done. Isolate all code along the lines (k+1)/2 so we can have a replication chain that does not allow resynchronization and which therefore requires exactly k nodes to allow writes. [We may want to extract an interface for this.] Added support for write replication using writeRawBuffer. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlue.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumCommitImpl.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/Quorum.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestHA3QuorumSemantics.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/StaticQuorum.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlue.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlue.java 2012-09-03 15:37:26 UTC (rev 6511) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlue.java 2012-09-03 17:10:43 UTC (rev 6512) @@ -6,7 +6,6 @@ import java.util.concurrent.Future; import com.bigdata.journal.AbstractJournal; -import com.bigdata.journal.IRootBlockView; /** * A {@link Remote} interface for methods supporting high availability for a set Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumCommitImpl.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumCommitImpl.java 2012-09-03 15:37:26 UTC (rev 6511) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumCommitImpl.java 2012-09-03 17:10:43 UTC (rev 6512) @@ -213,7 +213,7 @@ final int k = getQuorum().replicationFactor(); - if (nyes < (k + 1) / 2) { + if (!getQuorum().isQuorum(nyes)) { log.error("prepare rejected: nyes=" + nyes + " out of " + k); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-03 15:37:26 UTC (rev 6511) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-03 17:10:43 UTC (rev 6512) @@ -4924,7 +4924,8 @@ public Future<byte[]> readFromDisk(final long token, final UUID storeId, final long addr) { - final FutureTask<byte[]> ft = new FutureTask<byte[]>(new Callable<byte[]>() { + final FutureTask<byte[]> ft = new 
FutureTask<byte[]>( + new Callable<byte[]>() { public byte[] call() throws Exception { @@ -4972,10 +4973,14 @@ * @todo Trap truncation vs extend? */ try { + ((IHABufferStrategy) AbstractJournal.this._bufferStrategy) .setExtentForLocalStore(msg.getFileExtent()); + } catch (InterruptedException e) { + throw new RuntimeException(e); + } final Future<Void> ft = getQuorum().getClient() @@ -5000,10 +5005,10 @@ } - /** NOP. */ public Future<Void> bounceZookeeperConnection() { final FutureTask<Void> ft = new FutureTaskMon<Void>(new Runnable() { public void run() { + // NOP (no zookeeper at this layer). } }, null); ft.run(); @@ -5011,7 +5016,9 @@ } /** - * Does pipeline remove/add. + * {@inheritDoc} + * <p> + * This implementation does pipeline remove() followed by pipline add(). */ public Future<Void> moveToEndOfPipeline() { final FutureTask<Void> ft = new FutureTaskMon<Void>(new Runnable() { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java 2012-09-03 15:37:26 UTC (rev 6511) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java 2012-09-03 17:10:43 UTC (rev 6512) @@ -115,6 +115,22 @@ private final int k; /** + * The minimum #of joined services that constitutes a quorum as defined by + * <code>(k + 1) / 2 </code>. + * <p> + * Note: This constant is isolated here so we can have "quorums" that + * require ALL services to be joined. For example, a highly available system + * that can replicate writes but can not resynchronize services that were + * not present during a quorum commit would specify the same value for + * {@link #k} and {@link #kmeet} in order to ensure that all services are + * joined before the quorum "meets". + * + * TODO Specify means to compute this. It is fixed by the constructor right + * now. Maybe we should just pull out an interface for this? + */ + protected final int kmeet; + + /** * The current quorum token. This is volatile and will be cleared as soon as * the leader fails or the quorum breaks. * @@ -326,7 +342,9 @@ throw new IllegalArgumentException("k must be odd: " + k); this.k = k; - + + this.kmeet = (k + 1) / 2; + this.token = this.lastValidToken = NO_QUORUM; /* @@ -703,12 +721,22 @@ } final public int replicationFactor() { + // Note: [k] is final. return k; + } + public final boolean isQuorum(final int njoined) { + + return njoined >= kmeet; + + } + final public boolean isHighlyAvailable() { + return replicationFactor() > 1; + } final public long lastValidToken() { @@ -782,7 +810,7 @@ * * @return The lastCommitTime for which the service has cast its vote -or- * <code>null</code> if the service is not participating in a - * consensus of at least <code>(k+1)/2</code> services. + * consensus. */ private Long getLastCommitTimeConsensus(final UUID serviceId) { final Iterator<Map.Entry<Long, LinkedHashSet<UUID>>> itr = votes @@ -1032,8 +1060,7 @@ } /** - * Enforce a precondition that at least <code>(k+1)/2</code> services have - * joined. + * Enforce a precondition that at least {@link #kmeet} services have joined. * <p> * Note: This is used in certain places to work around concurrent * indeterminism. 
@@ -1045,7 +1072,7 @@ final long begin = System.nanoTime(); final long nanos = timeout; long remaining = nanos; - while (joined.size() < ((k + 1) / 2)) { + while (!isQuorum(joined.size())) { if (client == null) throw new AsynchronousQuorumCloseException(); if (!joinedChange.await(remaining, TimeUnit.NANOSECONDS)) @@ -1289,7 +1316,7 @@ // final public void setLastValidToken(final long newToken) { // lock.lock(); // try { -// if (joined.size() < ((k + 1) / 2)) +// if (!isQuorum(joined.size())) // throw new QuorumException(ERR_CAN_NOT_MEET // + " too few services are joined: #joined=" // + joined.size() + ", k=" + k); @@ -1325,7 +1352,7 @@ final public void setToken(final long newToken) { lock.lock(); try { - if (joined.size() < ((k + 1) / 2)) + if (!isQuorum(joined.size())) throw new QuorumException(ERR_CAN_NOT_MEET + " too few services are joined: #joined=" + joined.size() + ", k=" + k); @@ -2210,9 +2237,9 @@ // queue event. sendEvent(new E(QuorumEventEnum.CAST_VOTE, lastValidToken, token, serviceId, lastCommitTime)); - if (nvotes >= (k + 1) / 2) { + if (isQuorum(nvotes)) { final QuorumMember<S> client = getClientAsMember(); - if (nvotes == (k + 1) / 2) { + if (nvotes == kmeet) { if (client != null) { /* * Tell the client that consensus has been @@ -2228,7 +2255,7 @@ if (client != null) { final UUID clientId = client.getServiceId(); final UUID[] voteOrder = tmp.toArray(new UUID[0]); - if (nvotes == (k + 1) / 2 + if (nvotes == kmeet && clientId.equals(voteOrder[0])) { /* * The client is the first service in the vote @@ -2318,7 +2345,7 @@ votesChange.signalAll(); sendEvent(new E(QuorumEventEnum.WITHDRAW_VOTE, lastValidToken, token, serviceId)); - if (votes.size() + 1 == (k + 1) / 2) { + if (votes.size() + 1 == kmeet) { final QuorumMember<S> client = getClientAsMember(); if (client != null) { // Tell the client that the consensus was lost. @@ -2555,7 +2582,7 @@ log.info("serviceId=" + serviceId.toString()); // final int njoined = joined.size(); // final int k = replicationFactor(); -// final boolean willMeet = njoined == (k + 1) / 2; +// final boolean willMeet = njoined == kmeet; // if (willMeet) { // /* // * The quorum will meet. @@ -2604,7 +2631,7 @@ final boolean isLeader = leaderId.equals(clientId); if (isLeader) { final int njoined = joined.size(); - if (njoined >= ((k + 1) / 2) && token == NO_QUORUM) { + if (njoined >= kmeet && token == NO_QUORUM) { /* * Elect the leader. */ @@ -2704,13 +2731,13 @@ joinedChange.signalAll(); final int k = replicationFactor(); // iff the quorum was joined. - final boolean wasJoined = njoinedBefore >= ((k + 1) / 2); + final boolean wasJoined = njoinedBefore >= kmeet; // iff the leader just left the quorum. final boolean leaderLeft = wasJoined && serviceId.equals(leaderId); // iff the quorum will break. final boolean willBreak = leaderLeft - || (njoinedBefore == ((k + 1) / 2)); + || (njoinedBefore == kmeet); if (log.isInfoEnabled()) log.info("serviceId=" + serviceId + ", k=" + k + ", njoined(before)=" + njoinedBefore + ", wasJoined=" Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/Quorum.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/Quorum.java 2012-09-03 15:37:26 UTC (rev 6511) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/Quorum.java 2012-09-03 17:10:43 UTC (rev 6512) @@ -83,16 +83,40 @@ long NO_QUORUM = -1; /** - * Return <em>k</em>, the target replication factor. 
The replication factor - * must be a non-negative odd integer (1, 3, 5, 7, etc). A quorum exists - * only when <code>(k + 1)/2</code> physical services for the same logical - * service have an agreement on state. A single service with + * Return <em>k</em>, the target replication factor. + * <p> + * A normal quorum requires a simple majority and a replication factor that + * is a non-negative odd integer (1, 3, 5, 7, etc). For this case, a quorum + * exists only when <code>(k + 1)/2</code> physical services for the same + * logical service have an agreement on state. A single service with * <code>k := 1</code> is the degenerate case and has a minimum quorum size * of ONE (1). High availability is only possible when <code>k</code> is GT * ONE (1). Thus <code>k := 3</code> is the minimum value for which services * can be highly available and has a minimum quorum size of <code>2</code>. + * + * @see #isQuorum(int) */ int replicationFactor(); + + /** + * Return <code>true</code> iff the argument is large enough to constitute a + * quorum. + * <p> + * Note: This method makes it easier to write code that obeys policies other + * than simple majority rule. For example, a quorum could exist only when + * ALL services are joined. This alternative rule is useful when the + * services do not support a resynchronization policy. For such services, + * all services must participate in all commits since services can not + * recover if they miss a commit. Simple replication is an example of this + * policy. + * + * @param njoined + * The argument. + * + * @return <code>true</code> if that a quorum is met for that many joined + * services. + */ + boolean isQuorum(int njoined); /** * The current token for the quorum. The initial value before the quorum has Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestHA3QuorumSemantics.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestHA3QuorumSemantics.java 2012-09-03 15:37:26 UTC (rev 6511) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestHA3QuorumSemantics.java 2012-09-03 17:10:43 UTC (rev 6512) @@ -1030,8 +1030,24 @@ assertCondition(new Runnable() { public void run() { - // services have voted for a single lastCommitTime. + // Services have voted for a single lastCommitTime. assertEquals(0, quorum0.getVotes().size()); + /** + * TODO The assert above occasionally fails with this trace. + * + * <pre> + * junit.framework.AssertionFailedError: expected:<0> but was:<1> + * at junit.framework.Assert.fail(Assert.java:47) + * at junit.framework.Assert.failNotEquals(Assert.java:282) + * at junit.framework.Assert.assertEquals(Assert.java:64) + * at junit.framework.Assert.assertEquals(Assert.java:201) + * at junit.framework.Assert.assertEquals(Assert.java:207) + * at com.bigdata.quorum.TestHA3QuorumSemantics$19.run(TestHA3QuorumSemantics.java:1034) + * at com.bigdata.quorum.AbstractQuorumTestCase.assertCondition(AbstractQuorumTestCase.java:184) + * at com.bigdata.quorum.AbstractQuorumTestCase.assertCondition(AbstractQuorumTestCase.java:225) + * at com.bigdata.quorum.TestHA3QuorumSemantics.test_serviceJoin3_simple(TestHA3QuorumSemantics.java:1031) + * </pre> + */ // verify the vote order. 
assertEquals(null, quorum0.getVotes().get(lastCommitTime)); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config 2012-09-03 15:37:26 UTC (rev 6511) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config 2012-09-03 17:10:43 UTC (rev 6512) @@ -27,6 +27,10 @@ import com.bigdata.jini.start.config.*; import com.bigdata.jini.util.ConfigMath; +import org.apache.zookeeper.ZooDefs; +import org.apache.zookeeper.data.ACL; +import org.apache.zookeeper.data.Id; + // imports for various options. import com.bigdata.btree.IndexMetadata; import com.bigdata.btree.keys.KeyBuilder; @@ -62,7 +66,7 @@ private static fedname = "benchmark"; // NanoSparqlServer (http) port. - private static nssPort = ConfigMath.add(8090,1); + private static nssPort = 8090; // write replication pipeline port. private static haPort = ConfigMath.add(9090,1); @@ -111,6 +115,27 @@ }; + /** + * A common point to set the Zookeeper client's requested + * sessionTimeout and the jini lease timeout. The default lease + * renewal period for jini is 5 minutes while for zookeeper it is + * more like 5 seconds. This puts the two systems onto a similar + * timeout period so that a disconnected client is more likely to + * be noticed in roughly the same period of time for either + * system. A value larger than the zookeeper default helps to + * prevent client disconnects under sustained heavy load. + */ + + // jini + static private leaseTimeout = ConfigMath.m2ms(60);// 20s=20000; 5m=300000; + + // zookeeper + static private sessionTimeout = (int)ConfigMath.m2ms(10);// was 5m 20s=20000; 5m=300000; + + /* + * Configuration for default KB. + */ + private static namespace = "kb"; private static kb = new NV[] { @@ -137,7 +162,43 @@ ( namespace + "." + SPORelation.NAME_SPO_RELATION, IndexMetadata.Options.BTREE_BRANCHING_FACTOR ), "1024"), + }; +} + +/* + * Zookeeper client configuration. + */ +org.apache.zookeeper.ZooKeeper { + + /* Root znode for the federation instance. */ + zroot = "/" + bigdata.fedname; + + /* A comma separated list of host:port pairs, where the port is + * the CLIENT port for the zookeeper server instance. + */ + // standalone. + servers = "localhost:2081"; + // ensemble +// servers = bigdata.zoo1+":2181" +// + ","+bigdata.zoo2+":2181" +// + ","+bigdata.zoo3+":2181" +// ; + + /* Session timeout (optional). */ + sessionTimeout = bigdata.sessionTimeout; + + /* + * ACL for the zookeeper nodes created by the bigdata federation. + * + * Note: zookeeper ACLs are not transmitted over secure channels + * and are placed into plain text Configuration files by the + * ServicesManagerServer. + */ + acl = new ACL[] { + + new ACL(ZooDefs.Perms.ALL, new Id("world", "anyone")) + }; } @@ -150,8 +211,6 @@ */ com.bigdata.journal.jini.ha.HAJournalServer { - zroot = "/" + bigdata.fedname; - serviceDir = bigdata.serviceDir; groups = bigdata.groups; @@ -164,6 +223,9 @@ }; + // TODO Support this : The lease timeout for jini joins. + // "net.jini.lookup.JoinManager.maxLeaseDuration="+bigdata.leaseTimeout + // Where the service will expose its write replication listener. 
writePipelineAddr = new InetSocketAddress("localhost",bigdata.haPort); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config 2012-09-03 15:37:26 UTC (rev 6511) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config 2012-09-03 17:10:43 UTC (rev 6512) @@ -27,6 +27,10 @@ import com.bigdata.jini.start.config.*; import com.bigdata.jini.util.ConfigMath; +import org.apache.zookeeper.ZooDefs; +import org.apache.zookeeper.data.ACL; +import org.apache.zookeeper.data.Id; + // imports for various options. import com.bigdata.btree.IndexMetadata; import com.bigdata.btree.keys.KeyBuilder; @@ -111,6 +115,27 @@ }; + /** + * A common point to set the Zookeeper client's requested + * sessionTimeout and the jini lease timeout. The default lease + * renewal period for jini is 5 minutes while for zookeeper it is + * more like 5 seconds. This puts the two systems onto a similar + * timeout period so that a disconnected client is more likely to + * be noticed in roughly the same period of time for either + * system. A value larger than the zookeeper default helps to + * prevent client disconnects under sustained heavy load. + */ + + // jini + static private leaseTimeout = ConfigMath.m2ms(60);// 20s=20000; 5m=300000; + + // zookeeper + static private sessionTimeout = (int)ConfigMath.m2ms(10);// was 5m 20s=20000; 5m=300000; + + /* + * Configuration for default KB. + */ + private static namespace = "kb"; private static kb = new NV[] { @@ -137,7 +162,43 @@ ( namespace + "." + SPORelation.NAME_SPO_RELATION, IndexMetadata.Options.BTREE_BRANCHING_FACTOR ), "1024"), + }; +} + +/* + * Zookeeper client configuration. + */ +org.apache.zookeeper.ZooKeeper { + + /* Root znode for the federation instance. */ + zroot = "/" + bigdata.fedname; + + /* A comma separated list of host:port pairs, where the port is + * the CLIENT port for the zookeeper server instance. + */ + // standalone. + servers = "localhost:2081"; + // ensemble +// servers = bigdata.zoo1+":2181" +// + ","+bigdata.zoo2+":2181" +// + ","+bigdata.zoo3+":2181" +// ; + + /* Session timeout (optional). */ + sessionTimeout = bigdata.sessionTimeout; + + /* + * ACL for the zookeeper nodes created by the bigdata federation. + * + * Note: zookeeper ACLs are not transmitted over secure channels + * and are placed into plain text Configuration files by the + * ServicesManagerServer. + */ + acl = new ACL[] { + + new ACL(ZooDefs.Perms.ALL, new Id("world", "anyone")) + }; } @@ -150,8 +211,6 @@ */ com.bigdata.journal.jini.ha.HAJournalServer { - zroot = "/" + bigdata.fedname; - serviceDir = bigdata.serviceDir; groups = bigdata.groups; @@ -164,6 +223,9 @@ }; + // TODO Support this : The lease timeout for jini joins. + // "net.jini.lookup.JoinManager.maxLeaseDuration="+bigdata.leaseTimeout + // Where the service will expose its write replication listener. 
writePipelineAddr = new InetSocketAddress("localhost",bigdata.haPort); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config 2012-09-03 15:37:26 UTC (rev 6511) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config 2012-09-03 17:10:43 UTC (rev 6512) @@ -27,6 +27,10 @@ import com.bigdata.jini.start.config.*; import com.bigdata.jini.util.ConfigMath; +import org.apache.zookeeper.ZooDefs; +import org.apache.zookeeper.data.ACL; +import org.apache.zookeeper.data.Id; + // imports for various options. import com.bigdata.btree.IndexMetadata; import com.bigdata.btree.keys.KeyBuilder; @@ -111,6 +115,27 @@ }; + /** + * A common point to set the Zookeeper client's requested + * sessionTimeout and the jini lease timeout. The default lease + * renewal period for jini is 5 minutes while for zookeeper it is + * more like 5 seconds. This puts the two systems onto a similar + * timeout period so that a disconnected client is more likely to + * be noticed in roughly the same period of time for either + * system. A value larger than the zookeeper default helps to + * prevent client disconnects under sustained heavy load. + */ + + // jini + static private leaseTimeout = ConfigMath.m2ms(60);// 20s=20000; 5m=300000; + + // zookeeper + static private sessionTimeout = (int)ConfigMath.m2ms(10);// was 5m 20s=20000; 5m=300000; + + /* + * Configuration for default KB. + */ + private static namespace = "kb"; private static kb = new NV[] { @@ -138,9 +163,47 @@ IndexMetadata.Options.BTREE_BRANCHING_FACTOR ), "1024"), }; + } /* + * Zookeeper client configuration. + */ +org.apache.zookeeper.ZooKeeper { + + /* Root znode for the federation instance. */ + zroot = "/" + bigdata.fedname; + + /* A comma separated list of host:port pairs, where the port is + * the CLIENT port for the zookeeper server instance. + */ + // standalone. + servers = "localhost:2081"; + // ensemble +// servers = bigdata.zoo1+":2181" +// + ","+bigdata.zoo2+":2181" +// + ","+bigdata.zoo3+":2181" +// ; + + /* Session timeout (optional). */ + sessionTimeout = bigdata.sessionTimeout; + + /* + * ACL for the zookeeper nodes created by the bigdata federation. + * + * Note: zookeeper ACLs are not transmitted over secure channels + * and are placed into plain text Configuration files by the + * ServicesManagerServer. + */ + acl = new ACL[] { + + new ACL(ZooDefs.Perms.ALL, new Id("world", "anyone")) + + }; + +} + +/* * You should not have to edit below this line. */ @@ -149,8 +212,6 @@ */ com.bigdata.journal.jini.ha.HAJournalServer { - zroot = "/" + bigdata.fedname; - serviceDir = bigdata.serviceDir; groups = bigdata.groups; @@ -163,6 +224,9 @@ }; + // TODO Support this : The lease timeout for jini joins. + // "net.jini.lookup.JoinManager.maxLeaseDuration="+bigdata.leaseTimeout + // Where the service will expose its write replication listener. 
writePipelineAddr = new InetSocketAddress("localhost",bigdata.haPort); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java 2012-09-03 15:37:26 UTC (rev 6511) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java 2012-09-03 17:10:43 UTC (rev 6512) @@ -26,27 +26,22 @@ import java.io.IOException; import java.io.Serializable; import java.net.InetSocketAddress; -import java.rmi.Remote; import java.util.Properties; import java.util.UUID; import java.util.concurrent.Future; +import java.util.concurrent.FutureTask; import net.jini.config.Configuration; import net.jini.export.Exporter; -import net.jini.jeri.BasicILFactory; -import net.jini.jeri.BasicJeriExporter; -import net.jini.jeri.InvocationLayerFactory; -import net.jini.jeri.tcp.TcpServerEndpoint; -import org.apache.log4j.Logger; - +import com.bigdata.concurrent.FutureTaskMon; import com.bigdata.ha.HAGlue; import com.bigdata.ha.QuorumService; import com.bigdata.journal.BufferMode; import com.bigdata.journal.Journal; import com.bigdata.quorum.Quorum; +import com.bigdata.quorum.zk.ZKQuorumImpl; import com.bigdata.service.proxy.ThickFuture; -import com.bigdata.zookeeper.ZooKeeperAccessor; /** * A {@link Journal} that that participates in a write replication pipeline. The @@ -77,7 +72,7 @@ */ public class HAJournal extends Journal { - private static final Logger log = Logger.getLogger(HAJournal.class); +// private static final Logger log = Logger.getLogger(HAJournal.class); public interface Options extends Journal.Options { @@ -172,49 +167,65 @@ } - /** - * {@inheritDoc} - * - * TODO This should actually bounce the zk connection. We need to pass - * the {@link ZooKeeperAccessor} into the constructor for that. - */ @Override public Future<Void> bounceZookeeperConnection() { - return super.bounceZookeeperConnection(); + final FutureTask<Void> ft = new FutureTaskMon<Void>(new Runnable() { + @SuppressWarnings("rawtypes") + public void run() { + + if(getQuorum() instanceof ZKQuorumImpl) { + + try { + + // Close the current connection (if any). + ((ZKQuorumImpl) getQuorum()).getZookeeper().close(); + + } catch (InterruptedException e) { + + // Propagate the interrupt. + Thread.currentThread().interrupt(); + + } + + } + } + }, null); + ft.run(); + return getProxy(ft); + +// } - /** - * Note: The invocation layer factory is reused for each exported proxy (but - * the exporter itself is paired 1:1 with the exported proxy). - * - * TODO Configuration for this stuff. - */ - final private InvocationLayerFactory invocationLayerFactory = new BasicILFactory(); - - /** - * Return an {@link Exporter} for a single object that implements one or - * more {@link Remote} interfaces. - * <p> - * Note: This uses TCP Server sockets. - * <p> - * Note: This uses [port := 0], which means a random port is assigned. - * <p> - * Note: The VM WILL NOT be kept alive by the exported proxy (keepAlive is - * <code>false</code>). - * - * @param enableDGC - * if distributed garbage collection should be used for the - * object to be exported. - * - * @return The {@link Exporter}. 
- */ - protected Exporter getExporter(final boolean enableDGC) { - - return new BasicJeriExporter(TcpServerEndpoint - .getInstance(0/* port */), invocationLayerFactory, enableDGC, - false/* keepAlive */); - - } +// /** +// * Note: The invocation layer factory is reused for each exported proxy (but +// * the exporter itself is paired 1:1 with the exported proxy). +// */ +// final private InvocationLayerFactory invocationLayerFactory = new BasicILFactory(); +// +// /** +// * Return an {@link Exporter} for a single object that implements one or +// * more {@link Remote} interfaces. +// * <p> +// * Note: This uses TCP Server sockets. +// * <p> +// * Note: This uses [port := 0], which means a random port is assigned. +// * <p> +// * Note: The VM WILL NOT be kept alive by the exported proxy (keepAlive is +// * <code>false</code>). +// * +// * @param enableDGC +// * if distributed garbage collection should be used for the +// * object to be exported. +// * +// * @return The {@link Exporter}. +// */ +// protected Exporter getExporter(final boolean enableDGC) { +// +// return new BasicJeriExporter(TcpServerEndpoint +// .getInstance(0/* port */), invocationLayerFactory, enableDGC, +// false/* keepAlive */); +// +// } /** * Note that {@link Future}s generated by @@ -229,18 +240,16 @@ * * @return A proxy for that {@link Future} that masquerades any RMI * exceptions. - * - * TODO All of these methods that return a {@link Future} are - * going to cause a problem with DGC. They should just do the - * work whenever possible, especially for methods that have high - * overhead. One workaround is to return a {@link ThickFuture}. */ @Override protected <E> Future<E> getProxy(final Future<E> future) { /* * This was borrowed from a fix for a DGC thread leak on the - * clustered database. + * clustered database. Returning a Future so the client can + * wait on the outcome is often less desirable than having + * the service compute the Future and then return a think + * future. * * @see https://sourceforge.net/apps/trac/bigdata/ticket/433 * Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-03 15:37:26 UTC (rev 6511) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-03 17:10:43 UTC (rev 6512) @@ -26,7 +26,6 @@ import org.apache.log4j.MDC; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.KeeperException.NodeExistsException; -import org.apache.zookeeper.ZooDefs.Ids; import org.apache.zookeeper.data.ACL; import org.eclipse.jetty.server.Server; @@ -34,9 +33,11 @@ import com.bigdata.ha.HAGlueDelegate; import com.bigdata.ha.QuorumService; import com.bigdata.ha.QuorumServiceBase; -import com.bigdata.jini.util.ConfigMath; +import com.bigdata.io.IBufferAccess; +import com.bigdata.jini.start.config.ZookeeperClientConfig; import com.bigdata.jini.util.JiniUtil; import com.bigdata.journal.AbstractJournal; +import com.bigdata.journal.RWStrategy; import com.bigdata.journal.ha.HAWriteMessage; import com.bigdata.quorum.Quorum; import com.bigdata.quorum.QuorumActor; @@ -74,11 +75,6 @@ String COMPONENT = HAJournalServer.class.getName(); /** - * The root path under which the logical services are arranged. - */ - String ZROOT = "zroot"; - - /** * The target replication factor (k). 
*/ String REPLICATION_FACTOR = "replicationFactor"; @@ -126,6 +122,7 @@ private UUID serviceUUID; private HAGlue haGlueService; + private ZookeeperClientConfig zkClientConfig; private String logicalServiceId; /** @@ -226,12 +223,10 @@ * Setup the Quorum / HAJournal. */ - final String zroot = (String) config - .getEntry(ConfigurationOptions.COMPONENT, - ConfigurationOptions.ZROOT, - String.class); + zkClientConfig = new ZookeeperClientConfig(config); - logicalServiceId = zroot + "/" + HAJournalServer.class.getName(); + logicalServiceId = zkClientConfig.zroot + "/" + + HAJournalServer.class.getName(); final int replicationFactor = (Integer) config.getEntry( ConfigurationOptions.COMPONENT, @@ -275,9 +270,9 @@ * Zookeeper quorum. */ - final List<ACL> acl = Ids.OPEN_ACL_UNSAFE; // TODO CONFIGURATION - final String zoohosts = "localhost:2081"; // TODO CONFIGURATION (zooclient) - final int sessionTimeout = (int) ConfigMath.m2ms(10); + final List<ACL> acl = zkClientConfig.acl; + final String zoohosts = zkClientConfig.servers; + final int sessionTimeout = zkClientConfig.sessionTimeout; final ZooKeeperAccessor zka = new ZooKeeperAccessor( zoohosts, sessionTimeout); @@ -296,8 +291,10 @@ * Ensure key znodes exist. */ try { - zka.getZookeeper().create(zroot, new byte[] {/* data */}, - acl, CreateMode.PERSISTENT); + zka.getZookeeper() + .create(zkClientConfig.zroot, + new byte[] {/* data */}, acl, + CreateMode.PERSISTENT); } catch (NodeExistsException ex) { // ignore. } @@ -524,27 +521,42 @@ } - /** - * FIXME handle replicated writes. Probably just dump it on the - * jnl's WriteCacheService. Or maybe wrap it back up using a - * WriteCache and let that lay it down onto the disk. - */ @Override - protected void handleReplicatedWrite(HAWriteMessage msg, - ByteBuffer data) throws Exception { -// new WriteCache() { -// -// @Override -// protected boolean writeOnChannel(ByteBuffer buf, long firstOffset, -// Map<Long, RecordMetadata> recordMap, long nanos) -// throws InterruptedException, TimeoutException, IOException { -// // TODO Auto-generated method stub -// return false; -// } -// }; - - throw new UnsupportedOperationException(); + protected void handleReplicatedWrite(final HAWriteMessage msg, + final ByteBuffer data) throws Exception { + + /* + * Note: the ByteBuffer is owned by the HAReceiveService. This + * just wraps up the reference to the ByteBuffer with an + * interface that is also used by the WriteCache to control + * access to ByteBuffers allocated from the DirectBufferPool. + * However, release() is a NOP on this implementation since the + * ByteBuffer is owner by the HAReceiveService. + */ + final IBufferAccess b = new IBufferAccess() { + + @Override + public void release(long timeout, TimeUnit unit) + throws InterruptedException { + // NOP + } + + @Override + public void release() throws InterruptedException { + // NOP + } + + @Override + public ByteBuffer buffer() { + return data; + } + }; + + ((RWStrategy) journal.getBufferStrategy()).writeRawBuffer(msg, + b); + } + }; } @@ -651,19 +663,15 @@ final HAJournalServer server = new HAJournalServer(args, new FakeLifeCycle()); -// // FIXME Remove this. It is here to look for a quorum meet. -// try { -// server.journal.getQuorum().awaitQuorum(6L,TimeUnit.SECONDS); -// } catch (Exception e) { -// log.error(e, e); -// } - // Wait for the HAJournalServer to terminate. server.run(); try { - // Wait for the jetty server to terminate. 
- server.jettyServer.join(); + final Server tmp = server.jettyServer; + if (tmp != null) { + // Wait for the jetty server to terminate. + tmp.join(); + } } catch (InterruptedException e) { log.warn(e); } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/StaticQuorum.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/StaticQuorum.java 2012-09-03 15:37:26 UTC (rev 6511) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/StaticQuorum.java 2012-09-03 17:10:43 UTC (rev 6512) @@ -142,6 +142,13 @@ } @Override + public boolean isQuorum(int n) { + + return n >= ((pipeline.length + 1) / 2); + + } + + @Override public long token() { return token; Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java 2012-09-03 15:37:26 UTC (rev 6511) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java 2012-09-03 17:10:43 UTC (rev 6512) @@ -118,7 +118,7 @@ * * @throws InterruptedException */ - protected ZooKeeper getZookeeper() throws InterruptedException { + public ZooKeeper getZookeeper() throws InterruptedException { return zka.getZookeeper(); @@ -1705,7 +1705,7 @@ } else if (lastValidToken() != state.lastValidToken()) { final UUID[] joined = getJoined(); final int k = replicationFactor(); - if (joined.length >= ((k + 1) / 2)) { + if (isQuorum(joined.length)) { setToken(state.lastValidToken()); } else { log.warn("Can not set token - not enough joined services: k=" This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
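Note on the quorum changes in the revision above: the hard-coded (k + 1) / 2 majority tests are consolidated into the kmeet field and the new Quorum.isQuorum(int) predicate, which also leaves room for stricter policies such as requiring ALL k services to be joined before the quorum meets. The following standalone Java sketch only illustrates that arithmetic; it is not the bigdata AbstractQuorum class, and the class name QuorumSizeSketch is made up for the example.

    public class QuorumSizeSketch {

        /** Simple-majority threshold: kmeet = (k + 1) / 2; k must be a positive odd integer. */
        static int majorityKmeet(final int k) {
            if (k < 1 || (k % 2) == 0)
                throw new IllegalArgumentException("k must be a positive odd integer: " + k);
            return (k + 1) / 2;
        }

        /** The predicate introduced by the revision: a quorum is met iff njoined >= kmeet. */
        static boolean isQuorum(final int njoined, final int kmeet) {
            return njoined >= kmeet;
        }

        public static void main(final String[] args) {
            // Simple majority: k=1 meets with 1 joined service, k=3 with 2, k=5 with 3.
            for (int k : new int[] { 1, 3, 5 }) {
                final int kmeet = majorityKmeet(k);
                System.out.println("k=" + k + ", kmeet=" + kmeet
                        + ", isQuorum(kmeet-1)=" + isQuorum(kmeet - 1, kmeet)
                        + ", isQuorum(kmeet)=" + isQuorum(kmeet, kmeet));
            }
            // The alternative policy described in the javadoc: services that can not
            // resynchronize after a missed commit require ALL services, i.e. kmeet == k.
            final int k = 3;
            System.out.println("all-joined policy (k=3): isQuorum(2)=" + isQuorum(2, k)
                    + ", isQuorum(3)=" + isQuorum(3, k));
        }
    }
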
From: <tho...@us...> - 2012-09-04 14:20:31
|
Revision: 6519 http://bigdata.svn.sourceforge.net/bigdata/?rev=6519&view=rev Author: thompsonbry Date: 2012-09-04 14:20:21 +0000 (Tue, 04 Sep 2012) Log Message: ----------- I have added a listener to the BigdataSailConnection and a simple event model. I have also done a trial integration in the SPARQL UPDATE execution as performed through the NSS. The listener interface is working. The only odd bit is that I am seeing negative elapsed times when I convert from nanoseconds to milliseconds to paint the HTTP response. The interface includes incremental reporting on LOAD. I have not tested this much yet. @see https://sourceforge.net/apps/trac/bigdata/ticket/597 (SPARQL UPDATE LISTENER) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdate.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdateContext.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/ASTEvalHelper.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/ISPARQLUpdateListener.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/SPARQLUpdateEvent.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdate.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdate.java 2012-09-04 14:17:54 UTC (rev 6518) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdate.java 2012-09-04 14:20:21 UTC (rev 6519) @@ -93,6 +93,7 @@ import com.bigdata.rdf.model.BigdataURI; import com.bigdata.rdf.rio.IRDFParserOptions; import com.bigdata.rdf.sail.BigdataSail; +import com.bigdata.rdf.sail.SPARQLUpdateEvent; import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; import com.bigdata.rdf.sail.Sesame2BigdataIterator; import com.bigdata.rdf.sparql.ast.ASTContainer; @@ -204,8 +205,7 @@ } /** - * Generate physical plan for the update operations (attached to the AST as - * a side-effect). + * Convert and/or execute the update request. * * @throws Exception */ @@ -270,9 +270,17 @@ } + final long begin = System.nanoTime(); + // convert/run the update operation. 
left = convertUpdateSwitch(left, op, context); + final long elapsed = begin - System.nanoTime(); + + // notify listener(s) + context.conn.getSailConnection().fireEvent( + new SPARQLUpdateEvent(op, elapsed)); + updateIndex++; } @@ -1236,7 +1244,8 @@ + defaultContext); doLoad(context.conn.getSailConnection(), sourceURL, - defaultContext, op.getRDFParserOptions(), nmodified); + defaultContext, op.getRDFParserOptions(), nmodified, + op); } catch (Throwable t) { @@ -1336,9 +1345,9 @@ */ private static void doLoad(final BigdataSailConnection conn, final URL sourceURL, final URI defaultContext, - final IRDFParserOptions parserOptions, - final AtomicLong nmodified) throws IOException, RDFParseException, - RDFHandlerException { + final IRDFParserOptions parserOptions, final AtomicLong nmodified, + final LoadGraph op) throws IOException, + RDFParseException, RDFHandlerException { // Use the default context if one was given and otherwise // the URI from which the data are being read. @@ -1423,7 +1432,7 @@ rdfParser.setDatatypeHandling(parserOptions.getDatatypeHandling()); rdfParser.setRDFHandler(new AddStatementHandler(conn, nmodified, - defactoContext)); + defactoContext, op)); /* * Setup the input stream. @@ -1497,12 +1506,15 @@ */ private static class AddStatementHandler extends RDFHandlerBase { + private final LoadGraph op; + private final long beginNanos; private final BigdataSailConnection conn; private final AtomicLong nmodified; private final Resource[] defaultContexts; public AddStatementHandler(final BigdataSailConnection conn, - final AtomicLong nmodified, final Resource defaultContext) { + final AtomicLong nmodified, final Resource defaultContext, + final LoadGraph op) { this.conn = conn; this.nmodified = nmodified; final boolean quads = conn.getTripleStore().isQuads(); @@ -1512,6 +1524,8 @@ } else { this.defaultContexts = new Resource[0]; } + this.op = op; + this.beginNanos = System.nanoTime(); } public void handleStatement(final Statement stmt) @@ -1533,8 +1547,18 @@ } - nmodified.incrementAndGet(); + final long nparsed = nmodified.incrementAndGet(); + if (nparsed % 10000 == 0L) { + + final long elapsed = System.nanoTime() - beginNanos; + + // notify listener(s) + conn.fireEvent(new SPARQLUpdateEvent.LoadProgress(op, elapsed, + nparsed)); + + } + } } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdateContext.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdateContext.java 2012-09-04 14:17:54 UTC (rev 6518) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUpdateContext.java 2012-09-04 14:20:21 UTC (rev 6519) @@ -146,5 +146,5 @@ } private BigdataURI nullGraph = null; - + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/ASTEvalHelper.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/ASTEvalHelper.java 2012-09-04 14:17:54 UTC (rev 6518) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/ASTEvalHelper.java 2012-09-04 14:20:21 UTC (rev 6519) @@ -1036,7 +1036,7 @@ } } - + /** * Apply the {@link Dataset} to each {@link DeleteInsertGraph} in the UPDATE * request. 
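Note on the elapsed time computed in the AST2BOpUpdate hunk above: the value is taken as begin - System.nanoTime(), which is negative whenever any time has passed, and that matches the negative millisecond values mentioned in the log message. A small standalone sketch of the sign issue follows; the Thread.sleep() call merely stands in for the update operation and is not part of the original code, so treat the snippet as an illustration rather than the committed fix.

    import java.util.concurrent.TimeUnit;

    public class ElapsedTimeSketch {
        public static void main(final String[] args) throws InterruptedException {
            final long begin = System.nanoTime();
            Thread.sleep(50); // stand-in for convertUpdateSwitch(left, op, context)
            // Operand order as in the hunk above: start minus now, always <= 0 here.
            final long wrong = begin - System.nanoTime();
            // Conventional order: now minus start, a non-negative duration.
            final long right = System.nanoTime() - begin;
            System.out.println("wrong=" + TimeUnit.NANOSECONDS.toMillis(wrong) + "ms"
                    + ", right=" + TimeUnit.NANOSECONDS.toMillis(right) + "ms");
        }
    }
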
Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2012-09-04 14:17:54 UTC (rev 6518) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2012-09-04 14:20:21 UTC (rev 6519) @@ -68,6 +68,7 @@ import java.util.Properties; import java.util.Vector; import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.CopyOnWriteArraySet; import java.util.concurrent.Semaphore; import java.util.concurrent.locks.Lock; import java.util.concurrent.locks.ReentrantReadWriteLock; @@ -147,6 +148,7 @@ import com.bigdata.striterator.CloseableIteratorWrapper; import com.bigdata.striterator.IChunkedIterator; import com.bigdata.striterator.IChunkedOrderedIterator; +import com.bigdata.util.InnerCause; import cutthecrap.utils.striterators.Expander; import cutthecrap.utils.striterators.Striterator; @@ -1667,6 +1669,13 @@ this.truthMaintenance = BigdataSail.this.truthMaintenanceIsSupportable && unisolated; + /* + * Permit registration of SPARQL UPDATE listeners iff the connection + * is mutable. + */ + listeners = readOnly ? null + : new CopyOnWriteArraySet<ISPARQLUpdateListener>(); + } /** @@ -3482,6 +3491,87 @@ */ protected DelegatingChangeLog changeLog; + /* + * SPARQL UPDATE LISTENER API + */ + + /** Registered listeners. */ + private final CopyOnWriteArraySet<ISPARQLUpdateListener> listeners; + + /** Add a SPARQL UDPATE listener. */ + public void addListener(final ISPARQLUpdateListener l) { + + if(isReadOnly()) + throw new UnsupportedOperationException(); + + if (l == null) + throw new IllegalArgumentException(); + + listeners.add(l); + + } + + /** Remove a SPARQL UDPATE listener. */ + public void removeListener(final ISPARQLUpdateListener l) { + + if(isReadOnly()) + throw new UnsupportedOperationException(); + + if (l == null) + throw new IllegalArgumentException(); + + listeners.remove(l); + + } + + /** + * Send an event to all registered listeners. + */ + public void fireEvent(final SPARQLUpdateEvent e) { + + if(isReadOnly()) + throw new UnsupportedOperationException(); + + if (e == null) + throw new IllegalArgumentException(); + + if(listeners.isEmpty()) { + + // NOP + return; + + } + + final ISPARQLUpdateListener[] a = listeners + .toArray(new ISPARQLUpdateListener[0]); + + for (ISPARQLUpdateListener l : a) { + + final ISPARQLUpdateListener listener = l; + + try { + + // send event. + listener.updateEvent(e); + + } catch (Throwable t) { + + if (InnerCause.isInnerCause(t, InterruptedException.class)) { + + // Propagate interrupt. + throw new RuntimeException(t); + + } + + // Log and ignore. + log.error(t, t); + + } + + } + + } + } // class BigdataSailConnection /** Added: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/ISPARQLUpdateListener.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/ISPARQLUpdateListener.java (rev 0) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/ISPARQLUpdateListener.java 2012-09-04 14:20:21 UTC (rev 6519) @@ -0,0 +1,36 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2010. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... 
+ +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + */ +package com.bigdata.rdf.sail; + +/** + * A listener for SPARQL UPDATE operations. Listeners MUST NOT block in the + * event thread. + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + */ +public interface ISPARQLUpdateListener { + + void updateEvent(SPARQLUpdateEvent e); + +} Added: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/SPARQLUpdateEvent.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/SPARQLUpdateEvent.java (rev 0) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/SPARQLUpdateEvent.java 2012-09-04 14:20:21 UTC (rev 6519) @@ -0,0 +1,103 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2010. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + */ +package com.bigdata.rdf.sail; + +import com.bigdata.rdf.sparql.ast.Update; + +/** + * An event reflecting progress for some sequence of SPARQL UPDATE operations. + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + * + * TODO Support incremental reporting on long running operations, + * especially LOAD. + */ +public class SPARQLUpdateEvent { + + private final Update op; + private final long elapsed; + + public SPARQLUpdateEvent(final Update op, final long elapsed) { + + this.op = op; + this.elapsed = elapsed; + + } + + /** + * Return the update operation. + */ + public Update getUpdate() { + return op; + } + + /** + * Return the elapsed runtime for that update operation in nanoseconds. + */ + public long getElapsedNanos() { + return elapsed; + } + + /** + * Incremental progress report during <code>LOAD</code>. + */ + public static class LoadProgress extends SPARQLUpdateEvent { + + private final long nparsed; + + public LoadProgress(final Update op, final long elapsed, + final long nparsed) { + + super(op, elapsed); + + this.nparsed = nparsed; + + } + + /** + * Return the #of statements parsed as of the moment that this event was + * generated. + * <P> + * Note: Statements are incrementally written onto the backing store. + * Thus, the parser will often run ahead of the actual index writes. 
+ * This can manifest as periods during which the "parsed" statement + * count climbs quickly followed by periods in which it is stable. That + * occurs when the parser is blocked because it can not add statements + * to the connection while the connection is being flushed to the + * database. There is a ticket to fix this blocking behavior so the + * parser can continue to run while we are flushing statements onto the + * database - see below. + * + * @see <a href="https://sourceforge.net/apps/trac/bigdata/ticket/529"> + * Improve load performance </a> + */ + public long getParsedCount() { + + return nparsed; + + } + + } + +} Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java 2012-09-04 14:17:54 UTC (rev 6518) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java 2012-09-04 14:20:21 UTC (rev 6519) @@ -86,6 +86,8 @@ import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; import com.bigdata.rdf.sail.BigdataSailTupleQuery; import com.bigdata.rdf.sail.BigdataSailUpdate; +import com.bigdata.rdf.sail.ISPARQLUpdateListener; +import com.bigdata.rdf.sail.SPARQLUpdateEvent; import com.bigdata.rdf.sail.sparql.Bigdata2ASTSPARQLParser; import com.bigdata.rdf.sparql.ast.ASTContainer; import com.bigdata.rdf.sparql.ast.QueryHints; @@ -666,7 +668,8 @@ * query begins to execute and storing the {@link RunningQuery} in the * {@link #m_queries} map. * - * @param The connection. + * @param cxn + * The connection. */ final BigdataSailUpdate setupUpdate( final BigdataSailRepositoryConnection cxn) { @@ -1006,13 +1009,17 @@ } + /** + * {@inheritDoc} + * <p> + * This executes the SPARQL UPDATE and formats the HTTP response. + */ protected void doQuery(final BigdataSailRepositoryConnection cxn, final OutputStream os) throws Exception { + // Prepare the UPDATE request. final BigdataSailUpdate update = setupUpdate(cxn); - this.commitTime.set(update.execute2()); - /* * Setup the response headers. */ @@ -1028,24 +1035,77 @@ final HTMLBuilder doc = new HTMLBuilder(charset.name(), w); + XMLBuilder.Node current = doc.root("html"); { + current = current.node("head"); + current.node("meta") + .attr("http-equiv", "Content-Type") + .attr("content", + "text/html;charset=" + charset.name()) + .close(); + current.node("title").textNoEncode("bigdata®") + .close(); + current = current.close();// close the head. + } - XMLBuilder.Node current = doc.root("html"); - { - current = current.node("head"); - current.node("meta") - .attr("http-equiv", "Content-Type") - .attr("content", - "text/html;charset=" + charset.name()) - .close(); - current.node("title").textNoEncode("bigdata®") - .close(); - current = current.close();// close the head. + // open the body + current = current.node("body"); + final XMLBuilder.Node body = current; + + final ISPARQLUpdateListener listener = new ISPARQLUpdateListener() { + + @Override + public void updateEvent(final SPARQLUpdateEvent e) { + /* + * TODO We may need to flush the writer to get the + * progress reports back to the client incrementally. + */ + try { + final long ms = TimeUnit.NANOSECONDS.toMillis(e + .getElapsedNanos()); + if (e instanceof SPARQLUpdateEvent.LoadProgress) { + /* + * Incremental progress on LOAD. 
+ * + * TODO Flag the first such event for a given + * load and then begin a nexted structure in + * which we report the progress with the details + * of the LOAD request in attributes for the + * parent element. + */ + final SPARQLUpdateEvent.LoadProgress tmp = (SPARQLUpdateEvent.LoadProgress) e; + final long parsed = tmp.getParsedCount(); + body.node("p").text( + "elapsed=" + ms + "ms, parsed=" + + parsed); + } else { + /* + * End of some UPDATE operation. + */ + body.node("p").text("elapsed=" + ms + "ms") + .node("pre") + .text(e.getUpdate().toString()).close() + .close(); + } + } catch (IOException e1) { + throw new RuntimeException(e1); + } } + }; - // open the body - current = current.node("body"); + /* + * Add the SPARQL UPDATE listener + * + * FIXME Finish up the formatting of the listener output. The + * listener itself is Ok. Except the timestamps are a bit off. + */ +// cxn.getSailConnection().addListener(listener); + + // Execute the UPDATE. + this.commitTime.set(update.execute2()); + { + current.node("p")// .text("commitTime=" + commitTime.get())// .close(); @@ -1054,10 +1114,10 @@ // .text("commitTime=" + updateTask.commitTime.get())// // .close(); - doc.closeAll(current); - } + doc.closeAll(current); + w.flush(); } finally { This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
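The revision above introduces the SPARQL UPDATE listener API: addListener()/removeListener()/fireEvent() on the mutable BigdataSailConnection, the ISPARQLUpdateListener callback, and the SPARQLUpdateEvent / LoadProgress event classes. A possible client-side listener based on that API is sketched below. The class name UpdateProgressLogger, the attach() helper, and the way the connection is obtained are assumptions for the example; only the listener and event methods themselves come from this revision, and per the interface javadoc the callback must not block in the event thread.

    import java.util.concurrent.TimeUnit;

    import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection;
    import com.bigdata.rdf.sail.ISPARQLUpdateListener;
    import com.bigdata.rdf.sail.SPARQLUpdateEvent;

    public class UpdateProgressLogger implements ISPARQLUpdateListener {

        @Override
        public void updateEvent(final SPARQLUpdateEvent e) {
            // Do not block: just format and write a one-line progress report.
            final long ms = TimeUnit.NANOSECONDS.toMillis(e.getElapsedNanos());
            if (e instanceof SPARQLUpdateEvent.LoadProgress) {
                // Incremental LOAD progress (fired every 10000 parsed statements).
                final SPARQLUpdateEvent.LoadProgress p = (SPARQLUpdateEvent.LoadProgress) e;
                System.out.println("LOAD: parsed=" + p.getParsedCount() + ", elapsed=" + ms + "ms");
            } else {
                // End of an individual UPDATE operation in the request sequence.
                System.out.println("UPDATE op done: elapsed=" + ms + "ms\n" + e.getUpdate());
            }
        }

        /** Register the listener; the connection must be mutable (not read-only). */
        public static void attach(final BigdataSailConnection conn) {
            conn.addListener(new UpdateProgressLogger());
        }
    }
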
From: <tho...@us...> - 2012-09-04 18:58:50
|
Revision: 6525 http://bigdata.svn.sourceforge.net/bigdata/?rev=6525&view=rev Author: thompsonbry Date: 2012-09-04 18:58:42 +0000 (Tue, 04 Sep 2012) Log Message: ----------- More logging and some slight refactoring in support of the HA Journal. This issue is currently blocked on a means to load the allocators from the disk after a commit without hosing concurrent readers. Martyn is working on a solution for this. The first commit occurs, but the follower can not re-load the commit record from the root block because it does not have the then current allocators on hand. We are also observing the absence of the metabits on the follower after a commit, which is reported as an invalid store version. However, I can clearly see that the write cache on the leader is being flushed to the follower and the write cache should have the metabits data, so it is puzzling when it is not visible on restart. @see https://sourceforge.net/apps/trac/bigdata/ticket/530 (Journal HA) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumCommitImpl.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumPipelineImpl.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumReadImpl.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumCommitImpl.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumCommitImpl.java 2012-09-04 18:44:10 UTC (rev 6524) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumCommitImpl.java 2012-09-04 18:58:42 UTC (rev 6525) @@ -62,6 +62,9 @@ protected <F extends Future<T>, T> void cancelRemoteFutures( final List<F> remoteFutures) { + if (log.isInfoEnabled()) + log.info(""); + for (F rf : remoteFutures) { try { @@ -97,6 +100,10 @@ if (unit == null) throw new IllegalArgumentException(); + if (log.isInfoEnabled()) + log.info("isRootBlock0=" + isRootBlock0 + ", rootBlock=" + + rootBlock + ", timeout=" + timeout + ", unit=" + unit); + /* * The token of the quorum for which the leader issued this prepare * message. @@ -226,6 +233,9 @@ public void commit2Phase(final long token, final long commitTime) throws IOException, InterruptedException { + if (log.isInfoEnabled()) + log.info("token=" + token + ", commitTime=" + commitTime); + /* * To minimize latency, we first submit the futures for the other * services and then do f.run() on the leader. This will allow the other @@ -329,6 +339,9 @@ public void abort2Phase(final long token) throws IOException, InterruptedException { + if (log.isInfoEnabled()) + log.info("token=" + token); + /* * To minimize latency, we first submit the futures for the other * services and then do f.run() on the leader. 
This will allow the other Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumPipelineImpl.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumPipelineImpl.java 2012-09-04 18:44:10 UTC (rev 6524) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumPipelineImpl.java 2012-09-04 18:58:42 UTC (rev 6525) @@ -251,6 +251,8 @@ * pipeline order. */ public void pipelineAdd() { + if (log.isInfoEnabled()) + log.info(""); super.pipelineAdd(); lock.lock(); try { @@ -270,6 +272,8 @@ } public void pipelineElectedLeader() { + if (log.isInfoEnabled()) + log.info(""); super.pipelineElectedLeader(); lock.lock(); try { @@ -286,6 +290,8 @@ */ @Override public void pipelineRemove() { + if (log.isInfoEnabled()) + log.info(""); super.pipelineRemove(); lock.lock(); try { @@ -302,6 +308,8 @@ */ public void pipelineChange(final UUID oldDownStreamId, final UUID newDownStreamId) { + if (log.isInfoEnabled()) + log.info(""); super.pipelineChange(oldDownStreamId, newDownStreamId); lock.lock(); try { @@ -354,6 +362,8 @@ * receive buffer. */ protected void tearDown() { + if (log.isInfoEnabled()) + log.info(""); lock.lock(); try { /* @@ -443,6 +453,8 @@ * Setup the send service. */ protected void setUpSendService() { + if (log.isInfoEnabled()) + log.info(""); lock.lock(); try { // Allocate the send service. @@ -554,6 +566,9 @@ public Future<Void> replicate(final HAWriteMessage msg, final ByteBuffer b) throws IOException { + if (log.isTraceEnabled()) + log.trace("Leader will send: " + b.remaining() + " bytes"); + member.assertLeader(msg.getQuorumToken()); final PipelineState<S> downstream = pipelineStateRef.get(); @@ -564,9 +579,6 @@ * This is the leader, so send() the buffer. */ - if (log.isTraceEnabled()) - log.trace("Leader will send: " + b.remaining() + " bytes"); - ft = new FutureTask<Void>(new Callable<Void>() { public Void call() throws Exception { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumReadImpl.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumReadImpl.java 2012-09-04 18:44:10 UTC (rev 6524) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumReadImpl.java 2012-09-04 18:58:42 UTC (rev 6525) @@ -91,6 +91,9 @@ public byte[] readFromQuorum(final UUID storeId, final long addr) throws InterruptedException, IOException { + if (log.isInfoEnabled()) + log.info("storeId=" + storeId + ", addr=" + addr); + if (storeId == null) throw new IllegalArgumentException(); @@ -107,9 +110,7 @@ // @todo monitoring hook (Nagios)? // // @todo counters (in WORMStrategy right now). - if (log.isInfoEnabled()) - log.info("Failover read: storeId=" + storeId + ", addr=" + addr); - + // Block if necessary awaiting a met quorum. final long token = member.getQuorum().awaitQuorum(); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java 2012-09-04 18:44:10 UTC (rev 6524) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java 2012-09-04 18:58:42 UTC (rev 6525) @@ -1730,6 +1730,7 @@ final int limit = buf.limit(); // end position. 
int pos = buf.position(); // start position // buf.limit(sp); + int nwrite = 0; while (pos < limit) { buf.position(pos); final long addr = buf.getLong(); // 8 bytes @@ -1746,6 +1747,7 @@ } else { recordMap.put(addr, new RecordMetadata(addr, pos + 12, sze)); } + nwrite++; pos += 12 + sze; // skip header (addr + sze) and data } } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java 2012-09-04 18:44:10 UTC (rev 6524) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java 2012-09-04 18:58:42 UTC (rev 6525) @@ -168,6 +168,11 @@ protected static final Logger log = Logger.getLogger(WriteCacheService.class); /** + * Logger for HA events. + */ + private static final Logger haLog = Logger.getLogger("com.bigdata.ha"); + + /** * <code>true</code> until the service is {@link #close() closed}. */ // private volatile boolean open = true; @@ -1249,6 +1254,18 @@ public boolean flush(final boolean force, final long timeout, final TimeUnit units) throws TimeoutException, InterruptedException { + if (haLog.isInfoEnabled()) { + /* + * Note: This is an important event for HA. The write cache is + * flushed to ensure that the entire write set is replicated on the + * followers. Once that has been done, HA will do a 2-phase commit + * to verify that there is a quorum that agrees to write the root + * block. Writing the root block is the only thing that the nodes in + * the quorum need to do once the write cache has been flushed. + */ + haLog.info("Flushing the write cache."); + } + final long begin = System.nanoTime(); final long nanos = units.toNanos(timeout); long remaining = nanos; Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-04 18:44:10 UTC (rev 6524) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-04 18:58:42 UTC (rev 6525) @@ -193,6 +193,11 @@ private static final Logger txLog = Logger.getLogger("com.bigdata.txLog"); /** + * Logger for HA events. + */ + protected static final Logger haLog = Logger.getLogger("com.bigdata.haLog"); + + /** * The index of the root address containing the address of the persistent * {@link Name2Addr} mapping names to {@link BTree}s registered for the * store. @@ -2081,14 +2086,12 @@ } - /** - * Return <code>true</code> if the journal is configured for high - * availability. High availability exists (in principle) when the - * {@link QuorumManager#replicationFactor()} <em>k</em> is greater than one. - * - * @see #getQuorum() - * @see QuorumManager#isHighlyAvailable() - */ + /** + * Return <code>true</code> if the journal is configured for high + * availability. + * + * @see QuorumManager#isHighlyAvailable() + */ public boolean isHighlyAvailable() { return getQuorum().isHighlyAvailable(); @@ -2574,12 +2577,13 @@ return 0L; } + /* + * Explicitly call the RootBlockCommitter + */ + rootAddrs[PREV_ROOTBLOCK] = this.m_rootBlockCommitter + .handleCommit(commitTime); + /* - * Explicitly call the RootBlockCommitter - */ - rootAddrs[PREV_ROOTBLOCK] = this.m_rootBlockCommitter.handleCommit(commitTime); - - /* * Write the commit record onto the store. 
* * @todo Modify to log the current root block and set the address of @@ -2592,10 +2596,12 @@ final long newCommitCounter = old.getCommitCounter() + 1; - final ICommitRecord commitRecord = new CommitRecord(commitTime, newCommitCounter, rootAddrs); + final ICommitRecord commitRecord = new CommitRecord(commitTime, + newCommitCounter, rootAddrs); - final long commitRecordAddr = write(ByteBuffer - .wrap(CommitRecordSerializer.INSTANCE.serialize(commitRecord))); + final long commitRecordAddr = write(ByteBuffer + .wrap(CommitRecordSerializer.INSTANCE + .serialize(commitRecord))); /* * Before flushing the commitRecordIndex we need to check for @@ -2614,18 +2620,19 @@ */ _commitRecordIndex.add(commitRecordAddr, commitRecord); - /* - * Flush the commit record index to the store and stash the address - * of its metadata record in the root block. - * - * Note: The address of the root of the CommitRecordIndex itself - * needs to go right into the root block. We are unable to place it - * into the commit record since we need to serialize the commit - * record, get its address, and add the entry to the - * CommitRecordIndex before we can flush the CommitRecordIndex to - * the store. - */ - final long commitRecordIndexAddr = _commitRecordIndex.writeCheckpoint(); + /* + * Flush the commit record index to the store and stash the address + * of its metadata record in the root block. + * + * Note: The address of the root of the CommitRecordIndex itself + * needs to go right into the root block. We are unable to place it + * into the commit record since we need to serialize the commit + * record, get its address, and add the entry to the + * CommitRecordIndex before we can flush the CommitRecordIndex to + * the store. + */ + final long commitRecordIndexAddr = _commitRecordIndex + .writeCheckpoint(); if (quorum != null) { /* @@ -2634,12 +2641,21 @@ quorum.assertLeader(quorumToken); } - /* - * Call commit on buffer strategy prior to retrieving root block, - * required for RWStore since the metaBits allocations are not made - * until commit, leading to invalid addresses for recent store - * allocations. - */ + /* + * Call commit on buffer strategy prior to retrieving root block, + * required for RWStore since the metaBits allocations are not made + * until commit, leading to invalid addresses for recent store + * allocations. + * + * Note: This will flush the write cache. For HA, that ensures that + * the write set has been replicated to the followers. + * + * Note: After this, we do not write anything on the backing store + * other than the root block. The rest of this code is dedicated to + * creating a properly formed root block. For a non-HA deployment, + * we just lay down the root block. For an HA deployment, we do a + * 2-phase commit. + */ _bufferStrategy.commit(); /* @@ -2662,9 +2678,10 @@ * unisolated transactions). */ - final long firstCommitTime = (old.getFirstCommitTime() == 0L ? commitTime : old.getFirstCommitTime()); + final long firstCommitTime = (old.getFirstCommitTime() == 0L ? commitTime + : old.getFirstCommitTime()); - final long priorCommitTime = old.getLastCommitTime(); + final long priorCommitTime = old.getLastCommitTime(); if (priorCommitTime != 0L) { @@ -2744,12 +2761,10 @@ final QuorumService<HAGlue> quorumService = quorum.getClient(); + boolean didVoteYes = false; try { - final int minYes = (quorum.replicationFactor() + 1) >> 1; - - // @todo config prepare timeout (Watch out for long GC - // pauses!) + // TODO Configure prepare timeout (Watch out for long GC pauses!) 
final int nyes = quorumService.prepare2Phase(// !old.isRootBlock0(),// newRootBlock,// @@ -2757,8 +2772,18 @@ TimeUnit.MILLISECONDS// ); - if (nyes >= minYes) { + final boolean willCommit = quorum.isQuorum(nyes); + + if (haLog.isInfoEnabled()) + haLog.info("Will " + (willCommit ? "" : "not ") + + "commit: " + nyes + " out of " + + quorum.replicationFactor() + + " services voted yes."); + if (willCommit) { + + didVoteYes = true; + quorumService.commit2Phase(quorumToken, commitTime); } else { @@ -2768,20 +2793,34 @@ } } catch (Throwable e) { - /* - * FIXME At this point the quorum is probably inconsistent - * in terms of their root blocks. Rather than attempting to - * send an abort() message to the quorum, we probably should - * force the master to yield its role at which point the - * quorum will attempt to elect a new master and - * resynchronize. - */ - if (quorumService != null) { - try { - quorumService.abort2Phase(quorumToken); - } catch (Throwable t) { - log.warn(t, t); + if (didVoteYes) { + /* + * The quorum voted to commit, but something went wrong. + * + * FIXME At this point the quorum is probably + * inconsistent in terms of their root blocks. Rather + * than attempting to send an abort() message to the + * quorum, we probably should force the master to yield + * its role at which point the quorum will attempt to + * elect a new master and resynchronize. + */ + if (quorumService != null) { + try { + quorumService.abort2Phase(quorumToken); + } catch (Throwable t) { + log.warn(t, t); + } } + } else { + /* + * This exception was thrown during the abort handling + * logic. Note that we already attempting an 2-phase + * abort since the quorum did not vote "yes". + * + * TODO We should probably force a quorum break since + * there is clearly something wrong with the lines of + * communication among the nodes. + */ } throw new RuntimeException(e); } @@ -4563,6 +4602,9 @@ } + if (haLog.isInfoEnabled()) + haLog.info("oldValue=" + oldValue + ", newToken=" + newValue); + final boolean didBreak; final boolean didMeet; @@ -4761,10 +4803,14 @@ new ChecksumUtility()// ); + if (haLog.isInfoEnabled()) + haLog.info("isRootBlock0=" + isRootBlock0 + ", rootBlock=" + + rootBlock + ", timeout=" + timeout + ", unit=" + unit); + if (rootBlock.getLastCommitTime() <= getLastCommitTime()) throw new IllegalStateException(); - getQuorum().assertQuorum(rootBlock.getQuorumToken()); + quorum.assertQuorum(rootBlock.getQuorumToken()); prepareRequest.set(rootBlock); @@ -4780,10 +4826,13 @@ final IRootBlockView rootBlock = prepareRequest.get(); + if (haLog.isInfoEnabled()) + haLog.info("preparedRequest=" + rootBlock); + if (rootBlock == null) throw new IllegalStateException(); - getQuorum().assertQuorum(prepareToken); + quorum.assertQuorum(prepareToken); /* * Call to ensure strategy does everything required for @@ -4852,12 +4901,18 @@ public Future<Void> commit2Phase(final long commitTime) { - final FutureTask<Void> ft = new FutureTaskMon<Void>(new Runnable() { + if (haLog.isInfoEnabled()) + haLog.info("commitTime=" + commitTime); + + final FutureTask<Void> ft = new FutureTaskMon<Void>(new Runnable() { public void run() { - final IRootBlockView rootBlock = prepareRequest.get(); + if (haLog.isInfoEnabled()) + haLog.info("commitTime=" + commitTime); + final IRootBlockView rootBlock = prepareRequest.get(); + if (rootBlock == null) throw new IllegalStateException(); @@ -4869,7 +4924,7 @@ throw new IllegalStateException(); // verify that the qourum has not changed. 
- getQuorum().assertQuorum(rootBlock.getQuorumToken()); + quorum.assertQuorum(rootBlock.getQuorumToken()); // write the root block on to the backing store. _bufferStrategy.writeRootBlock(rootBlock, forceOnCommit); @@ -4882,6 +4937,9 @@ prepareRequest.set(null/* discard */); + if (txLog.isInfoEnabled()) + txLog.info("COMMIT: commitTime=" + commitTime); + } finally { _fieldReadWriteLock.writeLock().unlock(); @@ -4906,10 +4964,16 @@ public Future<Void> abort2Phase(final long token) { + if (haLog.isInfoEnabled()) + haLog.info("token=" + token); + final FutureTask<Void> ft = new FutureTaskMon<Void>(new Runnable() { public void run() { - getQuorum().assertQuorum(token); + if (haLog.isInfoEnabled()) + haLog.info("token=" + token); + + quorum.assertQuorum(token); prepareRequest.set(null/* discard */); @@ -4946,11 +5010,17 @@ public Future<byte[]> readFromDisk(final long token, final UUID storeId, final long addr) { + if (haLog.isInfoEnabled()) + haLog.info("token=" + token); + final FutureTask<byte[]> ft = new FutureTask<byte[]>( new Callable<byte[]>() { public byte[] call() throws Exception { + if (haLog.isInfoEnabled()) + haLog.info("token=" + token); + quorum.assertQuorum(token); // final ILRUCache<Long, Object> cache = (LRUNexus.INSTANCE @@ -4987,10 +5057,12 @@ public Future<Void> receiveAndReplicate(final HAWriteMessage msg) throws IOException { + if (haLog.isInfoEnabled()) + haLog.info("msg=" + msg); + /* * Adjust the size on the disk of the local store to that given in - * the message. For the RW store, this can cause the allocation - * blocks to be moved if the store is extended. + * the message. * * @todo Trap truncation vs extend? */ @@ -5005,7 +5077,7 @@ } - final Future<Void> ft = getQuorum().getClient() + final Future<Void> ft = quorum.getClient() .receiveAndReplicate(msg); return getProxy(ft); @@ -5017,6 +5089,9 @@ if (storeId == null) throw new IllegalArgumentException(); + if (haLog.isInfoEnabled()) + haLog.info("storeId=" + storeId); + if (getUUID().equals(storeId)) { // A request for a different journal's root block. throw new UnsupportedOperationException(); @@ -5031,6 +5106,8 @@ final FutureTask<Void> ft = new FutureTaskMon<Void>(new Runnable() { public void run() { // NOP (no zookeeper at this layer). 
+ if (haLog.isInfoEnabled()) + haLog.info(""); } }, null); ft.run(); @@ -5045,6 +5122,8 @@ public Future<Void> moveToEndOfPipeline() { final FutureTask<Void> ft = new FutureTaskMon<Void>(new Runnable() { public void run() { + if (haLog.isInfoEnabled()) + haLog.info(""); final QuorumActor<?, ?> actor = quorum.getActor(); actor.pipelineRemove(); actor.pipelineAdd(); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-09-04 18:44:10 UTC (rev 6524) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-09-04 18:58:42 UTC (rev 6525) @@ -938,7 +938,7 @@ // Can handle minor store version incompatibility final int storeVersion = strBuf.readInt(); if ((storeVersion & 0xFF00) != (cVersion & 0xFF00)) { - throw new IllegalStateException("Incompatible RWStore header version"); + throw new IllegalStateException("Incompatible RWStore header version: storeVersion="+(storeVersion&0xff00)+", cVersion="+(cVersion&0xff00)); } m_lastDeferredReleaseTime = strBuf.readLong(); cDefaultMetaBitsSize = strBuf.readInt(); @@ -1876,7 +1876,7 @@ // private volatile long m_maxAllocation = 0; private volatile long m_spareAllocation = 0; - + /** Core allocation method. */ public int alloc(final int size, final IAllocationContext context) { if (size > m_maxFixedAlloc) { throw new IllegalArgumentException("Allocation size to big: " + size + " > " + m_maxFixedAlloc); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java 2012-09-04 18:44:10 UTC (rev 6524) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java 2012-09-04 18:58:42 UTC (rev 6525) @@ -148,6 +148,11 @@ } + /** + * {@inheritDoc} + * <p> + * Overridden to expose this method to the {@link HAJournalServer}. + */ @Override protected final void setQuorumToken(final long newValue) { @@ -173,8 +178,11 @@ @SuppressWarnings("rawtypes") public void run() { - if(getQuorum() instanceof ZKQuorumImpl) { + if (haLog.isInfoEnabled()) + haLog.info(""); + if (getQuorum() instanceof ZKQuorumImpl) { + try { // Close the current connection (if any). @@ -255,7 +263,7 @@ * * @see https://sourceforge.net/apps/trac/bigdata/ticket/437 */ - return new ThickFuture(future); + return new ThickFuture<E>(future); // /* // * Setup the Exporter for the Future. Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-04 18:44:10 UTC (rev 6524) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-04 18:58:42 UTC (rev 6525) @@ -68,6 +68,11 @@ private static final Logger log = Logger.getLogger(HAJournal.class); /** + * Logger for HA events. + */ + protected static final Logger haLog = Logger.getLogger("com.bigdata.haLog"); + + /** * Configuration options for the {@link HAJournalServer}. 
*/ public interface ConfigurationOptions { @@ -525,6 +530,9 @@ protected void handleReplicatedWrite(final HAWriteMessage msg, final ByteBuffer data) throws Exception { + if (haLog.isInfoEnabled()) + haLog.info("msg=" + msg); + /* * Note: the ByteBuffer is owned by the HAReceiveService. This * just wraps up the reference to the ByteBuffer with an This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
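Two details in the commit path above are easy to miss: the majority test is now delegated to the quorum object (quorum.isQuorum(nyes)) rather than recomputing (replicationFactor + 1) / 2 inline, and the new didVoteYes flag lets the error handler tell a failed COMMIT (roll the members back with abort2Phase) apart from a failed PREPARE or abort (where a second abort attempt is pointless). The following is a minimal, self-contained sketch of that control flow; Quorum2Phase and every other name in it are simplified stand-ins, not the bigdata API.

    final class TwoPhaseCommitSketch {

        interface Quorum2Phase {
            int prepare2Phase(long timeoutMillis) throws Exception; // returns the #of yes votes
            void commit2Phase(long token, long commitTime) throws Exception;
            void abort2Phase(long token) throws Exception;
        }

        static void commitOrAbort(final Quorum2Phase q, final int replicationFactor,
                final long token, final long commitTime) {
            boolean didVoteYes = false;
            try {
                // PREPARE: replicate the new root block and collect the votes.
                final int nyes = q.prepare2Phase(10000L /* assumed timeout, ms */);
                // Simple majority of the replication factor, e.g. 2 out of 3.
                if (nyes >= (replicationFactor + 1) / 2) {
                    didVoteYes = true;
                    q.commit2Phase(token, commitTime); // members lay down the root block
                } else {
                    q.abort2Phase(token); // too few yes votes: discard the prepared write set
                }
            } catch (Exception e) {
                if (didVoteYes) {
                    // Failure during COMMIT after a successful vote: best-effort rollback so
                    // the members are not left with divergent root blocks.
                    try {
                        q.abort2Phase(token);
                    } catch (Exception ignored) {
                        // log and continue
                    }
                }
                // If didVoteYes is false the abort path itself failed; do not abort twice.
                throw new RuntimeException(e);
            }
        }
    }

For replicationFactor = 3 the sketch commits with two or three yes votes and runs the 2-phase abort otherwise, which is the same decision the refactored commit code reaches through quorum.isQuorum(nyes).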
From: <tho...@us...> - 2012-09-05 15:17:47
|
Revision: 6529 http://bigdata.svn.sourceforge.net/bigdata/?rev=6529&view=rev Author: thompsonbry Date: 2012-09-05 15:17:39 +0000 (Wed, 05 Sep 2012) Log Message: ----------- Refactored the ITransactionService to extract the distributed 2-phase commit logic for the cluster (which was never finished). This leaves us with an ITransactionService implementation that we can expose on the HAGlue interface that will allow readers on a follower to obtain a read-lock on the leader. I still need to setup the appropriate delegation pattern. Probably all we need to do is send the newTx() and related requests to the leader. @see https://sourceforge.net/apps/trac/bigdata/ticket/530 (Journal HA) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/DelegateTransactionService.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/ITransactionService.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/JournalTransactionService.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/service/DataService.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/service/DistributedTransactionService.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/journal/TestTransactionService.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/resources/MockTransactionService.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/service/jini/JiniFederation.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IDistributedTransactionService.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java 2012-09-05 11:19:55 UTC (rev 6528) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java 2012-09-05 15:17:39 UTC (rev 6529) @@ -1329,9 +1329,11 @@ * started on this data service). */ - transactionManager.getTransactionService().declareResources( - timestamp, dataServiceUUID, resource); + final IDistributedTransactionService txs = (IDistributedTransactionService) transactionManager + .getTransactionService(); + txs.declareResources(timestamp, dataServiceUUID, resource); + } catch (IOException e) { // RMI problem. 
Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/DelegateTransactionService.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/DelegateTransactionService.java 2012-09-05 11:19:55 UTC (rev 6528) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/DelegateTransactionService.java 2012-09-05 15:17:39 UTC (rev 6529) @@ -38,11 +38,12 @@ * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ -public class DelegateTransactionService implements ITransactionService { +public class DelegateTransactionService implements + IDistributedTransactionService { - private final ITransactionService proxy; + private final IDistributedTransactionService proxy; - public DelegateTransactionService(final ITransactionService proxy) { + public DelegateTransactionService(final IDistributedTransactionService proxy) { if (proxy == null) throw new IllegalArgumentException(); this.proxy = proxy; Added: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IDistributedTransactionService.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IDistributedTransactionService.java (rev 0) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IDistributedTransactionService.java 2012-09-05 15:17:39 UTC (rev 6529) @@ -0,0 +1,118 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2007. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +package com.bigdata.journal; + +import java.io.IOException; +import java.util.UUID; +import java.util.concurrent.BrokenBarrierException; + +import com.bigdata.service.IBigdataFederation; +import com.bigdata.service.IDataService; +import com.bigdata.service.ITxCommitProtocol; + +/** + * Extended interface for distributed 2-phase transactions for an + * {@link IBigdataFederation}. + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + */ +public interface IDistributedTransactionService extends ITransactionService { + + /** + * An {@link IDataService} MUST invoke this method before permitting an + * operation isolated by a read-write transaction to execute with access to + * the named resources (this applies only to distributed databases). The + * declared resources are used in the commit phase of the read-write tx to + * impose a partial order on commits. That partial order guarantees that + * commits do not deadlock in contention for the same resources. + * + * @param tx + * The transaction identifier. + * @param dataService + * The {@link UUID} an {@link IDataService} on which the + * transaction will write. 
+ * @param resource + * An array of the named resources which the transaction will use + * on that {@link IDataService} (this may be different for each + * operation submitted by that transaction to the + * {@link IDataService}). + * + * @return {@link IllegalStateException} if the transaction is not an active + * read-write transaction. + */ + public void declareResources(long tx, UUID dataService, String[] resource) + throws IOException; + + /** + * Callback by an {@link IDataService} participating in a two phase commit + * for a distributed transaction. The {@link ITransactionService} will wait + * until all {@link IDataService}s have prepared. It will then choose a + * <i>commitTime</i> for the transaction and return that value to each + * {@link IDataService}. + * <p> + * Note: If this method throws ANY exception then the task MUST cancel the + * commit, discard the local write set of the transaction, and note that the + * transaction is aborted in its local state. + * + * @param tx + * The transaction identifier. + * @param dataService + * The {@link UUID} of the {@link IDataService} which sent the + * message. + * + * @return The assigned commit time. + * + * @throws InterruptedException + * @throws BrokenBarrierException + * @throws IOException + * if there is an RMI problem. + */ + public long prepared(long tx, UUID dataService) throws IOException, + InterruptedException, BrokenBarrierException; + + /** + * Sent by a task participating in a distributed commit of a transaction + * when the task has successfully committed the write set of the transaction + * on the live journal of the local {@link IDataService}. If this method + * returns <code>false</code> then the distributed commit has failed and + * the task MUST rollback the live journal to the previous commit point. If + * the return is <code>true</code> then the distributed commit was + * successful and the task should halt permitting the {@link IDataService} + * to return from the {@link ITxCommitProtocol#prepare(long, long)} method. + * + * @param tx + * The transaction identifier. + * @param dataService + * The {@link UUID} of the {@link IDataService} which sent the + * message. + * + * @return <code>true</code> if the distributed commit was successfull and + * <code>false</code> if there was a problem. 
+ * + * @throws IOException + */ + public boolean committed(long tx, UUID dataService) throws IOException, + InterruptedException, BrokenBarrierException; + +} Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/ITransactionService.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/ITransactionService.java 2012-09-05 11:19:55 UTC (rev 6528) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/ITransactionService.java 2012-09-05 15:17:39 UTC (rev 6529) @@ -28,13 +28,10 @@ package com.bigdata.journal; import java.io.IOException; -import java.util.UUID; -import java.util.concurrent.BrokenBarrierException; import com.bigdata.btree.isolation.IConflictResolver; import com.bigdata.service.IBigdataFederation; import com.bigdata.service.IDataService; -import com.bigdata.service.ITxCommitProtocol; /** * <p> @@ -253,80 +250,4 @@ */ public long getReleaseTime() throws IOException; - /** - * An {@link IDataService} MUST invoke this method before permitting an - * operation isolated by a read-write transaction to execute with access to - * the named resources (this applies only to distributed databases). The - * declared resources are used in the commit phase of the read-write tx to - * impose a partial order on commits. That partial order guarantees that - * commits do not deadlock in contention for the same resources. - * - * @param tx - * The transaction identifier. - * @param dataService - * The {@link UUID} an {@link IDataService} on which the - * transaction will write. - * @param resource - * An array of the named resources which the transaction will use - * on that {@link IDataService} (this may be different for each - * operation submitted by that transaction to the - * {@link IDataService}). - * - * @return {@link IllegalStateException} if the transaction is not an active - * read-write transaction. - */ - public void declareResources(long tx, UUID dataService, String[] resource) - throws IOException; - - /** - * Callback by an {@link IDataService} participating in a two phase commit - * for a distributed transaction. The {@link ITransactionService} will wait - * until all {@link IDataService}s have prepared. It will then choose a - * <i>commitTime</i> for the transaction and return that value to each - * {@link IDataService}. - * <p> - * Note: If this method throws ANY exception then the task MUST cancel the - * commit, discard the local write set of the transaction, and note that the - * transaction is aborted in its local state. - * - * @param tx - * The transaction identifier. - * @param dataService - * The {@link UUID} of the {@link IDataService} which sent the - * message. - * - * @return The assigned commit time. - * - * @throws InterruptedException - * @throws BrokenBarrierException - * @throws IOException - * if there is an RMI problem. - */ - public long prepared(long tx, UUID dataService) throws IOException, - InterruptedException, BrokenBarrierException; - - /** - * Sent by a task participating in a distributed commit of a transaction - * when the task has successfully committed the write set of the transaction - * on the live journal of the local {@link IDataService}. If this method - * returns <code>false</code> then the distributed commit has failed and - * the task MUST rollback the live journal to the previous commit point. 
If - * the return is <code>true</code> then the distributed commit was - * successful and the task should halt permitting the {@link IDataService} - * to return from the {@link ITxCommitProtocol#prepare(long, long)} method. - * - * @param tx - * The transaction identifier. - * @param dataService - * The {@link UUID} of the {@link IDataService} which sent the - * message. - * - * @return <code>true</code> if the distributed commit was successfull and - * <code>false</code> if there was a problem. - * - * @throws IOException - */ - public boolean committed(long tx, UUID dataService) throws IOException, - InterruptedException, BrokenBarrierException; - } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/JournalTransactionService.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/JournalTransactionService.java 2012-09-05 11:19:55 UTC (rev 6528) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/JournalTransactionService.java 2012-09-05 15:17:39 UTC (rev 6529) @@ -28,9 +28,7 @@ package com.bigdata.journal; -import java.io.IOException; import java.util.Properties; -import java.util.UUID; import java.util.concurrent.ExecutionException; import com.bigdata.service.AbstractFederation; @@ -471,27 +469,29 @@ // // } - /** - * Throws exception since distributed transactions are not used for a single - * {@link Journal}. - */ - public long prepared(long tx, UUID dataService) throws IOException { +// /** +// * Throws exception since distributed transactions are not used for a single +// * {@link Journal}. +// */ +// @Override +// public long prepared(long tx, UUID dataService) throws IOException { +// +// throw new UnsupportedOperationException(); +// +// } +// +// /** +// * Throws exception since distributed transactions are not used for a single +// * {@link Journal}. +// */ +// @Override +// public boolean committed(long tx, UUID dataService) throws IOException { +// +// throw new UnsupportedOperationException(); +// +// } - throw new UnsupportedOperationException(); - - } - /** - * Throws exception since distributed transactions are not used for a single - * {@link Journal}. - */ - public boolean committed(long tx, UUID dataService) throws IOException { - - throw new UnsupportedOperationException(); - - } - - /** * Throws exception. * * @throws UnsupportedOperationException Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java 2012-09-05 11:19:55 UTC (rev 6528) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java 2012-09-05 15:17:39 UTC (rev 6529) @@ -222,6 +222,25 @@ final private ConcurrentHashMap<Long, TxState> activeTx = new ConcurrentHashMap<Long, TxState>(); /** + * Return the {@link TxState} associated with the specified transition + * identifier. + * <p> + * Note: This method is an internal API. The caller must adhere to the + * internal synchronization APIs for the transaction service. + * + * @param tx + * The transaction identifier. + * + * @return The {@link TxState} -or- <code>null</code> if there is no such + * active transaction. 
+ */ + final protected TxState getTxState(final long tx) { + + return activeTx.get(tx); + + } + + /** * The #of open transactions in any {@link RunState}. */ final public int getActiveCount() { @@ -1879,81 +1898,6 @@ } /** - * Note: Only those {@link DataService}s on which a read-write transaction - * has started will participate in the commit. If there is only a single - * such {@link IDataService}, then a single-phase commit will be used. - * Otherwise a distributed transaction commit protocol will be used. - * <p> - * Note: The commits requests are placed into a partial order by sorting the - * total set of resources which the transaction declares (via this method) - * across all operations executed by the transaction and then contending for - * locks on the named resources using a LockManager. This is - * handled by the {@link DistributedTransactionService}. - */ - public void declareResources(final long tx, final UUID dataServiceUUID, - final String[] resource) throws IllegalStateException { - - setupLoggingContext(); - - lock.lock(); - try { - - switch (getRunState()) { - case Running: - case Shutdown: - break; - default: - throw new IllegalStateException(ERR_SERVICE_NOT_AVAIL); - } - - if (dataServiceUUID == null) - throw new IllegalArgumentException(); - - if (resource == null) - throw new IllegalArgumentException(); - - final TxState state = activeTx.get(tx); - - if (state == null) { - - throw new IllegalStateException(ERR_NO_SUCH); - - } - - state.lock.lock(); - - try { - - if (state.isReadOnly()) { - - throw new IllegalStateException(ERR_READ_ONLY); - - } - - if (!state.isActive()) { - - throw new IllegalStateException(ERR_NOT_ACTIVE); - - } - - state.declareResources(dataServiceUUID, resource); - - } finally { - - state.lock.unlock(); - - } - - } finally { - - lock.unlock(); - clearLoggingContext(); - - } - - } - - /** * Transaction state as maintained by the {@link ITransactionService}. 
* <p> * Note: The commitTime and revisionTime are requested by the local Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/service/DataService.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/service/DataService.java 2012-09-05 11:19:55 UTC (rev 6528) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/service/DataService.java 2012-09-05 15:17:39 UTC (rev 6529) @@ -59,6 +59,7 @@ import com.bigdata.journal.ConcurrencyManager; import com.bigdata.journal.DropIndexTask; import com.bigdata.journal.IConcurrencyManager; +import com.bigdata.journal.IDistributedTransactionService; import com.bigdata.journal.ILocalTransactionManager; import com.bigdata.journal.IResourceManager; import com.bigdata.journal.ITransactionService; @@ -1145,7 +1146,7 @@ @Override protected Void doTask() throws Exception { - final ITransactionService txService = resourceManager + final IDistributedTransactionService txService = (IDistributedTransactionService) resourceManager .getLiveJournal().getLocalTransactionManager() .getTransactionService(); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/service/DistributedTransactionService.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/service/DistributedTransactionService.java 2012-09-05 11:19:55 UTC (rev 6528) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/service/DistributedTransactionService.java 2012-09-05 15:17:39 UTC (rev 6529) @@ -61,6 +61,7 @@ import com.bigdata.config.LongValidator; import com.bigdata.counters.CounterSet; import com.bigdata.counters.Instrument; +import com.bigdata.journal.IDistributedTransactionService; import com.bigdata.journal.ITransactionService; import com.bigdata.journal.ITx; import com.bigdata.journal.Name2Addr; @@ -76,7 +77,7 @@ * @version $Id$ */ public abstract class DistributedTransactionService extends - AbstractTransactionService { + AbstractTransactionService implements IDistributedTransactionService { /** * Options understood by this service. @@ -1692,6 +1693,82 @@ } /** + * Note: Only those {@link DataService}s on which a read-write transaction + * has started will participate in the commit. If there is only a single + * such {@link IDataService}, then a single-phase commit will be used. + * Otherwise a distributed transaction commit protocol will be used. + * <p> + * Note: The commits requests are placed into a partial order by sorting the + * total set of resources which the transaction declares (via this method) + * across all operations executed by the transaction and then contending for + * locks on the named resources using a LockManager. This is + * handled by the {@link DistributedTransactionService}. 
+ */ + @Override + public void declareResources(final long tx, final UUID dataServiceUUID, + final String[] resource) throws IllegalStateException { + + setupLoggingContext(); + + lock.lock(); + try { + + switch (getRunState()) { + case Running: + case Shutdown: + break; + default: + throw new IllegalStateException(ERR_SERVICE_NOT_AVAIL); + } + + if (dataServiceUUID == null) + throw new IllegalArgumentException(); + + if (resource == null) + throw new IllegalArgumentException(); + + final TxState state = getTxState(tx); + + if (state == null) { + + throw new IllegalStateException(ERR_NO_SUCH); + + } + + state.lock.lock(); + + try { + + if (state.isReadOnly()) { + + throw new IllegalStateException(ERR_READ_ONLY); + + } + + if (!state.isActive()) { + + throw new IllegalStateException(ERR_NOT_ACTIVE); + + } + + state.declareResources(dataServiceUUID, resource); + + } finally { + + state.lock.unlock(); + + } + + } finally { + + lock.unlock(); + clearLoggingContext(); + + } + + } + + /** * Waits at "prepared" barrier. When the barrier breaks, examing the * {@link TxState}. If the transaction is aborted, then throw an * {@link InterruptedException}. Otherwise return the commitTime assigned @@ -1700,6 +1777,7 @@ * @throws InterruptedException * if the barrier is reset while the caller is waiting. */ + @Override public long prepared(final long tx, final UUID dataService) throws IOException, InterruptedException, BrokenBarrierException { @@ -1752,6 +1830,7 @@ * an exception of their {@link ITxCommitProtocol#prepare(long, long)} * method. */ + @Override public boolean committed(final long tx, final UUID dataService) throws IOException, InterruptedException, BrokenBarrierException { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/journal/TestTransactionService.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/journal/TestTransactionService.java 2012-09-05 11:19:55 UTC (rev 6528) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/journal/TestTransactionService.java 2012-09-05 15:17:39 UTC (rev 6529) @@ -30,8 +30,6 @@ import java.io.IOException; import java.util.Properties; -import java.util.UUID; -import java.util.concurrent.BrokenBarrierException; import java.util.concurrent.TimeUnit; import junit.framework.TestCase2; @@ -74,7 +72,7 @@ /** * Implementation uses a mock client. */ - protected MockTransactionService newFixture(Properties p) { + protected MockTransactionService newFixture(final Properties p) { return new MockTransactionService(p).start(); @@ -83,7 +81,7 @@ protected static class MockTransactionService extends AbstractTransactionService { - public MockTransactionService(Properties p) { + public MockTransactionService(final Properties p) { super(p); @@ -110,14 +108,14 @@ } @Override - protected void abortImpl(TxState state) { + protected void abortImpl(final TxState state) { state.setRunState(RunState.Aborted); } @Override - protected long commitImpl(TxState state) throws Exception { + protected long commitImpl(final TxState state) throws Exception { state.setRunState(RunState.Committed); @@ -129,24 +127,24 @@ } - /** - * Note: We are not testing distributed commits here so this is not - * implemented. - */ - public long prepared(long tx, UUID dataService) - throws InterruptedException, BrokenBarrierException { - return 0; - } +// /** +// * Note: We are not testing distributed commits here so this is not +// * implemented. 
+// */ +// public long prepared(long tx, UUID dataService) +// throws InterruptedException, BrokenBarrierException { +// return 0; +// } +// +// /** +// * Note: We are not testing distributed commits here so this is not +// * implemented. +// */ +// public boolean committed(long tx, UUID dataService) throws IOException, +// InterruptedException, BrokenBarrierException { +// return false; +// } - /** - * Note: We are not testing distributed commits here so this is not - * implemented. - */ - public boolean committed(long tx, UUID dataService) throws IOException, - InterruptedException, BrokenBarrierException { - return false; - } - @Override public long getLastCommitTime() { @@ -983,9 +981,11 @@ // try { // request a timestamp in the future. - final long tx = service.newTx(timestamp1 * 2); - System.err.println("ts="+timestamp1); - System.err.println("tx="+tx); + final long tx = service.newTx(timestamp1 * 2); + if (log.isInfoEnabled()) { + log.info("ts=" + timestamp1); + log.info("tx=" + tx); + } // fail("Expecting: "+IllegalStateException.class); // } catch(IllegalStateException ex) { // log.info("Ignoring expected exception: "+ex); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/resources/MockTransactionService.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/resources/MockTransactionService.java 2012-09-05 11:19:55 UTC (rev 6528) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/resources/MockTransactionService.java 2012-09-05 15:17:39 UTC (rev 6529) @@ -28,10 +28,7 @@ package com.bigdata.resources; -import java.io.IOException; import java.util.Properties; -import java.util.UUID; -import java.util.concurrent.BrokenBarrierException; import com.bigdata.service.AbstractFederation; import com.bigdata.service.AbstractTransactionService; @@ -57,7 +54,6 @@ @Override protected void abortImpl(TxState state) throws Exception { // TODO Auto-generated method stub - } @Override @@ -90,16 +86,16 @@ return null; } - public boolean committed(long tx, UUID dataService) throws IOException, - InterruptedException, BrokenBarrierException { - // TODO Auto-generated method stub - return false; - } +// public boolean committed(long tx, UUID dataService) throws IOException, +// InterruptedException, BrokenBarrierException { +// // TODO Auto-generated method stub +// return false; +// } +// +// public long prepared(long tx, UUID dataService) throws IOException, +// InterruptedException, BrokenBarrierException { +// // TODO Auto-generated method stub +// return 0; +// } - public long prepared(long tx, UUID dataService) throws IOException, - InterruptedException, BrokenBarrierException { - // TODO Auto-generated method stub - return 0; - } - } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/service/jini/JiniFederation.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/service/jini/JiniFederation.java 2012-09-05 11:19:55 UTC (rev 6528) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/service/jini/JiniFederation.java 2012-09-05 15:17:39 UTC (rev 6529) @@ -73,6 +73,7 @@ import com.bigdata.jini.start.config.ZookeeperClientConfig; import com.bigdata.jini.util.JiniUtil; import com.bigdata.journal.DelegateTransactionService; +import com.bigdata.journal.IDistributedTransactionService; import com.bigdata.journal.IResourceLockService; import 
com.bigdata.journal.ITransactionService; import com.bigdata.relation.accesspath.IAccessPath; @@ -588,7 +589,7 @@ if (transactionServiceClient == null) return null; - final ITransactionService proxy = transactionServiceClient + final IDistributedTransactionService proxy = (IDistributedTransactionService) transactionServiceClient .getTransactionService(); if(proxy == null) This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
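The refactoring in r6529 is a plain interface extraction: the generic transaction contract stays on the base interface, the cluster-only 2-phase callbacks (declareResources, prepared, committed) move to an extended interface that only the distributed implementation provides, and the few call sites that need them narrow the reference with a cast. A compressed sketch of that shape, using made-up names and trimmed signatures rather than the real bigdata ones.

    import java.util.UUID;

    interface TxService {
        long newTx(long timestamp);
        void commit(long tx) throws Exception;
        void abort(long tx);
    }

    /** Cluster-only contract: 2-phase callbacks used by data services in a federation. */
    interface DistributedTxService extends TxService {
        void declareResources(long tx, UUID dataService, String[] resources);
        long prepared(long tx, UUID dataService) throws InterruptedException;
        boolean committed(long tx, UUID dataService) throws InterruptedException;
    }

    final class DeclareResourcesCallSite {
        /** A caller that takes part in the distributed commit narrows the reference it holds. */
        static void declare(final TxService txs, final long tx, final UUID ds, final String[] res) {
            // Only a federation hands this code a DistributedTxService; a single Journal never does.
            final DistributedTxService dtxs = (DistributedTxService) txs;
            dtxs.declareResources(tx, ds, res);
        }
    }

The payoff shows up in the diff itself: JournalTransactionService drops its UnsupportedOperationException stubs and the mock services in the test suites drop their do-nothing implementations of prepared() and committed().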
From: <tho...@us...> - 2012-09-06 16:19:49
|
Revision: 6541 http://bigdata.svn.sourceforge.net/bigdata/?rev=6541&view=rev Author: thompsonbry Date: 2012-09-06 16:19:38 +0000 (Thu, 06 Sep 2012) Log Message: ----------- Added an option to the BigdataSail to configure the DESCRIBE behavior for a KB instance. The default is the same as the historical behavior. You can also override this on a query-by-query basis using a query hint. http://sourceforge.net/apps/trac/bigdata/ticket/577 (Concise Bounded Description) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/QueryHints.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpContext.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/ASTEvalHelper.java branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/ASTDescribeOptimizer.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/QueryHints.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/QueryHints.java 2012-09-06 15:58:10 UTC (rev 6540) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/QueryHints.java 2012-09-06 16:19:38 UTC (rev 6541) @@ -431,10 +431,6 @@ * @see #DEFAULT_DESCRIBE_MODE * @see <a href="https://sourceforge.net/apps/trac/bigdata/ticket/578"> * Concise Bounded Description </a> - * - * FIXME Add a query hint to control this. Note that DESCRIBE is often - * issued without a WHERE clause, so after translating away the query - * hint we may have an empty WHERE clause. */ String DESCRIBE_MODE = "describeMode"; Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpContext.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpContext.java 2012-09-06 15:58:10 UTC (rev 6540) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpContext.java 2012-09-06 16:19:38 UTC (rev 6541) @@ -26,11 +26,14 @@ import com.bigdata.journal.IIndexManager; import com.bigdata.journal.ITx; import com.bigdata.journal.TimestampUtility; +import com.bigdata.rdf.sail.BigdataSail; import com.bigdata.rdf.sparql.ast.ASTContainer; +import com.bigdata.rdf.sparql.ast.DescribeModeEnum; import com.bigdata.rdf.sparql.ast.EmptySolutionSetStats; import com.bigdata.rdf.sparql.ast.FunctionNode; import com.bigdata.rdf.sparql.ast.FunctionRegistry; import com.bigdata.rdf.sparql.ast.ISolutionSetStats; +import com.bigdata.rdf.sparql.ast.ProjectionNode; import com.bigdata.rdf.sparql.ast.QueryHints; import com.bigdata.rdf.sparql.ast.StaticAnalysis; import com.bigdata.rdf.sparql.ast.cache.CacheConnectionFactory; @@ -537,6 +540,41 @@ } + public DescribeModeEnum getDescribeMode(final ProjectionNode projection) { + + // The effective DescribeMode. + DescribeModeEnum describeMode = projection.getDescribeMode(); + + if (describeMode != null) { + /* + * Explicitly specified on the project. E.g., set by a query hint or + * through code. + */ + return describeMode; + } + + /* + * Consult the KB for a configured default behavior. 
+ */ + final String describeDefaultStr = db.getProperties().getProperty( + BigdataSail.Options.DESCRIBE_MODE); + + if (describeDefaultStr != null) { + + // The KB has specified a default DESCRIBE algorithm. + describeMode = DescribeModeEnum.valueOf(describeDefaultStr); + + } else { + + // Use the default specified on QueryHints. + describeMode = QueryHints.DEFAULT_DESCRIBE_MODE; + + } + + return describeMode; + + } + @Override public ISolutionSetStats getSolutionSetStats(final String localName) { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/ASTEvalHelper.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/ASTEvalHelper.java 2012-09-06 15:58:10 UTC (rev 6540) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/ASTEvalHelper.java 2012-09-06 16:19:38 UTC (rev 6541) @@ -472,9 +472,11 @@ ); // The effective DescribeMode. - final DescribeModeEnum describeMode = optimizedQuery.getProjection() - .getDescribeMode() == null ? QueryHints.DEFAULT_DESCRIBE_MODE - : optimizedQuery.getProjection().getDescribeMode(); +// final DescribeModeEnum describeMode = optimizedQuery.getProjection() +// .getDescribeMode() == null ? QueryHints.DEFAULT_DESCRIBE_MODE +// : optimizedQuery.getProjection().getDescribeMode(); + final DescribeModeEnum describeMode = context + .getDescribeMode(optimizedQuery.getProjection()); final CloseableIteration<BindingSet, QueryEvaluationException> solutions2; final ConcurrentHashSet<BigdataValue> describedResources; Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/ASTDescribeOptimizer.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/ASTDescribeOptimizer.java 2012-09-06 15:58:10 UTC (rev 6540) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/ASTDescribeOptimizer.java 2012-09-06 16:19:38 UTC (rev 6541) @@ -147,8 +147,9 @@ } // The effective DescribeMode. - final DescribeModeEnum describeMode = projection.getDescribeMode() == null ? QueryHints.DEFAULT_DESCRIBE_MODE - : projection.getDescribeMode(); +// final DescribeModeEnum describeMode = projection.getDescribeMode() == null ? 
QueryHints.DEFAULT_DESCRIBE_MODE +// : projection.getDescribeMode(); + final DescribeModeEnum describeMode = context.getDescribeMode(projection); // final IDescribeCache describeCache = context.getDescribeCache(); // Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2012-09-06 15:58:10 UTC (rev 6540) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2012-09-06 16:19:38 UTC (rev 6541) @@ -119,6 +119,7 @@ import com.bigdata.rdf.rules.BackchainAccessPath; import com.bigdata.rdf.rules.InferenceEngine; import com.bigdata.rdf.sparql.ast.ASTContainer; +import com.bigdata.rdf.sparql.ast.QueryHints; import com.bigdata.rdf.sparql.ast.QueryRoot; import com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper; import com.bigdata.rdf.sparql.ast.service.CustomServiceFactory; @@ -345,7 +346,17 @@ .getName()+ ".namespace"; public static final String DEFAULT_NAMESPACE = "kb"; - + + /** + * Option specifies the algorithm used to compute DESCRIBE responses + * (optional). + * + * @see QueryHints#DESCRIBE_MODE + * @see QueryHints#DEFAULT_DESCRIBE_MODE + */ + public static final String DESCRIBE_MODE = BigdataSail.class + .getPackage().getName() + ".describeMode"; + } /** This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
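The effective DESCRIBE algorithm in r6541 is resolved through a three-level fallback: an explicit setting on the query's projection (installed by a query hint or through code) wins, then the per-KB property on the BigdataSail, then the compiled-in default from QueryHints. A small sketch of that lookup order; the enum constants and the property key below are placeholders, not the actual DescribeModeEnum values or option name.

    import java.util.Properties;

    final class DescribeModeResolution {

        enum DescribeMode { SYMMETRIC_OBJECTS, CBD } // CBD = Concise Bounded Description

        static final DescribeMode DEFAULT_DESCRIBE_MODE = DescribeMode.SYMMETRIC_OBJECTS;

        /** Per-query hint wins, then the per-KB property, then the global default. */
        static DescribeMode effectiveDescribeMode(final DescribeMode queryHint,
                final Properties kbProperties) {
            if (queryHint != null)
                return queryHint; // set explicitly on this query (query hint or code)
            final String configured = kbProperties.getProperty("com.example.sail.describeMode");
            if (configured != null)
                return DescribeMode.valueOf(configured); // configured default for this KB instance
            return DEFAULT_DESCRIBE_MODE; // compiled-in fallback
        }
    }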
From: <tho...@us...> - 2012-09-06 17:36:22
|
Revision: 6542 http://bigdata.svn.sourceforge.net/bigdata/?rev=6542&view=rev Author: thompsonbry Date: 2012-09-06 17:36:15 +0000 (Thu, 06 Sep 2012) Log Message: ----------- Modified the HA code to support the WORM. The common interface is IHABufferStrategy. Code that was specific to the IRWStrategy, RWStrategy, or RWStore has been generalized to IHABufferStrategy. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IHABufferStrategy.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/RWStrategy.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/WORMStrategy.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-06 16:19:38 UTC (rev 6541) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-06 17:36:15 UTC (rev 6542) @@ -4932,9 +4932,8 @@ // set the new root block. _rootBlock = rootBlock; - if (_bufferStrategy instanceof RWStrategy - && quorum.getMember().isFollower( - rootBlock.getQuorumToken())) { + if (quorum.getMember().isFollower( + rootBlock.getQuorumToken())) { /* * Ensure allocators are synced after commit. This * is only done for the followers. The leader has @@ -4943,11 +4942,11 @@ * updating the allocators. */ if (haLog.isInfoEnabled()) - haLog.error("Reloading allocators: serviceUUID=" + haLog.error("Reset from root block: serviceUUID=" + quorum.getMember().getServiceId()); - ((RWStrategy) _bufferStrategy).getStore() + ((IHABufferStrategy) _bufferStrategy) .resetFromHARootBlock(_rootBlock); - } + } // reload the commit record from the new root block. _commitRecord = _getCommitRecord(); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IHABufferStrategy.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IHABufferStrategy.java 2012-09-06 16:19:38 UTC (rev 6541) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/IHABufferStrategy.java 2012-09-06 17:36:15 UTC (rev 6542) @@ -68,4 +68,10 @@ void setExtentForLocalStore(final long extent) throws IOException, InterruptedException; + /** + * Called from {@link AbstractJournal} commit2Phase to ensure is able to + * read committed data that has been streamed directly to the backing store. 
+ */ + public void resetFromHARootBlock(final IRootBlockView rootBlock); + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/RWStrategy.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/RWStrategy.java 2012-09-06 16:19:38 UTC (rev 6541) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/RWStrategy.java 2012-09-06 17:36:15 UTC (rev 6542) @@ -774,9 +774,9 @@ return m_store.getOutputStream(context); } -// @Override -// public void resetFromHARootBlock(final IRootBlockView rootBlock) { -// m_store.resetFromHARootBlock(rootBlock); -// } + @Override + public void resetFromHARootBlock(final IRootBlockView rootBlock) { + m_store.resetFromHARootBlock(rootBlock); + } } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/WORMStrategy.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/WORMStrategy.java 2012-09-06 16:19:38 UTC (rev 6541) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/WORMStrategy.java 2012-09-06 17:36:15 UTC (rev 6542) @@ -186,15 +186,15 @@ /** * The service responsible for migrating dirty records onto the backing file * and (for HA) onto the other members of the {@link Quorum}. + * <p> + * This MAY be <code>null</code> for a read-only store or if the write cache + * is disabled. It is required for HA. * - * @todo This MAY be <code>null</code> for a read-only store or if the write - * cache is disabled. - * - * @todo Is HA read-only allowed? If so, then since the - * {@link WriteCacheService} handles failover reads it should be - * enabled for HA read-only. + * TODO This should not really be volatile. For HA, we wind up needing to + * set a new value on this field in {@link #abort()}. The field used to + * be final. Perhaps an {@link AtomicReference} would be appropriate now? */ - private final WriteCacheService writeCacheService; + private volatile WriteCacheService writeCacheService; /** * <code>true</code> iff the backing store has record level checksums. @@ -202,6 +202,13 @@ private final boolean useChecksums; /** + * The #of write cache buffers to use. + * + * @see FileMetadata#writeCacheBufferCount + */ + private final int writeCacheBufferCount; + + /** * <code>true</code> if the backing store will be used in an HA * {@link Quorum} (this is passed through to the {@link WriteCache} objects * which use this flag to conditionally track the checksum of the entire @@ -883,6 +890,8 @@ * handles the write pipeline to the downstream quorum members). */ // final Quorum<?,?> quorum = quorumRef.get(); + + this.writeCacheBufferCount = fileMetadata.writeCacheBufferCount; isHighlyAvailable = quorum != null && quorum.isHighlyAvailable(); @@ -894,40 +903,34 @@ /* * WriteCacheService. */ - try { - this.writeCacheService = new WriteCacheService( - fileMetadata.writeCacheBufferCount, useChecksums, - extent, opener, quorum) { - @Override - public WriteCache newWriteCache(final IBufferAccess buf, - final boolean useChecksum, - final boolean bufferHasData, - final IReopenChannel<? 
extends Channel> opener, - final long fileExtent) - throws InterruptedException { - return new WriteCacheImpl(0/* baseOffset */, buf, - useChecksum, bufferHasData, - (IReopenChannel<FileChannel>) opener, - fileExtent); - } - }; - this._checkbuf = null; - } catch (InterruptedException e) { - throw new RuntimeException(e); - } + this.writeCacheService = newWriteCacheService(); + this._checkbuf = null; } else { this.writeCacheService = null; this._checkbuf = useChecksums ? ByteBuffer.allocateDirect(4) : null; } -// System.err.println("WARNING: alpha impl: " -// + this.getClass().getName() -// + (writeCacheService != null ? " : writeCacheBuffers=" -// + fileMetadata.writeCacheBufferCount : " : No cache") -// + ", useChecksums=" + useChecksums); + } + private WriteCacheService newWriteCacheService() { + try { + return new WriteCacheService(writeCacheBufferCount, useChecksums, + extent, opener, quorum) { + @Override + public WriteCache newWriteCache(final IBufferAccess buf, + final boolean useChecksum, final boolean bufferHasData, + final IReopenChannel<? extends Channel> opener, + final long fileExtent) throws InterruptedException { + return new WriteCacheImpl(0/* baseOffset */, buf, + useChecksum, bufferHasData, + (IReopenChannel<FileChannel>) opener, fileExtent); + } + }; + } catch (InterruptedException e) { + throw new RuntimeException(e); + } } - + /** * Implementation coordinates writes using the read lock of the * {@link DiskOnlyStrategy#extensionLock}. This is necessary in order to @@ -1076,8 +1079,24 @@ if (writeCacheService != null) { try { + if (quorum != null) { + /** + * When the WORMStrategy is part of an HA quorum, we need to + * close out and then reopen the WriteCacheService every + * time the quorum token is changed. For convenience, this + * is handled by extending the semantics of abort() on the + * Journal and reset() on the WORMStrategy. + * + * @see <a + * href="https://sourceforge.net/apps/trac/bigdata/ticket/530"> + * HA Journal </a> + */ + writeCacheService.close(); + writeCacheService = newWriteCacheService(); + } else { writeCacheService.reset(); writeCacheService.setExtent(extent); + } } catch (InterruptedException e) { throw new RuntimeException(e); } @@ -2253,10 +2272,21 @@ public void writeRawBuffer(final HAWriteMessage msg, final IBufferAccess b) throws IOException, InterruptedException { - writeCacheService.newWriteCache(b, useChecksums, - true/* bufferHasData */, opener, msg.getFileExtent()).flush( - false/* force */); - + /* + * Wrap up the data from the message as a WriteCache object. This will + * build up a RecordMap containing the allocations to be made, and + * including a ZERO (0) data length if any offset winds up being deleted + * (released). + */ + final WriteCache writeCache = writeCacheService.newWriteCache(b, + useChecksums, true/* bufferHasData */, opener, + msg.getFileExtent()); + + /* + * Flush the scattered writes in the write cache to the backing + * store. 
+ */ + writeCache.flush(false/* force */); } public void setExtentForLocalStore(final long extent) throws IOException, @@ -2266,4 +2296,10 @@ } + public void resetFromHARootBlock(final IRootBlockView rootBlock) { + + nextOffset.set(rootBlock.getNextOffset()); + + } + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java 2012-09-06 16:19:38 UTC (rev 6541) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java 2012-09-06 17:36:15 UTC (rev 6542) @@ -125,11 +125,21 @@ switch (bufferMode) { case DiskRW: break; + case DiskWORM: + break; default: throw new IllegalArgumentException(Options.BUFFER_MODE + "=" + bufferMode + " : does not support HA"); } + final boolean writeCacheEnabled = Boolean.valueOf(properties + .getProperty(Options.WRITE_CACHE_ENABLED, + Options.DEFAULT_WRITE_CACHE_ENABLED)); + + if (!writeCacheEnabled) + throw new IllegalArgumentException(Options.WRITE_CACHE_ENABLED + + " : must be true."); + if (properties.get(Options.WRITE_PIPELINE_ADDR) == null) { throw new RuntimeException(Options.WRITE_PIPELINE_ADDR Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-06 16:19:38 UTC (rev 6541) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-06 17:36:15 UTC (rev 6542) @@ -37,7 +37,7 @@ import com.bigdata.jini.start.config.ZookeeperClientConfig; import com.bigdata.jini.util.JiniUtil; import com.bigdata.journal.AbstractJournal; -import com.bigdata.journal.RWStrategy; +import com.bigdata.journal.IHABufferStrategy; import com.bigdata.journal.ha.HAWriteMessage; import com.bigdata.quorum.Quorum; import com.bigdata.quorum.QuorumActor; @@ -393,7 +393,7 @@ && quorum.getMember().isLeader( e.token())) { try { - System.err + System.out// TODO LOG @ INFO .println("Starting NSS"); startNSS(); } catch (Exception e1) { @@ -420,7 +420,7 @@ quorum.start(newQuorumService(logicalServiceId, serviceUUID, haGlueService, journal)); - final QuorumActor actor = quorum.getActor(); + final QuorumActor<?,?> actor = quorum.getActor(); actor.memberAdd(); actor.pipelineAdd(); actor.castVote(journal.getLastCommitTime()); @@ -560,8 +560,8 @@ } }; - ((RWStrategy) journal.getBufferStrategy()).writeRawBuffer(msg, - b); + ((IHABufferStrategy) journal.getBufferStrategy()) + .writeRawBuffer(msg, b); } This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
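The common thread in r6542 is programming the HA pipeline against a capability interface rather than a concrete store class, so the RW and WORM strategies become interchangeable: writeRawBuffer, setExtentForLocalStore and resetFromHARootBlock live on the shared IHABufferStrategy contract, and the follower-side reset after a 2-phase commit no longer needs an instanceof RWStrategy check. A shape-only sketch under assumed, simplified names; the real method takes the root block view rather than a bare offset, and the RW reset reloads its allocators rather than doing nothing.

    import java.util.concurrent.atomic.AtomicLong;

    /** Capability interface: what the HA commit path needs from any backing store. */
    interface HAStoreSketch {
        void resetFromHARootBlock(long nextOffsetFromRootBlock);
    }

    /** WORM flavor: an append-only store only has to move its append pointer. */
    final class WormStoreSketch implements HAStoreSketch {
        private final AtomicLong nextOffset = new AtomicLong();
        public void resetFromHARootBlock(final long nextOffsetFromRootBlock) {
            nextOffset.set(nextOffsetFromRootBlock);
        }
    }

    /** RW flavor: a read/write store must also resynchronize its allocation metadata. */
    final class RwStoreSketch implements HAStoreSketch {
        public void resetFromHARootBlock(final long nextOffsetFromRootBlock) {
            // reload allocators from the committed root block (elided in this sketch)
        }
    }

    final class FollowerCommitSketch {
        /** The follower no longer switches on the concrete buffer strategy type. */
        static void afterCommit(final HAStoreSketch store, final long nextOffsetFromRootBlock) {
            store.resetFromHARootBlock(nextOffsetFromRootBlock);
        }
    }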
From: <tho...@us...> - 2012-09-06 19:32:21
|
Revision: 6544 http://bigdata.svn.sourceforge.net/bigdata/?rev=6544&view=rev Author: thompsonbry Date: 2012-09-06 19:32:14 +0000 (Thu, 06 Sep 2012) Log Message: ----------- Ok. I have the WORMStrategy replicating correctly through the first commit point. The only differences in the files are that the first root block (rootBlock0) differs between the two files. This is because the leader's root block was not replicated to the follower when the quorum met for the first time. Either the leader or the follower needs to initiate this transfer. That will make the files completely identical on the disk. I had to add a method to let the WORMStrategy override the first offset on the WriteCache in writeRawRecord. I also had to setup the ByteBuffer correctly (pos==limit) since pos=0 and limit=#bytes on receipt, but WriteCache.flush() expected pos==limit==#bytes. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/WORMStrategy.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java 2012-09-06 17:37:12 UTC (rev 6543) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java 2012-09-06 19:32:14 UTC (rev 6544) @@ -321,6 +321,18 @@ final private AtomicLong firstOffset = new AtomicLong(-1L); /** + * Exposed to the WORM for HA support. + * + * @param firstOffset + * The first offset (from the HA message). + */ + protected void setFirstOffset(final long firstOffset) { + + this.firstOffset.set(firstOffset); + + } + + /** * The capacity of the backing buffer. */ final private int capacity; @@ -739,7 +751,7 @@ // by zeroing an address. if (checker != null) { // update the checksum (no side-effects on [data]) - ByteBuffer chkBuf = tmp.asReadOnlyBuffer(); + final ByteBuffer chkBuf = tmp.asReadOnlyBuffer(); chkBuf.position(spos); chkBuf.limit(tmp.position()); checker.update(chkBuf); @@ -993,9 +1005,9 @@ // #of bytes to write on the disk. final int nbytes = tmp.position(); - if (log.isTraceEnabled()) { - log.trace("nbytes=" + nbytes + ", firstOffset=" + getFirstOffset() + ", nflush=" + counters.nflush); - } + if (log.isTraceEnabled()) + log.trace("nbytes=" + nbytes + ", firstOffset=" + + getFirstOffset() + ", nflush=" + counters.nflush); if (nbytes == 0) { @@ -1022,8 +1034,8 @@ remaining = nanos - (System.nanoTime() - begin); // write the data on the disk file. 
- final boolean ret = writeOnChannel(view, getFirstOffset(), Collections.unmodifiableMap(recordMap), - remaining); + final boolean ret = writeOnChannel(view, getFirstOffset(), + Collections.unmodifiableMap(recordMap), remaining); if (!ret) { throw new TimeoutException("Unable to flush WriteCache"); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-06 17:37:12 UTC (rev 6543) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-06 19:32:14 UTC (rev 6544) @@ -2969,33 +2969,6 @@ } -// // Note: RW store method. -// @Override -// public long write(final ByteBuffer data, final long oldAddr) { -// -// assertCanWrite(); -// -// return _bufferStrategy.write(data, oldAddr); -// -// } - -// public long write(final ByteBuffer data, final long oldAddr, -// final IAllocationContext context) { -// -// assertCanWrite(); -// -// if (_bufferStrategy instanceof IRWStrategy) { -// -// return ((IRWStrategy) _bufferStrategy).write(data, oldAddr, context); -// -// } else { -// -// return _bufferStrategy.write(data, oldAddr); -// -// } -// -// } - public long write(final ByteBuffer data, final IAllocationContext context) { assertCanWrite(); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/WORMStrategy.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/WORMStrategy.java 2012-09-06 17:37:12 UTC (rev 6543) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/WORMStrategy.java 2012-09-06 19:32:14 UTC (rev 6544) @@ -194,7 +194,7 @@ * set a new value on this field in {@link #abort()}. The field used to * be final. Perhaps an {@link AtomicReference} would be appropriate now? */ - private volatile WriteCacheService writeCacheService; + private volatile WORMWriteCacheService writeCacheService; /** * <code>true</code> iff the backing store has record level checksums. @@ -912,24 +912,45 @@ } - private WriteCacheService newWriteCacheService() { + private WORMWriteCacheService newWriteCacheService() { + try { - return new WriteCacheService(writeCacheBufferCount, useChecksums, - extent, opener, quorum) { - @Override - public WriteCache newWriteCache(final IBufferAccess buf, - final boolean useChecksum, final boolean bufferHasData, - final IReopenChannel<? extends Channel> opener, - final long fileExtent) throws InterruptedException { - return new WriteCacheImpl(0/* baseOffset */, buf, - useChecksum, bufferHasData, - (IReopenChannel<FileChannel>) opener, fileExtent); - } - }; + + return new WORMWriteCacheService(writeCacheBufferCount, + useChecksums, extent, opener, quorum); + } catch (InterruptedException e) { + throw new RuntimeException(e); + } + } + + private class WORMWriteCacheService extends WriteCacheService { + + WORMWriteCacheService(final int nbuffers, final boolean useChecksum, + final long fileExtent, + final IReopenChannel<? extends Channel> opener, + final Quorum quorum) throws InterruptedException { + + super(writeCacheBufferCount, useChecksums, extent, opener, quorum); + + } + + @Override + public WriteCacheImpl newWriteCache(final IBufferAccess buf, + final boolean useChecksum, final boolean bufferHasData, + final IReopenChannel<? 
extends Channel> opener, + final long fileExtent) throws InterruptedException { + + return new WriteCacheImpl(0/* baseOffset */, buf, useChecksum, + bufferHasData, (IReopenChannel<FileChannel>) opener, + fileExtent); + + } + + } /** * Implementation coordinates writes using the read lock of the @@ -953,7 +974,19 @@ } + /** + * {@inheritDoc} + * <p> + * Overridden to expose this method to the {@link WORMStrategy} class. + */ @Override + protected void setFirstOffset(final long firstOffset) { + + super.setFirstOffset(firstOffset); + + } + + @Override protected boolean writeOnChannel(final ByteBuffer data, final long firstOffset, final Map<Long, RecordMetadata> recordMapIsIgnored, @@ -2278,14 +2311,35 @@ * including a ZERO (0) data length if any offset winds up being deleted * (released). */ - final WriteCache writeCache = writeCacheService.newWriteCache(b, + final WriteCacheImpl writeCache = writeCacheService.newWriteCache(b, useChecksums, true/* bufferHasData */, opener, msg.getFileExtent()); + final long firstOffset = msg.getFirstOffset(); + + if (firstOffset < getHeaderSize()) + throw new IllegalArgumentException( + "firstOffset must be beyond header: firstOffset=" + + firstOffset + ", headerSize=" + getHeaderSize()); + + if (firstOffset < getNextOffset()) + throw new IllegalArgumentException( + "firstOffset must be beyond nextOffset: firstOffset=" + + firstOffset + ", nextOffset=" + getNextOffset()); + + writeCache.setFirstOffset(firstOffset); + /* - * Flush the scattered writes in the write cache to the backing - * store. + * Setup buffer for writing. We receive the buffer with pos=0, + * limit=#ofbyteswritten. However, flush() expects pos=limit, will clear + * pos to zero and then write bytes up to the limit. So, we set the + * position to the limit before calling flush. */ + final ByteBuffer bb = b.buffer(); + final int limit = bb.limit(); + bb.position(limit); + + // Flush the write in the write cache to the backing store. writeCache.flush(false/* force */); } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-06 17:37:12 UTC (rev 6543) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-06 19:32:14 UTC (rev 6544) @@ -531,7 +531,7 @@ final ByteBuffer data) throws Exception { if (haLog.isInfoEnabled()) - haLog.info("msg=" + msg); + haLog.info("msg=" + msg + ", buf=" + data); /* * Note: the ByteBuffer is owned by the HAReceiveService. This This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
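A minimal, self-contained sketch of the buffer handling described in the r6544 log message above (the buffer arrives with pos=0 and limit=#bytes, and the position must be advanced to the limit before flush). The class and payload here are illustrative only and are not part of the bigdata code base.

import java.nio.ByteBuffer;

public class ReceiveBufferDemo {

    public static void main(String[] args) {

        // Simulate receipt of a replicated block: the payload occupies
        // [0,limit) and the receiver hands the buffer over with pos=0.
        final ByteBuffer b = ByteBuffer.allocate(1024);
        b.put(new byte[] { 1, 2, 3, 4 }); // four payload bytes
        b.flip(); // pos=0, limit=4 (state on receipt)

        // flush() expects pos==limit==#bytes, so advance the position to
        // the limit first (this mirrors bb.position(limit) in WORMStrategy).
        b.position(b.limit()); // pos=4, limit=4

        System.out.println("pos=" + b.position() + ", limit=" + b.limit());
    }
}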
From: <tho...@us...> - 2012-09-07 16:50:40
|
Revision: 6547 http://bigdata.svn.sourceforge.net/bigdata/?rev=6547&view=rev Author: thompsonbry Date: 2012-09-07 16:50:33 +0000 (Fri, 07 Sep 2012) Log Message: ----------- Modified AbstractJournal to synchronize the root blocks when the quorum meets. Extracted code from RootBlockUtility into a static public method to choose the current root block. This preserves the historical decision to choose rootblock1 when the commit counters are the same. Journal was not logging txLog OPEN events for the WORMStrategy. Modified DumpZookeeper to show the effective session timeout negotiated with the zk quorum. Fixed bug where it was not flushing out the PrintWriter. Modified the DefaultResourceLocator to not use the UNISOLATED view of the GRS when the caller was using a READ-COMMITTED view. This addressed some problems with the follower, which is read-only. Modified the DefaultResourceLocator test suite since the read-committed view of the GRS is not actually visible until after a commit on a Journal. Modified the NSS startup to handle startup in a read-only mode (as a quorum follower). Modified Journal.getCounters() to not report the index counters for a read-only journal (this led to an NPE since _name2addr is null for a read-only journal). Modified the HAJournal configuration files to use a short (5s) timeout for both zookeeper and jini. This makes debugging easier. I am not sure yet what values we will recommend for deployment. One reason to raise these values is that a long GC pause could cause a session timeout or registrar heartbeat drop. @see https://sourceforge.net/apps/trac/bigdata/ticket/530 (Journal HA) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Journal.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/RootBlockUtility.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/relation/locator/DefaultResourceLocator.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/relation/locator/TestDefaultResourceLocator.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/zookeeper/DumpZookeeper.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-07 12:17:03 UTC (rev 6546) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-07 16:50:33 UTC (rev 6547) @@ -2759,6 +2759,7 @@ * HA mode. */ + // Local HA service implementation (non-Remote).
final QuorumService<HAGlue> quorumService = quorum.getClient(); boolean didVoteYes = false; @@ -3162,7 +3163,7 @@ assert _fieldReadWriteLock.writeLock().isHeldByCurrentThread(); - if (!isReadOnly()) { // quorumManager.getQuorum().isLeader()) { + if (!isReadOnly()) { /* * Only the leader can accept writes so only the leader will @@ -4636,6 +4637,8 @@ * * We do not need to discard read-only tx since the committed * state should remain valid even when a quorum is lost. + * + * TODO QUORUM TX INTEGRATION */ abort(); @@ -4643,6 +4646,54 @@ quorumToken = newValue; + // This quorum member. + final QuorumService<HAGlue> localService = quorum.getClient(); + + if (_rootBlock.getCommitCounter() == 0 + && localService.isFollower(quorumToken)) { + + // Remote interface for the quorum leader. + final HAGlue leader = localService.getLeader(quorumToken); + + /* + * When the quorum meets for the first time, we need to take + * the root block from the leader and use it to replace both + * of our root blocks (the initial root blocks are + * identical). That will make the root blocks the same on + * all quorum members. + */ + log.info("Fetching root block from leader."); + final ByteBuffer buf; + try { + buf = ByteBuffer.wrap(leader + .getRootBlock(null/* storeUUID */)); + } catch (IOException e) { + throw new RuntimeException(e); + } + + final IRootBlockView rootBlock0 = new RootBlockView( + true/* rootBlock0 */, buf, checker); + + final IRootBlockView rootBlock1 = new RootBlockView( + false/* rootBlock0 */, buf, checker); + + log.info("Synchronizing root blocks with leader."); + + // write root block through to disk and sync. + _bufferStrategy.writeRootBlock(rootBlock0, ForceEnum.Force); + + // write 2nd root block through to disk and sync. + _bufferStrategy.writeRootBlock(rootBlock1, ForceEnum.Force); + + // Choose the "current" root block. + _rootBlock = RootBlockUtility.chooseRootBlock(rootBlock0, + rootBlock1); + + log.info("Synchronized root blocks with leader: rootBlock=" + + _rootBlock); + + } + /* * We need to reset the backing store with the token for the new * quorum. There should not be any active writers since there @@ -4653,6 +4704,7 @@ * Each node in the quorum should handle this locally when it * sees the quorum meet event. */ + _abort(); } else { @@ -4791,7 +4843,7 @@ final long prepareToken = rootBlock.getQuorumToken(); // true the token is valid and this service is the quorum leader - final boolean isLeader = quorum.getMember().isLeader(prepareToken); + final boolean isLeader = quorum.getClient().isLeader(prepareToken); final FutureTask<Boolean> ft = new FutureTaskMon<Boolean>(new Runnable() { @@ -4905,8 +4957,10 @@ // set the new root block. _rootBlock = rootBlock; - if (quorum.getMember().isFollower( - rootBlock.getQuorumToken())) { + final QuorumService<HAGlue> localService = quorum.getClient(); + + if (localService.isFollower(rootBlock.getQuorumToken())) { + /* * Ensure allocators are synced after commit. This * is only done for the followers. The leader has @@ -4914,11 +4968,14 @@ * down the writes. The followers have not be * updating the allocators. */ + if (haLog.isInfoEnabled()) haLog.error("Reset from root block: serviceUUID=" - + quorum.getMember().getServiceId()); + + localService.getServiceId()); + ((IHABufferStrategy) _bufferStrategy) .resetFromHARootBlock(_rootBlock); + } // reload the commit record from the new root block. 
@@ -5075,8 +5132,9 @@ public byte[] getRootBlock(final UUID storeId) { - if (storeId == null) - throw new IllegalArgumentException(); + // storeId is optional (used in scale-out). +// if (storeId == null) +// throw new IllegalArgumentException(); if (haLog.isInfoEnabled()) haLog.info("storeId=" + storeId); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Journal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Journal.java 2012-09-07 12:17:03 UTC (rev 6546) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Journal.java 2012-09-07 16:50:33 UTC (rev 6547) @@ -355,11 +355,11 @@ } protected void activateTx(final TxState state) { + if (txLog.isInfoEnabled()) + txLog.info("OPEN : txId=" + state.tx + + ", readsOnCommitTime=" + state.readsOnCommitTime); final IBufferStrategy bufferStrategy = Journal.this.getBufferStrategy(); if (bufferStrategy instanceof IRWStrategy) { - if (txLog.isInfoEnabled()) - txLog.info("OPEN : txId=" + state.tx - + ", readsOnCommitTime=" + state.readsOnCommitTime); final IRawTx tx = ((IRWStrategy)bufferStrategy).newTx(); if (m_rawTxs.put(state.tx, tx) != null) { throw new IllegalStateException( @@ -508,8 +508,15 @@ tmp.attach(super.getCounters()); - tmp.makePath(IJournalCounters.indexManager).attach( - _getName2Addr().getIndexCounters()); + if (!isReadOnly()) { + /* + * These index counters are only available for the unisolated + * Name2Addr view. If this is a read-only journal, then we can + * not report out that information. + */ + tmp.makePath(IJournalCounters.indexManager).attach( + _getName2Addr().getIndexCounters()); + } tmp.makePath(IJournalCounters.concurrencyManager) .attach(concurrencyManager.getCounters()); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/RootBlockUtility.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/RootBlockUtility.java 2012-09-07 12:17:03 UTC (rev 6546) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/RootBlockUtility.java 2012-09-07 16:50:33 UTC (rev 6547) @@ -71,9 +71,13 @@ * * @param opener * @param file - * @param validateChecksum - * @param alternateRootBlock + * @param validateChecksum + * @param alternateRootBlock + * When <code>true</code>, the alternate root block will be + * chosen. This flag only makes sense when you have two root + * blocks to choose from and you want to choose the other one. * @param ignoreBadRootBlock + * When <code>true</code>, a bad root block will be ignored. * * @throws IOException * @@ -100,11 +104,15 @@ IRootBlockView rootBlock0 = null, rootBlock1 = null; try { rootBlock0 = new RootBlockView(true, tmp0, checker); + if (log.isInfoEnabled()) + log.info("rootBlock0: " + rootBlock0); } catch (RootBlockException ex) { log.error("Bad root block zero: " + ex); } try { rootBlock1 = new RootBlockView(false, tmp1, checker); + if (log.isInfoEnabled()) + log.info("rootBlock1: " + rootBlock1); } catch (RootBlockException ex) { log.error("Bad root block one: " + ex); } @@ -117,6 +125,68 @@ this.rootBlock0 = rootBlock0; this.rootBlock1 = rootBlock1; + this.rootBlock = chooseRootBlock(rootBlock0, rootBlock1, + ignoreBadRootBlock, alternateRootBlock); + } + + /** + * Return the chosen root block. The root block having the greater + * {@link IRootBlockView#getCommitCounter() commit counter} is chosen by + * default. 
+ * <p> + * Note: For historical compatibility, <code>rootBlock1</code> is chosen if + * both root blocks have the same {@link IRootBlockView#getCommitCounter()}. + * + * @param rootBlock0 + * Root block 0 (may be <code>null</code> if this root block is bad). + * @param rootBlock1 + * Root block 1 (may be <code>null</code> if this root block is bad). + * + * @return The chosen root block. + * + * @throws RuntimeException + * if no root block satisfies the criteria. + */ + public static IRootBlockView chooseRootBlock( + final IRootBlockView rootBlock0, final IRootBlockView rootBlock1) { + + return chooseRootBlock(rootBlock0, rootBlock1, + false/* alternateRootBlock */, false/* ignoreBadRootBlock */); + + } + + /** + * Return the chosen root block. The root block having the greater + * {@link IRootBlockView#getCommitCounter() commit counter} is chosen by + * default. + * <p> + * Note: For historical compatibility, <code>rootBlock1</code> is chosen if + * both root blocks have the same {@link IRootBlockView#getCommitCounter()}. + * + * @param rootBlock0 + * Root block 0 (may be <code>null</code> if this root block is + * bad). + * @param rootBlock1 + * Root block 1 (may be <code>null</code> if this root block is + * bad). + * @param alternateRootBlock + * When <code>true</code>, the alternate root block will be + * chosen. This flag only makes sense when you have two root + * blocks to choose from and you want to choose the other one. + * @param ignoreBadRootBlock + * When <code>true</code>, a bad root block will be ignored. + * + * @return The chosen root block. + * + * @throws RuntimeException + * if no root block satisfies the criteria. + */ + public static IRootBlockView chooseRootBlock( + final IRootBlockView rootBlock0, final IRootBlockView rootBlock1, + final boolean alternateRootBlock,final boolean ignoreBadRootBlock) { + + final IRootBlockView rootBlock; + if (!ignoreBadRootBlock && (rootBlock0 == null || rootBlock1 == null)) { /* @@ -129,28 +199,38 @@ + ", rootBlock1=" + (rootBlock1 == null ? "bad" : "ok")); } - if(alternateRootBlock) { + + if (alternateRootBlock) { + /* * A request was made to use the alternative root block. */ + if (rootBlock0 == null || rootBlock1 == null) { + /* * Note: The [alternateRootBlock] flag only makes sense when you * have two root blocks to choose from and you want to choose * the other one. */ + throw new RuntimeException( "Can not use alternative root block since one root block is damaged."); } else { + log.warn("Using alternate root block"); + } + } + /* * Choose the root block based on the commit counter. * * Note: The commit counters MAY be equal. This will happen if * we rollback the journal and override the current root block - * with the alternate root block. + * with the alternate root block. It is also true when we first + * create a Journal. * * Note: If either root block was damaged then that rootBlock * reference will be null and we will use the other rootBlock @@ -159,21 +239,45 @@ */ final long cc0 = rootBlock0 == null ? -1L : rootBlock0 .getCommitCounter(); + final long cc1 = rootBlock1 == null ? -1L : rootBlock1 .getCommitCounter(); + if (rootBlock0 == null) { + // No choice. The other root block does not exist. rootBlock = rootBlock1; + } else if (rootBlock1 == null) { + // No choice. The other root block does not exist. rootBlock = rootBlock0; + } else { - // A choice exists, compare the timestamps. - this.rootBlock = (cc0 > cc1 // + + /* + * A choice exists, compare the commit counters. 
+ * + * Note: As a historical artifact, this logic will choose + * [rootBlock1] when the two root blocks have the same + * [commitCounter]. With the introduction of HA support, code in + * AbstractJournal#setQuorumToken() now depends on this choice + * policy to decide which root block it will take as the current + * root block. + */ + + rootBlock = (cc0 > cc1 // ? (alternateRootBlock ? rootBlock1 : rootBlock0) // : (alternateRootBlock ? rootBlock0 : rootBlock1)// ); + } + + if (log.isInfoEnabled()) + log.info("chosenRoot: " + rootBlock); + + return rootBlock; + } /** Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/relation/locator/DefaultResourceLocator.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/relation/locator/DefaultResourceLocator.java 2012-09-07 12:17:03 UTC (rev 6546) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/relation/locator/DefaultResourceLocator.java 2012-09-07 16:50:33 UTC (rev 6547) @@ -764,19 +764,85 @@ } else { + final SparseRowStore rowStore; + /* * Look up the GRS for a non-read-historical view. * * Note: We do NOT cache the property set on this code path. + * + * Note: This used to use the UNISOLATED view for all such requests. + * I have modified the code (9/7/2012) to use the caller's view. + * This allows a caller using a READ_COMMITTED view to obtain a + * read-only view of the GRS. That is required to support GRS reads + * on followers in an HA quorum (followers are read-only and do not + * have access to the unisolated versions of indices). */ - final SparseRowStore rowStore = indexManager - .getGlobalRowStore(/* timestamp */); + if (timestamp == ITx.UNISOLATED) { + + /* + * The unisolated view. + */ + + rowStore = indexManager.getGlobalRowStore(); + } else if (timestamp == ITx.READ_COMMITTED) { + + /* + * View for the last commit time. + */ + + rowStore = indexManager.getGlobalRowStore(indexManager + .getLastCommitTime()); + + } else if (TimestampUtility.isReadWriteTx(timestamp)) { + + if (indexManager instanceof Journal) { + + final Journal journal = (Journal) indexManager; + + final ITx tx = journal.getTransactionManager().getTx( + timestamp); + + if (tx == null) { + + // No such tx? + throw new IllegalStateException("No such tx: " + + timestamp); + + } + + // Use the view that the tx is reading on. + rowStore = indexManager.getGlobalRowStore(tx + .getReadsOnCommitTime()); + + } else { + + /* + * TODO This should use the readsOnCommitTime on the cluster + * as well. + * + * @see https://sourceforge.net/apps/trac/bigdata/ticket/266 + * (thin txId) + */ + + rowStore = indexManager.getGlobalRowStore(/* unisolated */); + + } + + + } else { + + throw new AssertionError("timestamp=" + + TimestampUtility.toString(timestamp)); + + } + // Read the properties from the GRS. map = rowStore == null ? 
null : rowStore.read( RelationSchema.INSTANCE, namespace); - + } if (map == null) { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/relation/locator/TestDefaultResourceLocator.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/relation/locator/TestDefaultResourceLocator.java 2012-09-07 12:17:03 UTC (rev 6546) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/relation/locator/TestDefaultResourceLocator.java 2012-09-07 16:50:33 UTC (rev 6547) @@ -311,6 +311,37 @@ ITx.UNISOLATED) == mockRelation); /* + * Note: The read-committed view is not, in fact, locatable. + * This is because the GRS update on the Journal does not + * include a commit. That update will become visible only + * once we do a Journal.commit(). + */ + assertNull(store.getResourceLocator().locate(namespace, + ITx.READ_COMMITTED)); + +// /* +// * The read-committed view of the resource is also locatable. +// */ +// assertNotNull(store.getResourceLocator().locate(namespace, +// ITx.READ_COMMITTED)); +// +// /* +// * The read committed view is not the same instance as the +// * unisolated view. +// */ +// assertTrue(((MockRelation) store.getResourceLocator().locate( +// namespace, ITx.READ_COMMITTED)) != mockRelation); + + } + + // commit time immediately proceeding this commit. + final long priorCommitTime = store.getLastCommitTime(); + + // commit, noting the commit time. + final long lastCommitTime = store.commit(); + + { + /* * The read-committed view of the resource is also locatable. */ assertNotNull(store.getResourceLocator().locate(namespace, @@ -325,12 +356,6 @@ } - // commit time immediately proceeding this commit. - final long priorCommitTime = store.getLastCommitTime(); - - // commit, noting the commit time. - final long lastCommitTime = store.commit(); - if(log.isInfoEnabled()) { log.info("priorCommitTime=" + priorCommitTime); log.info("lastCommitTime =" + lastCommitTime); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config 2012-09-07 12:17:03 UTC (rev 6546) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config 2012-09-07 16:50:33 UTC (rev 6547) @@ -66,7 +66,7 @@ private static fedname = "benchmark"; // NanoSparqlServer (http) port. - private static nssPort = 8090; + private static nssPort = ConfigMath.add(8090,1); // write replication pipeline port. private static haPort = ConfigMath.add(9090,1); @@ -127,10 +127,10 @@ */ // jini - static private leaseTimeout = ConfigMath.m2ms(60);// 20s=20000; 5m=300000; + static private leaseTimeout = ConfigMath.s2ms(5); // zookeeper - static private sessionTimeout = (int)ConfigMath.m2ms(10);// was 5m 20s=20000; 5m=300000; + static private sessionTimeout = (int)ConfigMath.s2ms(5); /* * Configuration for default KB. 
Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config 2012-09-07 12:17:03 UTC (rev 6546) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config 2012-09-07 16:50:33 UTC (rev 6547) @@ -127,10 +127,10 @@ */ // jini - static private leaseTimeout = ConfigMath.m2ms(60);// 20s=20000; 5m=300000; + static private leaseTimeout = ConfigMath.s2ms(5); // zookeeper - static private sessionTimeout = (int)ConfigMath.m2ms(10);// was 5m 20s=20000; 5m=300000; + static private sessionTimeout = (int)ConfigMath.s2ms(5); /* * Configuration for default KB. Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config 2012-09-07 12:17:03 UTC (rev 6546) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config 2012-09-07 16:50:33 UTC (rev 6547) @@ -127,10 +127,10 @@ */ // jini - static private leaseTimeout = ConfigMath.m2ms(60);// 20s=20000; 5m=300000; + static private leaseTimeout = ConfigMath.s2ms(5); // zookeeper - static private sessionTimeout = (int)ConfigMath.m2ms(10);// was 5m 20s=20000; 5m=300000; + static private sessionTimeout = (int)ConfigMath.s2ms(5); /* * Configuration for default KB. Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-07 12:17:03 UTC (rev 6546) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-07 16:50:33 UTC (rev 6547) @@ -382,6 +382,10 @@ case QUORUM_MEET: if (jettyServer == null) { /* + * The NSS will start on each service in the quorum. + * However, only the leader will create the default KB + * (if that option is configured). + * * Submit task since we can not do this in the event * thread. 
*/ @@ -389,12 +393,8 @@ new Callable<Void>() { @Override public Void call() throws Exception { - if (jettyServer == null - && quorum.getMember().isLeader( - e.token())) { + if (jettyServer == null) { try { - System.out// TODO LOG @ INFO - .println("Starting NSS"); startNSS(); } catch (Exception e1) { log.error( @@ -597,6 +597,8 @@ NSSConfigurationOptions.PORT, Integer.TYPE, NSSConfigurationOptions.DEFAULT_PORT); + log.warn("Starting NSS: port=" + port); + final Map<String, String> initParams = new LinkedHashMap<String, String>(); { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/zookeeper/DumpZookeeper.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/zookeeper/DumpZookeeper.java 2012-09-07 12:17:03 UTC (rev 6546) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/zookeeper/DumpZookeeper.java 2012-09-07 16:50:33 UTC (rev 6547) @@ -38,10 +38,10 @@ import org.apache.log4j.Logger; import org.apache.zookeeper.KeeperException; +import org.apache.zookeeper.KeeperException.NoNodeException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooKeeper; -import org.apache.zookeeper.KeeperException.NoNodeException; import org.apache.zookeeper.data.Stat; import com.bigdata.io.SerializerUtil; @@ -79,6 +79,8 @@ * @throws InterruptedException * @throws KeeperException * @throws ConfigurationException + * + * TODO Add a listener mode (tail zk events). */ public static void main(final String[] args) throws IOException, InterruptedException, KeeperException, ConfigurationException { @@ -110,14 +112,42 @@ } }); + /* + * The sessionTimeout as negotiated (effective sessionTimeout). + * + * Note: This is not available until we actually request something + * from zookeeper. + */ + { + + try { + z.getData(zooClientConfig.zroot, false/* watch */, null/* stat */); + } catch (NoNodeException ex) { + // Ignore. + } catch (KeeperException ex) { + // Oops. + log.error(ex, ex); + } + + System.out.println("Negotiated sessionTimeout=" + + z.getSessionTimeout() + "ms"); + } + + final PrintWriter w = new PrintWriter(System.out); try { - new DumpZookeeper(z).dump(new PrintWriter(System.out), showData, - zooClientConfig.zroot, 0/*depth*/); + // recursive dump. 
+ new DumpZookeeper(z) + .dump(w, showData, zooClientConfig.zroot, 0/* depth */); + w.println("----"); + + w.flush(); + } finally { z.close(); + w.close(); } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java 2012-09-07 12:17:03 UTC (rev 6546) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java 2012-09-07 16:50:33 UTC (rev 6547) @@ -48,11 +48,15 @@ import com.bigdata.counters.CounterSet; import com.bigdata.counters.ICounterSetAccess; import com.bigdata.counters.IProcessCounters; +import com.bigdata.ha.HAGlue; +import com.bigdata.ha.QuorumService; import com.bigdata.io.DirectBufferPool; import com.bigdata.journal.IIndexManager; import com.bigdata.journal.ITransactionService; import com.bigdata.journal.ITx; import com.bigdata.journal.Journal; +import com.bigdata.quorum.AsynchronousQuorumCloseException; +import com.bigdata.quorum.Quorum; import com.bigdata.rdf.ServiceProviderHook; import com.bigdata.rdf.sail.BigdataSail; import com.bigdata.rdf.store.ScaleOutTripleStore; @@ -173,58 +177,101 @@ } - if(create) { - - // Attempt to resolve the namespace. - if (indexManager.getResourceLocator().locate(namespace, - ITx.UNISOLATED) == null) { + if (create) { - log.warn("Creating KB instance: namespace=" + namespace); - - if (indexManager instanceof Journal) { + if (indexManager instanceof Journal) { - /* - * Create a local triple store. - * - * Note: This hands over the logic to some custom code - * located on the BigdataSail. - */ - - final Journal jnl = (Journal) indexManager; + /* + * Create a local triple store. + * + * Note: This hands over the logic to some custom code located + * on the BigdataSail. + */ - final Properties properties = new Properties(jnl - .getProperties()); + final Journal jnl = (Journal) indexManager; - // override the namespace. - properties.setProperty(BigdataSail.Options.NAMESPACE, - namespace); + final Quorum<HAGlue, QuorumService<HAGlue>> quorum = jnl + .getQuorum(); - // create the appropriate as configured triple/quad store. - BigdataSail.createLTS(jnl, properties); + boolean isSoloOrLeader; + if (quorum == null) { + isSoloOrLeader = true; + } else { - + + final long token; + try { + log.warn("Awaiting quorum."); + token = quorum.awaitQuorum(); + } catch (AsynchronousQuorumCloseException e1) { + throw new RuntimeException(e1); + } catch (InterruptedException e1) { + throw new RuntimeException(e1); + } + + if (quorum.getMember().isLeader(token)) { + isSoloOrLeader = true; + } else { + isSoloOrLeader = false; + } + } + + if (isSoloOrLeader) { + + // Attempt to resolve the namespace. + if (indexManager.getResourceLocator().locate(namespace, + ITx.UNISOLATED) == null) { + + log.warn("Creating KB instance: namespace=" + namespace); + + final Properties properties = new Properties( + jnl.getProperties()); + + // override the namespace. + properties.setProperty(BigdataSail.Options.NAMESPACE, + namespace); + + // create the appropriate as configured triple/quad + // store. + BigdataSail.createLTS(jnl, properties); + + } // if( tripleStore == null ) + + } + + } else { + + // Attempt to resolve the namespace. + if (indexManager.getResourceLocator().locate(namespace, + ITx.UNISOLATED) == null) { + /* * Register triple store for scale-out. 
*/ - + + log.warn("Creating KB instance: namespace=" + namespace); + final JiniFederation<?> fed = (JiniFederation<?>) indexManager; - - final Properties properties = fed.getClient().getProperties(); - + + final Properties properties = fed.getClient() + .getProperties(); + final ScaleOutTripleStore lts = new ScaleOutTripleStore( indexManager, namespace, ITx.UNISOLATED, properties); - + lts.create(); - - } - - } // if( tripleStore == null ) - + + } // if( tripleStore == null ) + + } + } // if( create ) - txs = (indexManager instanceof Journal ? ((Journal) indexManager).getTransactionManager() - .getTransactionService() : ((IBigdataFederation<?>) indexManager).getTransactionService()); + txs = (indexManager instanceof Journal ? ((Journal) indexManager) + .getTransactionManager().getTransactionService() + : ((IBigdataFederation<?>) indexManager) + .getTransactionService()); final long timestamp; { This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2012-09-07 18:03:10
|
Revision: 6549 http://bigdata.svn.sourceforge.net/bigdata/?rev=6549&view=rev Author: thompsonbry Date: 2012-09-07 18:03:03 +0000 (Fri, 07 Sep 2012) Log Message: ----------- Modified HAGlue to extend IService. This makes the HAJournals play nice with ListServices. @see https://sourceforge.net/apps/trac/bigdata/ticket/530 (HA Journal) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlue.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlueDelegate.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlue.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlue.java 2012-09-07 17:48:19 UTC (rev 6548) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlue.java 2012-09-07 18:03:03 UTC (rev 6549) @@ -6,6 +6,7 @@ import java.util.concurrent.Future; import com.bigdata.journal.AbstractJournal; +import com.bigdata.service.IService; /** * A {@link Remote} interface for methods supporting high availability for a set @@ -23,7 +24,7 @@ * the standard jini smart proxy naming pattern. */ public interface HAGlue extends HAGlueBase, HAPipelineGlue, HAReadGlue, - HACommitGlue { + HACommitGlue, IService { /* * Administrative Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlueDelegate.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlueDelegate.java 2012-09-07 17:48:19 UTC (rev 6548) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlueDelegate.java 2012-09-07 18:03:03 UTC (rev 6549) @@ -25,6 +25,7 @@ import java.io.IOException; import java.net.InetSocketAddress; +import java.rmi.RemoteException; import java.util.UUID; import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; @@ -92,4 +93,29 @@ return delegate.receiveAndReplicate(msg); } + @Override + public UUID getServiceUUID() throws IOException { + return delegate.getServiceUUID(); + } + + @Override + public Class getServiceIface() throws IOException { + return delegate.getServiceIface(); + } + + @Override + public String getHostname() throws IOException { + return delegate.getHostname(); + } + + @Override + public String getServiceName() throws IOException { + return delegate.getServiceName(); + } + + @Override + public void destroy() throws RemoteException { + delegate.destroy(); + } + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-07 17:48:19 UTC (rev 6548) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-07 18:03:03 UTC (rev 6549) @@ -34,6 +34,7 @@ import java.nio.ByteBuffer; import java.nio.channels.Channel; import java.nio.channels.FileChannel; +import java.rmi.RemoteException; import java.util.Iterator; import java.util.LinkedList; import 
java.util.List; @@ -77,6 +78,7 @@ import com.bigdata.config.IntegerValidator; import com.bigdata.config.LongRangeValidator; import com.bigdata.config.LongValidator; +import com.bigdata.counters.AbstractStatisticsCollector; import com.bigdata.counters.CounterSet; import com.bigdata.counters.Instrument; import com.bigdata.ha.HAGlue; @@ -5180,6 +5182,47 @@ return getProxy(ft); } + /* + * IService + */ + + @Override + public UUID getServiceUUID() throws IOException { + + return getServiceId(); + + } + + @Override + public Class getServiceIface() throws IOException { + + return HAGlue.class; + + } + + @Override + public String getHostname() throws IOException { + + return AbstractStatisticsCollector.fullyQualifiedHostName; + + } + + @Override + public String getServiceName() throws IOException { + + // TODO Configurable service name? + return getServiceIface().getName() + "@" + getHostname() + "#" + + hashCode(); + + } + + @Override + public void destroy() throws RemoteException { + + AbstractJournal.this.destroy(); + + } + }; /** Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config 2012-09-07 17:48:19 UTC (rev 6548) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config 2012-09-07 18:03:03 UTC (rev 6549) @@ -207,6 +207,26 @@ */ /* + * Jini client configuration. + * + * TODO Only used by ListServices. + * TODO leaseTimeout ignored by HAJournalServer + */ +com.bigdata.service.jini.JiniClient { + + groups = bigdata.groups; + + locators = bigdata.locators; + + jiniOptions = new String[] { + + "net.jini.lookup.JoinManager.maxLeaseDuration="+bigdata.leaseTimeout, + + }; + +} + +/* * Server configuration options. */ com.bigdata.journal.jini.ha.HAJournalServer { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config 2012-09-07 17:48:19 UTC (rev 6548) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config 2012-09-07 18:03:03 UTC (rev 6549) @@ -207,6 +207,26 @@ */ /* + * Jini client configuration. + * + * TODO Only used by ListServices. + * TODO leaseTimeout ignored by HAJournalServer + */ +com.bigdata.service.jini.JiniClient { + + groups = bigdata.groups; + + locators = bigdata.locators; + + jiniOptions = new String[] { + + "net.jini.lookup.JoinManager.maxLeaseDuration="+bigdata.leaseTimeout, + + }; + +} + +/* * Server configuration options. */ com.bigdata.journal.jini.ha.HAJournalServer { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config 2012-09-07 17:48:19 UTC (rev 6548) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config 2012-09-07 18:03:03 UTC (rev 6549) @@ -208,6 +208,26 @@ */ /* + * Jini client configuration. + * + * TODO Only used by ListServices. 
+ * TODO leaseTimeout ignored by HAJournalServer + */ +com.bigdata.service.jini.JiniClient { + + groups = bigdata.groups; + + locators = bigdata.locators; + + jiniOptions = new String[] { + + "net.jini.lookup.JoinManager.maxLeaseDuration="+bigdata.leaseTimeout, + + }; + +} + +/* * Server configuration options. */ com.bigdata.journal.jini.ha.HAJournalServer { This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
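The HAGlueDelegate changes in r6549 follow a plain delegation pattern: each new IService method simply forwards to the wrapped instance. A toy version of that shape, with made-up interface and class names (not the real HAGlue/IService signatures):

import java.util.UUID;

interface ServiceInfo {
    UUID getServiceUUID();
    String getHostname();
}

class ServiceInfoDelegate implements ServiceInfo {

    private final ServiceInfo delegate;

    ServiceInfoDelegate(final ServiceInfo delegate) {
        if (delegate == null)
            throw new IllegalArgumentException();
        this.delegate = delegate;
    }

    // Each method forwards to the wrapped instance, which is all the new
    // getServiceUUID()/getHostname()/destroy() overrides above do as well.
    @Override
    public UUID getServiceUUID() { return delegate.getServiceUUID(); }

    @Override
    public String getHostname() { return delegate.getHostname(); }
}

The benefit of the pattern is that the delegate class keeps compiling (and forwarding) as the delegated interface grows, without carrying any behavior of its own.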
From: <tho...@us...> - 2012-09-07 19:58:33
|
Revision: 6550 http://bigdata.svn.sourceforge.net/bigdata/?rev=6550&view=rev Author: thompsonbry Date: 2012-09-07 19:58:26 +0000 (Fri, 07 Sep 2012) Log Message: ----------- - Modified the HAJournal.config files to use the recycler rather than session protection. - Added features to the ServiceDescription to indicate scale-out and high availability. - Modified the NSS to refuse UPDATE requests if not the leader. It will respond with METHOD_NOT_ALLOWED and indicate that the service is not the quorum leader. - The leader does not change (or withdraw) its vote when the quorum breaks. This results in the lack of a consensus when a follower is restarted since the leader is still voting the commit time around which there was last a consensus. - One problem was that the service was attempting a 2-phase abort rather than a local abort. The fix was to use a local abort (_abort()). The exception from the 2-phase abort was interfering with the AbstractQuorum and causing the protocol to break. I have therefore modified the AbstractQuorum methods that invoke client methods to wrap up thrown exceptions and log them. This should help to preserve the correct execution of the quorum protocol. However, the service still does not attempt to rejoin if the quorum breaks when a service stops and is then restarted. Instead, we wind up with the service that was not stopped still in MEMBERS but not in PIPELINE and without a cast VOTE. I am continuing to look into this. It appears that a service leave is being forced on the non-stopped service when it should only be casting a new vote. https://sourceforge.net/apps/trac/bigdata/ticket/530 (Journal HA) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/QuorumActor.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataServlet.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/MultiTenancyServlet.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/SD.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-07 18:03:03 UTC (rev 6549) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-07 19:58:26 UTC (rev 6550) @@ -4642,8 +4642,10 @@ * * TODO QUORUM TX INTEGRATION */ - abort(); + // local abort (no 
quorum, so we can do 2-phase abort). + _abort(); + } else if (didMeet) { quorumToken = newValue; Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java 2012-09-07 18:03:03 UTC (rev 6549) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java 2012-09-07 19:58:26 UTC (rev 6550) @@ -51,6 +51,7 @@ import com.bigdata.ha.HAGlue; import com.bigdata.ha.HAPipelineGlue; +import com.bigdata.util.InnerCause; import com.bigdata.util.concurrent.DaemonThreadFactory; /** @@ -449,8 +450,8 @@ * * Note: We can not do this until everything is in place, but we * also must do this before the watcher setups up for discovery - * since that could will sending messages to the client as it - * discovers the distributed quorum state. + * since it will be sending messages to the client as it discovers + * the distributed quorum state. */ client.start(this); /* @@ -495,7 +496,11 @@ /* * Let the service know that it is no longer running w/ the quorum. */ - client.terminate(); + try { + client.terminate(); + } catch (Throwable t) { + launderThrowable(t); + } watcher.terminate(); if (watcherActionService != null) { watcherActionService.shutdown(); @@ -2032,7 +2037,11 @@ * send it a synchronous message so it can handle * that add event. */ - client.memberAdd(); + try { + client.memberAdd(); + } catch (Throwable t) { + launderThrowable(t); + } } } // queue client event. @@ -2071,7 +2080,11 @@ * we send it a synchronous message so it can handle * that add event. */ - client.memberRemove(); + try { + client.memberRemove(); + } catch (Throwable t) { + launderThrowable(t); + } } } // queue client event. @@ -2117,7 +2130,11 @@ * message so it can handle that add event, e.g., by * setting itself up to receive data. */ - client.pipelineAdd(); + try { + client.pipelineAdd(); + } catch (Throwable t) { + launderThrowable(t); + } } if (lastId != null && clientId.equals(lastId)) { /* @@ -2127,8 +2144,12 @@ * this event by configuring itself to send data to * that service. */ - client.pipelineChange(null/* oldDownStream */, - serviceId/* newDownStream */); + try { + client.pipelineChange(null/* oldDownStream */, + serviceId/* newDownStream */); + } catch (Throwable t) { + launderThrowable(t); + } } } // queue client event. @@ -2175,7 +2196,11 @@ * event, e.g., by tearing down its service which is * receiving writes from the pipeline. */ - client.pipelineRemove(); + try { + client.pipelineRemove(); + } catch (Throwable t) { + launderThrowable(t); + } } if (priorNext != null && clientId.equals(priorNext[0])) { /* @@ -2184,8 +2209,13 @@ * to handle this event by configuring itself to * send data to that service. */ - client.pipelineChange(serviceId/* oldDownStream */, - priorNext[1]/* newDownStream */); + try { + client.pipelineChange( + serviceId/* oldDownStream */, + priorNext[1]/* newDownStream */); + } catch (Throwable t) { + launderThrowable(t); + } } if (priorNext != null && priorNext[0] == null && clientId.equals(priorNext[1])) { @@ -2197,7 +2227,11 @@ * configuring itself with an HASendService rather * than an HAReceiveService. */ - client.pipelineElectedLeader(); + try { + client.pipelineElectedLeader(); + } catch (Throwable t) { + launderThrowable(t); + } } } // queue client event. @@ -2268,7 +2302,11 @@ * Tell the client that consensus has been * reached on this last commit time. 
*/ - client.consensus(lastCommitTime); + try { + client.consensus(lastCommitTime); + } catch (Throwable t) { + launderThrowable(t); + } } // queue event. sendEvent(new E(QuorumEventEnum.CONSENSUS, @@ -2372,7 +2410,11 @@ final QuorumMember<S> client = getClientAsMember(); if (client != null) { // Tell the client that the consensus was lost. - client.lostConsensus(); + try { + client.lostConsensus(); + } catch (Throwable t) { + launderThrowable(t); + } } } // found where the service had cast its vote. @@ -2435,7 +2477,11 @@ ); final QuorumMember<S> client = getClientAsMember(); if (client != null) { - client.quorumMeet(token, leaderId); + try { + client.quorumMeet(token, leaderId); + } catch (Throwable t) { + launderThrowable(t); + } } /* * Note: If we send out an event here then any code path that @@ -2565,7 +2611,11 @@ final QuorumMember<S> client = getClientAsMember(); if (client != null) { // Notify the client that the quorum broke. - client.quorumBreak(); + try { + client.quorumBreak(); + } catch(Exception t) { + launderThrowable(t); + } } sendEvent(new E(QuorumEventEnum.QUORUM_BROKE, lastValidToken, token, null/* serviceId */)); @@ -2628,7 +2678,11 @@ */ final UUID clientId = client.getServiceId(); if(serviceId.equals(clientId)) { - client.serviceJoin(); + try { + client.serviceJoin(); + } catch (Throwable t) { + launderThrowable(t); + } } } // queue event. @@ -2778,9 +2832,9 @@ * * Note: Services which are members of the quorum will see * the quorumBreak() message. They MUST handle that message - * by (a) doing an abort() which will any buffered writes - * and reload their current root block; and (b) cast a vote - * for their current commit time. Once a consensus is + * by (a) doing an abort() which will discard any buffered + * writes and reload their current root block; and (b) cast + * a vote for their current commit time. Once a consensus is * reached on the current commit time, the services will * join in the vote order, a new leader will be elected, and * the quorum will meet again. @@ -2790,7 +2844,11 @@ } if (client != null) { // Notify all quorum members that a service left. - client.serviceLeave(); + try { + client.serviceLeave(); + } catch (Throwable t) { + launderThrowable(t); + } } sendEvent(new E(QuorumEventEnum.SERVICE_LEAVE, lastValidToken, token, serviceId)); @@ -2979,4 +3037,29 @@ } + /** + * Launder something thrown by the {@link QuorumClient}. + * + * @param t + * The throwable. + */ + private void launderThrowable(final Throwable t) { + + if (InnerCause.isInnerCause(t, InterruptedException.class)) { + + // Propagate the interrupt. + Thread.currentThread().interrupt(); + + return; + } + + /* + * Log and ignore. We do not want bugs in the QuorumClient to interfere + * with the Quorum protocol. + */ + + log.error(t, t); + + } + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/QuorumActor.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/QuorumActor.java 2012-09-07 18:03:03 UTC (rev 6549) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/QuorumActor.java 2012-09-07 19:58:26 UTC (rev 6550) @@ -132,11 +132,14 @@ * Cast a vote on the behalf of the associated service. If the service has * already voted for some other lastCommitTime, then that vote is withdrawn * before the new vote is cast. Services do not withdraw their cast votes - * until a quorum breaks and a new consensus needs to be established. 
When a - * service needs to synchronize, it will have initially votes its current - * lastCommitTime. Once the service is receiving writes from the write - * pipeline and has synchronized any historical delta, it will update its - * vote and join the quorum at the next commit point (or immediately if + * until a quorum breaks and a new consensus needs to be established. When + * it does, then need to consult their root blocks and vote their then + * current lastCommitTime. + * <p> + * When a service needs to re-synchronize with a quorum, it initially votes + * its current lastCommitTime. Once the service is receiving writes from the + * write pipeline and has synchronized any historical delta, it will update + * its vote and join the quorum at the next commit point (or immediately if * there are no outstanding writes against the quorum). * * @param lastCommitTime Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config 2012-09-07 18:03:03 UTC (rev 6549) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config 2012-09-07 19:58:26 UTC (rev 6550) @@ -21,6 +21,7 @@ import com.bigdata.journal.BufferMode; import com.bigdata.jini.lookup.entry.*; import com.bigdata.service.IBigdataClient; +import com.bigdata.service.AbstractTransactionService; import com.bigdata.service.jini.*; import com.bigdata.service.jini.lookup.DataServiceFilter; import com.bigdata.service.jini.master.ServicesTemplate; @@ -281,6 +282,8 @@ new NV(IndexMetadata.Options.BTREE_BRANCHING_FACTOR,"128"), + new NV(AbstractTransactionService.Options.MIN_RELEASE_AGE,"1"), + }, bigdata.kb); } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config 2012-09-07 18:03:03 UTC (rev 6549) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config 2012-09-07 19:58:26 UTC (rev 6550) @@ -21,6 +21,7 @@ import com.bigdata.journal.BufferMode; import com.bigdata.jini.lookup.entry.*; import com.bigdata.service.IBigdataClient; +import com.bigdata.service.AbstractTransactionService; import com.bigdata.service.jini.*; import com.bigdata.service.jini.lookup.DataServiceFilter; import com.bigdata.service.jini.master.ServicesTemplate; @@ -281,6 +282,8 @@ new NV(IndexMetadata.Options.BTREE_BRANCHING_FACTOR,"128"), + new NV(AbstractTransactionService.Options.MIN_RELEASE_AGE,"1"), + }, bigdata.kb); } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config 2012-09-07 18:03:03 UTC (rev 6549) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config 2012-09-07 19:58:26 UTC (rev 6550) @@ -21,6 +21,7 @@ import com.bigdata.journal.BufferMode; import com.bigdata.jini.lookup.entry.*; import com.bigdata.service.IBigdataClient; +import com.bigdata.service.AbstractTransactionService; import com.bigdata.service.jini.*; import com.bigdata.service.jini.lookup.DataServiceFilter; 
import com.bigdata.service.jini.master.ServicesTemplate; @@ -282,6 +283,8 @@ new NV(IndexMetadata.Options.BTREE_BRANCHING_FACTOR,"128"), + new NV(AbstractTransactionService.Options.MIN_RELEASE_AGE,"1"), + }, bigdata.kb); } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-07 18:03:03 UTC (rev 6549) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-07 19:58:26 UTC (rev 6550) @@ -420,6 +420,7 @@ quorum.start(newQuorumService(logicalServiceId, serviceUUID, haGlueService, journal)); + // TODO These methods could be moved into QuorumServiceImpl.start(Quorum) final QuorumActor<?,?> actor = quorum.getActor(); actor.memberAdd(); actor.pipelineAdd(); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataServlet.java 2012-09-07 18:03:03 UTC (rev 6549) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataServlet.java 2012-09-07 19:58:26 UTC (rev 6550) @@ -32,11 +32,16 @@ import javax.servlet.ServletContext; import javax.servlet.http.HttpServlet; +import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import org.apache.log4j.Logger; +import com.bigdata.ha.HAGlue; +import com.bigdata.ha.QuorumService; import com.bigdata.journal.IIndexManager; +import com.bigdata.journal.Journal; +import com.bigdata.quorum.Quorum; /** * Useful glue for implementing service actions, but does not directly implement @@ -127,7 +132,65 @@ return getRequiredServletContextAttribute(ATTRIBUTE_INDEX_MANAGER); } + + /** + * Return the {@link Quorum} -or- <code>null</code> if the + * {@link IIndexManager} is not participating in an HA {@link Quorum}. + */ + protected Quorum<HAGlue, QuorumService<HAGlue>> getQuorum() { + + final IIndexManager indexManager = getIndexManager(); + + if (indexManager instanceof Journal) { + + return ((Journal) indexManager).getQuorum(); + + } + + return null; + + } + /** + * If the node is not writable, then commit a response and return + * <code>false</code>. Otherwise return <code>true</code>. + * + * @param req + * @param resp + * + * @return <code>true</code> iff the node is writable. + * + * @throws IOException + */ + protected boolean isWritable(final HttpServletRequest req, + final HttpServletResponse resp) throws IOException { + + final Quorum<HAGlue, QuorumService<HAGlue>> quorum = getQuorum(); + + if(quorum == null) { + + // No quorum. + return true; + + } + + if (quorum.getClient().isLeader(quorum.token())) { + + /* + * There is a quorum. The quorum is met. This is the leader. + */ + + return true; + + } + + buildResponse(resp, HTTP_METHOD_NOT_ALLOWED, MIME_TEXT_PLAIN, + "Not quorum leader."); + + return false; + + } + // /** // * The {@link SparqlCache}. 
// */ Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java 2012-09-07 18:03:03 UTC (rev 6549) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java 2012-09-07 19:58:26 UTC (rev 6550) @@ -72,6 +72,11 @@ protected void doDelete(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { + if (!isWritable(req, resp)) { + // Service must be writable. + return; + } + final String queryStr = req.getParameter("query"); if (queryStr != null) { @@ -227,6 +232,11 @@ protected void doPost(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { + if (!isWritable(req, resp)) { + // Service must be writable. + return; + } + final String contentType = req.getContentType(); final String queryStr = req.getParameter("query"); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java 2012-09-07 18:03:03 UTC (rev 6549) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java 2012-09-07 19:58:26 UTC (rev 6550) @@ -22,6 +22,7 @@ */ package com.bigdata.rdf.sail.webapp; +import java.io.IOException; import java.net.HttpURLConnection; import java.net.URL; import java.net.URLConnection; @@ -101,18 +102,21 @@ * </p> */ @Override - protected void doPost(HttpServletRequest req, HttpServletResponse resp) { - try { - if (req.getParameter("uri") != null) { - doPostWithURIs(req, resp); - return; - } else { - doPostWithBody(req, resp); - return; - } - } catch (Exception e) { - throw new RuntimeException(e); - } + protected void doPost(HttpServletRequest req, HttpServletResponse resp) + throws IOException { + + if (!isWritable(req, resp)) { + // Service must be writable. + return; + } + + if (req.getParameter("uri") != null) { + doPostWithURIs(req, resp); + return; + } else { + doPostWithBody(req, resp); + return; + } } /** @@ -126,7 +130,7 @@ * @throws Exception */ private void doPostWithBody(final HttpServletRequest req, - final HttpServletResponse resp) throws Exception { + final HttpServletResponse resp) throws IOException { final long begin = System.currentTimeMillis(); @@ -263,7 +267,7 @@ * @throws Exception */ private void doPostWithURIs(final HttpServletRequest req, - final HttpServletResponse resp) throws Exception { + final HttpServletResponse resp) throws IOException { final long begin = System.currentTimeMillis(); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/MultiTenancyServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/MultiTenancyServlet.java 2012-09-07 18:03:03 UTC (rev 6549) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/MultiTenancyServlet.java 2012-09-07 19:58:26 UTC (rev 6550) @@ -105,6 +105,11 @@ protected void doPost(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { + if (!isWritable(req, resp)) { + // Service must be writable. 
+ return; + } + if (req.getRequestURI().endsWith("/namespace")) { doCreateNamespace(req, resp); @@ -125,6 +130,11 @@ protected void doDelete(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { + if (!isWritable(req, resp)) { + // Service must be writable. + return; + } + final String namespace = getNamespace(req); if (req.getRequestURI().endsWith("/namespace/" + namespace)) { @@ -145,6 +155,11 @@ protected void doPut(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { + if (!isWritable(req, resp)) { + // Service must be writable. + return; + } + // Pass through to the SPARQL end point REST API. m_restServlet.doPut(req, resp); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java 2012-09-07 18:03:03 UTC (rev 6549) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java 2012-09-07 19:58:26 UTC (rev 6550) @@ -259,6 +259,11 @@ private void doUpdate(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { + if (!isWritable(req, resp)) { + // Service must be writable. + return; + } + final String namespace = getNamespace(req); final long timestamp = ITx.UNISOLATED;//getTimestamp(req); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/SD.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/SD.java 2012-09-07 18:03:03 UTC (rev 6549) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/SD.java 2012-09-07 19:58:26 UTC (rev 6550) @@ -36,6 +36,8 @@ import org.openrdf.model.impl.URIImpl; import org.openrdf.model.vocabulary.RDF; +import com.bigdata.journal.IIndexManager; +import com.bigdata.journal.Journal; import com.bigdata.rdf.axioms.Axioms; import com.bigdata.rdf.axioms.NoAxioms; import com.bigdata.rdf.axioms.OwlAxioms; @@ -44,6 +46,7 @@ import com.bigdata.rdf.store.AbstractTripleStore; import com.bigdata.rdf.store.BD; import com.bigdata.rdf.vocab.decls.VoidVocabularyDecl; +import com.bigdata.service.IBigdataFederation; /** * SPARQL 1.1 Service Description vocabulary class. @@ -156,8 +159,20 @@ static public final URI IsolatableIndices = new URIImpl(BDFNS + "KB/IsolatableIndices"); - + /** + * A highly available deployment. + */ + static public final URI HighlyAvailable = new URIImpl(BDFNS + + "HighlyAvailable"); + + /** + * An {@link IBigdataFederation}. + */ + static public final URI ScaleOut = new URIImpl(BDFNS + + "ScaleOut"); + + /** * The <code>namespace</code> for this KB instance as configured by the * {@link BigdataSail.Options#NAMESPACE} property. 
*/ @@ -616,7 +631,29 @@ g.add(aService, SD.feature, IsolatableIndices); } - + + { + + final IIndexManager indexManager = tripleStore.getIndexManager(); + + if (indexManager instanceof Journal) { + + final Journal jnl = (Journal) indexManager; + + if (jnl.getQuorum() != null) { + + g.add(aService, SD.feature, HighlyAvailable); + + } + + } else if (indexManager instanceof IBigdataFederation) { + + g.add(aService, SD.feature, ScaleOut); + + } + + } + } /** Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java 2012-09-07 18:03:03 UTC (rev 6549) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java 2012-09-07 19:58:26 UTC (rev 6550) @@ -75,6 +75,11 @@ protected void doPut(HttpServletRequest req, HttpServletResponse resp) throws IOException { + if (!isWritable(req, resp)) { + // Service must be writable. + return; + } + final String queryStr = req.getParameter("query"); final String contentType = req.getContentType(); @@ -318,7 +323,12 @@ protected void doPost(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { - if (ServletFileUpload.isMultipartContent(req)) { + if (!isWritable(req, resp)) { + // Service must be writable. + return; + } + + if (ServletFileUpload.isMultipartContent(req)) { doUpdateWithBody(req, resp); This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
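For illustration (not part of the commit above): from a client's perspective, the effect of the new isWritable() guard is that a mutation request sent to a non-leader node of a met quorum is rejected with HTTP 405 (Method Not Allowed) and the plain-text body "Not quorum leader.". A minimal probe sketch follows; the endpoint URL, the use of a no-op SPARQL UPDATE as the probe, and the form-encoded update parameter are assumptions made for the sketch, not something defined by this commit.
{{{
// Illustrative sketch only. Interprets a 405 response as "this node is a
// follower"; any non-405 outcome is treated as writable for brevity.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class QuorumLeaderProbe {

    /**
     * Probe a SPARQL end point with a harmless update and report whether the
     * node accepted it (leader or non-HA) or refused it (follower).
     */
    public static boolean isWritableEndpoint(final String sparqlEndpointURL)
            throws Exception {

        // Hypothetical probe payload: a no-op SPARQL UPDATE.
        final byte[] body = ("update=" + URLEncoder.encode("INSERT DATA {}",
                "UTF-8")).getBytes("UTF-8");

        final HttpURLConnection conn = (HttpURLConnection) new URL(
                sparqlEndpointURL).openConnection();
        try {
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type",
                    "application/x-www-form-urlencoded");
            final OutputStream os = conn.getOutputStream();
            try {
                os.write(body);
            } finally {
                os.close();
            }
            // 405 => the servlet-level isWritable() guard refused the write.
            return conn.getResponseCode() != 405;
        } finally {
            conn.disconnect();
        }
    }
}
}}}
Gating the mutation servlets this way keeps followers effectively read-only at the HTTP layer, which complements the HighlyAvailable / ScaleOut features advertised through the service description.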
From: <tho...@us...> - 2012-09-09 15:40:48
|
Revision: 6552 http://bigdata.svn.sourceforge.net/bigdata/?rev=6552&view=rev Author: thompsonbry Date: 2012-09-09 15:40:41 +0000 (Sun, 09 Sep 2012) Log Message: ----------- Resolved problem where a restart of a follower after a 2-phase commit would not result in a quorum meet. The root cause was that the leader was failing to recast its vote for its (then current) last commit time following a quorum break. Some bugs in AbstractQuorum and the test suite were also identified. With this commit, you can now restart either the leader and/or the follower and the quorum will meet.
{{{
(*) Found a bug in QuorumActorBase.conditionalCastVote() where it would return immediately if the service had cast ANY vote. The code will now verify that the service has cast the desired vote before returning. If any other vote was cast, then the vote will be withdrawn and then a new vote cast.
(*) Found a bug in AbstractQuorumTestCase where it was not computing the remaining nanoseconds correctly when awaiting a condition to succeed. The same bug was present in the zk quorum test suite. I modified the zk version to invoke the fixed version on AbstractQuorumTestCase.
(*) Modified QuorumWatcherBase.clearToken() to invoke withdrawVote() rather than serviceLeave() if the service was joined. The quorum CI tests are now green.
(*) Further modified QuorumWatcherBase.clearToken() to invoke castVote(lastCommitTime) if the QuorumMember is a QuorumService and moved the call to conditionalAddPipeline() into conditionalCastVote(). You can now stop the follower and the leader will recast its vote for its then current last commit time while remaining in the pipeline (in fact, it is not leaving the pipeline at all during that transition, which is nicer than having it leave and reenter the pipeline). When the follower restarts, the quorum meets. Bingo. If you instead stop the leader and then restart it, the quorum again meets. However, the two services have now switched roles (because they are in a different pipeline order) and B has become the leader. Again, that is exactly right.
}}} Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HACommitGlue.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumCommitImpl.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumService.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumServiceBase.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/QuorumActor.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/AbstractQuorumTestCase.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestAll.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestHA3QuorumSemantics.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/test/com/bigdata/quorum/zk/AbstractZkQuorumTestCase.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/test/com/bigdata/quorum/zk/TestAll.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HACommitGlue.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HACommitGlue.java 2012-09-09 13:08:27 UTC (rev 6551) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HACommitGlue.java 2012-09-09 15:40:41 UTC (rev 6552) @@ -32,7 +32,6 @@ import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; -import com.bigdata.concurrent.TimeoutException; import com.bigdata.journal.AbstractJournal; /** Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumCommitImpl.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumCommitImpl.java 2012-09-09 13:08:27 UTC (rev 6551) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumCommitImpl.java 2012-09-09 15:40:41 UTC (rev 6552) @@ -53,7 +53,7 @@ return member.getService(serviceId); } - + /** * Cancel the requests on the remote services (RMI). This is a best effort * implementation. Any RMI related errors are trapped and ignored in order Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumService.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumService.java 2012-09-09 13:08:27 UTC (rev 6551) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumService.java 2012-09-09 15:40:41 UTC (rev 6552) @@ -58,4 +58,10 @@ public interface QuorumService<S extends HAGlue> extends QuorumMember<S>, QuorumRead<S>, QuorumCommit<S>, QuorumPipeline<S> { + /** + * Return the lastCommitTime for this service (based on its current root + * block). 
+ */ + long getLastCommitTime(); + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumServiceBase.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumServiceBase.java 2012-09-09 13:08:27 UTC (rev 6551) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/QuorumServiceBase.java 2012-09-09 15:40:41 UTC (rev 6552) @@ -117,6 +117,7 @@ } + @Override public S getService() { return service; @@ -139,6 +140,7 @@ } + @Override public Executor getExecutor() { return getLocalService().getExecutorService(); @@ -154,18 +156,21 @@ * QuorumPipeline */ + @Override public HAReceiveService<HAWriteMessage> getHAReceiveService() { return pipelineImpl.getHAReceiveService(); } + @Override public HASendService getHASendService() { return pipelineImpl.getHASendService(); } + @Override public Future<Void> receiveAndReplicate(HAWriteMessage msg) throws IOException { @@ -173,6 +178,7 @@ } + @Override public Future<Void> replicate(HAWriteMessage msg, ByteBuffer b) throws IOException { @@ -201,6 +207,7 @@ * QuorumCommit. */ + @Override public void abort2Phase(final long token) throws IOException, InterruptedException { @@ -208,6 +215,7 @@ } + @Override public void commit2Phase(final long token, final long commitTime) throws IOException, InterruptedException { @@ -215,6 +223,7 @@ } + @Override public int prepare2Phase(final boolean isRootBlock0, final IRootBlockView rootBlock, final long timeout, final TimeUnit unit) throws InterruptedException, TimeoutException, @@ -224,10 +233,20 @@ } + @Override + public long getLastCommitTime() { + + final L localService = getLocalService(); + + return localService.getLastCommitTime(); + + } + /* * QuorumRead */ + @Override public byte[] readFromQuorum(UUID storeId, long addr) throws InterruptedException, IOException { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-09 13:08:27 UTC (rev 6551) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-09 15:40:41 UTC (rev 6552) @@ -4646,6 +4646,12 @@ // local abort (no quorum, so we can do 2-phase abort). _abort(); + /* + * Note: We can not re-cast our vote until our last vote is + * widthdrawn. That is currently done by QuorumWatcherBase. So, + * we have to wait until we observe that to cast a new vote. 
+ */ + } else if (didMeet) { quorumToken = newValue; Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java 2012-09-09 13:08:27 UTC (rev 6551) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java 2012-09-09 15:40:41 UTC (rev 6552) @@ -51,6 +51,7 @@ import com.bigdata.ha.HAGlue; import com.bigdata.ha.HAPipelineGlue; +import com.bigdata.ha.QuorumService; import com.bigdata.util.InnerCause; import com.bigdata.util.concurrent.DaemonThreadFactory; @@ -1291,14 +1292,13 @@ if (!members.contains(serviceId)) throw new QuorumException(ERR_NOT_MEMBER + serviceId); /* - * FIXME This has been modified to automatically add the service - * back to the pipeline, which we need to do following a service - * leave or quorum break. However, this conflicts with the - * pre-/post- conditions declared in QuorumActor. + * Note: This has been modified to automatically add the service + * to the pipeline and the javadoc for the pre-/post- conditions + * declared in QuorumActor has been updated (9/9/2012). The + * change is inside of conditionalCastVoteImpl(). */ // if (!pipeline.contains(serviceId)) // throw new QuorumException(ERR_NOT_PIPELINE + serviceId); - conditionalPipelineAddImpl(); conditionalCastVoteImpl(lastCommitTime); } catch(InterruptedException e) { // propagate the interrupt. @@ -1513,16 +1513,24 @@ private void conditionalCastVoteImpl(final long lastCommitTime) throws InterruptedException { - final Set<UUID> tmp = votes.get(lastCommitTime); - if (tmp != null && tmp.contains(serviceId)) { - // The service has already cast this vote. - return; + final Set<UUID> votesForCommitTime = votes.get(lastCommitTime); + if (votesForCommitTime != null + && votesForCommitTime.contains(serviceId)) { + // The service has already cast *a* vote. + final Long lastCommitTime2 = getCastVote(serviceId); + if (lastCommitTime2 != null + && lastCommitTime2.longValue() == lastCommitTime) { + // The service has already cast *this* vote. + return; + } } if (log.isDebugEnabled()) log.debug("serviceId=" + serviceId + ",lastCommitTime=" + lastCommitTime); // Withdraw any existing vote by this service. conditionalWithdrawVoteImpl(); + // Ensure part of the pipeline. + conditionalPipelineAddImpl(); // Cast a vote. doCastVote(lastCommitTime); Long t = null; @@ -1539,7 +1547,9 @@ while (getCastVote(serviceId) != null) { votesChange.await(); } - } + if (log.isDebugEnabled()) + log.debug("withdrew vote: serviceId=" + serviceId + + ",lastCommitTime=" + lastCommitTime); } } private void conditionalPipelineAddImpl() throws InterruptedException { @@ -2276,17 +2286,18 @@ lock.lock(); try { // Look for a set of votes for that lastCommitTime. - LinkedHashSet<UUID> tmp = votes.get(lastCommitTime); - if (tmp == null) { + LinkedHashSet<UUID> votesForCommitTime = votes + .get(lastCommitTime); + if (votesForCommitTime == null) { // None found, so create an empty set now. - tmp = new LinkedHashSet<UUID>(); + votesForCommitTime = new LinkedHashSet<UUID>(); // And add it to the map. - votes.put(lastCommitTime, tmp); + votes.put(lastCommitTime, votesForCommitTime); } - if (tmp.add(serviceId)) { + if (votesForCommitTime.add(serviceId)) { // The service cast its vote. 
votesChange.signalAll(); - final int nvotes = tmp.size(); + final int nvotes = votesForCommitTime.size(); if (log.isInfoEnabled()) log.info("serviceId=" + serviceId.toString() + ", lastCommitTime=" + lastCommitTime @@ -2315,7 +2326,7 @@ } if (client != null) { final UUID clientId = client.getServiceId(); - final UUID[] voteOrder = tmp.toArray(new UUID[0]); + final UUID[] voteOrder = votesForCommitTime.toArray(new UUID[0]); if (nvotes == kmeet && clientId.equals(voteOrder[0])) { /* @@ -2400,13 +2411,13 @@ while (itr.hasNext()) { final Map.Entry<Long, LinkedHashSet<UUID>> entry = itr .next(); - final Set<UUID> votes = entry.getValue(); - if (votes.remove(serviceId)) { + final Set<UUID> votesForCommitTime = entry.getValue(); + if (votesForCommitTime.remove(serviceId)) { // The vote was withdrawn. votesChange.signalAll(); sendEvent(new E(QuorumEventEnum.WITHDRAW_VOTE, lastValidToken, token, serviceId)); - if (votes.size() + 1 == kmeet) { + if (votesForCommitTime.size() + 1 == kmeet) { final QuorumMember<S> client = getClientAsMember(); if (client != null) { // Tell the client that the consensus was lost. @@ -2421,7 +2432,7 @@ if (log.isInfoEnabled()) log.info("serviceId=" + serviceId + ", lastCommitTime=" + entry.getKey()); - if (votes.isEmpty()) { + if (votesForCommitTime.isEmpty()) { // remove map entry with no votes cast. itr.remove(); } @@ -2619,15 +2630,61 @@ } sendEvent(new E(QuorumEventEnum.QUORUM_BROKE, lastValidToken, token, null/* serviceId */)); - if (client != null) { - final UUID clientId = client.getServiceId(); - if(joined.contains(clientId)) { - // If our client is joined, then force serviceLeave. -// new Thread() {public void run() {actor.serviceLeave();}}.start(); - doAction(new Runnable() {public void run() {actor.serviceLeave();}}); - } +/* + * Note: Replacing this code with the logic below fixes a problem where a leader + * was failing to update its lastCommitTime after a quorum break caused by + * a follower that was halted. The quorum could not meet after the follower + * was restarted because the leader had not voted for a lastCommitTime. The + * code below addresses that explicitly as long as the QuorumMember is a + * QuorumService. + */ +// if (client != null) { +// final UUID clientId = client.getServiceId(); +// if(joined.contains(clientId)) { +// // If our client is joined, then force serviceLeave. +//// new Thread() {public void run() {actor.serviceLeave();}}.start(); +// doAction(new Runnable() {public void run() {actor.serviceLeave();}}); +// } +// } + if (client != null) { + final UUID clientId = client.getServiceId(); + if (joined.contains(clientId)) { + final QuorumMember<S> member = getMember(); + if (member instanceof QuorumService) { + /* + * Set the last commit time. + * + * Note: After a quorum break, a service MUST + * recast its vote for it's then-current + * lastCommitTime. If it fails to do this, then + * it will be impossible for a consensus to form + * around the then current lastCommitTimes for + * the services. It appears to be quite + * difficult for the service to handle this + * itself since it can not easily recognize when + * it's old vote has been widthdrawn. Therefore, + * the logic to do this has been moved into the + * QuorumWatcherBase. + */ + final long lastCommitTime = ((QuorumService<?>) member) + .getLastCommitTime(); + doAction(new Runnable() { + public void run() { + // recast our vote. + actor.castVote(lastCommitTime); + } + }); + } else { + // just withdraw the vote. 
+ doAction(new Runnable() { + public void run() { + actor.withdrawVote(); + } + }); } } + } + } } finally { lock.unlock(); } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/QuorumActor.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/QuorumActor.java 2012-09-09 13:08:27 UTC (rev 6551) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/quorum/QuorumActor.java 2012-09-09 15:40:41 UTC (rev 6552) @@ -54,7 +54,7 @@ * <dt>{@link #pipelineAdd()}</dt> * <dd>member.</dd> * <dt>{@link #castVote(long)}</dt> - * <dd>member, pipeline.</dd> + * <dd>member (service implicitly joins pipeline if not present).</dd> * <dt>{@link #serviceJoin()}</dt> * <dd>member, pipeline, consensus around cast vote, predecessor in the vote * order is joined</dd> @@ -129,12 +129,13 @@ void pipelineRemove(); /** - * Cast a vote on the behalf of the associated service. If the service has - * already voted for some other lastCommitTime, then that vote is withdrawn - * before the new vote is cast. Services do not withdraw their cast votes - * until a quorum breaks and a new consensus needs to be established. When - * it does, then need to consult their root blocks and vote their then - * current lastCommitTime. + * Cast a vote on the behalf of the associated service. If the service is + * not part of the pipeline, then it is implicitly added to the pipeline. If + * the service has already voted for some other lastCommitTime, then that + * vote is withdrawn before the new vote is cast. Services do not withdraw + * their cast votes until a quorum breaks and a new consensus needs to be + * established. When it does, then need to consult their root blocks and + * vote their then current lastCommitTime. * <p> * When a service needs to re-synchronize with a quorum, it initially votes * its current lastCommitTime. Once the service is receiving writes from the Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/AbstractQuorumTestCase.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/AbstractQuorumTestCase.java 2012-09-09 13:08:27 UTC (rev 6551) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/AbstractQuorumTestCase.java 2012-09-09 15:40:41 UTC (rev 6552) @@ -27,14 +27,18 @@ package com.bigdata.quorum; +import java.util.LinkedHashMap; +import java.util.LinkedHashSet; +import java.util.Map; +import java.util.UUID; import java.util.concurrent.TimeUnit; import junit.framework.AssertionFailedError; import junit.framework.TestCase2; import com.bigdata.quorum.MockQuorumFixture.MockQuorum; +import com.bigdata.quorum.MockQuorumFixture.MockQuorum.MockQuorumActor; import com.bigdata.quorum.MockQuorumFixture.MockQuorumMember; -import com.bigdata.quorum.MockQuorumFixture.MockQuorum.MockQuorumActor; /** * Abstract base class for testing using a {@link MockQuorumFixture}. 
@@ -174,37 +178,41 @@ static public void assertCondition(final Runnable cond, final long timeout, final TimeUnit units) { final long begin = System.nanoTime(); - long nanos = units.toNanos(timeout); - // remaining -= (now - begin) [aka elapsed] - nanos -= System.nanoTime() - begin; + final long nanos = units.toNanos(timeout); + long remaining = nanos; + // remaining = nanos - (now - begin) [aka elapsed] + remaining = nanos - (System.nanoTime() - begin); while (true) { - AssertionFailedError cause = null; try { // try the condition cond.run(); // success. return; - } catch (AssertionFailedError e) { - nanos -= System.nanoTime() - begin; - if (nanos < 0) { + } catch (final AssertionFailedError e) { + remaining = nanos - (System.nanoTime() - begin); + if (remaining < 0) { // Timeout - rethrow the failed assertion. throw e; } - cause = e; + // Sleep up to 10ms or the remaining nanos, which ever is less. + final int millis = (int) Math.min( + TimeUnit.NANOSECONDS.toMillis(remaining), 10); + if (millis > 0) { + // sleep and retry. + try { + Thread.sleep(millis); + } catch (InterruptedException e1) { + // propagate the interrupt. + Thread.currentThread().interrupt(); + return; + } + remaining = nanos - (System.nanoTime() - begin); + if (remaining < 0) { + // Timeout - rethrow the failed assertion. + throw e; + } + } } - // Sleep up to 10ms or the remaining nanos, which ever is less. - final int millis = (int) Math.min(TimeUnit.NANOSECONDS - .toMillis(nanos), 10); - if (log.isInfoEnabled()) - log.info("Will retry: millis=" + millis + ", cause=" + cause); - // sleep and retry. - try { - Thread.sleep(millis); - } catch (InterruptedException e1) { - // propagate the interrupt. - Thread.currentThread().interrupt(); - return; - } } } @@ -225,5 +233,49 @@ assertCondition(cond, 5, TimeUnit.SECONDS); } + + /** + * Helper method provides nice rendering of a votes snapshot. + * <p> + * Note: The snapshot uses a {@link UUID}[] rather than a collection for + * each <code>lastCommitTime</code> key. However, by default toString() for + * an array does not provide a nice rendering. + * + * @param votes + * The votes. + * @return The human readable representation. + */ + public static String toString(final Map<Long, UUID[]> votes) { + + // put things into a ordered Collection. toString() for the Collection is nice. + final Map<Long, LinkedHashSet<UUID>> m = new LinkedHashMap<Long, LinkedHashSet<UUID>>(); + + for(Map.Entry<Long,UUID[]> e : votes.entrySet()) { + + final Long commitTime = e.getKey(); + + final UUID[] a = e.getValue(); + + LinkedHashSet<UUID> votesForCommitTime = m.get(commitTime); + + if(votesForCommitTime == null) { + + votesForCommitTime = new LinkedHashSet<UUID>(); + + m.put(commitTime, votesForCommitTime); + + } + + for (UUID uuid : a) { + + votesForCommitTime.add(uuid); + + } + + } + + return m.toString(); + + } } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestAll.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestAll.java 2012-09-09 13:08:27 UTC (rev 6551) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestAll.java 2012-09-09 15:40:41 UTC (rev 6552) @@ -103,7 +103,11 @@ */ suite.addTestSuite(TestHA3QuorumSemantics.class); - suite.addTest(StressTestHA3.suite()); + /* + * Run the test HA3 suite a bunch of times. 
+ */ + suite.addTest(StressTestHA3.suite()); + } return suite; Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestHA3QuorumSemantics.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestHA3QuorumSemantics.java 2012-09-09 13:08:27 UTC (rev 6551) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/test/com/bigdata/quorum/TestHA3QuorumSemantics.java 2012-09-09 15:40:41 UTC (rev 6552) @@ -27,6 +27,7 @@ package com.bigdata.quorum; +import java.util.Map; import java.util.UUID; import com.bigdata.quorum.MockQuorumFixture.MockQuorumMember; @@ -388,7 +389,8 @@ /* * Should be two timestamps for which services have voted (but we can * only. check the one that enacted the change since that is the only - * one for which the update is guaranteed to be visible). + * one for which the update is guaranteed to be visible without awaiting + * the Condition). */ // assertEquals(2, quorum0.getVotes().size()); assertEquals(2, quorum1.getVotes().size()); @@ -403,8 +405,8 @@ // wait for quorums to meet (visibility guarantee). final long token1 = quorum0.awaitQuorum(); - quorum1.awaitQuorum(); - quorum2.awaitQuorum(); + assertEquals(token1, quorum1.awaitQuorum()); + assertEquals(token1, quorum2.awaitQuorum()); // The last consensus timestamp should have been updated for all quorum members. assertEquals(lastCommitTime1, client0.lastConsensusValue); @@ -478,9 +480,12 @@ assertEquals(-1L, client2.lastConsensusValue); // Should be no timestamps for which services have voted. - assertEquals(0, quorum0.getVotes().size()); - assertEquals(0, quorum1.getVotes().size()); - assertEquals(0, quorum2.getVotes().size()); + final Map<Long, UUID[]> votes0 = quorum0.getVotes(); + final Map<Long, UUID[]> votes1 = quorum1.getVotes(); + final Map<Long, UUID[]> votes2 = quorum2.getVotes(); + assertEquals(AbstractQuorumTestCase.toString(votes0), 0, votes0.size()); + assertEquals(AbstractQuorumTestCase.toString(votes1), 0, votes1.size()); + assertEquals(AbstractQuorumTestCase.toString(votes2), 0, votes2.size()); // Verify the specific services voting for each timestamp. assertEquals(null, quorum0.getVotes().get(lastCommitTime1)); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-09 13:08:27 UTC (rev 6551) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-09 15:40:41 UTC (rev 6552) @@ -363,7 +363,7 @@ @Override public void notify(final QuorumEvent e) { - System.err.println("QuorumEvent: "+e); + System.err.println("QuorumEvent: " + e);// FIXME remove logger. switch(e.getEventType()) { case CAST_VOTE: break; @@ -411,8 +411,7 @@ break; case SERVICE_LEAVE: break; - case WITHDRAW_VOTE: - break; + case WITHDRAW_VOTE: } } }); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java 2012-09-09 13:08:27 UTC (rev 6551) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java 2012-09-09 15:40:41 UTC (rev 6552) @@ -512,6 +512,9 @@ // zpath for the lastCommitTime. 
final String lastCommitTimeZPath = logicalServiceId + "/" + QUORUM + "/" + QUORUM_VOTES + "/" + lastCommitTime; + if (log.isInfoEnabled()) + log.info("lastCommitTime=" + lastCommitTime + + ", lastCommitTimeZPath=" + lastCommitTimeZPath); // get a valid zookeeper connection object. final ZooKeeper zk; try { @@ -587,6 +590,8 @@ // zpath for votes. final String votesZPath = logicalServiceId + "/" + QUORUM + "/" + QUORUM_VOTES; + if (log.isInfoEnabled()) + log.info("votesZPath=" + votesZPath); // get a valid zookeeper connection object. final ZooKeeper zk; try { @@ -1888,12 +1893,12 @@ } @Override - protected void add(UUID serviceId) { + protected void add(final UUID serviceId) { castVote(serviceId, lastCommitTime); } @Override - protected void remove(UUID serviceId) { + protected void remove(final UUID serviceId) { withdrawVote(serviceId); } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/test/com/bigdata/quorum/zk/AbstractZkQuorumTestCase.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/test/com/bigdata/quorum/zk/AbstractZkQuorumTestCase.java 2012-09-09 13:08:27 UTC (rev 6551) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/test/com/bigdata/quorum/zk/AbstractZkQuorumTestCase.java 2012-09-09 15:40:41 UTC (rev 6552) @@ -32,17 +32,12 @@ import java.util.UUID; import java.util.concurrent.TimeUnit; -import junit.framework.AssertionFailedError; - import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.ZooDefs.Ids; -import com.bigdata.quorum.AbstractQuorum; +import com.bigdata.quorum.AbstractQuorumTestCase; import com.bigdata.quorum.MockQuorumFixture; -import com.bigdata.quorum.Quorum; import com.bigdata.quorum.QuorumActor; -import com.bigdata.quorum.QuorumWatcher; -import com.bigdata.quorum.MockQuorumFixture.MockQuorum; import com.bigdata.zookeeper.AbstractZooTestCase; import com.bigdata.zookeeper.ZooKeeperAccessor; @@ -137,102 +132,17 @@ super.tearDown(); } - /** - * Wait up to a timeout until some condition succeeds. - * <p> - * Whenever more than one {@link AbstractQuorum} is under test there will be - * concurrent indeterminism concerning the precise ordering and timing as - * updates propagate from the {@link AbstractQuorum} which takes some action - * (castVote(), pipelineAdd(), etc.) to the other quorums attached to the - * same {@link MockQuorumFixture}. This uncertainty about the ordering and - * timing state changes is not dissimilar from the uncertainty we face in a - * real distributed system. - * <p> - * While there are times when this uncertainty does not affect the behavior - * of the tests, there are other times when we must have a guarantee that a - * specific vote order or pipeline order was established. For those cases, - * this method may be used to await an arbitrary condition. This method - * simply retries until the condition becomes true, sleeping a little after - * each failure. - * <p> - * Actions executed in the main thread of the unit test will directly update - * the internal state of the {@link MockQuorumFixture}, which is shared - * across the {@link MockQuorum}s. However, uncertainty about ordering can - * arise as a result of the interleaving of the actions taken by the - * {@link QuorumWatcher}s in response to both top-level actions and actions - * taken by other {@link QuorumWatcher}s. 
For example, the vote order or the - * pipeline order are fully determined based on sequence such as the - * following: - * - * <pre> - * actor0.pipelineAdd(); - * actor2.pipelineAdd(); - * actor1.pipelineAdd(); - * </pre> - * - * When in doubt, or when a unit test displays stochastic behavior, you can - * use this method to wait until the quorum state has been correctly - * replicated to the {@link Quorum}s under test. - * - * @param cond - * The condition, which must throw an - * {@link AssertionFailedError} if it does not succeed. - * @param timeout - * The timeout. - * @param unit - * - * @throws AssertionFailedError - * if the condition does not succeed within the timeout. - */ - public void assertCondition(final Runnable cond, final long timeout, + protected void assertCondition(final Runnable cond, final long timeout, final TimeUnit units) { - final long begin = System.nanoTime(); - long nanos = units.toNanos(timeout); - // remaining -= (now - begin) [aka elapsed] - nanos -= System.nanoTime() - begin; - while (true) { - try { - // try the condition - cond.run(); - // success. - return; - } catch (AssertionFailedError e) { - nanos -= System.nanoTime() - begin; - if (nanos < 0) { - // Timeout - rethrow the failed assertion. - throw e; - } - } - // sleep and retry. - try { - // sleep up to 10ms or nanos, which ever is less. - Thread - .sleep(Math.min(TimeUnit.NANOSECONDS.toMillis(nanos), - 10)); - } catch (InterruptedException e1) { - // propagate the interrupt. - Thread.currentThread().interrupt(); - return; - } - } + + AbstractQuorumTestCase.assertCondition(cond, timeout, units); + } - /** - * Waits up to 5 seconds for the condition to succeed. - * - * @param cond - * The condition, which must throw an - * {@link AssertionFailedError} if it does not succeed. - * - * @throws AssertionFailedError - * if the condition does not succeed within the timeout. - * - * @see #assertCondition(Runnable, long, TimeUnit) - */ - public void assertCondition(final Runnable cond) { + protected void assertCondition(final Runnable cond) { - assertCondition(cond, 5, TimeUnit.SECONDS); + AbstractQuorumTestCase.assertCondition(cond, 5, TimeUnit.SECONDS); } - + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/test/com/bigdata/quorum/zk/TestAll.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/test/com/bigdata/quorum/zk/TestAll.java 2012-09-09 13:08:27 UTC (rev 6551) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/test/com/bigdata/quorum/zk/TestAll.java 2012-09-09 15:40:41 UTC (rev 6552) @@ -76,8 +76,8 @@ // unit tests for a singleton quorum. suite.addTestSuite(TestZkSingletonQuorumSemantics.class); - // FIXME Enable HA test suite again: unit tests for a highly available quorum. -// suite.addTestSuite(TestZkHA3QuorumSemantics.class); + // unit tests for a highly available quorum. + suite.addTestSuite(TestZkHA3QuorumSemantics.class); return suite; This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
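Aside (not part of the commit above): the assertCondition() bug fixed here is a common timeout-loop mistake, namely subtracting the total elapsed time from an already-decremented remaining value on every retry, so the budget shrinks far faster than real time passes. A minimal standalone sketch of the corrected pattern follows; the names are chosen for the sketch and are not the test helper itself.
{{{
// Illustrative sketch only. The total budget is never modified; the remaining
// time is recomputed from the fixed start point on every pass.
import java.util.concurrent.TimeUnit;

public class RetryUntilTimeout {

    /** Retry cond until it stops throwing or the timeout expires. */
    public static void awaitCondition(final Runnable cond, final long timeout,
            final TimeUnit unit) {

        final long begin = System.nanoTime();
        final long budget = unit.toNanos(timeout); // total budget, immutable.

        while (true) {
            try {
                cond.run();
                return; // success.
            } catch (final RuntimeException e) {
                // remaining = budget - elapsed, recomputed on each pass.
                final long remaining = budget - (System.nanoTime() - begin);
                if (remaining <= 0)
                    throw e; // timeout: rethrow the last failure.
                // Sleep up to 10ms, but never longer than the remaining budget.
                final long millis = Math.min(
                        TimeUnit.NANOSECONDS.toMillis(remaining), 10L);
                if (millis > 0) {
                    try {
                        Thread.sleep(millis);
                    } catch (final InterruptedException ie) {
                        // Propagate the interrupt and give up.
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        }
    }
}
}}}
In the test suites above the same pattern is what lets a test simply wait, for example, for the leader's recast vote to become visible in quorum.getVotes() after a quorum break, rather than sleeping for a fixed interval.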
From: <tho...@us...> - 2012-09-10 18:45:52
|
Revision: 6561 http://bigdata.svn.sourceforge.net/bigdata/?rev=6561&view=rev Author: thompsonbry Date: 2012-09-10 18:45:44 +0000 (Mon, 10 Sep 2012) Log Message: ----------- This commit addresses two different issues. 1. Integration of the transaction service for a highly available journal. The followers will delegate the txs methods to the leader. The NSS APIs were also modified to provide BAD_REQUEST responses when reading on a highly available node if the quorum is not met. 2. Working with MartynC on logic to dump out and verify the deferred delete blocks for the RWStore and logic in DumpJournal to dump out the global row store and to decode arbitrary records from the store. This work is not finished (nothing is polished) and is disabled in the committed code. There are constants on DumpJournal that can be used to enable this stuff. @see https://sourceforge.net/apps/trac/bigdata/ticket/530 (HA Journal) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/btree/BytesUtil.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlue.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlueDelegate.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/htree/AbstractHTree.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/DumpJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Journal.java branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/AbstractServer.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/service/jini/AbstractServer.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataServlet.java branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/btree/BytesUtil.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/btree/BytesUtil.java 2012-09-10 16:07:16 UTC (rev 6560) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/btree/BytesUtil.java 2012-09-10 18:45:44 UTC (rev 6561) @@ -1515,4 +1515,68 @@ } + static private final char[] HEX_CHAR_TABLE = { + '0', '1','2','3', + '4','5','6','7', + '8','9','a','b', + 'c','d','e','f' + }; + + /** + * Utility to convert a byte array to a hex string. + * + * @param buf + * The data. + * + * @return The hex string. + */ + static public String toHexString(final byte[] buf) { + + return toHexString(buf, buf.length); + + } + + /** + * Utility to display byte array of maximum i bytes as hexString. + * + * @param buf + * The data. + * @param n + * The #of bytes to convert. + * + * @return The hex string. + */ + static public String toHexString(final byte[] buf, int n) { + n = n < buf.length ? 
n : buf.length; + final StringBuffer out = new StringBuffer(); + for (int i = 0; i < n; i++) { + final int v = buf[i] & 0xFF; + out.append(HEX_CHAR_TABLE[v >>> 4]); + out.append(HEX_CHAR_TABLE[v & 0xF]); + } + return out.toString(); + } + + /** + * Formats hex dta into 64 byte rows. + * + * @param sb + * Where to format the data. + * @param hexData + * The data. + */ + static public void printHexString(final StringBuilder sb, + final String hexData) { + + int rem = hexData.length(); + int curs = 0; + while (rem >= 64) { + sb.append(String.format("%8d: ", curs)); + sb.append(hexData.substring(curs, curs + 64) + "\n"); + curs += 64; + rem -= 64; + } + + } + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlue.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlue.java 2012-09-10 16:07:16 UTC (rev 6560) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlue.java 2012-09-10 18:45:44 UTC (rev 6561) @@ -6,6 +6,7 @@ import java.util.concurrent.Future; import com.bigdata.journal.AbstractJournal; +import com.bigdata.journal.ITransactionService; import com.bigdata.service.IService; /** @@ -24,7 +25,7 @@ * the standard jini smart proxy naming pattern. */ public interface HAGlue extends HAGlueBase, HAPipelineGlue, HAReadGlue, - HACommitGlue, IService { + HACommitGlue, ITransactionService, IService { /* * Administrative Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlueDelegate.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlueDelegate.java 2012-09-10 16:07:16 UTC (rev 6560) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/ha/HAGlueDelegate.java 2012-09-10 18:45:44 UTC (rev 6561) @@ -30,6 +30,7 @@ import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; +import com.bigdata.journal.ValidationError; import com.bigdata.journal.ha.HAWriteMessage; /** @@ -118,4 +119,39 @@ delegate.destroy(); } + @Override + public long nextTimestamp() throws IOException { + return delegate.nextTimestamp(); + } + + @Override + public long newTx(long timestamp) throws IOException { + return delegate.newTx(timestamp); + } + + @Override + public long commit(long tx) throws ValidationError, IOException { + return delegate.commit(tx); + } + + @Override + public void abort(long tx) throws IOException { + delegate.abort(tx); + } + + @Override + public void notifyCommit(long commitTime) throws IOException { + delegate.notifyCommit(commitTime); + } + + @Override + public long getLastCommitTime() throws IOException { + return delegate.getLastCommitTime(); + } + + @Override + public long getReleaseTime() throws IOException { + return delegate.getReleaseTime(); + } + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/htree/AbstractHTree.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/htree/AbstractHTree.java 2012-09-10 16:07:16 UTC (rev 6560) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/htree/AbstractHTree.java 2012-09-10 18:45:44 UTC (rev 6561) @@ -21,6 +21,7 @@ import com.bigdata.btree.EntryScanIterator; import com.bigdata.btree.HTreeIndexMetadata; import com.bigdata.btree.ICheckpointProtocol; +import com.bigdata.btree.IIndex; import com.bigdata.btree.IRangeQuery; import com.bigdata.btree.ISimpleTreeIndexAccess; 
import com.bigdata.btree.ITuple; @@ -953,6 +954,16 @@ } /** + * The object responsible for (de-)serializing the nodes and leaves of the + * {@link IIndex}. + */ + final public NodeSerializer getNodeSerializer() { + + return nodeSer; + + } + + /** * The root of the {@link HTree}. This is always a {@link DirectoryPage}. * <p> * The hard reference to the root node is cleared if the index is Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-10 16:07:16 UTC (rev 6560) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-09-10 18:45:44 UTC (rev 6561) @@ -4640,12 +4640,13 @@ * We do not need to discard read-only tx since the committed * state should remain valid even when a quorum is lost. * - * TODO QUORUM TX INTEGRATION + * FIXME QUORUM TX INTEGRATION (discard running read/write tx). */ // local abort (no quorum, so we can do 2-phase abort). _abort(); - +// getLocalTransactionManager(). + /* * Note: We can not re-cast our vote until our last vote is * widthdrawn. That is currently done by QuorumWatcherBase. So, @@ -4793,12 +4794,14 @@ */ private final AtomicReference<IRootBlockView> prepareRequest = new AtomicReference<IRootBlockView>(); + @Override public UUID getServiceId() { return serviceId; } + @Override public InetSocketAddress getWritePipelineAddr() { return writePipelineAddr; @@ -4825,6 +4828,7 @@ * should all agree on whether they should be writing rootBlock0 or * rootBlock1. */ + @Override public Future<Boolean> prepare2Phase(final boolean isRootBlock0, final byte[] tmp, final long timeout, final TimeUnit unit) { @@ -4842,9 +4846,13 @@ haLog.info("isRootBlock0=" + isRootBlock0 + ", rootBlock=" + rootBlock + ", timeout=" + timeout + ", unit=" + unit); - if (rootBlock.getLastCommitTime() <= getLastCommitTime()) - throw new IllegalStateException(); + if (rootBlock.getLastCommitTime() <= AbstractJournal.this + .getLastCommitTime()) { + throw new IllegalStateException(); + + } + quorum.assertQuorum(rootBlock.getQuorumToken()); prepareRequest.set(rootBlock); @@ -4934,6 +4942,7 @@ } + @Override public Future<Void> commit2Phase(final long commitTime) { if (haLog.isInfoEnabled()) @@ -5035,6 +5044,7 @@ } + @Override public Future<Void> abort2Phase(final long token) { if (haLog.isInfoEnabled()) @@ -5127,6 +5137,7 @@ } + @Override public Future<Void> receiveAndReplicate(final HAWriteMessage msg) throws IOException { @@ -5157,6 +5168,7 @@ } + @Override public byte[] getRootBlock(final UUID storeId) { // storeId is optional (used in scale-out). @@ -5176,6 +5188,7 @@ } + @Override public Future<Void> bounceZookeeperConnection() { final FutureTask<Void> ft = new FutureTaskMon<Void>(new Runnable() { public void run() { @@ -5193,6 +5206,7 @@ * <p> * This implementation does pipeline remove() followed by pipline add(). */ + @Override public Future<Void> moveToEndOfPipeline() { final FutureTask<Void> ft = new FutureTaskMon<Void>(new Runnable() { public void run() { @@ -5208,6 +5222,49 @@ } /* + * ITransactionService. + * + * Note: API is mostly implemented by Journal/HAJournal. 
+ */ + + @Override + public long newTx(long timestamp) throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public long commit(long tx) throws ValidationError, IOException { + throw new UnsupportedOperationException(); + } + + @Override + public void abort(long tx) throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public void notifyCommit(long commitTime) throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public long getLastCommitTime() throws IOException { + + return AbstractJournal.this.getLastCommitTime(); + + } + + @Override + public long getReleaseTime() throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public long nextTimestamp() throws IOException { + throw new UnsupportedOperationException(); + } + + /* * IService */ Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/DumpJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/DumpJournal.java 2012-09-10 16:07:16 UTC (rev 6560) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/DumpJournal.java 2012-09-10 18:45:44 UTC (rev 6561) @@ -28,23 +28,40 @@ package com.bigdata.journal; import java.io.File; +import java.io.IOException; +import java.io.InputStream; import java.nio.ByteBuffer; import java.util.Date; import java.util.Iterator; +import java.util.LinkedList; +import java.util.List; import java.util.Map; import java.util.Properties; +import java.util.Set; import java.util.TreeMap; import org.apache.log4j.Logger; import com.bigdata.btree.AbstractBTree; +import com.bigdata.btree.BytesUtil; import com.bigdata.btree.DumpIndex; import com.bigdata.btree.ICheckpointProtocol; import com.bigdata.btree.ISimpleTreeIndexAccess; import com.bigdata.btree.ITupleIterator; +import com.bigdata.btree.IndexTypeEnum; import com.bigdata.btree.PageStats; +import com.bigdata.htree.AbstractHTree; +import com.bigdata.io.SerializerUtil; import com.bigdata.rawstore.Bytes; +import com.bigdata.relation.RelationSchema; +import com.bigdata.rwstore.IRWStrategy; +import com.bigdata.rwstore.IStore; import com.bigdata.rwstore.RWStore; +import com.bigdata.rwstore.RWStore.DeleteBlockStats; +import com.bigdata.sparse.GlobalRowStoreSchema; +import com.bigdata.sparse.ITPS; +import com.bigdata.sparse.SparseRowStore; +import com.bigdata.stream.Stream; import com.bigdata.util.ChecksumUtility; import com.bigdata.util.InnerCause; @@ -68,7 +85,23 @@ public class DumpJournal { private static final Logger log = Logger.getLogger(DumpJournal.class); + + /** + * Dump out the Global Row Store. + * + * TODO Raise as parameter, put on main(), and clean up the code. + */ + private static final boolean dumpGRS = false; + /** + * Validate the delete blocks (RWStore only). If there are double- deletes + * in the delete blocks, then log out more information about those + * addresses. + * + * TODO Raise as parameter, put on main(), and clean up the code. + */ + private static final boolean validateDeleteBlocks = false; + // public DumpJournal() { // // } @@ -91,6 +124,9 @@ * <dd>Dump the records in the indices.</dd> * </dl> */ +// FIXME feature is not finished. Must differentiate different address types. 
+// * <dt>-addr ADDR</dt> +// * <dd>Dump the record at that address on the store.</dd> public static void main(final String[] args) { if (args.length == 0) { @@ -111,6 +147,8 @@ boolean showTuples = false; + final List<Long> addrs = new LinkedList<Long>(); + for(; i<args.length; i++) { String arg = args[i]; @@ -146,6 +184,14 @@ } + else if(arg.equals("-addr")) { + + addrs.add(Long.valueOf(args[i + 1])); + + i++; + + } + else throw new RuntimeException("Unknown argument: " + arg); @@ -210,6 +256,17 @@ dumpJournal.dumpJournal(dumpHistory, dumpPages, dumpIndices, showTuples); + for(Long addr : addrs) { + + System.out.println("addr=" + addr + ", offset=" + + journal.getOffset(addr) + ", length=" + + journal.getByteCount(addr)); + + // Best effort attempt to dump the record. + System.out.println(dumpJournal.dumpRawRecord(addr)); + + } + } finally { journal.close(); @@ -336,12 +393,61 @@ final RWStore store = ((RWStrategy) strategy).getStore(); - final StringBuilder sb = new StringBuilder(); + { + final StringBuilder sb = new StringBuilder(); - store.showAllocators(sb); + store.showAllocators(sb); - System.out.println(sb); + System.out.println(sb); + } + + // Validate the logged delete blocks. + if (validateDeleteBlocks) { + + final DeleteBlockStats stats = store.checkDeleteBlocks(journal); + + System.out.println(stats.toString(store)); + + final Set<Integer> duplicateAddrs = stats + .getDuplicateAddresses(); + + if(!duplicateAddrs.isEmpty()) { + + for(int latchedAddr : duplicateAddrs) { + + final byte[] b; + try { + b = store.readFromLatchedAddress(latchedAddr); + } catch (IOException ex) { + log.error("Could not read: latchedAddr=" + + latchedAddr, ex); + continue; + } + final ByteBuffer buf = ByteBuffer.wrap(b); + + final Object obj = decodeData(buf); + + if (obj == null) { + System.err.println("Could not decode: latchedAddr=" + + latchedAddr); + final StringBuilder sb = new StringBuilder(); + BytesUtil.printHexString(sb, + BytesUtil.toHexString(b, b.length)); + System.err.println("Undecoded record:" + + sb.toString()); + } else { + System.err.println("Decoded record: latchedAddr=" + + latchedAddr + " :: class=" + + obj.getClass() + ", object=" + + obj.toString()); + } + } + + } + + } + } final CommitRecordIndex commitRecordIndex = journal @@ -350,6 +456,12 @@ System.out.println("There are " + commitRecordIndex.getEntryCount() + " commit points."); + if (dumpGRS) { + + dumpGlobalRowStore(); + + } + if (dumpHistory) { System.out.println("Historical commit points follow in temporal sequence (first to last):"); @@ -421,6 +533,43 @@ } + public void dumpGlobalRowStore() { + + final SparseRowStore grs = journal.getGlobalRowStore(journal + .getLastCommitTime()); + + { + final Iterator<? extends ITPS> itr = grs + .rangeIterator(GlobalRowStoreSchema.INSTANCE); + + while(itr.hasNext()) { + + final ITPS tps = itr.next(); + + System.out.println(tps.toString()); + + } + + } + + // The schema for "relations". + { + + final Iterator<? extends ITPS> itr = grs + .rangeIterator(RelationSchema.INSTANCE); + + while(itr.hasNext()) { + + final ITPS tps = itr.next(); + + System.out.println(tps.toString()); + + } + + } + + } + /** * Dump metadata about each named index as of the specified commit record. * @@ -515,7 +664,7 @@ } - } + } // while(itr) (next index) if (pageStats != null) { @@ -553,10 +702,234 @@ System.out.println(stats.getDataRow()); - } + } } } // dumpNamedIndicesMetadata + /** + * Utility method dumps the data associated with an address on the backing + * store. A variety of methods are attempted. 
+ * + * @param addr + * The address. + * + * @return + */ + public String dumpRawRecord(final long addr) { + + if (journal.getBufferStrategy() instanceof IRWStrategy) { + /** + * TODO When we address this issue, do this test for all stores. + * + * @see <a + * href="https://sourceforge.net/apps/trac/bigdata/ticket/555"> + * Support PSOutputStream/InputStream at IRawStore </a> + */ + final IStore store = ((IRWStrategy) journal.getBufferStrategy()) + .getStore(); + try { + final InputStream is = store.getInputStream(addr); + try { + // TODO Could dump the stream. + } finally { + try { + is.close(); + } catch (IOException e) { + // Ignore. + } + } + return "Address is stream: addr=" + addr; + } catch (RuntimeException ex) { + // ignore. + } + } + + final ByteBuffer buf; + try { + + buf = journal.read(addr); + + } catch (Throwable t) { + + final String msg = "Could not read: addr=" + addr + ", ex=" + t; + + log.error(msg, t); + + return msg; + } + + if (buf == null) + throw new IllegalArgumentException("Nothing at that address"); + + final Object obj = decodeData(buf); + + if (obj == null) { + + return "Could not decode: addr=" + addr; + + } else { + + return obj.toString(); + + } + + } + + /** + * Attempt to decode data read from some address using a variety of + * mechanisms. + * + * @param b + * The data. + * + * @return The decoded object -or- <code>null</code> if the object could not + * be decoded. + */ + private Object decodeData(final ByteBuffer buf) { + + if(buf == null) + throw new IllegalArgumentException(); + + /* + * Note: Always use buf.duplicate() to avoid a side-effect on the + * ByteBuffer that we are trying to decode! + */ + + try { + /** + * Note: This handles a lot of cases, including: + * + * Checkpoint, IndexMetadata + */ + return SerializerUtil.deserialize(buf.duplicate()); + } catch (RuntimeException ex) { + // fall through + } + + /* + * TODO Root blocks and what else? + */ + + /* + * Try to decode an index node/leaf. + */ + { + final long commitTime = journal.getLastCommitTime(); + + final Iterator<String> nitr = journal.indexNameScan( + null/* prefix */, commitTime); + + while (nitr.hasNext()) { + + // a registered index. + final String name = nitr.next(); + + final ICheckpointProtocol ndx = journal.getIndexLocal(name, + commitTime); + + final IndexTypeEnum indexType = ndx.getCheckpoint() + .getIndexType(); + + switch (indexType) { + case BTree: { + + final AbstractBTree btree = (AbstractBTree) ndx; + + final com.bigdata.btree.NodeSerializer nodeSer = btree + .getNodeSerializer(); + + try { + + final com.bigdata.btree.data.IAbstractNodeData nodeOrLeaf = nodeSer + .decode(buf.duplicate()); + + log.warn("Record decoded from index=" + name); + + return nodeOrLeaf; + + } catch (Throwable t) { + // ignore. + continue; + } + } + case HTree: { + + final AbstractHTree htree = (AbstractHTree)ndx; + + final com.bigdata.htree.NodeSerializer nodeSer = htree.getNodeSerializer(); + + try { + + final com.bigdata.btree.data.IAbstractNodeData nodeOrLeaf = nodeSer + .decode(buf.duplicate()); + + log.warn("Record decoded from index=" + name); + + return nodeOrLeaf; + + } catch (Throwable t) { + // Ignore. + continue; + } + } + case Stream: + final Stream stream = (Stream) ndx; + /* + * Note: We can't do anything here with a Stream, but we do + * try to read on the address as a stream in the caller. + */ + continue; + default: + throw new UnsupportedOperationException( + "Unknown indexType=" + indexType); + } + + } + + } + + // Could not decode. 
+ return null; + + } + + /** + * Return the data in the buffer. + */ + public static byte[] getBytes(ByteBuffer buf) { + + if (buf.hasArray() && buf.arrayOffset() == 0 && buf.position() == 0 + && buf.limit() == buf.capacity()) { + + /* + * Return the backing array. + */ + + return buf.array(); + + } + + /* + * Copy the expected data into a byte[] using a read-only view on the + * buffer so that we do not mess with its position, mark, or limit. + */ + final byte[] a; + { + + buf = buf.asReadOnlyBuffer(); + + final int len = buf.remaining(); + + a = new byte[len]; + + buf.get(a); + + } + + return a; + + } + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Journal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Journal.java 2012-09-10 16:07:16 UTC (rev 6560) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Journal.java 2012-09-10 18:45:44 UTC (rev 6561) @@ -337,6 +337,7 @@ */ final private ConcurrentHashMap<Long, IRawTx> m_rawTxs = new ConcurrentHashMap<Long, IRawTx>(); + // Note: This is the implicit constructor call. { final long lastCommitTime = Journal.this.getLastCommitTime(); @@ -353,7 +354,316 @@ } } + + /* + * HA Quorum Overrides. + * + * Note: The basic pattern is that the quorum must be met, a leader + * executes the operation directly, and a follower delegates the + * operation to the leader. This centralizes the decisions about the + * open transactions, and the read locks responsible for pinning + * commit points, on the leader. + * + * If the journal is not highly available, then the request is + * passed to the base class (JournalTransactionService, which + * extends AbstractTransactionService). + */ + + @Override + public long newTx(final long timestamp) { + + final Quorum<HAGlue, QuorumService<HAGlue>> quorum = getQuorum(); + + if (quorum == null) { + + // Not HA. + return super.newTx(timestamp); + + } + + final long token = getQuorumToken(); + + if (quorum.getMember().isLeader(token)) { + + // HA and this is the leader. + return super.newTx(timestamp); + + } + + /* + * The transaction needs to be allocated by the leader. + * + * Note: Heavy concurrent query on a HAJournal will pin history + * on the leader. However, the lastReleaseTime will advance + * since clients will tend to read against the then current + * lastCommitTime, so we will still recycle the older commit + * points once there is no longer an active reader for those + * commit points. + */ + + final HAGlue leaderService = quorum.getMember() + .getLeader(token); + + final long tx; + try { + + // delegate to the quorum leader. + tx = leaderService.newTx(timestamp); + + } catch (IOException e) { + + throw new RuntimeException(e); + + } + + // Make sure the quorum is still valid. + quorum.assertQuorum(token); + + return tx; + + } + + @Override + public long commit(final long tx) { + + final Quorum<HAGlue, QuorumService<HAGlue>> quorum = getQuorum(); + + if (quorum == null) { + + // Not HA. + return super.commit(tx); + + } + + final long token = getQuorumToken(); + + if (quorum.getMember().isLeader(token)) { + + // HA and this is the leader. + return super.commit(tx); + + } + + /* + * Delegate to the quorum leader. + */ + + final HAGlue leaderService = quorum.getMember() + .getLeader(token); + + final long commitTime; + try { + + // delegate to the quorum leader. 
+ commitTime = leaderService.commit(tx); + + } catch (IOException e) { + + throw new RuntimeException(e); + + } + + // Make sure the quorum is still valid. + quorum.assertQuorum(token); + + return commitTime; + + } + @Override + public void abort(final long tx) { + + final Quorum<HAGlue, QuorumService<HAGlue>> quorum = getQuorum(); + + if (quorum == null) { + + // Not HA. + super.abort(tx); + + return; + + } + + final long token = getQuorumToken(); + + if (quorum.getMember().isLeader(token)) { + + // HA and this is the leader. + super.abort(tx); + + return; + + } + + /* + * Delegate to the quorum leader. + */ + + final HAGlue leaderService = quorum.getMember() + .getLeader(token); + + try { + + // delegate to the quorum leader. + leaderService.abort(tx); + + } catch (IOException e) { + + throw new RuntimeException(e); + + } + + // Make sure the quorum is still valid. + quorum.assertQuorum(token); + + return; + + } + + @Override + public void notifyCommit(final long commitTime) { + + final Quorum<HAGlue, QuorumService<HAGlue>> quorum = getQuorum(); + + if (quorum == null) { + + // Not HA. + super.notifyCommit(commitTime); + + return; + + } + + final long token = getQuorumToken(); + + if (quorum.getMember().isLeader(token)) { + + // HA and this is the leader. + super.notifyCommit(commitTime); + + return; + + } + + /* + * Delegate to the quorum leader. + */ + + final HAGlue leaderService = quorum.getMember() + .getLeader(token); + + try { + + // delegate to the quorum leader. + leaderService.notifyCommit(commitTime); + + } catch (IOException e) { + + throw new RuntimeException(e); + + } + + // Make sure the quorum is still valid. + quorum.assertQuorum(token); + + return; + + } + + @Override + public long getReleaseTime() { + + final Quorum<HAGlue, QuorumService<HAGlue>> quorum = getQuorum(); + + if (quorum == null) { + + // Not HA. + return super.getReleaseTime(); + + } + + final long token = getQuorumToken(); + + if (quorum.getMember().isLeader(token)) { + + // HA and this is the leader. + return super.getReleaseTime(); + + } + + /* + * Delegate to the quorum leader. + */ + + final HAGlue leaderService = quorum.getMember() + .getLeader(token); + + final long releaseTime; + try { + + // delegate to the quorum leader. + releaseTime = leaderService.getReleaseTime(); + + } catch (IOException e) { + + throw new RuntimeException(e); + + } + + // Make sure the quorum is still valid. + quorum.assertQuorum(token); + + return releaseTime; + + } + + @Override + public long nextTimestamp(){ + + final Quorum<HAGlue, QuorumService<HAGlue>> quorum = getQuorum(); + + if (quorum == null) { + + // Not HA. + return super.nextTimestamp(); + + } + + final long token = getQuorumToken(); + + if (quorum.getMember().isLeader(token)) { + + // HA and this is the leader. + return super.nextTimestamp(); + + } + + /* + * Delegate to the quorum leader. + */ + + final HAGlue leaderService = quorum.getMember() + .getLeader(token); + + final long nextTimestamp; + try { + + // delegate to the quorum leader. + nextTimestamp = leaderService.nextTimestamp(); + + } catch (IOException e) { + + throw new RuntimeException(e); + + } + + // Make sure the quorum is still valid. 
+ quorum.assertQuorum(token); + + return nextTimestamp; + + } + protected void activateTx(final TxState state) { if (txLog.isInfoEnabled()) txLog.info("OPEN : txId=" + state.tx Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-09-10 16:07:16 UTC (rev 6560) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-09-10 18:45:44 UTC (rev 6561) @@ -38,7 +38,9 @@ import java.util.Collections; import java.util.HashMap; import java.util.Iterator; +import java.util.LinkedHashSet; import java.util.Map; +import java.util.Set; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicReference; @@ -49,6 +51,7 @@ import org.apache.log4j.Logger; import com.bigdata.btree.BTree.Counter; +import com.bigdata.btree.BytesUtil; import com.bigdata.btree.IIndex; import com.bigdata.btree.ITuple; import com.bigdata.btree.ITupleIterator; @@ -1644,23 +1647,24 @@ } } - static private final char[] HEX_CHAR_TABLE = { - '0', '1','2','3', - '4','5','6','7', - '8','9','a','b', - 'c','d','e','f' - }; +// static private final char[] HEX_CHAR_TABLE = { +// '0', '1','2','3', +// '4','5','6','7', +// '8','9','a','b', +// 'c','d','e','f' +// }; // utility to display byte array of maximum i bytes as hexString static private String toHexString(final byte[] buf, int n) { - n = n < buf.length ? n : buf.length; - final StringBuffer out = new StringBuffer(); - for (int i = 0; i < n; i++) { - final int v = buf[i] & 0xFF; - out.append(HEX_CHAR_TABLE[v >>> 4]); - out.append(HEX_CHAR_TABLE[v &0xF]); - } - return out.toString(); +// n = n < buf.length ? n : buf.length; +// final StringBuffer out = new StringBuffer(); +// for (int i = 0; i < n; i++) { +// final int v = buf[i] & 0xFF; +// out.append(HEX_CHAR_TABLE[v >>> 4]); +// out.append(HEX_CHAR_TABLE[v &0xF]); +// } +// return out.toString(); + return BytesUtil.toHexString(buf, n); } public void free(final long laddr, final int sze) { @@ -5170,44 +5174,82 @@ * checkDeleteBlocks, called from DumpJournal. */ public static class DeleteBlockStats { - int m_commitRecords = 0;; - int m_addresses = 0; - int m_blobs = 0; - int m_badAddresses = 0; - HashMap<Integer, Integer> m_freed = new HashMap<Integer, Integer>(); - ArrayList<Long> m_duplicates = new ArrayList<Long>(); - ArrayList<String> m_dupData = new ArrayList<String>(); + private int m_commitRecords = 0;; + private int m_addresses = 0; + private int m_blobs = 0; + private int m_badAddresses = 0; + private final HashMap<Integer, Integer> m_freed = new HashMap<Integer, Integer>(); + /** + * The latched address of each address that appears more than once + * across the delete blocks. + */ + private final Set<Integer> m_duplicates = new LinkedHashSet<Integer>(); +// /** +// * The hexstring version of the data associated with the addresses that +// * are present more than once in the delete blocks. +// */ +// private final ArrayList<String> m_dupData = new ArrayList<String>(); + /** + * The #of commit records that would be processed. + */ public int getCommitRecords() { return m_commitRecords; - } - - public int getddresses() { - return m_addresses; - } - + } + + /** + * Return the #of addresses in the delete blocks acrosss the commit + * records. 
+ */ + public int getAddresses() { + return m_addresses; + } + + /** + * Return the #of addresses that are not committed data across the + * commit records. + */ public int getBadAddresses() { return m_badAddresses; } + + /** + * Return the latched addresses that appear more than once in the delete + * blocks across the commit records. + */ + public Set<Integer> getDuplicateAddresses() { + return m_duplicates; + } - public String toString() { + public String toString(final RWStore store) { final StringBuilder sb = new StringBuilder(); sb.append("CommitRecords: " + m_commitRecords + ", Addresses: " + m_addresses + ", Blobs: " + m_blobs + ", bad: " + + m_badAddresses); - if (m_duplicates.size() > 0) { - for (int i = 0; i < m_duplicates.size(); i++) { - sb.append("\nDuplicate: " + m_duplicates.get(i) + "\n"); - final String hexData = m_dupData.get(i); - int rem = hexData.length(); - int curs = 0; - while (rem >= 64) { - sb.append(String.format("%8d: ", curs)); - sb.append(hexData.substring(curs, curs + 64) + "\n"); - curs += 64; - rem -= 64; - } - } - } + if (!m_duplicates.isEmpty()) { + for (int latchedAddr : m_duplicates) { +// final int latchedAddr = m_duplicates.get(i); + sb.append("\nDuplicate: latchedAddr=" + latchedAddr + "\n"); + /* + * Note: Now dumped by DumpJournal. + */ +// final byte[] data; +// try { +// data = store.readFromLatchedAddress(latchedAddr); +// } catch (IOException ex) { +// final String msg = "Could not read data: addr=" +// + latchedAddr; +// log.error(msg, ex); +// sb.append(msg); +// continue; +// } +// +// final String hexStr = BytesUtil.toHexString(data, +// data.length); +// +// BytesUtil.printHexString(sb, hexStr); + + } + } return sb.toString(); } @@ -5217,18 +5259,23 @@ * Utility to check the deleteBlocks associated with each active CommitRecord */ public DeleteBlockStats checkDeleteBlocks(final AbstractJournal journal) { - DeleteBlockStats stats = new DeleteBlockStats(); + final DeleteBlockStats stats = new DeleteBlockStats(); + /* * Commit can be called prior to Journal initialisation, in which case * the commitRecordIndex will not be set. */ final IIndex commitRecordIndex = journal.getReadOnlyCommitRecordIndex(); - if (commitRecordIndex == null) { // TODO Why is this here? - return stats; + + if (commitRecordIndex == null) { + + return stats; + } - final ITupleIterator<CommitRecordIndex.Entry> commitRecords = commitRecordIndex + @SuppressWarnings("unchecked") + final ITupleIterator<CommitRecordIndex.Entry> commitRecords = commitRecordIndex .rangeIterator(); while (commitRecords.hasNext()) { @@ -5265,25 +5312,42 @@ return stats; } + /** + * Utility method to verify the deferred delete blocks. + * + * @param blockAddr + * The address of a deferred delete block. + * @param commitTime + * The commitTime associated with the {@link ICommitRecord}. + * @param stats + * Where to collect statistics. + */ private void checkDeferrals(final long blockAddr, - final long lastReleaseTime, final DeleteBlockStats stats) { + final long commitTime, final DeleteBlockStats stats) { + + /** + * Debug flag. When true, writes all frees onto stderr so they can be + * read into a worksheet for analysis. 
+ */ + final boolean writeAll = false; + final int addr = (int) (blockAddr >> 32); final int sze = (int) blockAddr & 0xFFFFFF; if (log.isTraceEnabled()) log.trace("freeDeferrals at " + physicalAddress(addr) + ", size: " - + sze + " releaseTime: " + lastReleaseTime); + + sze + " releaseTime: " + commitTime); final byte[] buf = new byte[sze + 4]; // allow for checksum getData(addr, buf); final DataInputStream strBuf = new DataInputStream( new ByteArrayInputStream(buf)); m_allocationLock.lock(); - int totalFreed = 0; +// int totalFreed = 0; try { int nxtAddr = strBuf.readInt(); - int cnt = 0; +// int cnt = 0; while (nxtAddr != 0) { // while (false && addrs-- > 0) { @@ -5300,32 +5364,65 @@ stats.m_badAddresses++; } - if (stats.m_freed.containsKey(nxtAddr)) { - final FixedAllocator alloc = getBlockByAddress(nxtAddr); - final byte[] data = new byte[alloc.m_size]; - - final ByteBuffer bb = ByteBuffer.wrap(data); - final int offset = getOffset(nxtAddr); - final long paddr = alloc.getPhysicalAddress(offset); - FileChannelUtility.readAll(m_reopener, bb, paddr); - - stats.m_dupData.add(toHexString(data, data.length)); - stats.m_duplicates.add(paddr); - } else { - stats.m_freed.put(nxtAddr, nxtAddr); + if (stats.m_freed.containsKey(nxtAddr)) { + stats.m_duplicates.add(nxtAddr); + if (writeAll) { + System.err.println("" + commitTime + " " + nxtAddr + + " FREE DUP"); + } + } else { + stats.m_freed.put(nxtAddr, nxtAddr); + if (writeAll) { + System.err.println("" + commitTime + " " + nxtAddr + + " FREE"); + } } nxtAddr = strBuf.readInt(); } // now check delete block assert isCommitted(addr); - } catch (IOException e) { - throw new RuntimeException("Problem freeing deferrals", e); - } finally { - m_allocationLock.unlock(); - } - } + } catch (IOException e) { + throw new RuntimeException("Problem checking deferrals: " + e, e); + } finally { + m_allocationLock.unlock(); + } + } + /** + * A low level utility method that reads directly from the backing + * {@link FileChannel}. + * <p> + * Note: The latched address does not encode the actual length of the data. + * Therefore, all data in the slot addressed by the latched address will be + * returned. + * + * @param nxtAddr + * The latched address. + * + * @return The byte[] in the addressed slot. + * + * @throws IOException + */ + public final byte[] readFromLatchedAddress(final int nxtAddr) + throws IOException { + + final FixedAllocator alloc = getBlockByAddress(nxtAddr); + + final byte[] data = new byte[alloc.m_size]; + + final ByteBuffer bb = ByteBuffer.wrap(data); + + final int offset = getOffset(nxtAddr); + + final long paddr = alloc.getPhysicalAddress(offset); + + FileChannelUtility.readAll(m_reopener, bb, paddr); + + return data; + + } + // /** // * Only blacklist the addr if not already available, in other words // * a blacklisted address only makes sense if it for previously Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java 2012-09-10 16:07:16 UTC (rev 6560) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java 2012-09-10 18:45:44 UTC (rev 6561) @@ -180,7 +180,7 @@ /** * @param fed * @param listener - * @param logicalServiceZPath + * @param zpath * This zpath of the logicalService instance. 
* @param attributes * This provides the information required to restart a persistent Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/AbstractServer.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/AbstractServer.java 2012-09-10 16:07:16 UTC (rev 6560) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/AbstractServer.java 2012-09-10 18:45:44 UTC (rev 6561) @@ -76,6 +76,7 @@ import com.bigdata.Banner; import com.bigdata.counters.AbstractStatisticsCollector; import com.bigdata.counters.PIDUtil; +import com.bigdata.ha.HAGlue; import com.bigdata.jini.lookup.entry.Hostname; import com.bigdata.jini.lookup.entry.ServiceUUID; import com.bigdata.jini.util.JiniUtil; @@ -122,11 +123,13 @@ * Services may be <em>destroyed</em> using {@link DestroyAdmin}, e.g., through * the Jini service browser. Note that all persistent data associated with that * service is also destroyed! - * <p> - * Note: This class was cloned from the com.bigdata.service.jini package. + * + * TODO This class was cloned from the com.bigdata.service.jini package. * Zookeeper support was stripped out and the class was made to align with a * write replication pipeline for {@link HAJournal} rather than with a - * federation of bigdata services. + * federation of bigdata services. However, {@link HAGlue} now extends + * {@link IService} and we are using zookeeper, so maybe we can line these base + * classes up again? * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ @@ -139,11 +142,6 @@ public interface ConfigurationOptions { /** - * The pathname of the service directory as a {@link File} (required). - */ - String SERVICE_DIR = "serviceDir"; - - /** * A {@link String}[] whose values are the group(s) to be used for * discovery (no default). Note that multicast discovery is always used * if {@link LookupDiscovery#ALL_GROUPS} (a <code>null</code>) is @@ -168,6 +166,13 @@ String ENTRIES = "entries"; /** + * The pathname of the service directory as a {@link File} (required). + */ + String SERVICE_DIR = "serviceDir"; + +// String LOGICAL_SERVICE_ZPATH = "logicalServiceZPath"; + + /** * This object is used to export the service proxy. The choice here * effects the protocol that will be used for communications between the * clients and the service. The default {@link Exporter} if none is @@ -175,7 +180,7 @@ * {@link TcpServerEndpoint}. */ String EXPORTER = "exporter"; - + /** * The timeout in milliseconds to await the discovery of a service if * there is a cache miss (default {@value #DEFAULT_CACHE_MISS_TIMEOUT}). 
Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java 2012-09-10 16:07:16 UTC (rev 6560) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java 2012-09-10 18:45:44 UTC (rev 6561) @@ -39,6 +39,7 @@ import com.bigdata.ha.QuorumService; import com.bigdata.journal.BufferMode; import com.bigdata.journal.Journal; +import com.bigdata.journal.ValidationError; import com.bigdata.quorum.Quorum; import com.bigdata.quorum.zk.ZKQuorumImpl; import com.bigdata.service.proxy.ThickFuture; @@ -182,7 +183,53 @@ } + /* + * ITransactionService + * + * This interface is delegated to the Journal's local transaction + * service. This service MUST be the quorum leader. + * + * Note: If the quorum breaks, the service which was the leader will + * invalidate all open transactions. This is handled in AbstractJournal. + * + * FIXME We should really pair the quorum token with the transaction + * identifier in order to guarantee that the quorum token does not + * change (e.g., that the quorum does not break) across the scope of the + * transaction. That will require either changing the + * ITransactionService API and/or defining an HA variant of that API. + */ + @Override + public long newTx(final long timestamp) throws IOException { + + getQuorum().assertLeader(getQuorumToken()); + + // Delegate to the Journal's local transaction service. + return HAJournal.this.newTx(timestamp); + + } + + @Override + public long commit(final long tx) throws ValidationError, IOException { + + getQuorum().assertLeader(getQuorumToken()); + + // Delegate to the Journal's local transaction service. + return HAJournal.this.commit(tx); + + } + + @Override + public void abort(final long tx) throws IOException { + + getQuorum().assertLeader(getQuorumToken()); + + // Delegate to the Journal's local transaction service. + HAJournal.this.abort(tx); + + } + + @Override public Future<Void> bounceZookeeperConnection() { final FutureTask<Void> ft = new FutureTaskMon<Void>(new Runnable() { @SuppressWarnings("rawtypes") Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-10 16:07:16 UTC (rev 6560) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-10 18:45:44 UTC (rev 6561) @@ -140,6 +140,9 @@ * recognizing a critical set of [k] distinct services. We could also handle * this by explicitly declaring the UUIDs of those services. That tends to * be robust. + * + * This also needs to be reconciled with the federation. The federation uses + * ephemeral sequential to create the logical service identifiers. */ private String logicalServiceId; @@ -175,13 +178,20 @@ } try { + if (jettyServer != null) { + // Shut down the NanoSparqlServer. 
jettyServer.stop(); + jettyServer = null; + } + } catch (Exception e) { + log.error(e); + } super.terminate(); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java 2012-09-10 16:07:16 UTC (rev 6560) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java 2012-09-10 18:45:44 UTC (rev 6561) @@ -1210,15 +1210,9 @@ super.start(); - /* - * FIXME Resolve source of recursion and how to handle them. - */ watcherServiceRef.set(Executors .newSingleThreadExecutor(new DaemonThreadFactory(getClass() .getName()))); -// watcherServiceRef.set(Executors -// .newCachedThreadPool(new DaemonThreadFactory(getClass() -// .getName()))); try { /* Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/service/jini/AbstractServer.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/service/jini/AbstractServer.java 2012-09-10 16:07:16 UTC (rev 6560) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/service/jini/AbstractServer.java 2012-09-10 18:45:44 UTC (rev 6561) @@ -178,7 +178,9 @@ String SERVICE_DIR = "serviceDir"; /** - * @todo javadoc. + * The zpath of the logical service. The service must use + * {@link CreateMode#EPHEMERAL_SEQUENTIAL} to create a child of this + * zpath to represent itself. */ String LOGICAL_SERVICE_ZPATH = "logicalServiceZPath"; Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataServlet.java 2012-09-10 16:07:16 UTC (rev 6560) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataServlet.java 2012-09-10 18:45:44 UTC (rev 6561) @@ -191,6 +191,46 @@ } + /** + * If the node is not writable, then commit a response and return + * <code>false</code>. Otherwise return <code>true</code>. + * + * @param req + * @param resp + * + * @return <code>true</code> iff the node is writable. + * + * @throws IOException + */ + protected boolean isReadable(final HttpServletRequest req, + final HttpServletResponse resp) throws IOException { + + final Quorum<HAGlue, QuorumService<HAGlue>> quorum = getQuorum(); + + if(quorum == null) { + + // No quorum. + return true; + + } + + if (quorum.isQuorumMet()) { + + /* + * There is a quorum. The quorum is met. + */ + + return true; + + } + + buildResponse(resp, HTTP_METHOD_NOT_ALLOWED, MIME_TEXT_PLAIN, + "Quorum is not met."); + + return false; + + } + // /** // * The {@link SparqlCache}. // */ Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java 2012-09-10 16:07:16 UTC (rev 6560) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java 2012-09-10 18:45:44 UTC (rev 6561) @@ -422,6 +422,11 @@ void doQuery(final HttpServletRequest r... [truncated message content] |
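
The newTx(), commit(), abort(), notifyCommit(), getReleaseTime() and nextTimestamp() overrides in the Journal patch above all follow the same quorum-aware shape: run locally when there is no quorum or when this service is the leader, otherwise delegate the call to the leader over RMI, wrap the declared IOException, and re-assert the quorum token afterwards. The sketch below condenses that shape for newTx() only; QuorumView, RemoteTxService and localNewTx() are simplified stand-ins for illustration, not the actual bigdata Quorum/HAGlue API.

import java.io.IOException;

/**
 * Condensed form of the leader-or-delegate pattern used by the HA overrides
 * of newTx()/commit()/abort() above. QuorumView and RemoteTxService are
 * simplified stand-ins, not the actual bigdata interfaces.
 */
public class LeaderDelegationSketch {

    /** Stand-in for the RMI interface exposed by the quorum leader. */
    interface RemoteTxService {
        long newTx(long timestamp) throws IOException;
    }

    /** Stand-in for the local quorum view. */
    interface QuorumView {
        boolean isHighlyAvailable();        // false: plain Journal, no quorum.
        long token();                       // the current quorum token.
        boolean isLeader(long token);       // is this service the leader?
        RemoteTxService leader(long token); // proxy for the leader.
        void assertQuorum(long token);      // throws if the token is no longer valid.
    }

    private final QuorumView quorum;

    public LeaderDelegationSketch(final QuorumView quorum) {
        this.quorum = quorum;
    }

    public long newTx(final long timestamp) {
        if (!quorum.isHighlyAvailable()) {
            // Not HA: allocate against the local transaction service.
            return localNewTx(timestamp);
        }
        final long token = quorum.token();
        if (quorum.isLeader(token)) {
            // HA and this service is the leader: allocate locally.
            return localNewTx(timestamp);
        }
        // Follower: the transaction must be allocated by the leader.
        final long tx;
        try {
            tx = quorum.leader(token).newTx(timestamp);
        } catch (IOException e) {
            // IOException is declared for RMI; rethrow as unchecked.
            throw new RuntimeException(e);
        }
        // Make sure the quorum is still valid before handing back the tx.
        quorum.assertQuorum(token);
        return tx;
    }

    /** Stand-in for super.newTx(timestamp) on the local transaction service. */
    private long localNewTx(final long timestamp) {
        return timestamp; // placeholder only.
    }
}

Centralizing the decision on the leader is what keeps the set of open transactions, and the read locks that pin commit points, in one place, as the comments in the patch note.
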
From: <tho...@us...> - 2012-09-11 11:54:57
|
Revision: 6564 http://bigdata.svn.sourceforge.net/bigdata/?rev=6564&view=rev Author: thompsonbry Date: 2012-09-11 11:54:46 +0000 (Tue, 11 Sep 2012) Log Message: ----------- Moved the logic to pre-incremental the RWStore (really IStore) native tx counter into the inner class in Journal that extends the JournalTransactionService. This ensures that the pre-increment pattern is followed for all class to newTx() on the Journal, including those done through the HA API or by callers that directly access the ITransactionService on the Journal. @see https://sourceforge.net/apps/trac/bigdata/ticket/530 (Journal HA) @see https://sourceforge.net/apps/trac/bigdata/ticket/440 (BTree can not be case to Name2Addr) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Journal.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Journal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Journal.java 2012-09-11 09:59:54 UTC (rev 6563) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/java/com/bigdata/journal/Journal.java 2012-09-11 11:54:46 UTC (rev 6564) @@ -65,6 +65,7 @@ import com.bigdata.counters.httpd.CounterSetHTTPD; import com.bigdata.ha.HAGlue; import com.bigdata.ha.QuorumService; +import com.bigdata.journal.jini.ha.HAJournal; import com.bigdata.quorum.Quorum; import com.bigdata.rawstore.IRawStore; import com.bigdata.relation.locator.DefaultResourceLocator; @@ -377,7 +378,7 @@ if (quorum == null) { // Not HA. - return super.newTx(timestamp); + return this._newTx(timestamp); } @@ -386,7 +387,7 @@ if (quorum.getMember().isLeader(token)) { // HA and this is the leader. - return super.newTx(timestamp); + return this._newTx(timestamp); } @@ -423,6 +424,58 @@ } + /** + * Core impl. + * <p> + * This code pre-increments the active transaction count within the + * RWStore before requesting a new transaction from the transaction + * service. This ensures that the RWStore does not falsely believe + * that there are no open transactions during the call to + * AbstractTransactionService#newTx(). + * <p> + * Note: This code was moved into the inner class extending the + * {@link JournalTransactionService} in order to ensure that we + * follow this pre-incremental pattern for an {@link HAJournal} as + * well. + * + * @see <a + * href="https://sourceforge.net/apps/trac/bigdata/ticket/440#comment:13"> + * BTree can not be case to Name2Addr </a> + * @see <a + * href="https://sourceforge.net/apps/trac/bigdata/ticket/530"> + * Journal HA </a> + */ + private final long _newTx(final long timestamp) { + + IRawTx tx = null; + try { + + if (getBufferStrategy() instanceof IRWStrategy) { + + // pre-increment the active tx count. + tx = ((IRWStrategy) getBufferStrategy()).newTx(); + } + + return super.newTx(timestamp); + + } finally { + + if (tx != null) { + + /* + * If we had pre-incremented the transaction counter in + * the RWStore, then we decrement it before leaving this + * method. + */ + + tx.close(); + + } + + } + + } + @Override public long commit(final long tx) { @@ -1341,6 +1394,9 @@ /** * Create a new transaction on the {@link Journal}. + * <p> + * Note: This is a convenience method. 
The implementation of this method is + * delegated to the object returned by {@link #getTransactionService()}. * * @param timestamp * A positive timestamp for a historical read-only transaction as @@ -1355,64 +1411,86 @@ * * @see ITransactionService#newTx(long) */ - public long newTx(final long timestamp) { + final public long newTx(final long timestamp) { - IRawTx tx = null; +// IRawTx tx = null; +// try { +// if (getBufferStrategy() instanceof IRWStrategy) { +// +// /* +// * This code pre-increments the active transaction count within +// * the RWStore before requesting a new transaction from the +// * transaction service. This ensures that the RWStore does not +// * falsely believe that there are no open transactions during +// * the call to AbstractTransactionService#newTx(). +// * +// * @see https://sourceforge.net/apps/trac/bigdata/ticket/440#comment:13 +// */ +// tx = ((IRWStrategy) getBufferStrategy()).newTx(); +// } +// try { +// +// return getTransactionService().newTx(timestamp); +// +// } catch (IOException ioe) { +// +// /* +// * Note: IOException is declared for RMI but will not be thrown +// * since the transaction service is in fact local. +// */ +// +// throw new RuntimeException(ioe); +// +// } +// +// } finally { +// +// if (tx != null) { +// +// /* +// * If we had pre-incremented the transaction counter in the +// * RWStore, then we decrement it before leaving this method. +// */ +// +// tx.close(); +// +// } +// +// } + + /* + * Note: The RWStore native tx pre-increment logic is now handled by + * _newTx() in the inner class that extends JournalTransactionService. + */ try { - if (getBufferStrategy() instanceof IRWStrategy) { - - /* - * This code pre-increments the active transaction count within - * the RWStore before requesting a new transaction from the - * transaction service. This ensures that the RWStore does not - * falsely believe that there are no open transactions during - * the call to AbstractTransactionService#newTx(). - * - * @see https://sourceforge.net/apps/trac/bigdata/ticket/440#comment:13 - */ - tx = ((IRWStrategy) getBufferStrategy()).newTx(); - } - try { - return getTransactionService().newTx(timestamp); + return getTransactionService().newTx(timestamp); - } catch (IOException ioe) { + } catch (IOException ioe) { - /* - * Note: IOException is declared for RMI but will not be thrown - * since the transaction service is in fact local. - */ + /* + * Note: IOException is declared for RMI but will not be thrown + * since the transaction service is in fact local. + */ - throw new RuntimeException(ioe); + throw new RuntimeException(ioe); - } - - } finally { - - if (tx != null) { - - /* - * If we had pre-incremented the transaction counter in the - * RWStore, then we decrement it before leaving this method. - */ - - tx.close(); - - } - } } /** * Abort a transaction. + * <p> + * Note: This is a convenience method. The implementation of this method is + * delegated to the object returned by {@link #getTransactionService()}. * * @param tx * The transaction identifier. * * @see ITransactionService#abort(long) */ - public void abort(final long tx) { + final public void abort(final long tx) { try { @@ -1439,6 +1517,9 @@ /** * Commit a transaction. + * <p> + * Note: This is a convenience method. The implementation of this method is + * delegated to the object returned by {@link #getTransactionService()}. * * @param tx * The transaction identifier. 
@@ -1447,7 +1528,7 @@ * * @see ITransactionService#commit(long) */ - public long commit(final long tx) throws ValidationError { + final public long commit(final long tx) throws ValidationError { try { @@ -1457,7 +1538,7 @@ * protocol. */ - return localTransactionManager.getTransactionService().commit(tx); + return getTransactionService().commit(tx); } catch (IOException e) { @@ -1487,12 +1568,17 @@ /** * Returns the next timestamp from the {@link ILocalTransactionManager}. + * <p> + * Note: This is a convenience method. The implementation of this method is + * delegated to the object returned by {@link #getTransactionService()}. * * @deprecated This is here for historical reasons and is only used by the - * test suite. Use {@link #getLocalTransactionManager()} and + * test suite. Use {@link #getLocalTransactionManager()} and * {@link ITransactionService#nextTimestamp()}. + * + * @see ITransactionService#nextTimestamp() */ - public long nextTimestamp() { + final public long nextTimestamp() { return localTransactionManager.nextTimestamp(); @@ -1697,13 +1783,15 @@ public ILocalTransactionManager getTransactionManager() { - return concurrencyManager.getTransactionManager(); +// return concurrencyManager.getTransactionManager(); + return localTransactionManager; } public ITransactionService getTransactionService() { - return getTransactionManager().getTransactionService(); +// return getTransactionManager().getTransactionService(); + return localTransactionManager.getTransactionService(); } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java 2012-09-11 09:59:54 UTC (rev 6563) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java 2012-09-11 11:54:46 UTC (rev 6564) @@ -70,6 +70,7 @@ * pipeline is in a globally consistent order that excludes the down node. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + * @see <a href="https://sourceforge.net/apps/trac/bigdata/ticket/530"> Journal HA </a> */ public class HAJournal extends Journal { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-11 09:59:54 UTC (rev 6563) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-11 11:54:46 UTC (rev 6564) @@ -60,8 +60,10 @@ * An administratable server for an {@link HAJournal}. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + * @see <a href="https://sourceforge.net/apps/trac/bigdata/ticket/530"> Journal + * HA </a> * - * TODO Make sure that ganglia reporting can be enabled. + * TODO Make sure that ganglia reporting can be enabled. */ public class HAJournalServer extends AbstractServer { This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
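
The _newTx() method introduced in this revision boils down to a bracketing pattern: take a temporary native read lock on the RWStore (IRWStrategy#newTx() returning an IRawTx) before asking the transaction service for a new transaction, and release it in a finally block once the call returns. The sketch below illustrates that bracket with a plain AtomicInteger standing in for the RWStore's active-transaction count; the class and method names here are illustrative only, not the actual IRWStrategy/IRawTx API.

import java.util.concurrent.atomic.AtomicInteger;

/**
 * Illustration of the pre-increment bracket in Journal._newTx() above. The
 * AtomicInteger stands in for the RWStore's native active-transaction count.
 */
public class PreIncrementSketch {

    private final AtomicInteger nativeActiveTxCount = new AtomicInteger();

    public long newTx(final long timestamp) {
        // Pre-increment BEFORE requesting a new transaction from the
        // transaction service, so the store never observes a zero count
        // while the request is in flight.
        nativeActiveTxCount.incrementAndGet();
        try {
            return allocateTxFromTransactionService(timestamp);
        } finally {
            // Balance the temporary pre-increment before leaving, mirroring
            // tx.close() in the finally block of _newTx().
            nativeActiveTxCount.decrementAndGet();
        }
    }

    /** Stand-in for AbstractTransactionService#newTx(long). */
    private long allocateTxFromTransactionService(final long timestamp) {
        // While this runs the count is > 0, so history that the new
        // transaction will read against cannot be recycled out from under it.
        return System.nanoTime();
    }
}

Because the counter is non-zero for the whole duration of the call to the transaction service, the store cannot falsely conclude mid-call that there are no open transactions, which is exactly the failure mode described in the log message for this revision.
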
From: <tho...@us...> - 2012-09-11 16:27:19
|
Revision: 6565 http://bigdata.svn.sourceforge.net/bigdata/?rev=6565&view=rev Author: thompsonbry Date: 2012-09-11 16:27:07 +0000 (Tue, 11 Sep 2012) Log Message: ----------- Added the concept of a logicalServiceId to the HAJournalServer. Unlike the federation, which creates logical service ids automatically, the logical service id for the HAJournal is configured explicitly for the HAJournalServer component. I also removed the PIPELINE entry in the HAJournalServer configuration. The quorum members now gather around the logicalServiceId. This means that it is possible to have multiple HA Journal quorums running in the same zookeeper + river environment. Modified the HAJournalServer to use the standard JiniClientConfig. Added support for the JoinManager.leaseTimeout to the sample configuration files. Note that you MUST override the LeaseRenewalManager if you want to use a lease timeout LT 20s (the round trip time assumption is 10s and the lease renewal manager is practive, so if the lease duration is 20s and the expected round trip time is 10s, it will renew every 10s). The JiniClientConfig had a constructor parameter "className" that was no longer used. I have updated the javadoc to reflect this fact and deprecated the PROPERTIES configuration option in JiniClientConfig.Options since it really belongs in the JiniClient class (it is used with the component name com.bigdata.service.JiniClient) and not the JiniClientConfig class. https://sourceforge.net/apps/trac/bigdata/ticket/530 (Journal HA) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/resources/logging/log4j-dev.properties branches/BIGDATA_RELEASE_1_2_0/bigdata/src/resources/logging/logging.properties branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/util/ConfigMath.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/AbstractServer.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/service/jini/JiniClientConfig.java Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/resources/logging/log4j-dev.properties =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/resources/logging/log4j-dev.properties 2012-09-11 11:54:46 UTC (rev 6564) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/resources/logging/log4j-dev.properties 2012-09-11 16:27:07 UTC (rev 6565) @@ -40,7 +40,8 @@ #log4j.logger.com.bigdata.rdf.sparql.ast.cache=ALL #log4j.logger.com.bigdata.rdf.sail.sparql=ALL #log4j.logger.com.bigdata.rdf.sparql.ast.optimizers.ASTBatchResolveTermsOptimizer=ALL -log4j.logger.com.bigdata.rdf.sparql.ast.cache.DescribeBindingsCollector=INFO +#log4j.logger.com.bigdata.rdf.sparql.ast.cache=INFO +#log4j.logger.com.bigdata.rdf.sparql.ast.eval.CBD=ALL #log4j.logger.com.bigdata.rdf.rio.ntriples.BigdataNTriplesParserTestCase=ALL #log4j.logger.com.bigdata.rdf.rio.StatementBuffer=ALL @@ -268,6 +269,19 @@ # Normal 
data loader (single threaded). log4j.logger.com.bigdata.rdf.store.DataLoader=INFO +#log4j.logger.com.bigdata.ha=ALL +#log4j.logger.com.bigdata.txLog=ALL +#log4j.logger.com.bigdata.haLog=ALL +#log4j.logger.com.bigdata.rwstore=ALL +#log4j.logger.com.bigdata.journal=ALL +#log4j.logger.com.bigdata.journal.AbstractBufferStrategy=ALL +#log4j.logger.com.bigdata.journal.jini.ha=ALL +#log4j.logger.com.bigdata.service.jini.lookup=ALL +#log4j.logger.com.bigdata.quorum=ALL +#log4j.logger.com.bigdata.quorum.zk=ALL +#log4j.logger.com.bigdata.quorum.quorumState=ALL,destPlain +#log4j.logger.com.bigdata.io.writecache=ALL + log4j.logger.benchmark.bigdata.TestBSBM=INFO # Test suite logger. @@ -291,12 +305,12 @@ # See http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/PatternLayout.html log4j.appender.dest2=org.apache.log4j.ConsoleAppender log4j.appender.dest2.layout=org.apache.log4j.PatternLayout -log4j.appender.dest2.layout.ConversionPattern=%-5p: %r %X{hostname} %X{serviceUUID} %X{taskname} %X{timestamp} %X{resources} %t %l: %m%n +log4j.appender.dest2.layout.ConversionPattern=%-5p: %r %d{ISO8601} %X{hostname} %X{serviceUUID} %X{taskname} %X{timestamp} %X{resources} %t %l: %m%n ## destPlain -#log4j.appender.destPlain=org.apache.log4j.ConsoleAppender -#log4j.appender.destPlain.layout=org.apache.log4j.PatternLayout -#log4j.appender.destPlain.layout.ConversionPattern= +log4j.appender.destPlain=org.apache.log4j.ConsoleAppender +log4j.appender.destPlain.layout=org.apache.log4j.PatternLayout +log4j.appender.destPlain.layout.ConversionPattern= ## # Summary query evaluation log (tab delimited file). Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata/src/resources/logging/logging.properties =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata/src/resources/logging/logging.properties 2012-09-11 11:54:46 UTC (rev 6564) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata/src/resources/logging/logging.properties 2012-09-11 16:27:07 UTC (rev 6565) @@ -39,8 +39,8 @@ java.util.logging.FileHandler.count = 1 java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter -# Limit the message that are printed on the console to INFO and above. -java.util.logging.ConsoleHandler.level = INFO +# Optionally limit the message that are printed on the console to INFO and above. 
+# java.util.logging.ConsoleHandler.level = ALL java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java 2012-09-11 11:54:46 UTC (rev 6564) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java 2012-09-11 16:27:07 UTC (rev 6565) @@ -130,7 +130,7 @@ public final Properties properties; public final String[] jiniOptions; - protected void toString(StringBuilder sb) { + protected void toString(final StringBuilder sb) { super.toString(sb); Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/util/ConfigMath.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/util/ConfigMath.java 2012-09-11 11:54:46 UTC (rev 6564) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/jini/util/ConfigMath.java 2012-09-11 16:27:07 UTC (rev 6565) @@ -28,13 +28,12 @@ package com.bigdata.jini.util; import java.io.File; -import java.lang.reflect.Array; import java.util.concurrent.TimeUnit; +import net.jini.config.Configuration; + import com.sun.jini.config.ConfigUtil; -import net.jini.config.Configuration; - /** * A utility class to help with {@link Configuration}s. * @@ -67,6 +66,18 @@ return a * b; } + public static int divide(int a, int b) { + return a / b; + } + + public static long divide(long a, long b) { + return a / b; + } + + public static double divide(double a, double b) { + return a / b; + } + /** * Useful for enums which can't be handled otherwise. * @@ -74,7 +85,7 @@ * * @return */ - public static String toString(Object o) { + public static String toString(final Object o) { return o.toString(); @@ -252,4 +263,59 @@ } + /** + * Trinary logic operator (if-then-else). + * + * @param condition + * The boolean condition. + * @param ifTrue + * The result if the condition is <code>true</code>. + * @param ifFalse + * The result if the condition is <code>false</code>. + * + * @return The appropriate argument depending on whether the + * <i>condition</i> is <code>true</code> or <code>false</code>. + */ + public static <T> T trinary(final boolean condition, final T ifTrue, + final T ifFalse) { + + if(condition) { + + return ifTrue; + + } + + return ifFalse; + + } + + /** + * Return <code>true</code> iff the argument is <code>null</code>. + * + * @param o + * The argument. + */ + public static boolean isNull(final Object o) { + + return o == null; + + } + + /** + * Return <code>true</code> iff the argument is not <code>null</code>. + * + * @param o + * The argument. 
+ */ + public static boolean isNotNull(final Object o) { + +// ConfigMath.trinary(ConfigMath.isNull(bigdata.service) +// , new Comment("Auto-generated ServiceID") +// , new ServiceUUID( bigdata.serviceId ) +// ); + + return o != null; + + } + } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/AbstractServer.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/AbstractServer.java 2012-09-11 11:54:46 UTC (rev 6564) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/AbstractServer.java 2012-09-11 16:27:07 UTC (rev 6565) @@ -51,13 +51,11 @@ import net.jini.config.Configuration; import net.jini.config.ConfigurationException; import net.jini.config.ConfigurationProvider; -import net.jini.core.discovery.LookupLocator; import net.jini.core.entry.Entry; import net.jini.core.lookup.ServiceID; import net.jini.core.lookup.ServiceRegistrar; import net.jini.discovery.DiscoveryEvent; import net.jini.discovery.DiscoveryListener; -import net.jini.discovery.LookupDiscovery; import net.jini.discovery.LookupDiscoveryManager; import net.jini.export.Exporter; import net.jini.jeri.BasicILFactory; @@ -79,12 +77,14 @@ import com.bigdata.ha.HAGlue; import com.bigdata.jini.lookup.entry.Hostname; import com.bigdata.jini.lookup.entry.ServiceUUID; +import com.bigdata.jini.start.config.ZookeeperClientConfig; import com.bigdata.jini.util.JiniUtil; import com.bigdata.service.AbstractService; import com.bigdata.service.IService; import com.bigdata.service.IServiceShutdown; import com.bigdata.service.jini.DataServer.AdministrableDataService; import com.bigdata.service.jini.FakeLifeCycle; +import com.bigdata.service.jini.JiniClientConfig; import com.sun.jini.admin.DestroyAdmin; import com.sun.jini.start.LifeCycle; import com.sun.jini.start.NonActivatableServiceDescriptor; @@ -139,39 +139,26 @@ final static private Logger log = Logger.getLogger(AbstractServer.class); + /** + * Configuration options. + * <p> + * Note: The {@link ServiceID} is optional and may be specified using the + * {@link Entry}[] for the {@link JiniClientConfig} as a {@link ServiceUUID} + * . If it is not specified, then a random {@link ServiceID} will be + * assigned. Either way, the {@link ServiceID} is written into the + * {@link #SERVICE_DIR} and is valid for the life cycle of the persistent + * service. + * + * @see JiniClientConfig + * @see ZookeeperClientConfig + */ public interface ConfigurationOptions { /** - * A {@link String}[] whose values are the group(s) to be used for - * discovery (no default). Note that multicast discovery is always used - * if {@link LookupDiscovery#ALL_GROUPS} (a <code>null</code>) is - * specified. {@link LookupDiscovery#NO_GROUPS} is the symbolic constant - * for an empty String[]. - */ - String GROUPS = "groups"; - - /** - * An array of one or more {@link LookupLocator}s specifying unicast - * URIs of the form <code>jini://host/</code> or - * <code>jini://host:port/</code> (no default) -or- an empty array if - * you want to use multicast discovery <strong>and</strong> you have - * specified {@link #GROUPS} as {@link LookupDiscovery#ALL_GROUPS} (a - * <code>null</code>). - */ - String LOCATORS = "locators"; - - /** - * {@link Entry}[] attributes used to describe the client or service. - */ - String ENTRIES = "entries"; - - /** * The pathname of the service directory as a {@link File} (required). 
*/ String SERVICE_DIR = "serviceDir"; -// String LOGICAL_SERVICE_ZPATH = "logicalServiceZPath"; - /** * This object is used to export the service proxy. The choice here * effects the protocol that will be used for communications between the @@ -205,6 +192,8 @@ * {@link ServiceID} is assigned by jini, then it is assigned the * asynchronously after the service has discovered a * {@link ServiceRegistrar}. + * + * @see ConfigurationOptions */ private ServiceID serviceID; @@ -549,8 +538,7 @@ List<Entry> entries = null; final String COMPONENT = getClass().getName(); - final String[] groups; - final LookupLocator[] locators; + final JiniClientConfig jiniClientConfig; try { config = ConfigurationProvider.getInstance(args); @@ -559,22 +547,15 @@ ConfigurationOptions.CACHE_MISS_TIMEOUT, Long.TYPE, ConfigurationOptions.DEFAULT_CACHE_MISS_TIMEOUT); - groups = (String[]) config.getEntry( - COMPONENT, ConfigurationOptions.GROUPS, String[].class); + jiniClientConfig = new JiniClientConfig( + JiniClientConfig.Options.NAMESPACE, config); - locators = (LookupLocator[]) config.getEntry( - COMPONENT, ConfigurationOptions.LOCATORS, LookupLocator[].class); - // convert Entry[] to a mutable list. - entries = new LinkedList<Entry>(Arrays.asList((Entry[]) config - .getEntry(COMPONENT, ConfigurationOptions.ENTRIES, - Entry[].class, new Entry[0]))); + entries = new LinkedList<Entry>( + Arrays.asList((Entry[]) jiniClientConfig.entries)); - if (log.isInfoEnabled()) { - log.info(ConfigurationOptions.GROUPS + "=" + Arrays.toString(groups)); - log.info(ConfigurationOptions.LOCATORS + "=" + Arrays.toString(locators)); - log.info(ConfigurationOptions.ENTRIES+ "=" + entries); - } + if (log.isInfoEnabled()) + log.info(jiniClientConfig.toString()); /* * Make sure that the parent directory exists. @@ -824,8 +805,9 @@ * an alternative, you can use LookupDiscovery, which always does * multicast discovery. */ - lookupDiscoveryManager = new LookupDiscoveryManager(groups, - locators, this /* DiscoveryListener */, config); + lookupDiscoveryManager = new LookupDiscoveryManager( + jiniClientConfig.groups, jiniClientConfig.locators, + this /* DiscoveryListener */, config); /* * Setup a helper class that will be notified as services join or @@ -959,10 +941,6 @@ } /* - * TODO Not needed since the ServiceID is being set from the config - * file, but we do not yet verify that it *is* set from the config file. - * Do that, and then remove this code, - * * Note: This is synchronized in case set via listener by the * JoinManager, which would be rather fast action on its part. */ Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config 2012-09-11 11:54:46 UTC (rev 6564) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-B.config 2012-09-11 16:27:07 UTC (rev 6565) @@ -69,23 +69,19 @@ // NanoSparqlServer (http) port. private static nssPort = ConfigMath.add(8090,1); - // write replication pipeline port. + // write replication pipeline port (listener). private static haPort = ConfigMath.add(9090,1); // The #of services in the write pipeline. private static replicationFactor = 3; - // The ServiceID for *this* service. - private static serviceId = UUID.fromString("a6120400-d63d-40d6-8ddb-3c283d0d5e3c"); + // The logical service identifier shared by all members of the quorum. 
+ private static logicalServiceId = "test-1"; - // The write replication pipeline. - private static pipeline = new UUID[] { - UUID.fromString("3c7e7639-78bf-452c-9ca9-2960caec17dc"), - UUID.fromString("a6120400-d63d-40d6-8ddb-3c283d0d5e3c"), - UUID.fromString("d609dcf7-860c-40f1-bd2f-eebdce20556c"), - }; - - // service directory. + // The ServiceID for *this* service -or- null to assign it dynamically. + private static serviceId = UUID.fromString("a6120400-d63d-40d6-8ddb-3c283d0d5e3c"); + + // The service directory (if serviceId is null, then you must override). private static serviceDir = new File(fedname,""+serviceId); // journal data directory. @@ -125,10 +121,15 @@ * be noticed in roughly the same period of time for either * system. A value larger than the zookeeper default helps to * prevent client disconnects under sustained heavy load. + * + * If you use a short lease timeout (LT 20s), then you need to override + * properties properties for the net.jini.lease.LeaseRenewalManager + * or it will run in a tight loop (it's default roundTripTime is 10s + * and it schedules lease renewals proactively.) */ // jini - static private leaseTimeout = ConfigMath.s2ms(5); + static private leaseTimeout = ConfigMath.s2ms(20); // zookeeper static private sessionTimeout = (int)ConfigMath.s2ms(5); @@ -209,24 +210,30 @@ /* * Jini client configuration. - * - * TODO Only used by ListServices. - * TODO leaseTimeout ignored by HAJournalServer */ com.bigdata.service.jini.JiniClient { groups = bigdata.groups; locators = bigdata.locators; - - jiniOptions = new String[] { - "net.jini.lookup.JoinManager.maxLeaseDuration="+bigdata.leaseTimeout, - + entries = new Entry[] { + + (Entry) ConfigMath.trinary(ConfigMath.isNull(bigdata.serviceId) + , new Comment("Auto-generated ServiceID") + , new ServiceUUID( bigdata.serviceId ) + ) + }; } +net.jini.lookup.JoinManager { + + maxLeaseDuration = bigdata.leaseTimeout; + +} + /* * Server configuration options. */ @@ -234,20 +241,8 @@ serviceDir = bigdata.serviceDir; - groups = bigdata.groups; - - locators = bigdata.locators; + logicalServiceId = bigdata.logicalServiceId; - entries = new Entry[] { - - new ServiceUUID(bigdata.serviceId), - - }; - - // TODO Support this : The lease timeout for jini joins. - // "net.jini.lookup.JoinManager.maxLeaseDuration="+bigdata.leaseTimeout - - // Where the service will expose its write replication listener. writePipelineAddr = new InetSocketAddress("localhost",bigdata.haPort); /* Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config 2012-09-11 11:54:46 UTC (rev 6564) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config 2012-09-11 16:27:07 UTC (rev 6565) @@ -69,23 +69,19 @@ // NanoSparqlServer (http) port. private static nssPort = ConfigMath.add(8090,2); - // write replication pipeline port. + // write replication pipeline port (listener). private static haPort = ConfigMath.add(9090,2); // The #of services in the write pipeline. private static replicationFactor = 3; - // The ServiceID for *this* service. + // The logical service identifier shared by all members of the quorum. + private static logicalServiceId = "test-1"; + + // The ServiceID for *this* service -or- null to assign it dynamically. 
private static serviceId = UUID.fromString("d609dcf7-860c-40f1-bd2f-eebdce20556c"); - // The write replication pipeline. - private static pipeline = new UUID[] { - UUID.fromString("3c7e7639-78bf-452c-9ca9-2960caec17dc"), - UUID.fromString("a6120400-d63d-40d6-8ddb-3c283d0d5e3c"), - UUID.fromString("d609dcf7-860c-40f1-bd2f-eebdce20556c"), - }; - - // service directory. + // The service directory (if serviceId is null, then you must override). private static serviceDir = new File(fedname,""+serviceId); // journal data directory. @@ -125,10 +121,15 @@ * be noticed in roughly the same period of time for either * system. A value larger than the zookeeper default helps to * prevent client disconnects under sustained heavy load. + * + * If you use a short lease timeout (LT 20s), then you need to override + * properties properties for the net.jini.lease.LeaseRenewalManager + * or it will run in a tight loop (it's default roundTripTime is 10s + * and it schedules lease renewals proactively.) */ // jini - static private leaseTimeout = ConfigMath.s2ms(5); + static private leaseTimeout = ConfigMath.s2ms(20); // zookeeper static private sessionTimeout = (int)ConfigMath.s2ms(5); @@ -209,24 +210,30 @@ /* * Jini client configuration. - * - * TODO Only used by ListServices. - * TODO leaseTimeout ignored by HAJournalServer */ com.bigdata.service.jini.JiniClient { groups = bigdata.groups; locators = bigdata.locators; - - jiniOptions = new String[] { - "net.jini.lookup.JoinManager.maxLeaseDuration="+bigdata.leaseTimeout, - + entries = new Entry[] { + + (Entry) ConfigMath.trinary(ConfigMath.isNull(bigdata.serviceId) + , new Comment("Auto-generated ServiceID") + , new ServiceUUID( bigdata.serviceId ) + ) + }; } +net.jini.lookup.JoinManager { + + maxLeaseDuration = bigdata.leaseTimeout; + +} + /* * Server configuration options. */ @@ -234,20 +241,8 @@ serviceDir = bigdata.serviceDir; - groups = bigdata.groups; - - locators = bigdata.locators; + logicalServiceId = bigdata.logicalServiceId; - entries = new Entry[] { - - new ServiceUUID(bigdata.serviceId), - - }; - - // TODO Support this : The lease timeout for jini joins. - // "net.jini.lookup.JoinManager.maxLeaseDuration="+bigdata.leaseTimeout - - // Where the service will expose its write replication listener. writePipelineAddr = new InetSocketAddress("localhost",bigdata.haPort); /* Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config 2012-09-11 11:54:46 UTC (rev 6564) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.config 2012-09-11 16:27:07 UTC (rev 6565) @@ -65,27 +65,23 @@ bigdata { private static fedname = "benchmark"; - + // NanoSparqlServer (http) port. private static nssPort = 8090; - // write replication pipeline port. + // write replication pipeline port (listener). private static haPort = 9090; // The #of services in the write pipeline. private static replicationFactor = 3; - // The ServiceID for *this* service. + // The logical service identifier shared by all members of the quorum. + private static logicalServiceId = "test-1"; + + // The ServiceID for *this* service -or- null to assign it dynamically. private static serviceId = UUID.fromString("3c7e7639-78bf-452c-9ca9-2960caec17dc"); - // The write replication pipeline. 
- private static pipeline = new UUID[] { - UUID.fromString("3c7e7639-78bf-452c-9ca9-2960caec17dc"), - UUID.fromString("a6120400-d63d-40d6-8ddb-3c283d0d5e3c"), - UUID.fromString("d609dcf7-860c-40f1-bd2f-eebdce20556c"), - }; - - // service directory. + // The service directory (if serviceId is null, then you must override). private static serviceDir = new File(fedname,""+serviceId); // journal data directory. @@ -125,10 +121,15 @@ * be noticed in roughly the same period of time for either * system. A value larger than the zookeeper default helps to * prevent client disconnects under sustained heavy load. + * + * If you use a short lease timeout (LT 20s), then you need to override + * properties properties for the net.jini.lease.LeaseRenewalManager + * or it will run in a tight loop (it's default roundTripTime is 10s + * and it schedules lease renewals proactively.) */ // jini - static private leaseTimeout = ConfigMath.s2ms(5); + static private leaseTimeout = ConfigMath.s2ms(20); // zookeeper static private sessionTimeout = (int)ConfigMath.s2ms(5); @@ -210,9 +211,6 @@ /* * Jini client configuration. - * - * TODO Only used by ListServices. - * TODO leaseTimeout ignored by HAJournalServer */ com.bigdata.service.jini.JiniClient { @@ -220,12 +218,21 @@ locators = bigdata.locators; - jiniOptions = new String[] { + entries = new Entry[] { + + (Entry) ConfigMath.trinary(ConfigMath.isNull(bigdata.serviceId) + , new Comment("Auto-generated ServiceID") + , new ServiceUUID( bigdata.serviceId ) + ) + + }; - "net.jini.lookup.JoinManager.maxLeaseDuration="+bigdata.leaseTimeout, +} - }; +net.jini.lookup.JoinManager { + maxLeaseDuration = bigdata.leaseTimeout; + } /* @@ -235,20 +242,8 @@ serviceDir = bigdata.serviceDir; - groups = bigdata.groups; + logicalServiceId = bigdata.logicalServiceId; - locators = bigdata.locators; - - entries = new Entry[] { - - new ServiceUUID(bigdata.serviceId), - - }; - - // TODO Support this : The lease timeout for jini joins. - // "net.jini.lookup.JoinManager.maxLeaseDuration="+bigdata.leaseTimeout - - // Where the service will expose its write replication listener. writePipelineAddr = new InetSocketAddress("localhost",bigdata.haPort); /* Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-11 11:54:46 UTC (rev 6564) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2012-09-11 16:27:07 UTC (rev 6565) @@ -91,11 +91,25 @@ * exposes its write pipeline interface (required). */ String WRITE_PIPELINE_ADDR = "writePipelineAddr"; - + /** - * The ordered {@link UUID} each service in the write replication + * The logical service identifier for this highly available journal. + * There may be multiple logical highly available journals, each + * comprised of <em>k</em> physical services. The logical service + * identifier is used to differentiate these different logical HA + * journals. The service {@link UUID} is used to differentiate the + * physical service instances. By assigning a logical service identifier + * to an {@link HAJournalServer} you associate that server instance with + * the specified logical highly available journal. + * <p> + * The identifier may be any legal znode node. + * + * TODO This needs to be reconciled with the federation. 
The federation + * uses ephemeral sequential to create the logical service identifiers. + * Here they are being assigned manually. This is basically the "flex" + * versus "static" issue. */ - String PIPELINE_UUIDS = "pipelineUUIDs"; + String LOGICAL_SERVICE_ID = "logicalServiceId"; } @@ -131,24 +145,21 @@ private HAGlue haGlueService; private ZookeeperClientConfig zkClientConfig; - /* - * TODO Explicit configuration of this property. We will need to add another - * level in the zpath for the logicalService under the HAJournal layer so we - * can have multiple HA Journals in a zk ensemble (and we need to ensure - * that znode gets created). We will no longer need to explicitly declare - * the UUID of the service, but we should report an error if the service - * becomes over subscribed (too many quorum members). Without a limit on the - * #of services that can join a quorum we can erode the HA guarantee by not - * recognizing a critical set of [k] distinct services. We could also handle - * this by explicitly declaring the UUIDs of those services. That tends to - * be robust. + /** + * The znode name for the logical service. * - * This also needs to be reconciled with the federation. The federation uses - * ephemeral sequential to create the logical service identifiers. + * @see ConfigurationOptions#LOGICAL_SERVICE_ID */ private String logicalServiceId; /** + * The zpath for the logical service. + * + * @see ConfigurationOptions#LOGICAL_SERVICE_ID + */ + private String logicalServiceZPath; + + /** * The {@link Quorum} for the {@link HAJournal}. */ private Quorum<HAGlue, QuorumService<HAGlue>> quorum; @@ -255,8 +266,17 @@ zkClientConfig = new ZookeeperClientConfig(config); - logicalServiceId = zkClientConfig.zroot + "/" + // znode name for the logical service. + logicalServiceId = (String) config.getEntry( + ConfigurationOptions.COMPONENT, + ConfigurationOptions.LOGICAL_SERVICE_ID, String.class); + + final String logicalServiceZPathPrefix = zkClientConfig.zroot + "/" + HAJournalServer.class.getName(); + + // zpath for the logical service. + logicalServiceZPath = logicalServiceZPathPrefix + "/" + + logicalServiceId; final int replicationFactor = (Integer) config.getEntry( ConfigurationOptions.COMPONENT, @@ -270,11 +290,6 @@ ConfigurationOptions.WRITE_PIPELINE_ADDR, InetSocketAddress.class); - // The write replication pipeline. - final UUID[] pipelineUUIDs = (UUID[]) config.getEntry( - ConfigurationOptions.COMPONENT, - ConfigurationOptions.PIPELINE_UUIDS, UUID[].class); - /* * Configuration properties for this HAJournal. */ @@ -319,11 +334,20 @@ } try { zka.getZookeeper() - .create(logicalServiceId, new byte[] {/* data */}, - acl, CreateMode.PERSISTENT); + .create(logicalServiceZPathPrefix, + new byte[] {/* data */}, acl, + CreateMode.PERSISTENT); } catch (NodeExistsException ex) { // ignore. } + try { + zka.getZookeeper() + .create(logicalServiceZPath, + new byte[] {/* data */}, acl, + CreateMode.PERSISTENT); + } catch (NodeExistsException ex) { + // ignore. + } quorum = new ZKQuorumImpl<HAGlue, QuorumService<HAGlue>>( replicationFactor, zka, acl); @@ -376,7 +400,8 @@ @Override public void notify(final QuorumEvent e) { - System.err.println("QuorumEvent: " + e);// FIXME remove logger. 
+ if (log.isTraceEnabled()) + System.err.print("QuorumEvent: " + e); switch(e.getEventType()) { case CAST_VOTE: break; @@ -429,8 +454,8 @@ } }); - quorum.start(newQuorumService(logicalServiceId, serviceUUID, haGlueService, - journal)); + quorum.start(newQuorumService(logicalServiceZPath, serviceUUID, + haGlueService, journal)); // TODO These methods could be moved into QuorumServiceImpl.start(Quorum) final QuorumActor<?,?> actor = quorum.getActor(); @@ -481,12 +506,12 @@ * supporting HA operations. */ private QuorumServiceBase<HAGlue, AbstractJournal> newQuorumService( - final String logicalServiceId, + final String logicalServiceZPath, final UUID serviceId, final HAGlue remoteServiceImpl, final AbstractJournal store) { - return new QuorumServiceBase<HAGlue, AbstractJournal>(logicalServiceId, - serviceId, remoteServiceImpl, store) { + return new QuorumServiceBase<HAGlue, AbstractJournal>( + logicalServiceZPath, serviceId, remoteServiceImpl, store) { @Override public void start(final Quorum<?,?> quorum) { @@ -655,6 +680,7 @@ serviceURL = new URL("http", hostAddr, actualPort, ""/* file */) .toExternalForm(); + System.out.println("logicalServiceZPath: " + logicalServiceZPath); System.out.println("serviceURL: " + serviceURL); } Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java 2012-09-11 11:54:46 UTC (rev 6564) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java 2012-09-11 16:27:07 UTC (rev 6565) @@ -1267,7 +1267,8 @@ */ public void run() { try { - log.warn(e.toString()); + if (log.isInfoEnabled()) + log.info(e.toString()); switch (e.getState()) { case Disconnected: return; @@ -1922,9 +1923,10 @@ final QuorumTokenState tokenState = (QuorumTokenState) SerializerUtil .deserialize(data); - log.warn("Starting with quorum that has already met in the past: " - + tokenState); - + if (log.isInfoEnabled()) + log.info("Starting with quorum that has already met in the past: " + + tokenState); + if (tokenState.token() != NO_QUORUM && !isQuorumMet()) { try { Modified: branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/service/jini/JiniClientConfig.java =================================================================== --- branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/service/jini/JiniClientConfig.java 2012-09-11 11:54:46 UTC (rev 6564) +++ branches/BIGDATA_RELEASE_1_2_0/bigdata-jini/src/java/com/bigdata/service/jini/JiniClientConfig.java 2012-09-11 16:27:07 UTC (rev 6565) @@ -31,7 +31,7 @@ public interface Options { /** - * The namespace for these options. + * The component name for these {@link Configuration} options. */ String NAMESPACE = JiniClient.class.getName(); @@ -69,15 +69,44 @@ * merged, overwriting those specified for {@link JiniClient} directly. * This allows both general defaults and both additional service * properties and service specific overrides of the general defaults. + * + * @deprecated This is used by the {@link JiniClient}, not the + * {@link JiniClientConfig}. It's presents on this interface + * is therefore confusing. It should be moved to the + * {@link JiniClient.Options}. This symbolic constant can + * show up in {@link Configuration} files, so we probably + * need to leave in a reference here to redirect people to + * the {@link JiniClient.Options} interface. 
+ * <p> + * The historical presence of this property on the + * {@link JiniClientConfig} class is the reason why the + * {@link JiniClientConfig} constructor has an ignored class + * name argument. */ String PROPERTIES = "properties"; } - + + /** + * The {@link Entry}[] and never null. This will be an empty {@link Entry}[] + * if no {@link Entry}s were specified. + * + * @see Options#ENTRIES + */ final public Entry[] entries; + /** + * The join group(s). + * + * @see Options#GROUPS + */ final public String[] groups; + /** + * The locators. + * + * @see Options#LOCATORS + */ final public LookupLocator[] locators; // final public Properties properties; @@ -94,7 +123,7 @@ } /** - * @param className + * @param classNameIsIgnored * The class name of the client or service (optional). When * specified, properties defined for that class in the * configuration will be used and will override those specified @@ -108,8 +137,8 @@ * * @see Options */ - public JiniClientConfig(final String className, final Configuration config) - throws ConfigurationException { + public JiniClientConfig(final String classNameIsIgnored, + final Configuration config) throws ConfigurationException { /* * Extract how the service will advertise itself from the Configuration This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
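[Editorial addendum; not part of the commit record above.] The net effect of the configuration changes in this revision is that an HAJournalServer instance is now associated with a logical HA journal through the new logicalServiceId option (replacing the statically enumerated pipelineUUIDs array), and the jini join lease duration is configured through the net.jini.lookup.JoinManager component rather than through jiniOptions. The sketch below shows how those pieces fit together in a minimal HAJournal.config. The option and component names are taken from the diff above, but the server component name, the "test-1" identifier, the port, and the 20 second timeout are illustrative assumptions, and import statements (for example, java.net.InetSocketAddress and ConfigMath) are elided here just as they are in the excerpts.

    bigdata {

        // Logical HA journal shared by all quorum members (any legal znode name).
        private static logicalServiceId = "test-1";

        // Port on which this service listens for write replication traffic.
        private static haPort = 9090;

        // Keep this >= 20s, or override the net.jini.lease.LeaseRenewalManager defaults.
        static private leaseTimeout = ConfigMath.s2ms(20);

    }

    // Jini join lease duration (replaces the old jiniOptions string entry).
    net.jini.lookup.JoinManager {

        maxLeaseDuration = bigdata.leaseTimeout;

    }

    // Server options; the component name is assumed here for illustration.
    com.bigdata.journal.jini.ha.HAJournalServer {

        logicalServiceId = bigdata.logicalServiceId;

        writePipelineAddr = new InetSocketAddress("localhost", bigdata.haPort);

    }

At startup the server concatenates the zookeeper zroot, the HAJournalServer class name, and this logicalServiceId to form the logicalServiceZPath under which the quorum znodes are created, so services configured with the same logicalServiceId participate in the same logical highly available journal.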