From: <tho...@us...> - 2010-08-02 17:10:32
Revision: 3392
http://bigdata.svn.sourceforge.net/bigdata/?rev=3392&view=rev
Author: thompsonbry
Date: 2010-08-02 17:10:15 +0000 (Mon, 02 Aug 2010)

Log Message:
-----------
Merging trunk into branch JOURNAL_HA_BRANCH [r2980 : r3391]. Resolved the following conflicts:

Incoming version, but also replaced KeyBuilder with the version from the trunk to get decodeUUID():
C C:/Documents and Settings/Bryan Thompson/workspace/bigdata-journal-HA-branch/bigdata/src/test/com/bigdata/btree/keys/TestKeyBuilder.java

Resolved by edit. The conflict was just comments around parallel edits made in the HA branch and the trunk:
C C:/Documents and Settings/Bryan Thompson/workspace/bigdata-journal-HA-branch/bigdata/src/java/com/bigdata/btree/Node.java

Used the version in the HA branch:
C C:/Documents and Settings/Bryan Thompson/workspace/bigdata-journal-HA-branch/bigdata/src/java/com/bigdata/rwstore/RWStore.java

Used the version in the HA branch. There were no edits of significance in the trunk.
C C:/Documents and Settings/Bryan Thompson/workspace/bigdata-journal-HA-branch/bigdata/src/java/com/bigdata/journal/FileMetadata.java

Reconciled conflicts in StoreCounters.getCounters():
C C:/Documents and Settings/Bryan Thompson/workspace/bigdata-journal-HA-branch/bigdata/src/java/com/bigdata/journal/WORMStrategy.java

Reconciled by hand. The conflict dealt with the reorganization of the AbstractJournal constructor's handling of buffer strategy initialization in the HA branch, and with the cutover in the trunk so that Disk means the same thing as DiskWORM (that is, it uses the WORMStrategy); an illustrative sketch of this mapping appears after the path lists below.
C C:/Documents and Settings/Bryan Thompson/workspace/bigdata-journal-HA-branch/bigdata/src/java/com/bigdata/journal/AbstractJournal.java

Resolved conflicts by hand. There was some new code in the HA branch to provide details on the RWStore allocation policy. I've isolated that in a getStoreInfo() method.
C C:/Documents and Settings/Bryan Thompson/workspace/bigdata-journal-HA-branch/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/NanoSparqlServer.java

Used the incoming version and reorganized the imports - conflicts were just on imports.
C C:/Documents and Settings/Bryan Thompson/workspace/bigdata-journal-HA-branch/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailRepositoryConnection.java

Modified Paths:
--------------
branches/JOURNAL_HA_BRANCH/.classpath branches/JOURNAL_HA_BRANCH/bigdata/src/architecture/mergePriority.xls branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/LRUNexus.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/bfs/BlobOverflowHandler.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/BigdataMap.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/BloomFilterFactory.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/Checkpoint.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/DefaultTupleSerializer.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/IndexMetadata.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegment.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentBuilder.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentStore.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/NOPTupleSerializer.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/Node.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/NodeSerializer.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/isolation/IsolatedFusedView.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/keys/CollatorEnum.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/keys/DefaultKeyBuilderFactory.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/keys/ICUSortKeyGenerator.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/keys/IKeyBuilder.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/keys/JDKSortKeyGenerator.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/keys/KeyBuilder.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/proc/AbstractKeyArrayIndexProcedure.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/view/FusedView.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/cache/HardReferenceGlobalLRU.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/counters/AbstractStatisticsCollector.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/io/DirectBufferPool.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractBufferStrategy.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/BufferMode.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/CommitRecordIndex.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/DiskOnlyStrategy.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/Name2Addr.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/Options.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/WORMStrategy.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/mdi/IndexPartitionCause.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/mdi/LocalPartitionMetadata.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/mdi/MetadataIndex.java 
branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/mdi/PartitionLocator.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/IRelation.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/accesspath/SameVariableConstraint.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/locator/DefaultResourceLocator.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/rule/IPredicate.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/rule/Rule.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/rule/Var.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/rule/eval/DefaultRuleTaskFactory.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/rule/eval/QueryTask.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/rule/eval/RuleState.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/rule/eval/pipeline/JoinTask.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/rule/eval/pipeline/LocalJoinTask.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/CompactingMergeTask.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/IncrementalBuildTask.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/IndexManager.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/JoinIndexPartitionTask.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/JournalIndex.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/MoveTask.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/OverflowManager.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/ResourceEvents.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/ScatterSplitTask.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/SplitIndexPartitionTask.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/SplitUtility.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/StoreManager.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/ViewMetadata.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/rwstore/RWStore.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/search/ReadIndexTask.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/AbstractEmbeddedLoadBalancerService.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/AbstractFederation.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/AbstractService.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/CommitTimeIndex.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/DataService.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/DefaultServiceFederationDelegate.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/ILoadBalancerService.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/LoadBalancerService.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/MetadataService.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/ndx/RawDataServiceTupleIterator.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/ndx/pipeline/KVOC.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/ndx/pipeline/KVOList.java 
branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/proxy/RemoteAsynchronousIteratorImpl.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/proxy/RemoteChunk.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/sparse/AbstractAtomicRowReadOrWrite.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/sparse/GlobalRowStoreHelper.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/sparse/KeyDecoder.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/sparse/Schema.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/sparse/SparseRowStore.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/sparse/TPS.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/sparse/TPSTupleSerializer.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/striterator/IKeyOrder.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/util/concurrent/ExecutionExceptions.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/util/concurrent/Latch.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/util/concurrent/QueueSizeMovingAverageTask.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/util/concurrent/ThreadPoolExecutorBaseStatisticsTask.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/util/concurrent/ThreadPoolExecutorStatisticsTask.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/util/config/NicUtil.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/util/httpd/NanoHTTPD.java branches/JOURNAL_HA_BRANCH/bigdata/src/resources/logging/log4j.properties branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/AbstractBTreeTestCase.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/AbstractIndexSegmentTestCase.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/AbstractTupleCursorTestCase.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestAll_IndexSegment.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestBTreeLeafCursors.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestBigdataMap.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestChunkedIterators.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestCopyOnWrite.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestDirtyIterators.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestIncrementalWrite.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestIndexPartitionFencePosts.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestIndexSegmentBuilderCacheInteraction.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestIndexSegmentBuilderWithCompactingMerge.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestIndexSegmentBuilderWithIncrementalBuild.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestIndexSegmentBuilderWithLargeTrees.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestIndexSegmentBuilderWithSmallTree.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestIndexSegmentWithBloomFilter.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestInsertLookupRemoveKeysInRootLeaf.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestIterators.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestLeafSplitShortestSeparatorKey.java 
branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestLinearListMethods.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestMutableBTreeCursors.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestReopen.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestSplitJoinRootLeaf.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestSplitJoinThreeLevels.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestSplitRootLeaf.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestTouch.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestTransientBTree.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/filter/TestTupleFilters.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/keys/TestICUUnicodeKeyBuilder.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/keys/TestJDKUnicodeKeyBuilder.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/keys/TestKeyBuilder.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/keys/TestSuccessorUtil.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/raba/codec/AbstractRabaCoderTestCase.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/raba/codec/RandomURIGenerator.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/cache/TestAll.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/cache/TestRingBuffer.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/counters/httpd/TestCounterSetHTTPDServer.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/journal/TestAll.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/journal/TestConcurrentJournal.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/journal/TestJournalBasics.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/journal/TestRootBlockView.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/relation/accesspath/TestSameVariableConstraint.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/relation/locator/TestDefaultResourceLocator.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/relation/rule/AbstractRuleTestCase.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/relation/rule/TestRule.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/resources/AbstractResourceManagerTestCase.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/resources/TestBuildTask.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/resources/TestBuildTask2.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/resources/TestMergeTask.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/resources/TestOverflow.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/resources/TestResourceManagerBootstrap.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/resources/TestSegSplitter.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/search/TestAll.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/search/TestKeyBuilder.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/search/TestSearchRestartSafe.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/TestBasicIndexStuff.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/TestDistributedTransactionService.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/TestMove.java 
branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/TestRangeQuery.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/TestResourceService.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/TestRestartSafe.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/TestScatterSplit.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/TestSplitJoin.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/ndx/pipeline/TestMasterTaskWithSplits.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/sparse/TestAll.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/sparse/TestKeyEncodeDecode.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/test/ExperimentDriver.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/util/concurrent/TestAll.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/util/concurrent/TestLatch.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/AbstractServicesManagerService.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/MonitorCreatePhysicalServiceLocksTask.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/ServicesManagerServer.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/config/AbstractHostConstraint.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/config/MaxClientServicesPerHostConstraint.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/config/MaxDataServicesPerHostConstraint.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/config/ZookeeperServerConfiguration.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/config/ZookeeperServerEntry.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/process/JiniCoreServicesProcessHelper.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/process/ZookeeperProcessHelper.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/service/jini/AbstractServer.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/service/jini/LoadBalancerServer.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/service/jini/benchmark/ThroughputMaster.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/service/jini/master/FileSystemScanner.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/service/jini/master/MappedTaskMaster.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/service/jini/master/TaskMaster.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/service/jini/util/BroadcastSighup.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/service/jini/util/DumpFederation.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/service/jini/util/LookupStarter.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/service/jini/util/config/lookup.config branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/zookeeper/ZooHelper.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/resources/config/bigdataStandaloneTesting.config branches/JOURNAL_HA_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/config/TestZookeeperServerEntry.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/testReggie.config 
branches/JOURNAL_HA_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/testStartJini.config branches/JOURNAL_HA_BRANCH/bigdata-jini/src/test/com/bigdata/service/jini/AbstractServerTestCase.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/test/com/bigdata/service/jini/TestBigdataClient.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/test/com/bigdata/zookeeper/testzoo.config branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/axioms/Axioms.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/axioms/BaseAxioms.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/BackchainOwlSameAsIterator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/BackchainOwlSameAsPropertiesIterator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/BackchainOwlSameAsPropertiesPIterator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/BackchainOwlSameAsPropertiesPOIterator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/BackchainOwlSameAsPropertiesSPIterator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/BackchainOwlSameAsPropertiesSPOIterator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/BackchainTypeResourceIterator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/FullyBufferedJustificationIterator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/Justification.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/OwlSameAsPropertiesExpandingIterator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/RdfTypeRdfsResourceFilter.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/SPOAssertionBuffer.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/TruthMaintenance.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/BigdataRDFFullTextIndex.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/ITermIndexCodes.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/Id2TermTupleSerializer.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/Id2TermWriteProc.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/KVOTermIdComparator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconKeyBuilder.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/ReverseIndexWriterTask.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/Term2IdTupleSerializer.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/Term2IdWriteProc.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/Term2IdWriteTask.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/TermIdEncoder.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/AbstractRDFTaskFactory.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/MappedRDFDataLoadMaster.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/MappedRDFFileLoadTask.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/RDFFilenameFilter.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/RDFLoadTaskFactory.java 
branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/RDFVerifyTaskFactory.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/SingleResourceReaderTask.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/VerifyStatementBuffer.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/magic/IMagicTuple.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/magic/IRISUtils.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/magic/MagicAccessPath.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/magic/MagicIndexWriteProc.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/magic/MagicKeyOrder.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/magic/MagicPredicate.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/magic/MagicRelation.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/magic/MagicTuple.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/magic/MagicTupleSerializer.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BNodeContextFactory.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataBNodeImpl.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataLiteralImpl.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataResourceImpl.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataStatementImpl.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataURIImpl.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValue.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValueFactory.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValueFactoryImpl.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValueIdComparator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValueImpl.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/StatementEnum.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/TermIdComparator2.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/AsynchronousStatementBufferFactory.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/BasicRioLoader.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/IRioLoader.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/PresortRioLoader.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/StatementBuffer.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rules/AbstractRuleDistinctTermScan.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rules/AbstractRuleFastClosure_11_13.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rules/AbstractRuleFastClosure_3_5_6_7_9.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rules/AbstractRuleFastClosure_5_6_7_9.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rules/BackchainAccessPath.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rules/DoNotAddFilter.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rules/FastClosure.java 
branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rules/MatchRule.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rules/RDFJoinNexus.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rules/RuleFastClosure3.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/DefaultGraphSolutionExpander.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/DistinctMultiTermAdvancer.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/DistinctSPOIterator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/DistinctTermAdvancer.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/ExplicitSPOFilter.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/ISPO.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/InGraphBinarySearchFilter.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/InGraphHashSetFilter.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/InferredSPOFilter.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/JustificationRemover.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/JustificationTupleSerializer.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/NamedGraphSolutionExpander.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/NoAxiomFilter.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/OSPComparator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/POSComparator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPO.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOAccessPath.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOComparator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOFilter.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOIndexWriteProc.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOKeyOrder.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOPredicate.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPORelation.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOSortKeyBuilder.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOStarJoin.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOTupleSerializer.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/AbstractTripleStore.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/BigdataSolutionResolverator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/BigdataStatementIteratorImpl.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/BigdataValueIteratorImpl.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/DataLoader.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/IRawTripleStore.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/ITripleStore.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/ScaleOutTripleStore.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/TripleStoreUtility.java 
branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/vocab/BaseVocabulary.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/vocab/Vocabulary.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/TestAll.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/axioms/TestAxioms.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestAddTerms.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestAll.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestComparators.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestCompletionScan.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestFullTextIndex.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestId2TermTupleSerializer.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestTerm2IdTupleSerializer.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestVocabulary.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/load/TestAll.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/magic/TestIRIS.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/magic/TestMagicKeyOrderStrategy.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/magic/TestMagicStore.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/metrics/TestMetrics.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/model/TestAll.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/AbstractRIOTestCase.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/EDSAsyncLoader.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestAsynchronousStatementBufferFactory.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestRDFXMLInterchangeWithStatementIdentifiers.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestAll.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestBackchainOwlSameAsPropertiesIterator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestBackchainTypeResourceIterator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestDatabaseAtOnceClosure.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestDistinctTermScan.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestJustifications.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestOptionals.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestOwlSameAsPropertiesExpandingIterator.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestRuleExpansion.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestRuleFastClosure_11_13.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestRuleFastClosure_3_5_6_7_9.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestSlice.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestTruthMaintenance.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/spo/TestDefaultGraphAccessPath.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/spo/TestSPO.java 
branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/spo/TestSPOAccessPath.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/spo/TestSPOKeyCoders.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/spo/TestSPOKeyOrder.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/spo/TestSPOPredicate.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/spo/TestSPORelation.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/spo/TestSPOStarJoin.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/spo/TestSPOTupleSerializer.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/spo/TestSPOValueCoders.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/AbstractDistributedTripleStoreTestCase.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/AbstractEmbeddedTripleStoreTestCase.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/AbstractServerTestCase.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/AbstractTestCase.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/RDFLoadAndValidateHelper.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/TestAll.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/TestBulkFilter.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/TestLocalTripleStoreTransactionSemantics.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/TestRestartSafe.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/TestStatementIdentifiers.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/TestTripleStore.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/TestTripleStoreBasics.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataConstructIterator.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStatistics.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl2.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailGraphQuery.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailHelper.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailRepository.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailRepositoryConnection.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/FreeTextSearchExpander.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/HitConvertor.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/BigdataLoader.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/NanoSparqlClient.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/NanoSparqlServer.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestAll.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailEvaluationStrategyImpl.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java 
branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuadsAndPipelineJoins.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestJoinScope.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestNamedGraphs.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestOptionals.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestProvenanceQuery.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestQuery.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestUnions.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataConnectionTest.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataSparqlTest.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataSparqlTest2.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataStoreTest.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/stress/LoadClosureAndQueryTest.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/org/openrdf/query/parser/sparql/SPARQLQueryTest.java branches/JOURNAL_HA_BRANCH/build.properties branches/JOURNAL_HA_BRANCH/build.xml branches/JOURNAL_HA_BRANCH/src/resources/analysis/queries/benchmark.txt branches/JOURNAL_HA_BRANCH/src/resources/analysis/queries/rdfDataLoad.txt branches/JOURNAL_HA_BRANCH/src/resources/bin/config/browser.config branches/JOURNAL_HA_BRANCH/src/resources/bin/config/reggie.config branches/JOURNAL_HA_BRANCH/src/resources/bin/config/serviceStarter.config branches/JOURNAL_HA_BRANCH/src/resources/bin/config/zookeeper.config branches/JOURNAL_HA_BRANCH/src/resources/bin/pstart branches/JOURNAL_HA_BRANCH/src/resources/config/bigdataCluster.config branches/JOURNAL_HA_BRANCH/src/resources/config/bigdataCluster16.config branches/JOURNAL_HA_BRANCH/src/resources/config/jini/reggie.config branches/JOURNAL_HA_BRANCH/src/resources/config/jini/startAll.config branches/JOURNAL_HA_BRANCH/src/resources/config/log4j.properties branches/JOURNAL_HA_BRANCH/src/resources/config/logging.properties branches/JOURNAL_HA_BRANCH/src/resources/config/standalone/bigdataStandalone.config branches/JOURNAL_HA_BRANCH/src/resources/scripts/archiveRun.sh branches/JOURNAL_HA_BRANCH/src/resources/scripts/bigdata branches/JOURNAL_HA_BRANCH/src/resources/scripts/extractCounters.sh Added Paths: ----------- branches/JOURNAL_HA_BRANCH/bigdata/lib/bnd-0.0.384.jar branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentMultiBlockIterator.java branches/JOURNAL_HA_BRANCH/bigdata/src/releases/RELEASE_0_83_0.txt branches/JOURNAL_HA_BRANCH/bigdata/src/releases/RELEASE_0_83_1.txt branches/JOURNAL_HA_BRANCH/bigdata/src/releases/RELEASE_0_83_2.txt branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestIndexSegmentMultiBlockIterators.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/keys/AbstractUnicodeKeyBuilderTestCase.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/attr/ branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/attr/ServiceInfo.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/disco/ 
branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/disco/DiscoveryTool.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/disco/config/ branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/disco/config/disco.config branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/disco/config/logging.properties branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/util/Util.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/util/config/ branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/util/config/ConfigDeployUtil.java branches/JOURNAL_HA_BRANCH/bigdata-perf/ branches/JOURNAL_HA_BRANCH/bigdata-perf/README.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/RWStore.properties branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/WORMStore.properties branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/build.properties branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/build.xml branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/lib/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/lib/jdom.jar branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/lib/log4j-1.2.12.jar branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/lib/ssj.jar branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/generator/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/generator/DateGenerator.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/generator/Generator.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/generator/NormalDistGenerator.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/generator/NormalDistRangeGenerator.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/generator/ParetoDistGenerator.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/generator/RandomBucket.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/generator/TextGenerator.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/generator/ValueGenerator.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/model/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/model/BSBMResource.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/model/Offer.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/model/Person.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/model/Producer.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/model/Product.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/model/ProductFeature.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/model/ProductType.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/model/RatingSite.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/model/Review.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/model/Vendor.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/qualification/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/qualification/Qualification.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/qualification/QualificationDefaultValues.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/qualification/QueryResult.java 
branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/serializer/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/serializer/NTriples.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/serializer/ObjectBundle.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/serializer/SQLSerializer.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/serializer/Serializer.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/serializer/TriG.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/serializer/Turtle.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/serializer/VirtSerializer.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/serializer/XMLSerializer.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/AbstractParameterPool.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/ClientManager.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/ClientThread.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/CompiledQuery.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/CompiledQueryMix.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/LocalSPARQLParameterPool.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/NetQuery.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/PreCalcParameterPool.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/Query.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/QueryMix.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/SPARQLConnection.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/SQLConnection.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/SQLParameterPool.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/ServerConnection.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/TestDriver.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/TestDriverDefaultValues.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/tools/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/tools/ResultTransform.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/vocabulary/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/vocabulary/BSBM.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/vocabulary/DC.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/vocabulary/FOAF.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/vocabulary/ISO3166.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/vocabulary/RDF.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/vocabulary/RDFS.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/vocabulary/REV.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/java/benchmark/vocabulary/XSD.java branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/givennames.txt 
branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/ignoreQueries.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query1.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query10.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query10desc.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query10valid.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query11.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query11desc.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query11valid.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query12.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query12desc.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query12valid.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query1desc.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query1valid.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query2.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query2desc.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query2valid.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query3.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query3desc.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query3valid.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query4-original.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query4-rewritten.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query4.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query4desc.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query4valid.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query5.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query5desc.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query5valid.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query6.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query6desc.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query6valid.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query7.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query7desc.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query7valid.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query8.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query8desc.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query8valid.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query9-modified.txt 
branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query9-original.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query9.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query9desc.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/query9valid.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/querymix.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/titlewords.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/logging/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/resources/logging/log4j.properties branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/test/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/test/benchmark/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/test/benchmark/bigdata/ branches/JOURNAL_HA_BRANCH/bigdata-perf/bsbm/src/test/benchmark/bigdata/TestBSBM.java branches/JOURNAL_HA_BRANCH/bigdata-perf/btc/ branches/JOURNAL_HA_BRANCH/bigdata-perf/btc/RWStore.properties branches/JOURNAL_HA_BRANCH/bigdata-perf/btc/Splitter.config branches/JOURNAL_HA_BRANCH/bigdata-perf/btc/WORMStore.properties branches/JOURNAL_HA_BRANCH/bigdata-perf/btc/build.properties branches/JOURNAL_HA_BRANCH/bigdata-perf/btc/build.xml branches/JOURNAL_HA_BRANCH/bigdata-perf/btc/src/ branches/JOURNAL_HA_BRANCH/bigdata-perf/btc/src/resources/ branches/JOURNAL_HA_BRANCH/bigdata-perf/btc/src/resources/logging/ branches/JOURNAL_HA_BRANCH/bigdata-perf/btc/src/resources/logging/log4j.properties branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/ branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/LEGAL/ branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/LEGAL/LICENSE.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/README.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/RWStore.properties branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/WORMStore.properties branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/build.properties branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/build.xml branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/lib/ branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/ branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/ branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/ branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/ branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/ branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/uba/ branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/uba/CompressEnum.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/uba/DamlWriter.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/uba/Generator.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/uba/OwlWriter.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/uba/RdfWriter.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/uba/Writer.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/uba/readme.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/ branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/ConfigParser.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/KbConfigParser.java 
branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/KbSpecification.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/QueryConfigParser.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/QuerySpecification.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/QueryTestResult.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/RepositoryCreator.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/Test.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/api/ branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/api/Atom.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/api/Query.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/api/QueryResult.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/api/Repository.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/api/RepositoryFactory.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/bigdata/ branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/bigdata/SparqlRepository.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/bigdata/SparqlRepositoryFactory.java branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/config.kb.example.dldb branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/config.kb.example.sesame branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/config.query.example.dldb branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/config.query.example.sesame branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/readme.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/univ-bench.owl branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/resources/ branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/resources/answers (U1)/ branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/resources/answers (U1)/answers_query14.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/resources/answers (U1)/answers_query6.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/resources/answers (U1)/answers_query8.txt branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/resources/answers (U1)/reference query answers.url branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/resources/config/ branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/resources/config/config.kb.sparql branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/resources/config/config.query.sparql branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/resources/config/config.query1.sparql branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/resources/config/config.query10.sparql branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/resources/config/config.query14.sparql branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/resources/config/config.query2.sparql branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/resources/config/config.query3.sparql branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/resources/config/config.query4.sparql branches/JOURNAL_HA_BRANCH/bigdata-perf/lubm/src/resources/config/config.query6.sparql branches/JOURNAL_HA_B... [truncated message content] |
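Editor's sketch for the Disk/DiskWORM cutover mentioned in the log message above. This is a hypothetical, simplified stand-in, not the actual com.bigdata.journal code: the names BufferMode, WORMStrategy, Disk, DiskWORM, and DiskRW come from the paths listed in this commit, but the factory method, class shapes, and enum layout here are assumptions made for illustration.

    // Illustrative sketch only -- NOT the actual com.bigdata.journal code.
    // Shows the cutover described above: BufferMode.Disk is treated as a
    // synonym for DiskWORM, so both modes select the WORMStrategy.
    public class BufferStrategySketch {

        // Simplified stand-in for com.bigdata.journal.BufferMode.
        enum BufferMode { Transient, Disk, DiskWORM, DiskRW }

        interface IBufferStrategy { }

        static class WORMStrategy implements IBufferStrategy { } // write-once, read-many
        static class RWStrategy implements IBufferStrategy { }   // read-write (RWStore-backed)

        // Hypothetical factory: Disk falls through to the DiskWORM case.
        static IBufferStrategy createStrategy(final BufferMode mode) {
            switch (mode) {
            case Disk:     // legacy name -- same semantics as DiskWORM
            case DiskWORM:
                return new WORMStrategy();
            case DiskRW:
                return new RWStrategy();
            default:
                throw new UnsupportedOperationException(mode.toString());
            }
        }

        public static void main(String[] args) {
            // Both spellings yield the same write-once strategy.
            System.out.println(createStrategy(BufferMode.Disk).getClass().getSimpleName());
            System.out.println(createStrategy(BufferMode.DiskWORM).getClass().getSimpleName());
        }
    }

The switch fall-through is the whole point of the cutover: callers that still configure the journal with the legacy Disk mode get the same WORMStrategy as callers that ask for DiskWORM explicitly.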
From: <tho...@us...> - 2010-08-09 15:43:18
|
Revision: 3438 http://bigdata.svn.sourceforge.net/bigdata/?rev=3438&view=rev Author: thompsonbry Date: 2010-08-09 15:43:08 +0000 (Mon, 09 Aug 2010) Log Message: ----------- Merge trunk to branch [r3391:r3437]. Note: The edits by BrianM to fix the test data URIs are at least partially missing in the HA branch. Therefore we need to reconcile the branch against the trunk in depth in the file system (winmerge, ediff) before merging from the HA branch back into the trunk. It looks like these changes should have been introduced in r2599, which is just before the start of the HA branch, and possibly r3305. Modified Paths: -------------- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/bfs/BigdataFileSystem.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/DumpIndexSegment.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/cache/RingBuffer.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/io/DirectBufferPool.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractLocalTransactionManager.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/WriteExecutorService.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/mdi/PartitionLocator.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/AbstractResource.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/IMutableResource.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/RelationFusedView.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/locator/DefaultResourceLocator.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/locator/ILocatableResource.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/rule/eval/pipeline/DistributedJoinTask.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/AsynchronousOverflowTask.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/BTreeMetadata.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/IndexManager.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/OverflowManager.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/StoreManager.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/AbstractFederation.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/AbstractScaleOutFederation.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/DataService.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/DistributedTransactionService.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/striterator/ChunkedConvertingIterator.java branches/JOURNAL_HA_BRANCH/bigdata/src/resources/logging/log4j.properties branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/AbstractIndexSegmentTestCase.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/TestIndexSegmentMultiBlockIterators.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/btree/keys/TestKeyBuilder.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/cache/TestRingBuffer.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/concurrent/StressTestNonBlockingLockManagerWithTxDag.java 
branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/journal/TestAll.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/journal/TestTransactionService.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/resources/AbstractResourceManagerBootstrapTestCase.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/resources/AbstractResourceManagerTestCase.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/resources/TestReleaseResources.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/AbstractEmbeddedFederationTestCase.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/StressTestConcurrent.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/TestDistributedTransactionServiceRestart.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/TestMove.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/ndx/pipeline/TestMasterTask.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/service/jini/TransactionServer.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/service/jini/lookup/AbstractCachingServiceClient.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/service/jini/master/TaskMaster.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/service/jini/util/DumpFederation.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/DefaultExtensionFactory.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/IExtension.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/IExtensionFactory.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/ILexiconConfiguration.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/IVUtility.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/LexiconConfiguration.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/XSDDecimalIV.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/Term2IdWriteProc.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/Term2IdWriteTask.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/MappedRDFDataLoadMaster.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/AsynchronousStatementBufferFactory.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rules/AbstractRuleFastClosure_3_5_6_7_9.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rules/RDFJoinNexus.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/AbstractTripleStore.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/LocalTripleStore.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/util/Splitter.config branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/util/Splitter.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/ColorsEnumExtension.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/EpochExtension.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/SampleExtensionFactory.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/TestEncodeDecodeKeys.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/AbstractRIOTestCase.java 
branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestAsynchronousStatementBufferFactory.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/small.rdf branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/AbstractTestCase.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/TestAll.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/TestScaleOutTripleStoreWithEmbeddedFederation.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/TestScaleOutTripleStoreWithJiniFederation.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl2.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/NanoSparqlServer.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuadsAndPipelineJoins.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuadsAndPipelineJoinsWithoutInlining.java branches/JOURNAL_HA_BRANCH/build.xml branches/JOURNAL_HA_BRANCH/src/resources/config/README branches/JOURNAL_HA_BRANCH/src/resources/config/bigdataCluster.config branches/JOURNAL_HA_BRANCH/src/resources/config/bigdataCluster16.config Added Paths: ----------- branches/JOURNAL_HA_BRANCH/src/resources/config/bigdataStandalone.config branches/JOURNAL_HA_BRANCH/src/resources/scripts/dumpFed.sh branches/JOURNAL_HA_BRANCH/src/resources/scripts/nanoSparqlServer.sh Property Changed: ---------------- branches/JOURNAL_HA_BRANCH/ branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/attr/ branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/disco/ branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/util/config/ branches/JOURNAL_HA_BRANCH/bigdata-perf/ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/util/ branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/ branches/JOURNAL_HA_BRANCH/dsi-utils/src/java/it/ branches/JOURNAL_HA_BRANCH/dsi-utils/src/test/it/unimi/ branches/JOURNAL_HA_BRANCH/osgi/ branches/JOURNAL_HA_BRANCH/src/resources/config/ Property changes on: branches/JOURNAL_HA_BRANCH ___________________________________________________________________ Modified: svn:ignore - ant-build src bin bigdata*.jar ant-release standalone test* countersfinal.xml events.jnl .settings *.jnl TestInsertRate.out SYSTAP-BBT-result.txt U10load+query *.hprof com.bigdata.cache.TestHardReferenceQueueWithBatchingUpdates.exp.csv commit-log.txt eventLog dist bigdata-test com.bigdata.rdf.stress.LoadClosureAndQueryTest.*.csv + ant-build src bin bigdata*.jar ant-release standalone test* countersfinal.xml events.jnl .settings *.jnl TestInsertRate.out SYSTAP-BBT-result.txt U10load+query *.hprof com.bigdata.cache.TestHardReferenceQueueWithBatchingUpdates.exp.csv commit-log.txt eventLog dist bigdata-test com.bigdata.rdf.stress.LoadClosureAndQueryTest.*.csv DIST.*.tgz REL.*.tgz Modified: svn:mergeinfo - /branches/BTREE_BUFFER_BRANCH:2004-2045 /branches/DEV_BRANCH_27_OCT_2009:2270-2546,2548-2782 /branches/bugfix-btm:2594-2779 /trunk:2763-2785,2918-2980 + /branches/BTREE_BUFFER_BRANCH:2004-2045 /branches/DEV_BRANCH_27_OCT_2009:2270-2546,2548-2782 /branches/bugfix-btm:2594-2779 /trunk:2763-2785,2918-2980,3392-3437 Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/bfs/BigdataFileSystem.java 
=================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/bfs/BigdataFileSystem.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/bfs/BigdataFileSystem.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -420,7 +420,7 @@ } } - + /** * Note: A commit is required in order for a read-committed view to have * access to the registered indices. When running against an Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -2840,7 +2840,8 @@ * might also want to limit the maximum size of the reads. */ - final DirectBufferPool pool = DirectBufferPool.INSTANCE_10M; +// final DirectBufferPool pool = DirectBufferPool.INSTANCE_10M; + final DirectBufferPool pool = DirectBufferPool.INSTANCE; if (true && ((flags & REVERSE) == 0) Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -644,7 +644,18 @@ this.lastCommitTime = lastCommitTime; } - private long lastCommitTime = 0L;// Until the first commit. + + /** + * The lastCommitTime of the {@link Checkpoint} record from which the + * {@link BTree} was loaded. + * <p> + * Note: Made volatile on 8/2/2010 since it is not otherwise obvious what + * would guarantee visibility of this field, though I do seem to remember + * that visibility might be guaranteed by how the BTree class is discovered + * and returned to the class. Still, it does no harm to make this a volatile + * read. + */ + volatile private long lastCommitTime = 0L;// Until the first commit. /** * Return the {@link IDirtyListener}. @@ -1525,45 +1536,63 @@ } - /** - * Load an instance of a {@link BTree} or derived class from the store. The - * {@link BTree} or derived class MUST declare a constructor with the - * following signature: <code> * * <i>className</i>(IRawStore store, Checkpoint checkpoint, BTreeMetadata metadata, boolean readOnly) * * </code> - * - * @param store - * The store. - * @param addrCheckpoint - * The address of a {@link Checkpoint} record for the index. - * @param readOnly - * When <code>true</code> the {@link BTree} will be marked as - * read-only. Marking has some advantages relating to the locking - * scheme used by {@link Node#getChild(int)} since the root node - * is known to be read-only at the time that it is allocated as - * per-child locking is therefore in place for all nodes in the - * read-only {@link BTree}. It also results in much higher - * concurrency for {@link AbstractBTree#touch(AbstractNode)}. - * - * @return The {@link BTree} or derived class loaded from that - * {@link Checkpoint} record. - */ + /** + * Load an instance of a {@link BTree} or derived class from the store. The + * {@link BTree} or derived class MUST declare a constructor with the + * following signature: <code> * * <i>className</i>(IRawStore store, Checkpoint checkpoint, BTreeMetadata metadata, boolean readOnly) * * </code> + * + * @param store + * The store. 
+ * @param addrCheckpoint + * The address of a {@link Checkpoint} record for the index. + * @param readOnly + * When <code>true</code> the {@link BTree} will be marked as + * read-only. Marking has some advantages relating to the locking + * scheme used by {@link Node#getChild(int)} since the root node + * is known to be read-only at the time that it is allocated as + * per-child locking is therefore in place for all nodes in the + * read-only {@link BTree}. It also results in much higher + * concurrency for {@link AbstractBTree#touch(AbstractNode)}. + * + * @return The {@link BTree} or derived class loaded from that + * {@link Checkpoint} record. + * + * @throws IllegalArgumentException + * if store is <code>null</code>. + */ @SuppressWarnings("unchecked") public static BTree load(final IRawStore store, final long addrCheckpoint, final boolean readOnly) { + if (store == null) + throw new IllegalArgumentException(); + /* * Read checkpoint record from store. */ - final Checkpoint checkpoint = Checkpoint.load(store, addrCheckpoint); + final Checkpoint checkpoint; + try { + checkpoint = Checkpoint.load(store, addrCheckpoint); + } catch (Throwable t) { + throw new RuntimeException("Could not load Checkpoint: store=" + + store + ", addrCheckpoint=" + + store.toString(addrCheckpoint), t); + } - /* - * Read metadata record from store. - */ - final IndexMetadata metadata = IndexMetadata.read(store, checkpoint - .getMetadataAddr()); + /* + * Read metadata record from store. + */ + final IndexMetadata metadata; + try { + metadata = IndexMetadata.read(store, checkpoint.getMetadataAddr()); + } catch (Throwable t) { + throw new RuntimeException("Could not read IndexMetadata: store=" + + store + ", checkpoint=" + checkpoint, t); + } if (log.isInfoEnabled()) { Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/DumpIndexSegment.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/DumpIndexSegment.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/DumpIndexSegment.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -36,6 +36,7 @@ import org.apache.log4j.Logger; import com.bigdata.btree.IndexSegment.ImmutableNodeFactory.ImmutableLeaf; +import com.bigdata.io.DirectBufferPool; import com.bigdata.journal.DumpJournal; import com.bigdata.rawstore.IRawStore; @@ -154,6 +155,16 @@ } + // multi-block scan of the index segment. + boolean multiBlockScan = false; // @todo command line option. + if (multiBlockScan) { + + writeBanner("dump leaves using multi-block forward scan"); + + dumpLeavesMultiBlockForwardScan(store); + + } + // dump the leaves using a fast reverse scan. boolean fastReverseScan = true;// @todo command line option if (fastReverseScan) { @@ -524,6 +535,36 @@ } + /** + * Dump leaves using the {@link IndexSegmentMultiBlockIterator}. 
+ * + * @param store + */ + static void dumpLeavesMultiBlockForwardScan(final IndexSegmentStore store) { + + final long begin = System.currentTimeMillis(); + + final IndexSegment seg = store.loadIndexSegment(); + + final ITupleIterator<?> itr = new IndexSegmentMultiBlockIterator(seg, DirectBufferPool.INSTANCE, + null/* fromKey */, null/* toKey */, IRangeQuery.DEFAULT/* flags */); + + int nscanned = 0; + + while(itr.hasNext()) { + + itr.next(); + + nscanned++; + + } + + final long elapsed = System.currentTimeMillis() - begin; + + System.out.println("Visited "+nscanned+" tuples using multi-block forward scan in "+elapsed+" ms"); + + } + static void writeBanner(String s) { System.out.println(bar); Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/cache/RingBuffer.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/cache/RingBuffer.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/cache/RingBuffer.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -387,12 +387,12 @@ if (index < 0 || index >= size) throw new IllegalArgumentException(); - if (index + 1 == size) { - - // remove the LRU position. - return remove(); - - } +// if (index + 1 == size) { +// +// // remove the LRU position. +// return remove(); +// +// } /* * Otherwise we are removing some non-LRU element. @@ -409,7 +409,7 @@ for (;;) { - int nexti = (i + 1) % capacity; // update index. + final int nexti = (i + 1) % capacity; // update index. if (nexti != head) { @@ -581,6 +581,9 @@ public boolean contains(final Object ref) { + if (ref == null) + throw new NullPointerException(); + // MRU to LRU scan. for (int n = 0, i = tail; n < size; n++) { @@ -601,7 +604,8 @@ throw new NullPointerException(); if (c == this) - throw new IllegalArgumentException(); + return true; +// throw new IllegalArgumentException(); for( Object e : c ) { Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/io/DirectBufferPool.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/io/DirectBufferPool.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/io/DirectBufferPool.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -218,12 +218,12 @@ */ public final static DirectBufferPool INSTANCE; - /** - * A JVM-wide pool of direct {@link ByteBuffer}s with a default - * {@link Options#BUFFER_CAPACITY} of <code>10 MB</code>. The main use case - * for the 10M buffers are multi-block IOs for the {@link IndexSegment}s. - */ - public final static DirectBufferPool INSTANCE_10M; +// /** +// * A JVM-wide pool of direct {@link ByteBuffer}s with a default +// * {@link Options#BUFFER_CAPACITY} of <code>10 MB</code>. The main use case +// * for the 10M buffers are multi-block IOs for the {@link IndexSegment}s. +// */ +// public final static DirectBufferPool INSTANCE_10M; /** * An unbounded list of all {@link DirectBufferPool} instances. 
@@ -251,11 +251,11 @@ bufferCapacity// ); - INSTANCE_10M = new DirectBufferPool(// - "10M",// - Integer.MAX_VALUE, // poolCapacity - 10 * Bytes.megabyte32 // bufferCapacity - ); +// INSTANCE_10M = new DirectBufferPool(// +// "10M",// +// Integer.MAX_VALUE, // poolCapacity +// 10 * Bytes.megabyte32 // bufferCapacity +// ); /* * This configuration will block if there is a concurrent demand for Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractLocalTransactionManager.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractLocalTransactionManager.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractLocalTransactionManager.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -7,6 +7,7 @@ import com.bigdata.counters.CounterSet; import com.bigdata.counters.Instrument; +import com.bigdata.resources.StoreManager; import com.bigdata.service.IBigdataFederation; import com.bigdata.service.IDataService; @@ -171,16 +172,18 @@ * Delay between attempts reach the remote service (ms). */ final long delay = 10L; - - /** - * #of attempts to reach the remote service. - * - * Note: delay*maxtries == 1000ms of trying before we give up. - * - * If this is not enough, then consider adding an optional parameter giving - * the time the caller will wait and letting the StoreManager wait longer - * during startup to discover the timestamp service. - */ + + /** + * #of attempts to reach the remote service. + * <p> + * Note: delay*maxtries == 1000ms of trying before we give up, plus however + * long we are willing to wait for service discovery if the problem is + * locating the {@link ITransactionService}. + * <p> + * If this is not enough, then consider adding an optional parameter giving + * the time the caller will wait and letting the {@link StoreManager} wait + * longer during startup to discover the timestamp service. + */ final int maxtries = 100; /** Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/WriteExecutorService.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/WriteExecutorService.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/WriteExecutorService.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -1723,11 +1723,11 @@ // // } - /** - * Flag may be set to force overflow processing during the next group - * commit. The flag is cleared once an overflow has occurred. - */ - public final AtomicBoolean forceOverflow = new AtomicBoolean(false); +// /** +// * Flag may be set to force overflow processing during the next group +// * commit. The flag is cleared once an overflow has occurred. +// */ +// public final AtomicBoolean forceOverflow = new AtomicBoolean(false); /** * Return <code>true</code> if the pre-conditions for overflow processing @@ -1736,7 +1736,8 @@ private boolean isShouldOverflow() { return resourceManager.isOverflowEnabled() - && (forceOverflow.get() || resourceManager.shouldOverflow()); +// && (forceOverflow.get() || resourceManager.shouldOverflow()); + && resourceManager.shouldOverflow(); } @@ -1786,10 +1787,10 @@ log.error("Overflow error: "+serviceName+" : "+t, t); - } finally { - - // clear force flag. - forceOverflow.set(false); +// } finally { +// +// // clear force flag. 
+// forceOverflow.set(false); } Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/mdi/PartitionLocator.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/mdi/PartitionLocator.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/mdi/PartitionLocator.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -185,8 +185,17 @@ } - // Note: used by assertEquals in the test cases. - public boolean equals(Object o) { + /* + * @todo There are some unit tests which depend on this implementation of + * equals. However, since the partition locator Id for a given scale out + * index SHOULD be immutable, running code can rely on partitionId == + * o.partitionId. Therefore the unit tests should be modified to extract an + * "assertSamePartitionLocator" method and rely on that. We could then + * simplify this method to just test the partitionId. That would reduce the + * effort when maintaining hash tables based on the PartitionLocator since + * we would not be comparing the keys, UUIDs, etc. + */ + public boolean equals(final Object o) { if (this == o) return true; Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/AbstractResource.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/AbstractResource.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/AbstractResource.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -582,9 +582,21 @@ } /** + * The default implementation only logs the event. + */ + public AbstractResource<E> init() { + + if (log.isInfoEnabled()) + log.info(toString()); + + return this; + + } + + /** * * @todo Lock service supporting shared locks, leases and lease renewal, - * excalation of shared locks to exclusive locks, deadlock detection, + * escalation of shared locks to exclusive locks, deadlock detection, * and possibly a resource hierarchy. Leases should be Callable * objects that are submitted by the client to its executor service so * that they will renew automatically until cancelled (and will cancel Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/IMutableResource.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/IMutableResource.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/IMutableResource.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -38,7 +38,10 @@ public interface IMutableResource<T> extends ILocatableResource<T> { /** - * Create any logically contained resources (relations, indices). + * Create any logically contained resources (relations, indices). There is + * no presumption that {@link #init()} is suitable for invocation from + * {@link #create()}. Instead, you are responsible for invoking {@link #init()} + * from this method IFF it is appropriate to reuse its initialization logic. 
*/ void create(); Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/RelationFusedView.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/RelationFusedView.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/RelationFusedView.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -21,8 +21,8 @@ */ public class RelationFusedView<E> implements IRelation<E> { - private IRelation<E> relation1; - private IRelation<E> relation2; + final private IRelation<E> relation1; + final private IRelation<E> relation2; public IRelation<E> getRelation1() { @@ -36,6 +36,13 @@ } + // NOP + public RelationFusedView<E> init() { + + return this; + + } + /** * * @param relation1 Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/locator/DefaultResourceLocator.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/locator/DefaultResourceLocator.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/locator/DefaultResourceLocator.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -586,6 +586,8 @@ properties // }); + r.init(); + if(INFO) { log.info("new instance: "+r); Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/locator/ILocatableResource.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/locator/ILocatableResource.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/locator/ILocatableResource.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -45,6 +45,13 @@ */ public interface ILocatableResource<T> { + /** + * Deferred initialization method is automatically invoked when the resource + * is materialized by the {@link IResourceLocator}. The implementation is + * encouraged to strengthen the return type. + */ + public ILocatableResource<T> init(); + /** * The identifying namespace. */ Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/rule/eval/pipeline/DistributedJoinTask.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/rule/eval/pipeline/DistributedJoinTask.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/relation/rule/eval/pipeline/DistributedJoinTask.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -1074,16 +1074,20 @@ final UUID sinkUUID = locator.getDataServiceUUID(); + final IDataService dataService; if (sinkUUID.equals(fed.getServiceUUID())) { - /* - * @todo As an optimization, special case when the downstream - * data service is _this_ data service. - */ + /* + * As an optimization, special case when the downstream + * data service is _this_ data service. 
+ */ + dataService = (IDataService)fed.getService(); + } else { + + dataService = fed.getDataService(sinkUUID); + } - - final IDataService dataService = fed.getDataService(sinkUUID); sink = new JoinTaskSink(fed, locator, this); Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/AsynchronousOverflowTask.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/AsynchronousOverflowTask.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/AsynchronousOverflowTask.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -310,6 +310,7 @@ private final OverflowActionEnum action; private final ViewMetadata vmd; + private final boolean forceCompactingMerge; private final AbstractTask<T> task; /** @@ -319,11 +320,17 @@ * @param vmd * The {@link ViewMetadata} for the index partition for which * that action will be taken. + * @param forceCompactingMerge + * if a compacting merge should be taken even if the view was + * simply copied to the new journal. * @param task * The task which implements that action. */ - public AtomicCallable(final OverflowActionEnum action, - final ViewMetadata vmd, final AbstractTask<T> task) { + public AtomicCallable(final OverflowActionEnum action,// + final ViewMetadata vmd,// + final boolean forceCompactingMerge, // + final AbstractTask<T> task// + ) { if (action == null) throw new IllegalArgumentException(); @@ -337,6 +344,8 @@ this.action = action; this.vmd = vmd; + + this.forceCompactingMerge = forceCompactingMerge; this.task = task; @@ -407,110 +416,112 @@ } - /** - * Schedule a build for each shard and a merge for each shard with a - * non-zero merge priority. Whether a build or a merge is performed for a - * shard will depend on which action is initiated first. When an build or - * merge action is initiated, that choice is atomically registered on the - * {@link ViewMetadata} and any subsequent attempt (within this method - * invocation) to start a build or merge for the same shard will be dropped. - * Processing ends once all tasks scheduled on a "build" service are - * complete. - * <p> - * After actions are considered for each shard for which a compacting merge - * is executed. These after actions can cause a shard split, join, or move. - * Deferring such actions until we have a compact view (comprised of one - * journal and one index segment) greatly improves our ability to decide - * whether a shard should be split or joined and simplifies the logic and - * effort required to split, join or move a shard. - * <p> - * The following is a brief summary of some after actions on compact shards. - * <dl> - * <dt>split</dt> - * <dd>A shard is split when its size on the disk exceeds the (adjusted) - * nominal size of a shard (overflow). By waiting until the shard view is - * compact we have exact information about the size of the shard (it is - * contained in a single {@link IndexSegment}) and we are able to easily - * select the separator key to split the shard.</dd> - * <dt>tailSplit</dt> - * <dd>A tail split may be selected for a shard which has a mostly append - * access pattern. For such access patterns, a normal split would leave the - * left sibling 50% full and the right sibling would quickly fill up with - * continued writes on the tail of the key range. To compensate for this - * access pattern, a tail split chooses a separator key near the end of the - * key range of a shard. 
This results in a left sibling which is mostly full - * and a right sibling which is mostly empty. If the pattern of heavy tail - * append continues, then the left sibling will remain mostly full and the - * new writes will flow mostly into the right sibling.</dd> - * <dt>scatterSplit</dt> - * <dd>A scatter split breaks the first shard for a new scale-out index into - * N shards and scatters those shards across the data services in a - * federation in order to improve the data distribution and potential - * concurrency of the index. By waiting until the shard view is compact we - * are able to quickly select appropriate separator keys for the shard - * splits.</dd> - * <dt>move</dt> - * <dd>A move transfer a shard from this data service to another data - * service in order to reduce the load on this data service. By waiting - * until the shard view is compact we are able to rapidly transfer the bulk - * of the data in the form of a single {@link IndexSegment}.</dd> - * <dt>join</dt> - * <dd>A join combines a shard which is under 50% of its (adjusted) nominal - * maximum size on the disk (underflow) with its right sibling. Joins are - * driven by deletes of tuples from a key range. Since deletes are handled - * as writes where a delete marker is set on the tuple, neither the shard - * size on the disk nor the range count of the shard will decrease until a - * compacting merge. A join is indicated if the size on disk for the shard - * has shrunk considerably since the last time a compacting merge was - * performed for the view (this covers both the case of deletes, which - * reduce the range count, and updates which replace the values in the - * tuples with more compact data). <br> - * There are actually three cases for a join. - * <ol> - * <li>If the right sibling is local, then the shard will be joined with its - * right sibling.</li> - * <li>If the right sibling is remote, then the shard will be moved to the - * data service on which the right sibling is found.</li> - * <li>If the right sibling does not exist, then nothing is done (the last - * shard in a scale-out index does not have a right sibling). The right most - * sibling will remain undercapacity until and unless its left sibling also - * underflows, at which point the left sibling will cause itself to be - * joined with the right sibling (this is done to simplify the logic which - * searches for a sibling with which to join an undercapacity shard).</li> - * </ol> - * </dl> - * - * @param forceCompactingMerges - * When <code>true</code> a compacting merge will be forced for - * each non-compact view. - * - * @throws InterruptedException - * - * @todo The size of the merge queue (or its sum of priorities) may be an - * indication of the load of the node which could be used to decide - * that index partitions should be shed/moved. - * - * @todo For HA, this needs to be a shared priority queue using zk or the - * like since any node in the failover set could do the merge (or - * build). [Alternatively, nodes do the build/merge for the shards for - * which they have the highest affinity out of the failover set.] - * - * FIXME tailSplits currently operate on the mutable BTree rather than - * a compact view). This task does not require a compact view (at - * least, not yet) and generating one for it might be a waste of time. - * Instead it examines where the inserts are occurring in the index - * and splits of the tail if the index is heavy for write append. 
It - * probably could defer that choice until a compact view was some - * percentage of a split (maybe .6?) So, probably an after action for - * the mergeQ. - * - * FIXME joins must track metadata about the previous size on disk of - * the compact view in order to decide when underflow has resulted. In - * order to handle the change in the value of the acceleration factor, - * this data should be stored as the percentage of an adjusted split - * of the last compact view. We can update that metadata each time we - * do a compacting merge. - */ + /** + * Schedule a build for each shard and a merge for each shard with a + * non-zero merge priority. Whether a build or a merge is performed for a + * shard will depend on which action is initiated first. When a build or + * merge action is initiated, that choice is atomically registered on the + * {@link ViewMetadata} and any subsequent attempt (within this method + * invocation) to start a build or merge for the same shard will be dropped. + * Processing ends once all tasks scheduled on a "build" service are + * complete. + * <p> + * After actions are considered for each shard for which a compacting merge + * is executed. These after actions can cause a shard split, join, or move. + * Deferring such actions until we have a compact view (comprised of one + * journal and one index segment) greatly improves our ability to decide + * whether a shard should be split or joined and simplifies the logic and + * effort required to split, join or move a shard. + * <p> + * The following is a brief summary of some after actions on compact shards. + * <dl> + * <dt>split</dt> + * <dd>A shard is split when its size on the disk exceeds the (adjusted) + * nominal size of a shard (overflow). By waiting until the shard view is + * compact we have exact information about the size of the shard (it is + * contained in a single {@link IndexSegment}) and we are able to easily + * select the separator key to split the shard.</dd> + * <dt>tailSplit</dt> + * <dd>A tail split may be selected for a shard which has a mostly append + * access pattern. For such access patterns, a normal split would leave the + * left sibling 50% full and the right sibling would quickly fill up with + * continued writes on the tail of the key range. To compensate for this + * access pattern, a tail split chooses a separator key near the end of the + * key range of a shard. This results in a left sibling which is mostly full + * and a right sibling which is mostly empty. If the pattern of heavy tail + * append continues, then the left sibling will remain mostly full and the + * new writes will flow mostly into the right sibling.</dd> + * <dt>scatterSplit</dt> + * <dd>A scatter split breaks the first shard for a new scale-out index into + * N shards and scatters those shards across the data services in a + * federation in order to improve the data distribution and potential + * concurrency of the index. By waiting until the shard view is compact we + * are able to quickly select appropriate separator keys for the shard + * splits.</dd> + * <dt>move</dt> + * <dd>A move transfers a shard from this data service to another data + * service in order to reduce the load on this data service. By waiting + * until the shard view is compact we are able to rapidly transfer the bulk + * of the data in the form of a single {@link IndexSegment}.</dd> + * <dt>join</dt> + * <dd>A join combines a shard which is under 50% of its (adjusted) nominal + * maximum size on the disk (underflow) with its right sibling. 
Joins are + * driven by deletes of tuples from a key range. Since deletes are handled + * as writes where a delete marker is set on the tuple, neither the shard + * size on the disk nor the range count of the shard will decrease until a + * compacting merge. A join is indicated if the size on disk for the shard + * has shrunk considerably since the last time a compacting merge was + * performed for the view (this covers both the case of deletes, which + * reduce the range count, and updates which replace the values in the + * tuples with more compact data). <br> + * There are actually three cases for a join. + * <ol> + * <li>If the right sibling is local, then the shard will be joined with its + * right sibling.</li> + * <li>If the right sibling is remote, then the shard will be moved to the + * data service on which the right sibling is found.</li> + * <li>If the right sibling does not exist, then nothing is done (the last + * shard in a scale-out index does not have a right sibling). The right most + * sibling will remain undercapacity until and unless its left sibling also + * underflows, at which point the left sibling will cause itself to be + * joined with the right sibling (this is done to simplify the logic which + * searches for a sibling with which to join an undercapacity shard).</li> + * </ol> + * </dl> + * + * @param forceCompactingMerges + * When <code>true</code> a compacting merge will be forced for + * each non-compact view. Compacting merges will be taken in + * priority order and will continue until finished or until the + * journal is nearing its nominal maximum extent. + * + * @throws InterruptedException + * + * @todo The size of the merge queue (or its sum of priorities) may be an + * indication of the load of the node which could be used to decide + * that index partitions should be shed/moved. + * + * @todo For HA, this needs to be a shared priority queue using zk or the + * like since any node in the failover set could do the merge (or + * build). [Alternatively, nodes do the build/merge for the shards for + * which they have the highest affinity out of the failover set.] + * + * FIXME tailSplits currently operate on the mutable BTree rather than + * a compact view). This task does not require a compact view (at + * least, not yet) and generating one for it might be a waste of time. + * Instead it examines where the inserts are occurring in the index + * and splits of the tail if the index is heavy for write append. It + * probably could defer that choice until a compact view was some + * percentage of a split (maybe .6?) So, probably an after action for + * the mergeQ. + * + * FIXME joins must track metadata about the previous size on disk of + * the compact view in order to decide when underflow has resulted. In + * order to handle the change in the value of the acceleration factor, + * this data should be stored as the percentage of an adjusted split + * of the last compact view. We can update that metadata each time we + * do a compacting merge. 
+ */ private List<Future<?>> scheduleAndAwaitTasks( final boolean forceCompactingMerges) throws InterruptedException { @@ -554,21 +565,30 @@ if (log.isInfoEnabled()) log.info("was copied : " + vmd); - continue; + } else { + buildList.add(new Priority<ViewMetadata>(vmd.buildPriority, vmd)); + } - buildList.add(new Priority<ViewMetadata>(vmd.buildPriority, vmd)); + if (vmd.mergePriority > 0d || forceCompactingMerges) { - if (vmd.mergePriority > 0d) { + /* + * Schedule a merge if the priority is non-zero or if compacting + * merges are being forced. + */ - mergeList - .add(new Priority<ViewMetadata>(vmd.mergePriority, vmd)); + mergeList + .add(new Priority<ViewMetadata>(vmd.mergePriority, vmd)); } } // itr.hasNext() + if(log.isInfoEnabled()) { + log.info("Scheduling tasks: buildList="+buildList.size()+", mergeList="+mergeList.size()); + } + /* * Schedule build and merge tasks and await their futures. The tasks are * submitted from a PriorityQueue, so the order in which the tasks are @@ -606,18 +626,23 @@ resourceManager.mergeServiceCorePoolSize); // Schedule merge tasks. - if (!forceCompactingMerges) { - for (Priority<ViewMetadata> p : mergeList) { final ViewMetadata vmd = p.v; - if (vmd.mergePriority > 0) { + if (vmd.mergePriority > 0 || forceCompactingMerges) { + if(forceCompactingMerges && OverflowActionEnum.Copy.equals(vmd.getAction())) { + + vmd.clearCopyAction(); + + } + // Schedule a compacting merge. final FutureTask<?> ft = new FutureTask( new AtomicCallable(OverflowActionEnum.Merge, - vmd, new CompactingMergeTask(vmd))); + vmd, forceCompactingMerges, + new CompactingMergeTask(vmd))); mergeFutures.add(ft); mergeService.execute(ft); @@ -625,8 +650,6 @@ } - } - // Schedule build tasks. for (Priority<ViewMetadata> p : buildList) { @@ -636,7 +659,8 @@ // Force a compacting merge. final FutureTask<?> ft = new FutureTask(new AtomicCallable( - OverflowActionEnum.Merge, vmd, + OverflowActionEnum.Merge, vmd, + forceCompactingMerges, new CompactingMergeTask(vmd))); mergeFutures.add(ft); mergeService.execute(ft); @@ -646,6 +670,7 @@ // Schedule a build. final FutureTask<?> ft = new FutureTask(new AtomicCallable( OverflowActionEnum.Build, vmd, + forceCompactingMerges, new IncrementalBuildTask(vmd))); buildFutures.add(ft); buildService.execute(ft); Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/BTreeMetadata.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/BTreeMetadata.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/BTreeMetadata.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -280,6 +280,25 @@ actionRef.set(action); } + + /** + * Used to force clear a {@link OverflowActionEnum#Copy} action + * when we will force a compacting merge. This allows us to do + * compacting merges on shard views which would otherwise simply + * be copied onto the new journal. 
+ */ + void clearCopyAction() { + + lock.lock(); + try { + if(actionRef.get().equals(OverflowActionEnum.Copy)) { + actionRef.set(null/*clear*/); + } + } finally { + lock.unlock(); + } + + } /** * Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/IndexManager.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/IndexManager.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/IndexManager.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -1684,16 +1684,28 @@ final StringBuilder sb = new StringBuilder(); final AbstractJournal journal = getJournal(timestamp); + + if (journal == null) { + /* + * This condition can occur if there are no shard views on the + * previous journal and the releaseAge is zero since the previous + * journal can be purged (deleted) before this method is invoked. + * This situation arises in a few of the unit tests which begin with + * an empty journal and copy everything onto the new journal such + * that the old journal can be immediately released. + */ + return "No journal: timestamp=" + timestamp; + } sb.append("timestamp="+timestamp+"\njournal="+journal.getResourceMetadata()); // historical view of Name2Addr as of that timestamp. - final ITupleIterator itr = journal.getName2Addr(timestamp) + final ITupleIterator<?> itr = journal.getName2Addr(timestamp) .rangeIterator(); while (itr.hasNext()) { - final ITuple tuple = itr.next(); + final ITuple<?> tuple = itr.next(); final Entry entry = EntrySerializer.INSTANCE .deserialize(new DataInputBuffer(tuple.getValue())); Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/OverflowManager.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/OverflowManager.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/OverflowManager.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -287,6 +287,14 @@ */ protected final AtomicBoolean asyncOverflowEnabled = new AtomicBoolean(true); + /** + * Flag may be set to force overflow processing during the next group + * commit. The flag is cleared by {@link #overflow()}. + * + * @see DataService#forceOverflow(boolean, boolean) + */ + public final AtomicBoolean forceOverflow = new AtomicBoolean(false); + /** * A flag that may be set to force the next asynchronous overflow to perform * a compacting merge for all indices that are not simply copied over to the @@ -295,6 +303,8 @@ * made compact and SHOULD NOT be used for deployed federations</strong>). * The state of the flag is cleared each time asynchronous overflow * processing begins. + * + * @see DataService#forceOverflow(boolean, boolean) */ public final AtomicBoolean compactingMerge = new AtomicBoolean(false); @@ -1704,7 +1714,7 @@ } if(overflowEnabled) { - + // @todo defer allocation until init() outside of ctor. overflowService = Executors.newFixedThreadPool(1, new DaemonThreadFactory((serviceName == null ? "" : serviceName + "-") @@ -1849,6 +1859,19 @@ */ public boolean shouldOverflow() { + if(forceOverflow.get()) { + + /* + * Note: forceOverflow trumps everything else. + */ + + if (log.isInfoEnabled()) + log.info("Forcing overflow."); + + return true; + + } + if (isTransient()) { /* @@ -1886,7 +1909,7 @@ return false; } - + /* * Look for overflow condition on the "live" journal. 
*/ @@ -1959,8 +1982,18 @@ */ public Future<Object> overflow() { - assert overflowAllowed.get(); +// assert overflowAllowed.get(); + /* + * Atomically test and clear the flag. The local boolean is inspected + * below. When true, asynchronous overflow processing will occur unless + * an error occurs during synchronous overflow processing. This ensures + * that we can force a compacting merge on the shards of a data service + * even if that data service has not buffered sufficient writes to warrant + * a build on any of the index segments. + */ + final boolean forceOverflow = this.forceOverflow.getAndSet(false/* newValue */); + final Event e = new Event(getFederation(), new EventResource(), EventType.SynchronousOverflow).addDetail( "synchronousOverflowCounter", @@ -1982,7 +2015,12 @@ if (asyncOverflowEnabled.get()) { - if (overflowMetadata.postProcess) { + /* + * Do overflow processing if overflow is being forced OR if we + * need to do a build for at least one index partition. + */ + + if (forceOverflow || overflowMetadata.postProcess) { /* * Post-processing SHOULD be performed. Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/StoreManager.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/StoreManager.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/resources/StoreManager.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -674,7 +674,7 @@ protected final long accelerateOverflowThreshold; /** - * Used to run the {@link Startup}. + * Used to run the {@link Startup}. @todo defer to init() outside of ctor. Also, defer {@link Startup} until init() outside of ctor. */ private final ExecutorService startupService = Executors .newSingleThreadExecutor(new DaemonThreadFactory @@ -1416,22 +1416,45 @@ * Verify that the concurrency manager has been set and wait a while * if it is not available yet. */ - if (log.isInfoEnabled()) - log.info("Waiting for concurrency manager"); - for (int i = 0; i < 5; i++) { - try { - getConcurrencyManager(); - } catch (IllegalStateException ex) { - Thread.sleep(100/* ms */); - } + { + int nwaits = 0; + while (true) { + try { + getConcurrencyManager(); + break; + } catch (IllegalStateException ex) { + Thread.sleep(100/* ms */); + if (++nwaits % 50 == 0) + log.warn("Waiting for concurrency manager"); + } + } } - getConcurrencyManager(); - if (Thread.interrupted()) - throw new InterruptedException(); - /* - * Look for pre-existing data files. - */ + try { + final IBigdataFederation<?> fed = getFederation(); + if (fed == null) { + /* + * Some of the unit tests do not start the txs until after + * the DataService. For those unit tests getFederation() + * will return null during startup() of the DataService. To + * have a common code path, we throw the exception here + * which is caught below. + */ + throw new UnsupportedOperationException(); + } + while (true) { + if (fed.getTransactionService() != null) { + break; + } + log.warn("Waiting for transaction service discovery"); + } + } catch (UnsupportedOperationException ex) { + log.warn("Federation not available - running in test case?"); + } + + /* + * Look for pre-existing data files. 
+ */ if (!isTransient) { if (log.isInfoEnabled()) Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/AbstractFederation.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/AbstractFederation.java 2010-08-09 12:38:45 UTC (rev 3437) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/service/AbstractFederation.java 2010-08-09 15:43:08 UTC (rev 3438) @@ -829,7 +829,7 @@ } /** - * Delegated. + * Delegated. {@inheritDoc} */ public T getService() { @@ -840,7 +840,7 @@ } /** - * Delegated. + * Delegated. {@inheritDoc} */ public String getServiceName() { @@ -851,7 +851,7 @@ } /** - * Delegated. + * Delegated. {@inheritDoc} */ public Class getServiceIface() { @@ -862,7 +862,7 @@ } /** - * Delegated. + * Delegated. {@inheritDoc} */ public UUID getSer... [truncated message content] |
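The OverflowManager hunks in this change set consume the forceOverflow flag with a single atomic getAndSet(false) rather than the earlier pattern of reading the flag and clearing it in a finally block. A minimal sketch of that test-and-clear idiom follows; it assumes nothing from the bigdata code base, and the class and method names are hypothetical:

    import java.util.concurrent.atomic.AtomicBoolean;

    /**
     * Illustrates the atomic test-and-clear idiom: getAndSet(false) reads
     * and clears the flag in one step, so a request raised by another
     * thread between the read and the clear can not be dropped.
     */
    public class ForceFlagExample {

        private final AtomicBoolean forceFlag = new AtomicBoolean(false);

        /** Any thread may request that the next cycle be forced. */
        public void force() {
            forceFlag.set(true);
        }

        /** The worker consumes (and clears) any pending request atomically. */
        public boolean consumeForceRequest() {
            return forceFlag.getAndSet(false);
        }

        public static void main(final String[] args) {
            final ForceFlagExample x = new ForceFlagExample();
            x.force();
            System.out.println(x.consumeForceRequest()); // true : request consumed
            System.out.println(x.consumeForceRequest()); // false: already cleared
        }
    }

Because getAndSet performs the read and the write as one atomic step, the worker never observes the flag as set without also clearing it, which a separate read followed by a clear in a finally block can not guarantee under concurrent set() calls.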
From: <tho...@us...> - 2010-09-28 12:14:10
|
Revision: 3655 http://bigdata.svn.sourceforge.net/bigdata/?rev=3655&view=rev Author: thompsonbry Date: 2010-09-28 12:14:00 +0000 (Tue, 28 Sep 2010) Log Message: ----------- Merged trunk to branch [r3438:r3654]. No conflicts reported. JOURNAL_HA_BRANCH. Modified Paths: -------------- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/cache/RingBuffer.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/BufferMode.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/util/config/NicUtil.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/cache/TestRingBuffer.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/journal/TestAll.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/journal/TestTransactionService.java branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/ndx/pipeline/TestMasterTask.java branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/MappedRDFDataLoadMaster.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/MappedRDFFileLoadTask.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/SingleResourceReaderTask.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/AsynchronousStatementBufferFactory.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/BasicRioLoader.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/IRioLoader.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/PresortRioLoader.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPO.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/DataLoader.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/util/Splitter.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/AbstractRIOTestCase.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/EDSAsyncLoader.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestAsynchronousStatementBufferFactory.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestOptionals.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/fastload.properties branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/stress/LoadClosureAndQueryTest.java branches/JOURNAL_HA_BRANCH/build.xml branches/JOURNAL_HA_BRANCH/src/resources/config/bigdataCluster.config branches/JOURNAL_HA_BRANCH/src/resources/config/bigdataCluster16.config branches/JOURNAL_HA_BRANCH/src/resources/config/bigdataStandalone.config Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/cache/RingBuffer.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/cache/RingBuffer.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/cache/RingBuffer.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -154,7 +154,7 @@ public boolean add(final T ref) throws IllegalStateException { if (ref == null) - throw new IllegalArgumentException(); + throw new NullPointerException(); beforeOffer( ref ); @@ -178,7 +178,7 @@ public boolean offer(final T ref) { if (ref == null) - throw new IllegalArgumentException(); + throw new NullPointerException(); beforeOffer( ref ); @@ -491,10 +491,9 @@ */ final public boolean scanHead(final int nscan, final T 
ref) { - assert nscan > 0; -// if (nscan <= 0) -// throw new IllegalArgumentException(); -// + if (nscan <= 0) + throw new IllegalArgumentException(); + if (ref == null) throw new IllegalArgumentException(); Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/BufferMode.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/BufferMode.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/BufferMode.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -59,15 +59,16 @@ Transient(false/* stable */, true/* fullyBuffered */,StoreTypeEnum.WORM), /** + * <strong>This mode is not being actively developed and should not be used + * outside of unit tests.</strong> * <p> - * A direct buffer is allocated for the file image. Writes are applied - * to the buffer. The buffer tracks dirty slots regardless of the - * transaction that wrote them and periodically writes dirty slots - * through to disk. On commit, any dirty index or allocation nodes are - * written onto the buffer and all dirty slots on the buffer. Dirty - * slots in the buffer are then synchronously written to disk, the - * appropriate root block is updated, and the file is (optionally) - * flushed to disk. + * A direct buffer is allocated for the file image. Writes are applied to + * the buffer. The buffer tracks dirty slots regardless of the transaction + * that wrote them and periodically writes dirty slots through to disk. On + * commit, any dirty index or allocation nodes are written onto the buffer + * and all dirty slots on the buffer. Dirty slots in the buffer are then + * synchronously written to disk, the appropriate root block is updated, and + * the file is (optionally) flushed to disk. * </p> * <p> * This option wires an image of the journal file into memory and * @@ -79,6 +80,9 @@ Direct(true/* stable */, true/* fullyBuffered */,StoreTypeEnum.WORM), /** + * <strong>This mode is not being actively developed and should not be used + * outside of unit tests. Memory mapped IO has the fatal weakness under Java + * that you can not reliably close or extend the backing file.</strong> * <p> * A memory-mapped buffer is allocated for the file image. Writes are * applied to the buffer. Reads read from the buffer.
On commit, the map is Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/util/config/NicUtil.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/util/config/NicUtil.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/util/config/NicUtil.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -26,28 +26,20 @@ package com.bigdata.util.config; import java.io.IOException; -import java.net.InetAddress; import java.net.Inet4Address; +import java.net.InetAddress; import java.net.InterfaceAddress; -import java.net.MalformedURLException; import java.net.NetworkInterface; import java.net.SocketException; import java.net.UnknownHostException; +import java.util.Collections; +import java.util.Enumeration; import java.util.HashMap; import java.util.List; import java.util.Map; -import java.util.Enumeration; -import java.util.Collections; -import java.util.logging.LogRecord; import org.apache.log4j.Level; -import org.apache.log4j.Logger; -import net.jini.config.Configuration; -import net.jini.config.ConfigurationException; -import com.sun.jini.config.Config; -import com.sun.jini.logging.Levels; - /** * Utility class that provides a set of static convenience methods * related to processing information about the current node's Network @@ -400,34 +392,34 @@ return macAddr; } - /** - * Three-argument version of <code>getInetAddress</code> that retrieves - * the desired interface name from the given <code>Configuration</code> - * parameter. - */ - public static InetAddress getInetAddress(Configuration config, - String componentName, - String nicNameEntry) - { - String nicName = "NoNetworkInterfaceName"; - try { - nicName = (String)Config.getNonNullEntry(config, - componentName, - nicNameEntry, - String.class, - "eth0"); - } catch(ConfigurationException e) { - jiniConfigLogger.log(WARNING, e - +" - [componentName="+componentName - +", nicNameEntry="+nicNameEntry+"]"); - utilLogger.log(Level.WARN, e - +" - [componentName="+componentName - +", nicNameEntry="+nicNameEntry+"]"); - e.printStackTrace(); - return null; - } - return ( getInetAddress(nicName, 0, null, false) ); - } +// /** +// * Three-argument version of <code>getInetAddress</code> that retrieves +// * the desired interface name from the given <code>Configuration</code> +// * parameter. +// */ +// public static InetAddress getInetAddress(Configuration config, +// String componentName, +// String nicNameEntry) +// { +// String nicName = "NoNetworkInterfaceName"; +// try { +// nicName = (String)Config.getNonNullEntry(config, +// componentName, +// nicNameEntry, +// String.class, +// "eth0"); +// } catch(ConfigurationException e) { +// jiniConfigLogger.log(WARNING, e +// +" - [componentName="+componentName +// +", nicNameEntry="+nicNameEntry+"]"); +// utilLogger.log(Level.WARN, e +// +" - [componentName="+componentName +// +", nicNameEntry="+nicNameEntry+"]"); +// e.printStackTrace(); +// return null; +// } +// return ( getInetAddress(nicName, 0, null, false) ); +// } // What follows are a number of versions of the getIpAddress method // provided for convenience. 
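With the Configuration-based overload removed above, callers are expected to resolve the service address through the surviving getIpAddress() convenience methods instead. A minimal sketch of that call pattern, using the getIpAddress(String, String, boolean) overload that appears in the JiniServiceConfiguration change later in this revision; the wrapper class and error handling here are illustrative only, not part of this commit:

    import java.io.IOException;

    import com.bigdata.util.config.NicUtil;

    // Hypothetical caller, shown for illustration only.
    public class ServiceAddressExample {

        public static void main(final String[] args) throws IOException {

            // Resolve the host address directly via NicUtil rather than
            // reading the NIC name from a Jini Configuration entry.
            final String serviceIpAddr =
                    NicUtil.getIpAddress("default.nic", "default", false);

            if (serviceIpAddr == null)
                throw new IOException("Can't get a host ip address");

            System.out.println("service address: " + serviceIpAddr);

        }

    }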
Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/cache/TestRingBuffer.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/cache/TestRingBuffer.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/cache/TestRingBuffer.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -425,8 +425,8 @@ try { buffer.add(null); - fail("Expecting: " + IllegalArgumentException.class); - } catch (IllegalArgumentException ex) { + fail("Expecting: " + NullPointerException.class); + } catch (NullPointerException ex) { if (log.isInfoEnabled()) log.info("Ignoring expected exception: " + ex); } @@ -438,8 +438,8 @@ try { buffer.offer(null); - fail("Expecting: " + IllegalArgumentException.class); - } catch (IllegalArgumentException ex) { + fail("Expecting: " + NullPointerException.class); + } catch (NullPointerException ex) { if (log.isInfoEnabled()) log.info("Ignoring expected exception: " + ex); } Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/journal/TestAll.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/journal/TestAll.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/journal/TestAll.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -93,7 +93,25 @@ suite.addTest( TestTransientJournal.suite() ); - suite.addTest( TestDirectJournal.suite() ); + /* + * Commented out since this mode is not used and there is an occasional + * test failure in: + * + * com.bigdata.journal.TestConcurrentJournal.test_concurrentReadersAreOk + * + * This error is stochastic and appears to be restricted to + * BufferMode#Direct. This is a journal mode backed by a fixed-capacity + * native ByteBuffer serving as a write-through cache to the disk. Since + * the buffer can not be extended, that journal mode is not being + * exercised by anything. If you like, I can deprecate the Direct + * BufferMode and disable its test suite. (There is also a "Mapped" + * BufferMode whose tests we are not running due to problems with Java + * releasing native heap ByteBuffers and closing memory mapped files. + * Its use is strongly discouraged in the javadoc, but it has not been + * excised from the code since it might be appropriate for some + * applications.) + */ +// suite.addTest( TestDirectJournal.suite() ); /* * Note: The mapped journal is somewhat problematic and its tests are Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/journal/TestTransactionService.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/journal/TestTransactionService.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/journal/TestTransactionService.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -40,6 +40,7 @@ import com.bigdata.service.AbstractTransactionService; import com.bigdata.service.CommitTimeIndex; import com.bigdata.service.TxServiceRunState; +import com.bigdata.util.MillisecondTimestampFactory; /** * Unit tests of the {@link AbstractTransactionService} using a mock client. @@ -259,6 +260,24 @@ } + /** + * FIXME This currently waits until at least two milliseconds have + * elapsed.
This is a workaround for + * {@link TestTransactionService#test_newTx_readOnly()} until <a href= + * "https://sourceforge.net/apps/trac/bigdata/ticket/145" >ISSUE#145 + * </a> is resolved. This override of {@link #nextTimestamp()} should + * be removed once that issue is fixed. + */ + @Override + public long nextTimestamp() { + + // skip at least one millisecond. + MillisecondTimestampFactory.nextMillis(); + + return MillisecondTimestampFactory.nextMillis(); + + } + } /** @@ -596,17 +615,25 @@ * GT the lastCommitTime since that could allow data not yet committed to * become visible during the transaction (breaking isolation). * <p> - * A commitTime is identified by looking up the callers timestamp in a log of - * the historical commit times and returning the first historical commit + * A commitTime is identified by looking up the caller's timestamp in a log + * of the historical commit times and returning the first historical commit + * time LTE the caller's timestamp. * <p> * The transaction start time is then chosen from the half-open interval * <i>commitTime</i> (inclusive lower bound) : <i>nextCommitTime</i> * (exclusive upper bound). * - * @throws IOException + * @throws IOException * - * @todo This test fails occasionally. I have not figured out why yet. BBT + * @todo This test fails occasionally. This occurs if the timestamps + * assigned by the {@link MockTransactionService} are only 1 unit + * apart. When that happens, there are not enough distinct values + * available to allow 2 concurrent read-only transactions. See <a + * href= + * "https://sourceforge.net/apps/trac/bigdata/ticket/145">ISSUE#145 + * </a>. Also see {@link MockTransactionService#nextTimestamp()} + * which has been overridden to guarantee that there are at least + * two distinct values such that this test will pass.
*/ public void test_newTx_readOnly() throws IOException { Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/ndx/pipeline/TestMasterTask.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/ndx/pipeline/TestMasterTask.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/test/com/bigdata/service/ndx/pipeline/TestMasterTask.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -194,20 +194,34 @@ * * @throws InterruptedException * @throws ExecutionException + * + * @todo This test now logs a warning rather than failing pending resolution + * of https://sourceforge.net/apps/trac/bigdata/ticket/147 */ public void test_stress_startWriteStop2() throws InterruptedException, ExecutionException { - for (int i = 0; i < 10000; i++) { + final int LIMIT = 10000; + int nerr = 0; + for (int i = 0; i < LIMIT; i++) { try { doStartWriteStop2Test(); } catch (Throwable t) { - fail("Pass#=" + i, t); + // fail("Pass#=" + i, t); + log.warn("Would have failed: pass#=" + i + ", cause=" + t); + nerr++; } } + if (nerr > 0) { + + log.error("Test would have failed: nerrs=" + nerr + " out of " + + LIMIT + " trials"); + + } + } /** Modified: branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -34,7 +34,6 @@ import java.io.OutputStreamWriter; import java.io.StringWriter; import java.io.Writer; -import java.net.InetAddress; import java.util.Arrays; import java.util.Date; import java.util.Enumeration; @@ -131,8 +130,6 @@ public final Properties properties; public final String[] jiniOptions; - private final String serviceIpAddr; - protected void toString(StringBuilder sb) { super.toString(sb); @@ -178,12 +175,6 @@ } else { log.warn("groups = " + Arrays.toString(this.groups)); } - - try { - this.serviceIpAddr = NicUtil.getIpAddress("default.nic", "default", false); - } catch(IOException e) { - throw new ConfigurationException(e.getMessage(), e); - } } /** @@ -480,6 +471,9 @@ final ServiceDir serviceDir = new ServiceDir(this.serviceDir); + String serviceIpAddr = NicUtil.getIpAddress ( "default.nic", "default", false ) ; + if ( null == serviceIpAddr ) + throw new IOException ( "Can't get a host ip address" ) ; final Hostname hostName = new Hostname(serviceIpAddr); final ServiceUUID serviceUUID = new ServiceUUID(this.serviceUUID); Modified: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/MappedRDFDataLoadMaster.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/MappedRDFDataLoadMaster.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/MappedRDFDataLoadMaster.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -272,7 +272,18 @@ // // /** {@value #DEFAULT_MAX_TRIES} */ // int DEFAULT_MAX_TRIES = 3; - + + /** + * The value that will be used for the graph/context co-ordinate when + * loading data represented in a triple format into a quad store. 
+ */ + String DEFAULT_GRAPH = "defaultGraph" ; + + /** + * TODO Should we always enforce a real value? i.e. provide a real default + * or abort the load. + */ + String DEFAULT_DEFAULT_GRAPH = null ; } /** @@ -402,6 +413,12 @@ private transient RDFFormat rdfFormat; /** + * The value that will be used for the graph/context co-ordinate when + * loading data represented in a triple format into a quad store. + */ + public final String defaultGraph ; + + /** * Force the load of the NxParser integration class and its registration * of the NQuadsParser#nquads RDFFormat. * @@ -496,6 +513,8 @@ sb.append(", " + ConfigurationOptions.RDF_FORMAT + "=" + rdfFormat); + sb.append(", " + ConfigurationOptions.DEFAULT_GRAPH + "=" + defaultGraph) ; + sb.append(", " + ConfigurationOptions.FORCE_OVERFLOW_BEFORE_CLOSURE + "=" + forceOverflowBeforeClosure); @@ -601,6 +620,10 @@ } + defaultGraph = (String) config.getEntry(component, + ConfigurationOptions.DEFAULT_GRAPH, String.class, + ConfigurationOptions.DEFAULT_DEFAULT_GRAPH); + rejectedExecutionDelay = (Long) config.getEntry( component, ConfigurationOptions.REJECTED_EXECUTION_DELAY, Long.TYPE, @@ -979,6 +1002,7 @@ jobState.ontology,//file jobState.ontology.getPath(),//baseURI jobState.getRDFFormat(),// + jobState.defaultGraph, jobState.ontologyFileFilter // ); Modified: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/MappedRDFFileLoadTask.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/MappedRDFFileLoadTask.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/MappedRDFFileLoadTask.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -223,6 +223,7 @@ jobState.valuesInitialCapacity,// jobState.bnodesInitialCapacity,// jobState.getRDFFormat(), // + jobState.defaultGraph, parserOptions,// false, // deleteAfter is handled by the master! jobState.parserPoolSize, // Modified: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/SingleResourceReaderTask.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/SingleResourceReaderTask.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/SingleResourceReaderTask.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -186,7 +186,7 @@ // run the parser. // @todo reuse the same underlying parser instance? - loader.loadRdf(reader, baseURL, rdfFormat, parserOptions); + loader.loadRdf(reader, baseURL, rdfFormat, null, parserOptions); success = true; Modified: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/AsynchronousStatementBufferFactory.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/AsynchronousStatementBufferFactory.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/AsynchronousStatementBufferFactory.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -356,8 +356,14 @@ * The default {@link RDFFormat}. */ private final RDFFormat defaultFormat; - + /** + * The value that will be used for the graph/context co-ordinate when + * loading data represented in a triple format into a quad store. + */ + private final String defaultGraph; + + /** * Options for the {@link RDFParser}. 
*/ private final RDFParserOptions parserOptions; @@ -1423,7 +1429,7 @@ try { // run the parser. new PresortRioLoader(buffer).loadRdf(reader, baseURL, - rdfFormat, parserOptions); + rdfFormat, defaultGraph, parserOptions); } finally { reader.close(); } @@ -1490,6 +1496,9 @@ * {@link BNode}s parsed from a single document. * @param defaultFormat * The default {@link RDFFormat} which will be assumed. + * @param defaultGraph + * The value that will be used for the graph/context co-ordinate when + * loading data represented in a triple format into a quad store. * @param parserOptions * Options for the {@link RDFParser}. * @param deleteAfter @@ -1529,6 +1538,7 @@ final int valuesInitialCapacity,// final int bnodesInitialCapacity, // final RDFFormat defaultFormat,// + final String defaultGraph,// final RDFParserOptions parserOptions,// final boolean deleteAfter,// final int parserPoolSize,// @@ -1566,6 +1576,8 @@ this.defaultFormat = defaultFormat; + this.defaultGraph = defaultGraph; + this.parserOptions = parserOptions; this.deleteAfter = deleteAfter; Modified: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/BasicRioLoader.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/BasicRioLoader.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/BasicRioLoader.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -37,6 +37,8 @@ import org.openrdf.rio.RDFParser; import org.openrdf.rio.Rio; +import com.bigdata.rdf.model.BigdataURI; + /** * Parses data but does not load it into the indices. * @@ -74,6 +76,8 @@ private final ValueFactory valueFactory; + protected String defaultGraph; + public BasicRioLoader(final ValueFactory valueFactory) { if (valueFactory == null) @@ -153,18 +157,20 @@ } final public void loadRdf(final InputStream is, final String baseURI, - final RDFFormat rdfFormat, final RDFParserOptions options) + final RDFFormat rdfFormat, final String defaultGraph, + final RDFParserOptions options) throws Exception { - loadRdf2(is, baseURI, rdfFormat, options); + loadRdf2(is, baseURI, rdfFormat, defaultGraph, options); } final public void loadRdf(final Reader reader, final String baseURI, - final RDFFormat rdfFormat, final RDFParserOptions options) + final RDFFormat rdfFormat, final String defaultGraph, + final RDFParserOptions options) throws Exception { - loadRdf2(reader, baseURI, rdfFormat, options); + loadRdf2(reader, baseURI, rdfFormat, defaultGraph, options); } @@ -180,7 +186,7 @@ * @throws Exception */ protected void loadRdf2(final Object source, final String baseURI, - final RDFFormat rdfFormat, final RDFParserOptions options) + final RDFFormat rdfFormat, final String defaultGraph, final RDFParserOptions options) throws Exception { if (source == null) @@ -198,6 +204,8 @@ if (log.isInfoEnabled()) log.info("format=" + rdfFormat + ", options=" + options); + this.defaultGraph = defaultGraph ; + final RDFParser parser = getParser(rdfFormat); // apply options to the parser @@ -212,7 +220,7 @@ // Note: reset so that rates are correct for each source loaded. 
stmtsAdded = 0; - + try { before(); Modified: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/IRioLoader.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/IRioLoader.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/IRioLoader.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -72,12 +72,14 @@ * The base URL for those data. * @param rdfFormat * The interchange format. + * @param defaultGraph + * The default graph. * @param options * Options to be applied to the {@link RDFParser}. * @throws Exception */ public void loadRdf(Reader reader, String baseURL, RDFFormat rdfFormat, - RDFParserOptions options) throws Exception; + String defaultGraph, RDFParserOptions options) throws Exception; /** * Parse RDF data. @@ -88,11 +90,13 @@ * The base URL for those data. * @param rdfFormat * The interchange format. + * @param defaultGraph + * The default graph. * @param options * Options to be applied to the {@link RDFParser}. * @throws Exception */ public void loadRdf(InputStream is, String baseURI, RDFFormat rdfFormat, - RDFParserOptions options) throws Exception; + String defaultGraph, RDFParserOptions options) throws Exception; } Modified: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/PresortRioLoader.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/PresortRioLoader.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/PresortRioLoader.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -23,11 +23,14 @@ */ package com.bigdata.rdf.rio; +import org.openrdf.model.Resource; import org.openrdf.model.Statement; import org.openrdf.model.Value; import org.openrdf.rio.RDFHandler; import org.openrdf.rio.RDFHandlerException; +import com.bigdata.rdf.model.BigdataURI; + /** * Statement handler for the RIO RDF Parser that writes on a * {@link StatementBuffer}. @@ -45,6 +48,12 @@ final protected IStatementBuffer<?> buffer; /** + * The value that will be used for the graph/context co-ordinate when + * loading data represented in a triple format into a quad store. + */ + private BigdataURI defaultGraphURI = null ; + + /** * Sets up parser to load RDF. * * @param buffer @@ -58,7 +67,7 @@ this.buffer = buffer; } - + /** * bulk insert the buffered data into the store. */ @@ -87,8 +96,11 @@ public RDFHandler newRDFHandler() { + defaultGraphURI = null != defaultGraph && 4 == buffer.getDatabase ().getSPOKeyArity () + ? buffer.getDatabase ().getValueFactory ().createURI ( defaultGraph ) + : null + ; return this; - } public void handleStatement( final Statement stmt ) { @@ -98,9 +110,13 @@ log.debug(stmt); } - + + Resource graph = stmt.getContext() ; + if ( null == graph + && null != defaultGraphURI ) // only true when we know we are loading a quad store + graph = defaultGraphURI ; // buffer the write (handles overflow). 
- buffer.add( stmt.getSubject(), stmt.getPredicate(), stmt.getObject(), stmt.getContext() ); + buffer.add( stmt.getSubject(), stmt.getPredicate(), stmt.getObject(), graph ); stmtsAdded++; Modified: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPO.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPO.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPO.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -556,11 +556,18 @@ final int p = this.p.hashCode(); final int o = this.o.hashCode(); - - // Note: historical behavior was (s,p,o) based hash. - hashCode = 961 * ((int) (s ^ (s >>> 32))) + 31 - * ((int) (p ^ (p >>> 32))) + ((int) (o ^ (o >>> 32))); + /* + * Note: The historical behavior was based on the int64 term + * identifiers. Since the hash code is now computed from the int32 + * hash codes of the (s,p,o) IV objects, the original bit math was + * resulting in a hash code which was always zero (Java masks the + * shift distance for an int to 5 bits, so s >>> 32 is just s and + * s ^ (s >>> 32) therefore reduces to zero). + */ + hashCode = 961 * s + 31 * p + o; +// hashCode = 961 * ((int) (s ^ (s >>> 32))) + 31 +// * ((int) (p ^ (p >>> 32))) + ((int) (o ^ (o >>> 32))); + } return hashCode; Modified: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/DataLoader.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/DataLoader.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/DataLoader.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -640,7 +640,7 @@ final LoadStats totals = new LoadStats(); - loadData3(totals, reader, baseURL, rdfFormat, true/*endOfBatch*/); + loadData3(totals, reader, baseURL, rdfFormat, null, true/*endOfBatch*/); return totals; @@ -668,7 +668,7 @@ final LoadStats totals = new LoadStats(); - loadData3(totals, is, baseURL, rdfFormat, true/* endOfBatch */); + loadData3(totals, is, baseURL, rdfFormat, null, true/* endOfBatch */); return totals; @@ -704,7 +704,7 @@ final LoadStats totals = new LoadStats(); - loadData3(totals, is, baseURL, rdfFormat, true/*endOfBatch*/); + loadData3(totals, is, baseURL, rdfFormat, null, true/*endOfBatch*/); return totals; @@ -752,7 +752,7 @@ if(file.exists()) { loadFiles(totals, 0/* depth */, file, baseURL, - rdfFormat, filter, endOfBatch); + rdfFormat, null, filter, endOfBatch); return; @@ -778,7 +778,7 @@ try { - loadData3(totals, reader, baseURL, rdfFormat, endOfBatch); + loadData3(totals, reader, baseURL, rdfFormat, null, endOfBatch); } catch (Exception ex) { @@ -806,6 +806,9 @@ * The format of the file (optional, when not specified the * format is deduced for each file in turn using the * {@link RDFFormat} static methods). + * @param defaultGraph + * The value that will be used for the graph/context co-ordinate when + * loading data represented in a triple format into a quad store. * @param filter * A filter selecting the file names that will be loaded * (optional).
When specified, the filter MUST accept directories @@ -816,7 +819,8 @@ * @throws IOException */ public LoadStats loadFiles(final File file, final String baseURI, - final RDFFormat rdfFormat, final FilenameFilter filter) + final RDFFormat rdfFormat, final String defaultGraph, + final FilenameFilter filter) throws IOException { if (file == null) @@ -824,7 +828,7 @@ final LoadStats totals = new LoadStats(); - loadFiles(totals, 0/* depth */, file, baseURI, rdfFormat, filter, true/* endOfBatch */ + loadFiles(totals, 0/* depth */, file, baseURI, rdfFormat, defaultGraph, filter, true/* endOfBatch */ ); return totals; @@ -833,7 +837,8 @@ protected void loadFiles(final LoadStats totals, final int depth, final File file, final String baseURI, final RDFFormat rdfFormat, - final FilenameFilter filter, final boolean endOfBatch) + final String defaultGraph, final FilenameFilter filter, + final boolean endOfBatch) throws IOException { if (file.isDirectory()) { @@ -853,7 +858,7 @@ // final RDFFormat fmt = RDFFormat.forFileName(f.toString(), // rdfFormat); - loadFiles(totals, depth + 1, f, baseURI, rdfFormat, filter, + loadFiles(totals, depth + 1, f, baseURI, rdfFormat, defaultGraph, filter, (depth == 0 && i < files.length ? false : endOfBatch)); } @@ -908,7 +913,7 @@ final String s = baseURI != null ? baseURI : file.toURI() .toString(); - loadData3(totals, reader, s, fmt, endOfBatch); + loadData3(totals, reader, s, fmt, defaultGraph, endOfBatch); return; @@ -944,7 +949,7 @@ */ protected void loadData3(final LoadStats totals, final Object source, final String baseURL, final RDFFormat rdfFormat, - final boolean endOfBatch) throws IOException { + final String defaultGraph, final boolean endOfBatch) throws IOException { final long begin = System.currentTimeMillis(); @@ -967,11 +972,10 @@ } // Setup the loader. - final PresortRioLoader loader = new PresortRioLoader(buffer); + final PresortRioLoader loader = new PresortRioLoader ( buffer ) ; // @todo review: disable auto-flush - caller will handle flush of the buffer. // loader.setFlush(false); - // add listener to log progress. loader.addRioLoaderListener( new RioLoaderListener() { @@ -995,12 +999,12 @@ if(source instanceof Reader) { - loader.loadRdf((Reader) source, baseURL, rdfFormat, parserOptions); + loader.loadRdf((Reader) source, baseURL, rdfFormat, defaultGraph, parserOptions); } else if (source instanceof InputStream) { loader.loadRdf((InputStream) source, baseURL, rdfFormat, - parserOptions); + defaultGraph, parserOptions); } else throw new AssertionError(); @@ -1356,7 +1360,7 @@ // rdfFormat, filter); dataLoader.loadFiles(totals, 0/* depth */, fileOrDir, baseURI, - rdfFormat, filter, true/* endOfBatch */ + rdfFormat, null, filter, true/* endOfBatch */ ); } Modified: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/util/Splitter.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/util/Splitter.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/util/Splitter.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -714,7 +714,7 @@ try { // run the parser. 
new MyLoader(buffer).loadRdf(reader, baseURL, - defaultRDFFormat, s.parserOptions); + defaultRDFFormat, null, s.parserOptions); } finally { reader.close(); } Modified: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/AbstractRIOTestCase.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/AbstractRIOTestCase.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/AbstractRIOTestCase.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -401,7 +401,7 @@ }); - loader.loadRdf((Reader) reader, baseURI, rdfFormat, options); + loader.loadRdf((Reader) reader, baseURI, rdfFormat, null, options); if (log.isInfoEnabled()) log.info("Done: " + resource); @@ -681,7 +681,7 @@ loader.loadRdf(new BufferedReader(new InputStreamReader( new FileInputStream(resource))), baseURI, rdfFormat, - options); + null, options); if(log.isInfoEnabled()) log.info("End of reparse: nerrors=" + nerrs + ", file=" Modified: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/EDSAsyncLoader.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/EDSAsyncLoader.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/EDSAsyncLoader.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -161,6 +161,7 @@ valuesInitialCapacity,// bnodesInitialCapacity,// RDFFormat.RDFXML, // defaultFormat + null, // defaultGraph parserOptions, // parserOptions false, // deleteAfter poolSize, // parserPoolSize, Modified: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestAsynchronousStatementBufferFactory.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestAsynchronousStatementBufferFactory.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestAsynchronousStatementBufferFactory.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -400,6 +400,7 @@ valuesInitialCapacity,// bnodesInitialCapacity,// RDFFormat.RDFXML, // defaultFormat + null, // defaultGraph parserOptions, // false, // deleteAfter parallel?5:1, // parserPoolSize, Modified: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestOptionals.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestOptionals.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestOptionals.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -77,16 +77,16 @@ super(name); } - public void test_optionals_nextedSubquery() - { - - final Properties p = new Properties(getProperties()); - - p.setProperty(AbstractRelation.Options.NESTED_SUBQUERY, "true"); - - doOptionalsTest(p); - - } +// public void test_optionals_nextedSubquery() +// { +// +// final Properties p = new Properties(getProperties()); +// +// p.setProperty(AbstractRelation.Options.NESTED_SUBQUERY, "true"); +// +// doOptionalsTest(p); +// +// } public void test_optionals_pipeline() { Modified: branches/JOURNAL_HA_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/fastload.properties =================================================================== --- 
branches/JOURNAL_HA_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/fastload.properties 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/fastload.properties 2010-09-28 12:14:00 UTC (rev 3655) @@ -1,7 +1,8 @@ -# Be very careful when you use this configuration! This turns off incremental -# inference for load and retract, so you must explicitly force these operations, -# which requires punching through the SAIL layer. Of course, if you are not -# using inference then this is just the ticket and quite fast. +# This configuration turns off incremental inference for load and retract, so +# you must explicitly force these operations if you want to compute the closure +# of the knowledge base. Forcing the closure requires punching through the SAIL +# layer. Of course, if you are not using inference then this configuration is +# just the ticket and is quite fast. # set the initial and maximum extent of the journal com.bigdata.journal.AbstractJournal.initialExtent=209715200 Modified: branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/stress/LoadClosureAndQueryTest.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/stress/LoadClosureAndQueryTest.java 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/stress/LoadClosureAndQueryTest.java 2010-09-28 12:14:00 UTC (rev 3655) @@ -1204,7 +1204,7 @@ try { dataLoader.loadFiles(dataDir, null/* baseURI */, - null/* rdfFormat */, filter); + null/* rdfFormat */, null/* defaultGraph */, filter); } catch (IOException ex) { Modified: branches/JOURNAL_HA_BRANCH/build.xml =================================================================== --- branches/JOURNAL_HA_BRANCH/build.xml 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/build.xml 2010-09-28 12:14:00 UTC (rev 3655) @@ -2002,6 +2002,10 @@ <fileset dir="${bigdata.dir}/bigdata/lib"> <include name="**/*.jar" /> </fileset> + <fileset dir="${bigdata.dir}/bigdata-jini/lib/jini/lib"> + <include name="jini-core.jar" /> + <include name="jini-ext.jar" /> + </fileset> </copy> <!-- copy resources to Workbench webapp. --> Modified: branches/JOURNAL_HA_BRANCH/src/resources/config/bigdataCluster.config =================================================================== --- branches/JOURNAL_HA_BRANCH/src/resources/config/bigdataCluster.config 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/src/resources/config/bigdataCluster.config 2010-09-28 12:14:00 UTC (rev 3655) @@ -758,10 +758,11 @@ * have for your applications! */ "-Xmx1600m",// was 800 - /* Optionally, grab all/most of the max heap at once. This makes sense for - * DS but is less necessary for other bigdata services. + /* Pre-allocation of the DS heap is no longer recommended. + * + * See https://sourceforge.net/apps/trac/bigdata/ticket/157 + "-Xms800m", */ - "-Xms800m", // 1/2 of the max heap is a good value. /* * This option will keep the JVM "alive" even when it is memory starved * but performance of a memory starved JVM is terrible.
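As the revised fastload.properties comment above notes, forcing the closure with incremental inference disabled means punching through the SAIL layer to the backing triple store. A minimal sketch of what that might look like, under stated assumptions: getDatabase() on the SAIL and an InferenceEngine exposing computeClosure(null) are assumptions about the bigdata API of this era, not confirmed by this diff.

    import com.bigdata.rdf.sail.BigdataSail;

    // Illustrative only: force the database-at-once closure when
    // incremental inference has been disabled by fastload.properties.
    public class ForceClosureExample {

        static void forceClosure(final BigdataSail sail) {

            // Assumed API: the SAIL exposes its backing triple store, whose
            // inference engine computes the full closure of the knowledge base.
            sail.getDatabase().getInferenceEngine().computeClosure(null);

        }

    }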
Modified: branches/JOURNAL_HA_BRANCH/src/resources/config/bigdataCluster16.config =================================================================== --- branches/JOURNAL_HA_BRANCH/src/resources/config/bigdataCluster16.config 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/src/resources/config/bigdataCluster16.config 2010-09-28 12:14:00 UTC (rev 3655) @@ -813,12 +813,11 @@ * http://blogs.msdn.com/ntdebugging/archive/2009/02/06/microsoft-windows-dynamic-cache-service.aspx */ "-Xmx9G", // Note: out of 32 available! - /* Optionally, grab all/most of the max heap at once. This makes sense for - * DS, but is less necessary for other bigdata services. If the machine is - * dedicated to the DataService then use the maximum heap. Otherwise 1/2 of - * the maximum heap is a good value. - */ + /* Pre-allocation of the DS heap is no longer recommended. + * + * See https://sourceforge.net/apps/trac/bigdata/ticket/157 "-Xms9G", + */ /* * FIXME This might not be required, so that should be tested. * However, you don't want the JVM to just die if it is being @@ -1298,11 +1297,11 @@ static private namespace = "U"+univNum+""; // minimum #of data services to run. - static private minDataServices = bigdata.dataServiceCount; +// static private minDataServices = bigdata.dataServiceCount; // unused // How long the master will wait to discover the minimum #of data // services that you specified (ms). - static private awaitDataServicesTimeout = 8000; +// static private awaitDataServicesTimeout = 8000; // unused. /* Multiplier for the scatter effect. */ Modified: branches/JOURNAL_HA_BRANCH/src/resources/config/bigdataStandalone.config =================================================================== --- branches/JOURNAL_HA_BRANCH/src/resources/config/bigdataStandalone.config 2010-09-28 11:50:52 UTC (rev 3654) +++ branches/JOURNAL_HA_BRANCH/src/resources/config/bigdataStandalone.config 2010-09-28 12:14:00 UTC (rev 3655) @@ -781,10 +781,11 @@ * have for your applications! */ "-Xmx4g",// was 800 - /* Optionally, grab all/most of the max heap at once. This makes sense for - * DS but is less necessary for other bigdata services. + /* Pre-allocation of the DS heap is no longer recommended. + * + * See https://sourceforge.net/apps/trac/bigdata/ticket/157 + "-Xms2G", */ - "-Xms2G", // 1/2 of the max heap is a good value. /* * This option will keep the JVM "alive" even when it is memory starved * but performance of a memory starved JVM is terrible. |
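Taken together, the RDF changes in this revision thread a defaultGraph argument through the loader stack so that triple-format data can be loaded into a quad store under a caller-supplied graph. A minimal sketch against the widened DataLoader.loadFiles() signature shown above; the directory name, graph URI, and wrapper class are hypothetical, and the LoadStats import assumes the package implied by the diff paths:

    import java.io.File;
    import java.io.FilenameFilter;
    import java.io.IOException;

    import org.openrdf.rio.RDFFormat;

    import com.bigdata.rdf.store.DataLoader;
    import com.bigdata.rdf.store.LoadStats;

    // Illustrative only.
    public class DefaultGraphLoadExample {

        static LoadStats load(final DataLoader dataLoader) throws IOException {

            final File file = new File("data"); // hypothetical directory
            final String baseURI = null; // derived per file
            final RDFFormat rdfFormat = null; // deduced per file
            // Triples loaded into a quad store land in this graph.
            final String defaultGraph = "http://example.org/graph1"; // hypothetical
            final FilenameFilter filter = null; // accept all files

            return dataLoader.loadFiles(file, baseURI, rdfFormat, defaultGraph,
                    filter);

        }

    }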
From: <tho...@us...> - 2010-11-04 17:06:29
|
Revision: 3895 http://bigdata.svn.sourceforge.net/bigdata/?rev=3895&view=rev Author: thompsonbry Date: 2010-11-04 17:06:22 +0000 (Thu, 04 Nov 2010) Log Message: ----------- Merged from trunk [r3655:r3894]. Modified Paths: -------------- branches/JOURNAL_HA_BRANCH/bigdata-perf/README.txt branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/vocab/BaseVocabulary.java branches/JOURNAL_HA_BRANCH/build.xml Property Changed: ---------------- branches/JOURNAL_HA_BRANCH/ branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/attr/ branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/disco/ branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/util/config/ branches/JOURNAL_HA_BRANCH/bigdata-perf/ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/util/ branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/ branches/JOURNAL_HA_BRANCH/dsi-utils/src/java/it/ branches/JOURNAL_HA_BRANCH/dsi-utils/src/test/it/unimi/ branches/JOURNAL_HA_BRANCH/osgi/ Property changes on: branches/JOURNAL_HA_BRANCH ___________________________________________________________________ Modified: svn:mergeinfo - /branches/BTREE_BUFFER_BRANCH:2004-2045 /branches/DEV_BRANCH_27_OCT_2009:2270-2546,2548-2782 /branches/bugfix-btm:2594-2779 /trunk:2763-2785,2918-2980,3392-3437 + /branches/BTREE_BUFFER_BRANCH:2004-2045 /branches/DEV_BRANCH_27_OCT_2009:2270-2546,2548-2782 /branches/bugfix-btm:2594-2779 /trunk:2763-2785,2918-2980,3392-3437,3656-3894 Property changes on: branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/attr ___________________________________________________________________ Modified: svn:mergeinfo - /trunk/bigdata-jini/src/java/com/bigdata/attr:2981-3437 + /trunk/bigdata-jini/src/java/com/bigdata/attr:2981-3437,3656-3894 Property changes on: branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/disco ___________________________________________________________________ Modified: svn:mergeinfo - /trunk/bigdata-jini/src/java/com/bigdata/disco:2981-3437 + /trunk/bigdata-jini/src/java/com/bigdata/disco:2981-3437,3656-3894 Property changes on: branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/util/config ___________________________________________________________________ Modified: svn:mergeinfo - /trunk/bigdata-jini/src/java/com/bigdata/util/config:2981-3437 + /trunk/bigdata-jini/src/java/com/bigdata/util/config:2981-3437,3656-3894 Property changes on: branches/JOURNAL_HA_BRANCH/bigdata-perf ___________________________________________________________________ Modified: svn:mergeinfo - /trunk/bigdata-perf:2981-3437 + /trunk/bigdata-perf:2981-3437,3656-3894 Modified: branches/JOURNAL_HA_BRANCH/bigdata-perf/README.txt =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-perf/README.txt 2010-11-04 16:30:14 UTC (rev 3894) +++ branches/JOURNAL_HA_BRANCH/bigdata-perf/README.txt 2010-11-04 17:06:22 UTC (rev 3895) @@ -1,2 +1,6 @@ This module contains drivers for a variety of data sets and benchmarks used as -part of a performance test suite. \ No newline at end of file +part of a performance test suite. + +Note: You must run "ant bundleJar" in the top-level directory first. This will +build the bigdata code base and bundle together the various dependencies so they +will be available for the ant scripts in this module. 
Property changes on: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/util ___________________________________________________________________ Modified: svn:mergeinfo - /trunk/bigdata-rdf/src/java/com/bigdata/rdf/util:2981-3437 + /trunk/bigdata-rdf/src/java/com/bigdata/rdf/util:2981-3437,3656-3894 Modified: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/vocab/BaseVocabulary.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/vocab/BaseVocabulary.java 2010-11-04 16:30:14 UTC (rev 3894) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/vocab/BaseVocabulary.java 2010-11-04 17:06:22 UTC (rev 3895) @@ -65,6 +65,11 @@ final static public Logger log = Logger.getLogger(BaseVocabulary.class); /** + * The serialVersionUID as reported by the trunk on Oct 6, 2010. + */ + private static final long serialVersionUID = 1560142397515291331L; + + /** * The database that is the authority for the defined terms and term * identifiers. This will be <code>null</code> when the de-serialization * ctor is used. Property changes on: branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench ___________________________________________________________________ Modified: svn:mergeinfo - + /trunk/bigdata-sails/src/java/com/bigdata/rdf/sail/bench:3656-3894 Modified: branches/JOURNAL_HA_BRANCH/build.xml =================================================================== --- branches/JOURNAL_HA_BRANCH/build.xml 2010-11-04 16:30:14 UTC (rev 3894) +++ branches/JOURNAL_HA_BRANCH/build.xml 2010-11-04 17:06:22 UTC (rev 3895) @@ -2002,10 +2002,12 @@ <fileset dir="${bigdata.dir}/bigdata/lib"> <include name="**/*.jar" /> </fileset> +<!-- Jini should not be required for the Sesame WAR. <fileset dir="${bigdata.dir}/bigdata-jini/lib/jini/lib"> <include name="jini-core.jar" /> <include name="jini-ext.jar" /> </fileset> + --> </copy> <!-- copy resources to Workbench webapp. --> Property changes on: branches/JOURNAL_HA_BRANCH/dsi-utils/src/java/it ___________________________________________________________________ Modified: svn:mergeinfo - /trunk/dsi-utils/src/java/it:2763-2785,2787-2887,2889-2916,2918-3437 + /trunk/dsi-utils/src/java/it:2763-2785,2787-2887,2889-2916,2918-3437,3656-3894 Property changes on: branches/JOURNAL_HA_BRANCH/dsi-utils/src/test/it/unimi ___________________________________________________________________ Modified: svn:mergeinfo - /trunk/dsi-utils/src/test/it/unimi:2981-3437 + /trunk/dsi-utils/src/test/it/unimi:2981-3437,3656-3894 Property changes on: branches/JOURNAL_HA_BRANCH/osgi ___________________________________________________________________ Modified: svn:mergeinfo - /trunk/osgi:2981-3437 + /trunk/osgi:2981-3437,3656-3894 |
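The serialVersionUID pinned in BaseVocabulary above is the usual guard against accidental serialization drift: fixing the UID at the value the trunk was already reporting keeps previously serialized vocabulary objects readable across compatible edits. A generic sketch of the pattern; the class, field, and value below are invented for illustration:

    import java.io.Serializable;

    // Illustrative only.
    public class ExampleVocabulary implements Serializable {

        // Pin the UID at the value the JVM previously computed so that
        // compatible code changes do not invalidate stored instances.
        private static final long serialVersionUID = 1L; // hypothetical value

        private String namespace;

    }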
From: <mrp...@us...> - 2010-12-21 15:09:14
|
Revision: 4028 http://bigdata.svn.sourceforge.net/bigdata/?rev=4028&view=rev Author: mrpersonick Date: 2010-12-21 15:09:07 +0000 (Tue, 21 Dec 2010) Log Message: ----------- expanding free text search functionality to include search metadata support via SPARQL Modified Paths: -------------- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/BD.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl2.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/FreeTextSearchExpander.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestSearchQuery.java Modified: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/BD.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/BD.java 2010-12-21 15:05:59 UTC (rev 4027) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/BD.java 2010-12-21 15:09:07 UTC (rev 4028) @@ -63,8 +63,10 @@ /** * The namespace used for bigdata specific extensions. */ - String NAMESPACE = "http://www.bigdata.com/rdf#"; + final String NAMESPACE = "http://www.bigdata.com/rdf#"; + final String SEARCH_NAMESPACE = "http://www.bigdata.com/rdf/search#"; + /** * The namespace prefix used in SPARQL queries to signify query hints. You * can embed query hints into a SPARQL query as follows: @@ -150,7 +152,13 @@ * Note: The context position should be unbound when using statement * identifiers. */ - URI SEARCH = new URIImpl(NAMESPACE+"search"); + final URI SEARCH = new URIImpl(SEARCH_NAMESPACE+"search"); + + final URI RELEVANCE = new URIImpl(SEARCH_NAMESPACE+"relevance"); + + final URI RANK = new URIImpl(SEARCH_NAMESPACE+"rank"); + + final URI NUM_MATCHED_TOKENS = new URIImpl(SEARCH_NAMESPACE+"numMatchedTokens"); /** * Sesame has the notion of a "null" graph. 
Any time you insert a statement Modified: branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl2.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl2.java 2010-12-21 15:05:59 UTC (rev 4027) +++ branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl2.java 2010-12-21 15:09:07 UTC (rev 4028) @@ -8,6 +8,7 @@ import java.util.HashSet; import java.util.Iterator; import java.util.LinkedHashMap; +import java.util.LinkedHashSet; import java.util.LinkedList; import java.util.List; import java.util.Map; @@ -751,11 +752,50 @@ // problem final Map<IPredicate, StatementPattern> searches = new HashMap<IPredicate, StatementPattern>(); + + final Set<StatementPattern> searchMetadata1 = + new LinkedHashSet<StatementPattern>(); + final Map<Var, Set<StatementPattern>> searchMetadata2 = + new LinkedHashMap<Var, Set<StatementPattern>>(); + Iterator<Map.Entry<StatementPattern, Boolean>> it = + stmtPatterns.entrySet().iterator(); + while (it.hasNext()) { + final StatementPattern sp = it.next().getKey(); + final Value s = sp.getSubjectVar().getValue(); + final Value p = sp.getPredicateVar().getValue(); + final Value o = sp.getObjectVar().getValue(); + if (s == null && p != null && o != null) { + if (BD.SEARCH.equals(p)) { + searchMetadata1.add(sp); + searchMetadata2.put(sp.getSubjectVar(), + new LinkedHashSet<StatementPattern>()); + it.remove(); + } + } + } + it = stmtPatterns.entrySet().iterator(); + while (it.hasNext()) { + final StatementPattern sp = it.next().getKey(); + final Value s = sp.getSubjectVar().getValue(); + final Value p = sp.getPredicateVar().getValue(); + final Value o = sp.getObjectVar().getValue(); + if (s == null && p != null && o == null) { + if (BD.RELEVANCE.equals(p)) { + final Var sVar = sp.getSubjectVar(); + Set<StatementPattern> metadata = searchMetadata2.get(sVar); + if (metadata != null) { + metadata.add(sp); + } + it.remove(); + } + } + } + for (Map.Entry<StatementPattern, Boolean> entry : stmtPatterns .entrySet()) { - StatementPattern sp = entry.getKey(); - boolean optional = entry.getValue(); - IPredicate tail = generateTail(sp, optional); + final StatementPattern sp = entry.getKey(); + final boolean optional = entry.getValue(); + final IPredicate tail = generateTail(sp, optional); // encountered a value not in the database lexicon if (tail == null) { if (log.isDebugEnabled()) { @@ -769,12 +809,17 @@ return null; } } - if (tail.getSolutionExpander() instanceof FreeTextSearchExpander) { - searches.put(tail, sp); - } tails.add(tail); } + for (StatementPattern sp : searchMetadata1) { + final Set<StatementPattern> metadata = + searchMetadata2.get(sp.getSubjectVar()); + final IPredicate tail = generateSearchTail(sp, metadata); + searches.put(tail, sp); + tails.add(tail); + } + /* * When in quads mode, we need to go through the free text searches and * make sure that they are properly filtered for the dataset where @@ -826,7 +871,7 @@ boolean needsFilter = true; // check the other tails one by one for (IPredicate<ISPO> tail : tails) { - ISolutionExpander<ISPO> expander = + final ISolutionExpander<ISPO> expander = tail.getSolutionExpander(); // only concerned with non-optional tails that are not // themselves magic searches @@ -837,7 +882,7 @@ // see if the search variable appears in this tail boolean appears = false; for (int i = 0; i < tail.arity(); i++) { - 
IVariableOrConstant term = tail.get(i); + final IVariableOrConstant term = tail.get(i); if (log.isDebugEnabled()) { log.debug(term); } @@ -857,8 +902,8 @@ if (log.isDebugEnabled()) { log.debug("needs filter: " + searchVar); } - FreeTextSearchExpander expander = (FreeTextSearchExpander) - search.getSolutionExpander(); + final FreeTextSearchExpander expander = + (FreeTextSearchExpander) search.getSolutionExpander(); expander.addNamedGraphsFilter(graphs); } } @@ -907,9 +952,9 @@ IAccessPath<ISPO> accessPath = database.getSPORelation() .getAccessPath(tail); accessPath = expander.getAccessPath(accessPath); - IChunkedOrderedIterator<ISPO> it = accessPath.iterator(); - while (it.hasNext()) { - log.debug(it.next().toString(database)); + IChunkedOrderedIterator<ISPO> it1 = accessPath.iterator(); + while (it1.hasNext()) { + log.debug(it1.next().toString(database)); } } } @@ -1257,23 +1302,6 @@ private IPredicate generateTail(final StatementPattern stmtPattern, final boolean optional) throws QueryEvaluationException { - // create a solution expander for free text search if necessary - ISolutionExpander<ISPO> expander = null; - final Value predValue = stmtPattern.getPredicateVar().getValue(); - if (log.isDebugEnabled()) { - log.debug(predValue); - } - if (predValue != null && BD.SEARCH.equals(predValue)) { - final Value objValue = stmtPattern.getObjectVar().getValue(); - if (log.isDebugEnabled()) { - log.debug(objValue); - } - if (objValue != null && objValue instanceof Literal) { - expander = new FreeTextSearchExpander(database, - (Literal) objValue); - } - } - // @todo why is [s] handled differently? // because [s] is the variable in free text searches, no need to test // to see if the free text search expander is in place @@ -1283,26 +1311,20 @@ return null; } - final IVariableOrConstant<IV> p; - if (expander == null) { - p = generateVariableOrConstant(stmtPattern.getPredicateVar()); - } else { - p = new Constant(DummyIV.INSTANCE); - } + final IVariableOrConstant<IV> p = generateVariableOrConstant( + stmtPattern.getPredicateVar()); if (p == null) { return null; } - final IVariableOrConstant<IV> o; - if (expander == null) { - o = generateVariableOrConstant(stmtPattern.getObjectVar()); - } else { - o = new Constant(DummyIV.INSTANCE); - } + final IVariableOrConstant<IV> o = generateVariableOrConstant( + stmtPattern.getObjectVar()); if (o == null) { return null; } - + + // for default and named graph expansion + ISolutionExpander<ISPO> expander = null; final IVariableOrConstant<IV> c; if (!database.isQuads()) { /* @@ -1361,79 +1383,68 @@ } System.err.println(stmtPattern.toString()); } - if (expander != null) { - /* - * @todo can this happen? If it does then we need to look at how - * to layer the expanders. - */ - // throw new AssertionError("expander already set"); - // we are doing a free text search, no need to do any named or - // default graph expansion work - c = null; - } else { - final Var cvar = stmtPattern.getContextVar(); - if (dataset == null) { - if (cvar == null) { - /* - * There is no dataset and there is no graph variable, - * so the default graph will be the RDF Merge of ALL - * graphs in the quad store. - * - * This code path uses an "expander" which strips off - * the context information and filters for the distinct - * (s,p,o) triples to realize the RDF Merge of the - * source graphs for the default graph. 
- */ + final Var cvar = stmtPattern.getContextVar(); + if (dataset == null) { + if (cvar == null) { + /* + * There is no dataset and there is no graph variable, + * so the default graph will be the RDF Merge of ALL + * graphs in the quad store. + * + * This code path uses an "expander" which strips off + * the context information and filters for the distinct + * (s,p,o) triples to realize the RDF Merge of the + * source graphs for the default graph. + */ + c = null; + expander = new DefaultGraphSolutionExpander(null/* ALL */); + } else { + /* + * There is no data set and there is a graph variable, + * so the query will run against all named graphs and + * [cvar] will be to the context of each (s,p,o,c) in + * turn. This handles constructions such as: + * + * "SELECT * WHERE {graph ?g {?g :p :o } }" + */ + expander = new NamedGraphSolutionExpander(null/* ALL */); + c = generateVariableOrConstant(cvar); + } + } else { // dataset != null + switch (stmtPattern.getScope()) { + case DEFAULT_CONTEXTS: { + /* + * Query against the RDF merge of zero or more source + * graphs. + */ + expander = new DefaultGraphSolutionExpander(dataset + .getDefaultGraphs()); + /* + * Note: cvar can not become bound since context is + * stripped for the default graph. + */ + if (cvar == null) c = null; - expander = new DefaultGraphSolutionExpander(null/* ALL */); + else + c = generateVariableOrConstant(cvar); + break; + } + case NAMED_CONTEXTS: { + /* + * Query against zero or more named graphs. + */ + expander = new NamedGraphSolutionExpander(dataset + .getNamedGraphs()); + if (cvar == null) {// || !cvar.hasValue()) { + c = null; } else { - /* - * There is no data set and there is a graph variable, - * so the query will run against all named graphs and - * [cvar] will be to the context of each (s,p,o,c) in - * turn. This handles constructions such as: - * - * "SELECT * WHERE {graph ?g {?g :p :o } }" - */ - expander = new NamedGraphSolutionExpander(null/* ALL */); c = generateVariableOrConstant(cvar); } - } else { // dataset != null - switch (stmtPattern.getScope()) { - case DEFAULT_CONTEXTS: { - /* - * Query against the RDF merge of zero or more source - * graphs. - */ - expander = new DefaultGraphSolutionExpander(dataset - .getDefaultGraphs()); - /* - * Note: cvar can not become bound since context is - * stripped for the default graph. - */ - if (cvar == null) - c = null; - else - c = generateVariableOrConstant(cvar); - break; - } - case NAMED_CONTEXTS: { - /* - * Query against zero or more named graphs. - */ - expander = new NamedGraphSolutionExpander(dataset - .getNamedGraphs()); - if (cvar == null) {// || !cvar.hasValue()) { - c = null; - } else { - c = generateVariableOrConstant(cvar); - } - break; - } - default: - throw new AssertionError(); - } + break; } + default: + throw new AssertionError(); + } } } @@ -1456,6 +1467,63 @@ s, p, o, c, optional, // optional filter, // filter on elements visited by the access path. 
+ expander // named graphs expander + ); + + } + + private IPredicate generateSearchTail(final StatementPattern sp, + final Set<StatementPattern> metadata) + throws QueryEvaluationException { + + final Value predValue = sp.getPredicateVar().getValue(); + if (log.isDebugEnabled()) { + log.debug(predValue); + } + if (predValue == null || !BD.SEARCH.equals(predValue)) { + throw new IllegalArgumentException("not a valid magic search: " + sp); + } + final Value objValue = sp.getObjectVar().getValue(); + if (log.isDebugEnabled()) { + log.debug(objValue); + } + if (objValue == null || !(objValue instanceof Literal)) { + throw new IllegalArgumentException("not a valid magic search: " + sp); + } + + final ISolutionExpander expander = + new FreeTextSearchExpander(database, (Literal) objValue); + + final Var subjVar = sp.getSubjectVar(); + + final IVariableOrConstant<IV> search = + com.bigdata.relation.rule.Var.var(subjVar.getName()); + + IVariableOrConstant<IV> relevance = new Constant(DummyIV.INSTANCE); + + for (StatementPattern meta : metadata) { + if (!meta.getSubjectVar().equals(subjVar)) { + throw new IllegalArgumentException("illegal metadata: " + meta); + } + final Value pVal = meta.getPredicateVar().getValue(); + final Var oVar = meta.getObjectVar(); + if (pVal == null || oVar.hasValue()) { + throw new IllegalArgumentException("illegal metadata: " + meta); + } + if (BD.RELEVANCE.equals(pVal)) { + relevance = com.bigdata.relation.rule.Var.var(oVar.getName()); + } + } + + return new SPOPredicate( + new String[] { database.getSPORelation().getNamespace() }, + -1, // partitionId + search, // s = searchVar + relevance, // p = relevanceVar + new Constant(DummyIV.INSTANCE), // o = reserved + new Constant(DummyIV.INSTANCE), // c = reserved + false, // optional + null, // filter on elements visited by the access path. expander // free text search expander or named graphs expander ); @@ -1707,6 +1775,12 @@ /** * Override evaluation of StatementPatterns to recognize magic search * predicate. + * + * select * + * where { + * ?s bd:search "foo" . + * ?s bd:score ?score . 
+ * } */ @Override public CloseableIteration<BindingSet, QueryEvaluationException> evaluate( Modified: branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/FreeTextSearchExpander.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/FreeTextSearchExpander.java 2010-12-21 15:05:59 UTC (rev 4027) +++ branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/FreeTextSearchExpander.java 2010-12-21 15:09:07 UTC (rev 4028) @@ -13,6 +13,7 @@ import com.bigdata.rdf.internal.IV; import com.bigdata.rdf.internal.TermId; import com.bigdata.rdf.internal.VTE; +import com.bigdata.rdf.internal.XSDDoubleIV; import com.bigdata.rdf.model.BigdataValue; import com.bigdata.rdf.spo.ISPO; import com.bigdata.rdf.spo.SPO; @@ -305,9 +306,12 @@ } ISPO[] spos = new ISPO[hits.length]; for (int i = 0; i < hits.length; i++) { - IV s = new TermId(VTE.LITERAL, hits[i].getDocId()); - if (INFO) log.info("hit: " + s); - spos[i] = new SPO(s, null, null); + final IV s = new TermId(VTE.LITERAL, hits[i].getDocId()); + final IV p = new XSDDoubleIV(hits[i].getCosine()); + final IV o = null; // reserved + final IV c = null; // reserved + spos[i] = new SPO(s, p, o, c); + if (INFO) log.info("hit: " + spos[i]); } // Arrays.sort(spos, SPOKeyOrder.SPO.getComparator()); return spos; @@ -316,9 +320,12 @@ private ISPO[] convertWhenBound(IHit[] hits) { ISPO[] result = new ISPO[0]; for (IHit hit : hits) { - IV s = new TermId(VTE.LITERAL, hit.getDocId()); + final IV s = new TermId(VTE.LITERAL, hit.getDocId()); if (s == boundVal) { - result = new ISPO[] { new SPO(s, null, null) }; + final IV p = new XSDDoubleIV(hit.getCosine()); + final IV o = null; // reserved + final IV c = null; // reserved + result = new ISPO[] { new SPO(s, p, o, c) }; break; } } Modified: branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestSearchQuery.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestSearchQuery.java 2010-12-21 15:05:59 UTC (rev 4027) +++ branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestSearchQuery.java 2010-12-21 15:09:07 UTC (rev 4028) @@ -40,10 +40,10 @@ import org.openrdf.model.BNode; import org.openrdf.model.Graph; import org.openrdf.model.Literal; -import org.openrdf.model.Resource; import org.openrdf.model.Statement; import org.openrdf.model.URI; import org.openrdf.model.Value; +import org.openrdf.model.ValueFactory; import org.openrdf.model.impl.BNodeImpl; import org.openrdf.model.impl.GraphImpl; import org.openrdf.model.impl.LiteralImpl; @@ -665,4 +665,89 @@ } + public void testWithRelevance() throws Exception { + + final BigdataSail sail = getSail(); + try { + + sail.initialize(); + final BigdataSailRepository repo = new BigdataSailRepository(sail); + final BigdataSailRepositoryConnection cxn = + (BigdataSailRepositoryConnection) repo.getConnection(); + cxn.setAutoCommit(false); + + try { + + final ValueFactory vf = sail.getValueFactory(); + + final URI s1 = vf.createURI(BD.NAMESPACE+"s1"); + final URI s2 = vf.createURI(BD.NAMESPACE+"s2"); + final URI s3 = vf.createURI(BD.NAMESPACE+"s3"); + final URI s4 = vf.createURI(BD.NAMESPACE+"s4"); + final URI s5 = vf.createURI(BD.NAMESPACE+"s5"); + final URI s6 = vf.createURI(BD.NAMESPACE+"s6"); + final URI s7 = vf.createURI(BD.NAMESPACE+"s7"); + final Literal l1 = vf.createLiteral("how"); + final Literal l2 = vf.createLiteral("now"); 
+ final Literal l3 = vf.createLiteral("brown"); + final Literal l4 = vf.createLiteral("cow"); + final Literal l5 = vf.createLiteral("how now"); + final Literal l6 = vf.createLiteral("brown cow"); + final Literal l7 = vf.createLiteral("how now brown cow"); + + cxn.add(s1, RDFS.LABEL, l1); + cxn.add(s2, RDFS.LABEL, l2); + cxn.add(s3, RDFS.LABEL, l3); + cxn.add(s4, RDFS.LABEL, l4); + cxn.add(s5, RDFS.LABEL, l5); + cxn.add(s6, RDFS.LABEL, l6); + cxn.add(s7, RDFS.LABEL, l7); + + /* + * Note: Either flush() or commit() is required to flush the + * statement buffers to the database before executing any operations + * that go around the SAIL. + */ + cxn.commit(); + +/**/ + if (log.isInfoEnabled()) { + log.info("\n" + sail.getDatabase().dumpStore()); + } + + { // run the query with no graphs specified + final String query = + "select ?s ?o ?score " + + "where " + + "{ " + + " ?s <"+RDFS.LABEL+"> ?o . " + + " ?o <"+BD.SEARCH+"> \"how now brown cow\" . " + + " ?o <"+BD.RELEVANCE+"> ?score . " + + "}"; + + final TupleQuery tupleQuery = + cxn.prepareTupleQuery(QueryLanguage.SPARQL, query); + tupleQuery.setIncludeInferred(true /* includeInferred */); + TupleQueryResult result = tupleQuery.evaluate(); + + while (result.hasNext()) { + System.err.println(result.next()); + } + + result = tupleQuery.evaluate(); +// Collection<BindingSet> answer = new LinkedList<BindingSet>(); +// answer.add(createBindingSet(new BindingImpl("s", alice))); +// +// compare(result, answer); + } + + } finally { + cxn.close(); + } + } finally { + sail.__tearDownUnitTest(); + } + + } + } |
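A quick usage sketch for the relevance support added above. It is distilled from testWithRelevance(): it assumes an already-initialized repository connection set up as in that test, and it takes the search and relevance predicate URIs as plain strings because the test refers to them only through the BD.SEARCH and BD.RELEVANCE constants, whose literal values are not shown in this diff. Any other names below are illustrative, not part of the change set.

import org.openrdf.model.vocabulary.RDFS;
import org.openrdf.query.BindingSet;
import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQuery;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.RepositoryConnection;

/**
 * Sketch only: run a free-text search and read the ?score binding
 * produced by the relevance magic predicate, per testWithRelevance()
 * above. Pass in the URI values of BD.SEARCH and BD.RELEVANCE.
 */
public class SearchWithRelevanceSketch {

    public static void printHits(final RepositoryConnection cxn,
            final String searchUri, final String relevanceUri)
            throws Exception {

        // Same query shape as the test: label literals are searched,
        // and ?score is bound per hit by the relevance predicate.
        final String query =
            "select ?s ?o ?score " +
            "where { " +
            "  ?s <" + RDFS.LABEL + "> ?o . " +
            "  ?o <" + searchUri + "> \"how now brown cow\" . " +
            "  ?o <" + relevanceUri + "> ?score . " +
            "}";

        final TupleQuery tupleQuery =
                cxn.prepareTupleQuery(QueryLanguage.SPARQL, query);

        final TupleQueryResult result = tupleQuery.evaluate();
        try {
            while (result.hasNext()) {
                final BindingSet bs = result.next();
                // ?score carries the cosine that FreeTextSearchExpander
                // now places in the predicate position (XSDDoubleIV).
                System.out.println(bs.getValue("s") + " : "
                        + bs.getValue("score"));
            }
        } finally {
            result.close();
        }
    }
}

Under those assumptions, each solution binds ?score to an xsd:double literal; compare the SPO construction in convert()/convertWhenBound() in the FreeTextSearchExpander diff above.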
From: <tho...@us...> - 2011-01-06 20:13:53
|
Revision: 4061 http://bigdata.svn.sourceforge.net/bigdata/?rev=4061&view=rev Author: thompsonbry Date: 2011-01-06 20:13:43 +0000 (Thu, 06 Jan 2011) Log Message: ----------- Merge trunk to JOURNAL_HA_BRANCH [r3895:HEAD]. This merge brings in the change set API for the SAIL. Unit test failures in the HA branch at this time include: - TestChangeSets throws UnsupportedOperationException when running with quads. - TestNamedGraphs#testSearchQuery() fails with TestBigdataSailWithQuads (but not in the trunk). - TestBigdataSailEvaluationStrategy#test_free_text_search() fails with TestBigdataSailWithQuads, TestBigdataSailWithSids, and TestBigdataSailWithoutSids. These failures do not exist in the trunk. - BigdataSparqlTest#dataset-01, 03, 05, 06, 07, 08, 11, 12b fail with TestBigdataSailWithQuads (these test failures exist in the trunk as well). The text-search-related test errors were likely introduced in the JOURNAL_HA_BRANCH with some recent extensions to the SAIL free text search API. MikeP will look into these errors. Modified Paths: -------------- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/proc/AbstractKeyArrayIndexProcedure.java branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/JOURNAL_HA_BRANCH/bigdata/src/resources/logging/log4j.properties branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/SPOAssertionBuffer.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/SPORetractionBuffer.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/TruthMaintenance.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataStatementImpl.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/StatementBuffer.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/ISPO.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPO.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOIndexRemover.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOIndexWriteProc.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOIndexWriter.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/StatementWriter.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/AbstractTripleStore.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailRepositoryConnection.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/ProxyBigdataSailTestCase.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestSids.java Added Paths: ----------- branches/JOURNAL_HA_BRANCH/bigdata-compatibility/ branches/JOURNAL_HA_BRANCH/bigdata-compatibility/src/ branches/JOURNAL_HA_BRANCH/bigdata-compatibility/src/test/ branches/JOURNAL_HA_BRANCH/bigdata-compatibility/src/test/com/ branches/JOURNAL_HA_BRANCH/bigdata-compatibility/src/test/com/bigdata/ branches/JOURNAL_HA_BRANCH/bigdata-compatibility/src/test/com/bigdata/journal/ branches/JOURNAL_HA_BRANCH/bigdata-compatibility/src/test/com/bigdata/journal/TestBinaryCompatibility.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/ChangeRecord.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeLog.java
branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeRecord.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/InMemChangeLog.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/StatementWriter.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOIndexMutation.java branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/changesets/ branches/JOURNAL_HA_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestChangeSets.java Removed Paths: ------------- branches/JOURNAL_HA_BRANCH/bigdata-compatibility/src/ branches/JOURNAL_HA_BRANCH/bigdata-compatibility/src/test/ branches/JOURNAL_HA_BRANCH/bigdata-compatibility/src/test/com/ branches/JOURNAL_HA_BRANCH/bigdata-compatibility/src/test/com/bigdata/ branches/JOURNAL_HA_BRANCH/bigdata-compatibility/src/test/com/bigdata/journal/ branches/JOURNAL_HA_BRANCH/bigdata-compatibility/src/test/com/bigdata/journal/TestBinaryCompatibility.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/ChangeRecord.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeLog.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeRecord.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/InMemChangeLog.java branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/StatementWriter.java Property Changed: ---------------- branches/JOURNAL_HA_BRANCH/ branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/attr/ branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/disco/ branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/util/config/ branches/JOURNAL_HA_BRANCH/bigdata-perf/ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/util/ branches/JOURNAL_HA_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/ branches/JOURNAL_HA_BRANCH/dsi-utils/src/java/it/ branches/JOURNAL_HA_BRANCH/dsi-utils/src/test/it/unimi/ branches/JOURNAL_HA_BRANCH/osgi/ Property changes on: branches/JOURNAL_HA_BRANCH ___________________________________________________________________ Modified: svn:mergeinfo - /branches/BTREE_BUFFER_BRANCH:2004-2045 /branches/DEV_BRANCH_27_OCT_2009:2270-2546,2548-2782 /branches/bugfix-btm:2594-2779 /trunk:2763-2785,2918-2980,3392-3437,3656-3894 + /branches/BTREE_BUFFER_BRANCH:2004-2045 /branches/DEV_BRANCH_27_OCT_2009:2270-2546,2548-2782 /branches/bugfix-btm:2594-2779 /trunk:2763-2785,2918-2980,3392-3437,3656-3894,3896-4059 Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/proc/AbstractKeyArrayIndexProcedure.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/proc/AbstractKeyArrayIndexProcedure.java 2011-01-06 19:38:57 UTC (rev 4060) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/btree/proc/AbstractKeyArrayIndexProcedure.java 2011-01-06 20:13:43 UTC (rev 4061) @@ -39,6 +39,7 @@ import java.io.ObjectInput; import java.io.ObjectOutput; import java.io.OutputStream; +import java.util.Arrays; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicLong; @@ -795,18 +796,34 @@ IResultHandler<ResultBitBuffer, ResultBitBuffer> { private final boolean[] results; + + /** + * I added this so I could encode information about tuple modification + * that takes more than one boolean to encode. 
For example, SPOs can + * be: INSERTED, REMOVED, UPDATED, NO_OP (2 bits). + */ + private final int multiplier; + private final AtomicInteger onCount = new AtomicInteger(); public ResultBitBufferHandler(final int nkeys) { + + this(nkeys, 1); + + } + + public ResultBitBufferHandler(final int nkeys, final int multiplier) { - results = new boolean[nkeys]; + results = new boolean[nkeys*multiplier]; + this.multiplier = multiplier; } public void aggregate(final ResultBitBuffer result, final Split split) { - System.arraycopy(result.getResult(), 0, results, split.fromIndex, - split.ntuples); + System.arraycopy(result.getResult(), 0, results, + split.fromIndex*multiplier, + split.ntuples*multiplier); onCount.addAndGet(result.getOnCount()); Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2011-01-06 19:38:57 UTC (rev 4060) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2011-01-06 20:13:43 UTC (rev 4061) @@ -88,22 +88,10 @@ import com.bigdata.relation.locator.IResourceLocator; import com.bigdata.resources.ResourceManager; import com.bigdata.rwstore.IAllocationContext; -import com.bigdata.service.DataService; -import com.bigdata.service.EmbeddedClient; -import com.bigdata.service.IBigdataClient; -import com.bigdata.service.IBigdataFederation; -import com.bigdata.service.jini.JiniClient; import com.bigdata.util.ChecksumUtility; /** * <p> -<<<<<<< .working - * The journal is an append-only persistence capable data structure supporting - * atomic commit, named indices, and transactions. Writes are logically appended - * to the journal to minimize disk head movement. - * </p> - * <p> -======= * The journal is a persistence capable data structure supporting atomic commit, * named indices, and full transactions. The {@link BufferMode#DiskRW} mode * provides an persistence scheme based on reusable allocation slots while the @@ -111,50 +99,13 @@ * Journals may be configured in highly available quorums. * </p> * <p> ->>>>>>> .merge-right.r3391 * This class is an abstract implementation of the {@link IJournal} interface * that does not implement the {@link IConcurrencyManager}, -<<<<<<< .working - * {@link IResourceManager}, or {@link ITransactionService} interfaces. There - * are several classes which DO support all of these features, relying on the - * {@link AbstractJournal} for their underlying persistence store. These - * include: - * <dl> - * <dt>{@link Journal}</dt> - * <dd>A concrete implementation that may be used for a standalone immortal - * database complete with concurrency control and transaction management.</dd> - * <dt>{@link DataService}</dt> - * <dd>A class supporting remote clients, key-range partitioned indices, - * concurrency, and scale-out.</dd> - * <dt>{@link IBigdataClient}</dt> - * <dd>Clients connect to an {@link IBigdataFederation}, which is the basis for - * the scale-out architecture. 
There are several variants of a federation - * available, including: - * <dl> - * <dt>{@link LocalDataServiceClient}</dt> - * <dd>Purely local operations against a {@link DataService} with full - * concurrency controls and transaction management</dd> - * <dt>{@link EmbeddedClient}</dt> - * <dd>Operations against a collection of services running in the same JVM with - * full concurrency controls, transaction management, and key-range partitioned - * indices.</dd> - * <dt>{@link JiniClient}</dt> - * <dd>Operations against a collection of services running on a distributed - * services framework such as Jini with full concurrency controls, transaction - * management, and key-range partitioned indices. This is the scale-out - * solution.</dd> - * </dl> - * </dd> - * </dl> - * </p> - * <h2>Limitations</h2> -======= * {@link IResourceManager}, or {@link ITransactionService} interfaces. The * {@link Journal} provides a concrete implementation that may be used for a * standalone database complete with concurrency control and transaction * management. * </p> <h2>Limitations</h2> ->>>>>>> .merge-right.r3391 * <p> * The {@link IIndexStore} implementation on this class is NOT thread-safe. The * basic limitation is that the mutable {@link BTree} is NOT thread-safe. The Modified: branches/JOURNAL_HA_BRANCH/bigdata/src/resources/logging/log4j.properties =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata/src/resources/logging/log4j.properties 2011-01-06 19:38:57 UTC (rev 4060) +++ branches/JOURNAL_HA_BRANCH/bigdata/src/resources/logging/log4j.properties 2011-01-06 20:13:43 UTC (rev 4061) @@ -56,6 +56,7 @@ #log4j.logger.com.bigdata.io.WriteCacheService=TRACE #log4j.logger.com.bigdata.journal.AbstractBufferStrategy=TRACE #log4j.logger.com.bigdata.resources=INFO +#log4j.logger.com.bigdata.rwstore.RWStore=TRACE log4j.logger.com.bigdata.journal.ha.HAServer=ALL log4j.logger.com.bigdata.journal.ha.HAConnect=ALL log4j.logger.com.bigdata.journal.ha.SocketMessage=ALL @@ -64,7 +65,7 @@ #log4j.logger.com.bigdata.journal.Name2Addr=INFO #log4j.logger.com.bigdata.journal.AbstractTask=INFO #log4j.logger.com.bigdata.journal.WriteExecutorService=INFO -#log4j.logger.com.bigdata.service.AbstractTransactionService=INFO +#log4j.logger.com.bigdata.service.AbstractTransactionService=TRACE #log4j.logger.com.bigdata.journal.AbstractLocalTransactionManager=INFO log4j.logger.com.bigdata.concurrent.TxDag=WARN log4j.logger.com.bigdata.concurrent.NonBlockingLockManager=WARN Deleted: branches/JOURNAL_HA_BRANCH/bigdata-compatibility/src/test/com/bigdata/journal/TestBinaryCompatibility.java =================================================================== --- trunk/bigdata-compatibility/src/test/com/bigdata/journal/TestBinaryCompatibility.java 2011-01-05 22:42:01 UTC (rev 4059) +++ branches/JOURNAL_HA_BRANCH/bigdata-compatibility/src/test/com/bigdata/journal/TestBinaryCompatibility.java 2011-01-06 20:13:43 UTC (rev 4061) @@ -1,276 +0,0 @@ -/* - -Copyright (C) SYSTAP, LLC 2006-2008. All rights reserved. - -Contact: - SYSTAP, LLC - 4501 Tower Road - Greensboro, NC 27410 - lic...@bi... - -This program is free software; you can redistribute it and/or modify -it under the terms of the GNU General Public License as published by -the Free Software Foundation; version 2 of the License. - -This program is distributed in the hope that it will be useful, -but WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the -GNU General Public License for more details. - -You should have received a copy of the GNU General Public License -along with this program; if not, write to the Free Software -Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - -*/ -/* - * Created on Nov 19, 2010 - */ -package com.bigdata.journal; - -import java.io.File; -import java.io.IOException; -import java.util.Properties; -import java.util.UUID; - -import junit.framework.TestCase2; - -import com.bigdata.Banner; -import com.bigdata.btree.IIndex; -import com.bigdata.btree.IndexMetadata; - -/** - * Test suite for binary compatibility, portability, and forward compatibility - * or automated migration of persistent stores and persistence or serialization - * capable objects across different bigdata releases. The tests in this suite - * rely on artifacts which are archived within SVN. - * - * @todo create w/ small extent and truncate (RW store does not support - * truncate). - * - * @todo test binary migration and forward compatibility. - * - * @todo stubs to create and organize artifacts,etc. - * - * @todo data driven test suite? - * - * @todo create artifact for each release, name the artifacts systematically, - * e.g., test.release.(RW|WORM).jnl or test.release.seg. Collect a list of - * the created artifacts and run each test against each of the versions of - * the artifact. - * - * @todo Force artifact file name case for file system compatibility? - * - * @todo test journal (WORM and RW), btree, index segment, row store, persistent - * data structures (checkpoints, index metadata, tuple serializers, etc.), - * RDF layer, RMI message formats, etc. - * - * @todo Specific tests for - * <p> - * Name2Addr and DefaultKeyBuilderFactory portability problem. See - * https://sourceforge.net/apps/trac/bigdata/ticket/193 - * <p> - * WORM global row store resolution problem introduced in the - * JOURNAL_HA_BRANCH. See - * https://sourceforge.net/apps/trac/bigdata/ticket/171#comment:5 - * <p> - * Sparse row store JDK encoding problem: - * https://sourceforge.net/apps/trac/bigdata/ticket/107 - */ -public class TestBinaryCompatibility extends TestCase2 { - - /** - * - */ - public TestBinaryCompatibility() { - } - - /** - * @param name - */ - public TestBinaryCompatibility(String name) { - super(name); - } - - /** - * @todo munge the release version into a name that is compatibility with - * the file system ("." to "_"). Store artifacts at each release? At - * each release in which an incompatibility is introduced? At each - * release in which a persistence capable data structure or change is - * introduced? - */ - static protected final File artifactDir = new File( - "bigdata-compatibility/src/resources/artifacts"); - - protected static class Version { - private final String version; - private final String revision; - public Version(String version,String revision) { - this.version = version; - this.revision = revision; - } - - /** - * The bigdata version number associated with the release. This is in - * the form <code>xx.yy.zz</code> - */ - public String getVersion() { - return version; - } - - /** - * The SVN repository revision associated with the release. This is in - * the form <code>####</code>. - */ - public String getRevision() { - return revision; - } - } - - /** - * Known release versions. - */ - protected static Version V_0_83_2 = new Version("0.83.2", "3349"); - - /** - * Tested Versions. 
- */ - protected Version[] versions = new Version[] { - V_0_83_2 - }; - - protected void setUp() throws Exception { - - Banner.banner(); - - super.setUp(); - - if (!artifactDir.exists()) { - - if (!artifactDir.mkdirs()) { - - throw new IOException("Could not create: " + artifactDir); - - } - - } - - for (Version version : versions) { - - final File versionDir = new File(artifactDir, version.getVersion()); - - if (!versionDir.exists()) { - - if (!versionDir.mkdirs()) { - - throw new IOException("Could not create: " + versionDir); - - } - - } - - } - - } - - protected void tearDown() throws Exception { - - super.tearDown(); - - } - - /** - * @throws Throwable - * - * @todo Each 'test' should run an instance of a class which knows how to - * create the appropriate artifacts and how to test them. - */ - public void test_WORM_compatibility_with_JOURNAL_HA_BRANCH() - throws Throwable { - - final Version version = V_0_83_2; - - final File versionDir = new File(artifactDir, version.getVersion()); - - final File artifactFile = new File(versionDir, getName() - + BufferMode.DiskWORM + Journal.Options.JNL); - - if (!artifactFile.exists()) { - - createArtifact(artifactFile); - - } - - verifyArtifact(artifactFile); - - } - - protected void createArtifact(final File artifactFile) throws Throwable { - - if (log.isInfoEnabled()) - log.info("Creating: " + artifactFile); - - final Properties properties = new Properties(); - - properties.setProperty(Journal.Options.FILE, artifactFile.toString()); - - properties.setProperty(Journal.Options.INITIAL_EXTENT, "" - + Journal.Options.minimumInitialExtent); - - final Journal journal = new Journal(properties); - - try { - - final IndexMetadata md = new IndexMetadata(UUID.randomUUID()); - - final IIndex ndx = journal.registerIndex("kb.spo.SPO", md); - - ndx.insert(1,1); - - journal.commit(); - - // reduce to minimum footprint. - journal.truncate(); - - } catch (Throwable t) { - - journal.destroy(); - - throw new RuntimeException(t); - - } finally { - - if (journal.isOpen()) - journal.close(); - - } - - } - - protected void verifyArtifact(final File artifactFile) throws Throwable { - - if (log.isInfoEnabled()) - log.info("Verifying: " + artifactFile); - - final Properties properties = new Properties(); - - properties.setProperty(Journal.Options.FILE, artifactFile.toString()); - - final Journal journal = new Journal(properties); - - try { - - final IIndex ndx = journal.getIndex("kb.spo.SPO"); - - assertNotNull(ndx); - - assertEquals(1,ndx.lookup(1)); - - } finally { - - journal.close(); - - } - - } - -} Copied: branches/JOURNAL_HA_BRANCH/bigdata-compatibility/src/test/com/bigdata/journal/TestBinaryCompatibility.java (from rev 4059, trunk/bigdata-compatibility/src/test/com/bigdata/journal/TestBinaryCompatibility.java) =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-compatibility/src/test/com/bigdata/journal/TestBinaryCompatibility.java (rev 0) +++ branches/JOURNAL_HA_BRANCH/bigdata-compatibility/src/test/com/bigdata/journal/TestBinaryCompatibility.java 2011-01-06 20:13:43 UTC (rev 4061) @@ -0,0 +1,276 @@ +/* + +Copyright (C) SYSTAP, LLC 2006-2008. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. 
+ +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + +*/ +/* + * Created on Nov 19, 2010 + */ +package com.bigdata.journal; + +import java.io.File; +import java.io.IOException; +import java.util.Properties; +import java.util.UUID; + +import junit.framework.TestCase2; + +import com.bigdata.Banner; +import com.bigdata.btree.IIndex; +import com.bigdata.btree.IndexMetadata; + +/** + * Test suite for binary compatibility, portability, and forward compatibility + * or automated migration of persistent stores and persistence or serialization + * capable objects across different bigdata releases. The tests in this suite + * rely on artifacts which are archived within SVN. + * + * @todo create w/ small extent and truncate (RW store does not support + * truncate). + * + * @todo test binary migration and forward compatibility. + * + * @todo stubs to create and organize artifacts, etc. + * + * @todo data driven test suite? + * + * @todo create artifact for each release, name the artifacts systematically, + * e.g., test.release.(RW|WORM).jnl or test.release.seg. Collect a list of + * the created artifacts and run each test against each of the versions of + * the artifact. + * + * @todo Force artifact file name case for file system compatibility? + * + * @todo test journal (WORM and RW), btree, index segment, row store, persistent + * data structures (checkpoints, index metadata, tuple serializers, etc.), + * RDF layer, RMI message formats, etc. + * + * @todo Specific tests for + * <p> + * Name2Addr and DefaultKeyBuilderFactory portability problem. See + * https://sourceforge.net/apps/trac/bigdata/ticket/193 + * <p> + * WORM global row store resolution problem introduced in the + * JOURNAL_HA_BRANCH. See + * https://sourceforge.net/apps/trac/bigdata/ticket/171#comment:5 + * <p> + * Sparse row store JDK encoding problem: + * https://sourceforge.net/apps/trac/bigdata/ticket/107 + */ +public class TestBinaryCompatibility extends TestCase2 { + + /** + * + */ + public TestBinaryCompatibility() { + } + + /** + * @param name + */ + public TestBinaryCompatibility(String name) { + super(name); + } + + /** + * @todo munge the release version into a name that is compatible with + * the file system ("." to "_"). Store artifacts at each release? At + * each release in which an incompatibility is introduced? At each + * release in which a persistence capable data structure or change is + * introduced? + */ + static protected final File artifactDir = new File( + "bigdata-compatibility/src/resources/artifacts"); + + protected static class Version { + private final String version; + private final String revision; + public Version(String version,String revision) { + this.version = version; + this.revision = revision; + } + + /** + * The bigdata version number associated with the release. This is in + * the form <code>xx.yy.zz</code> + */ + public String getVersion() { + return version; + } + + /** + * The SVN repository revision associated with the release. This is in + * the form <code>####</code>. + */ + public String getRevision() { + return revision; + } + } + + /** + * Known release versions.
+ */ + protected static Version V_0_83_2 = new Version("0.83.2", "3349"); + + /** + * Tested Versions. + */ + protected Version[] versions = new Version[] { + V_0_83_2 + }; + + protected void setUp() throws Exception { + + Banner.banner(); + + super.setUp(); + + if (!artifactDir.exists()) { + + if (!artifactDir.mkdirs()) { + + throw new IOException("Could not create: " + artifactDir); + + } + + } + + for (Version version : versions) { + + final File versionDir = new File(artifactDir, version.getVersion()); + + if (!versionDir.exists()) { + + if (!versionDir.mkdirs()) { + + throw new IOException("Could not create: " + versionDir); + + } + + } + + } + + } + + protected void tearDown() throws Exception { + + super.tearDown(); + + } + + /** + * @throws Throwable + * + * @todo Each 'test' should run an instance of a class which knows how to + * create the appropriate artifacts and how to test them. + */ + public void test_WORM_compatibility_with_JOURNAL_HA_BRANCH() + throws Throwable { + + final Version version = V_0_83_2; + + final File versionDir = new File(artifactDir, version.getVersion()); + + final File artifactFile = new File(versionDir, getName() + + BufferMode.DiskWORM + Journal.Options.JNL); + + if (!artifactFile.exists()) { + + createArtifact(artifactFile); + + } + + verifyArtifact(artifactFile); + + } + + protected void createArtifact(final File artifactFile) throws Throwable { + + if (log.isInfoEnabled()) + log.info("Creating: " + artifactFile); + + final Properties properties = new Properties(); + + properties.setProperty(Journal.Options.FILE, artifactFile.toString()); + + properties.setProperty(Journal.Options.INITIAL_EXTENT, "" + + Journal.Options.minimumInitialExtent); + + final Journal journal = new Journal(properties); + + try { + + final IndexMetadata md = new IndexMetadata(UUID.randomUUID()); + + final IIndex ndx = journal.registerIndex("kb.spo.SPO", md); + + ndx.insert(1,1); + + journal.commit(); + + // reduce to minimum footprint. 
+ journal.truncate(); + + } catch (Throwable t) { + + journal.destroy(); + + throw new RuntimeException(t); + + } finally { + + if (journal.isOpen()) + journal.close(); + + } + + } + + protected void verifyArtifact(final File artifactFile) throws Throwable { + + if (log.isInfoEnabled()) + log.info("Verifying: " + artifactFile); + + final Properties properties = new Properties(); + + properties.setProperty(Journal.Options.FILE, artifactFile.toString()); + + final Journal journal = new Journal(properties); + + try { + + final IIndex ndx = journal.getIndex("kb.spo.SPO"); + + assertNotNull(ndx); + + assertEquals(1,ndx.lookup(1)); + + } finally { + + journal.close(); + + } + + } + +} Property changes on: branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/attr ___________________________________________________________________ Modified: svn:mergeinfo - /trunk/bigdata-jini/src/java/com/bigdata/attr:2981-3437,3656-3894 + /trunk/bigdata-jini/src/java/com/bigdata/attr:2981-3437,3656-3894,3896-4059 Property changes on: branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/disco ___________________________________________________________________ Modified: svn:mergeinfo - /trunk/bigdata-jini/src/java/com/bigdata/disco:2981-3437,3656-3894 + /trunk/bigdata-jini/src/java/com/bigdata/disco:2981-3437,3656-3894,3896-4059 Property changes on: branches/JOURNAL_HA_BRANCH/bigdata-jini/src/java/com/bigdata/util/config ___________________________________________________________________ Modified: svn:mergeinfo - /trunk/bigdata-jini/src/java/com/bigdata/util/config:2981-3437,3656-3894 + /trunk/bigdata-jini/src/java/com/bigdata/util/config:2981-3437,3656-3894,3896-4059 Property changes on: branches/JOURNAL_HA_BRANCH/bigdata-perf ___________________________________________________________________ Modified: svn:mergeinfo - /trunk/bigdata-perf:2981-3437,3656-3894 + /trunk/bigdata-perf:2981-3437,3656-3894,3896-4059 Deleted: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/ChangeRecord.java =================================================================== --- trunk/bigdata-rdf/src/java/com/bigdata/rdf/changesets/ChangeRecord.java 2011-01-05 22:42:01 UTC (rev 4059) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/ChangeRecord.java 2011-01-06 20:13:43 UTC (rev 4061) @@ -1,98 +0,0 @@ -package com.bigdata.rdf.changesets; - -import java.util.Comparator; -import com.bigdata.rdf.spo.ISPO; -import com.bigdata.rdf.spo.SPOComparator; - -public class ChangeRecord implements IChangeRecord { - - private final ISPO stmt; - - private final ChangeAction action; - -// private final StatementEnum oldType; - - public ChangeRecord(final ISPO stmt, final ChangeAction action) { - -// this(stmt, action, null); -// -// } -// -// public ChangeRecord(final BigdataStatement stmt, final ChangeAction action, -// final StatementEnum oldType) { -// - this.stmt = stmt; - this.action = action; -// this.oldType = oldType; - - } - - public ChangeAction getChangeAction() { - - return action; - - } - -// public StatementEnum getOldStatementType() { -// -// return oldType; -// -// } - - public ISPO getStatement() { - - return stmt; - - } - - @Override - public boolean equals(Object o) { - - if (o == this) - return true; - - if (o == null || o instanceof IChangeRecord == false) - return false; - - final IChangeRecord rec = (IChangeRecord) o; - - final ISPO stmt2 = rec.getStatement(); - - // statements are equal - if (stmt == stmt2 || - (stmt != null && stmt2 != null && 
stmt.equals(stmt2))) { - - // actions are equal - return action == rec.getChangeAction(); - - } - - return false; - - } - - public String toString() { - - StringBuilder sb = new StringBuilder(); - - sb.append(action).append(": ").append(stmt); - - return sb.toString(); - - } - - public static final Comparator<IChangeRecord> COMPARATOR = - new Comparator<IChangeRecord>() { - - public int compare(final IChangeRecord r1, final IChangeRecord r2) { - - final ISPO spo1 = r1.getStatement(); - final ISPO spo2 = r2.getStatement(); - - return SPOComparator.INSTANCE.compare(spo1, spo2); - - } - - }; - -} Copied: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/ChangeRecord.java (from rev 4059, trunk/bigdata-rdf/src/java/com/bigdata/rdf/changesets/ChangeRecord.java) =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/ChangeRecord.java (rev 0) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/ChangeRecord.java 2011-01-06 20:13:43 UTC (rev 4061) @@ -0,0 +1,98 @@ +package com.bigdata.rdf.changesets; + +import java.util.Comparator; +import com.bigdata.rdf.spo.ISPO; +import com.bigdata.rdf.spo.SPOComparator; + +public class ChangeRecord implements IChangeRecord { + + private final ISPO stmt; + + private final ChangeAction action; + +// private final StatementEnum oldType; + + public ChangeRecord(final ISPO stmt, final ChangeAction action) { + +// this(stmt, action, null); +// +// } +// +// public ChangeRecord(final BigdataStatement stmt, final ChangeAction action, +// final StatementEnum oldType) { +// + this.stmt = stmt; + this.action = action; +// this.oldType = oldType; + + } + + public ChangeAction getChangeAction() { + + return action; + + } + +// public StatementEnum getOldStatementType() { +// +// return oldType; +// +// } + + public ISPO getStatement() { + + return stmt; + + } + + @Override + public boolean equals(Object o) { + + if (o == this) + return true; + + if (o == null || o instanceof IChangeRecord == false) + return false; + + final IChangeRecord rec = (IChangeRecord) o; + + final ISPO stmt2 = rec.getStatement(); + + // statements are equal + if (stmt == stmt2 || + (stmt != null && stmt2 != null && stmt.equals(stmt2))) { + + // actions are equal + return action == rec.getChangeAction(); + + } + + return false; + + } + + public String toString() { + + StringBuilder sb = new StringBuilder(); + + sb.append(action).append(": ").append(stmt); + + return sb.toString(); + + } + + public static final Comparator<IChangeRecord> COMPARATOR = + new Comparator<IChangeRecord>() { + + public int compare(final IChangeRecord r1, final IChangeRecord r2) { + + final ISPO spo1 = r1.getStatement(); + final ISPO spo2 = r2.getStatement(); + + return SPOComparator.INSTANCE.compare(spo1, spo2); + + } + + }; + +} Deleted: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeLog.java =================================================================== --- trunk/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeLog.java 2011-01-05 22:42:01 UTC (rev 4059) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeLog.java 2011-01-06 20:13:43 UTC (rev 4061) @@ -1,38 +0,0 @@ -package com.bigdata.rdf.changesets; - -/** - * Provides detailed information on changes made to statements in the database. 
- * Change records are generated for any statements that are used in - * addStatement() or removeStatements() operations on the SAIL connection, as - * well as any inferred statements that are added or removed as a result of - * truth maintenance when the database has inference enabled. Change records - * will be sent to an instance of this class via the - * {@link #changeEvent(IChangeRecord)} method. These events will - * occur on an ongoing basis as statements are added to or removed from the - * indices. It is the change log's responsibility to collect change records. - * When the transaction is actually committed (or aborted), the change log will - * receive notification via {@link #transactionCommited()} or - * {@link #transactionAborted()}. - */ -public interface IChangeLog { - - /** - * Occurs when a statement add or remove is flushed to the indices (but - * not yet committed). - * - * @param record - * the {@link IChangeRecord} - */ - void changeEvent(final IChangeRecord record); - - /** - * Occurs when the current SAIL transaction is committed. - */ - void transactionCommited(); - - /** - * Occurs if the current SAIL transaction is aborted. - */ - void transactionAborted(); - -} Copied: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeLog.java (from rev 4059, trunk/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeLog.java) =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeLog.java (rev 0) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeLog.java 2011-01-06 20:13:43 UTC (rev 4061) @@ -0,0 +1,38 @@ +package com.bigdata.rdf.changesets; + +/** + * Provides detailed information on changes made to statements in the database. + * Change records are generated for any statements that are used in + * addStatement() or removeStatements() operations on the SAIL connection, as + * well as any inferred statements that are added or removed as a result of + * truth maintenance when the database has inference enabled. Change records + * will be sent to an instance of this class via the + * {@link #changeEvent(IChangeRecord)} method. These events will + * occur on an ongoing basis as statements are added to or removed from the + * indices. It is the change log's responsibility to collect change records. + * When the transaction is actually committed (or aborted), the change log will + * receive notification via {@link #transactionCommited()} or + * {@link #transactionAborted()}. + */ +public interface IChangeLog { + + /** + * Occurs when a statement add or remove is flushed to the indices (but + * not yet committed). + * + * @param record + * the {@link IChangeRecord} + */ + void changeEvent(final IChangeRecord record); + + /** + * Occurs when the current SAIL transaction is committed. + */ + void transactionCommited(); + + /** + * Occurs if the current SAIL transaction is aborted. 
+ */ + void transactionAborted(); + +} Deleted: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeRecord.java =================================================================== --- trunk/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeRecord.java 2011-01-05 22:42:01 UTC (rev 4059) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeRecord.java 2011-01-06 20:13:43 UTC (rev 4061) @@ -1,120 +0,0 @@ -package com.bigdata.rdf.changesets; - -import com.bigdata.rdf.model.BigdataStatement; -import com.bigdata.rdf.model.StatementEnum; -import com.bigdata.rdf.spo.ISPO; - -/** - * Provides detailed information on changes made to statements in the database. - * Change records are generated for any statements that are used in - * addStatement() or removeStatements() operations on the SAIL connection, as - * well as any inferred statements that are added or removed as a result of - * truth maintenance when the database has inference enabled. - * <p> - * See {@link IChangeLog}. - */ -public interface IChangeRecord { - - /** - * Attempting to add or remove statements can have a number of different - * effects. This enum captures the different actions that can take place as - * a result of trying to add or remove a statement from the database. - */ - public enum ChangeAction { - - /** - * The focus statement was not in the database before and will be - * in the database after the commit. This can be the result of either - * explicit addStatement() operations on the SAIL connection, or from - * new inferences being generated via truth maintenance when the - * database has inference enabled. If the focus statement has a - * statement type of explicit then it was added via an addStatement() - * operation. If the focus statement has a statement type of inferred - * then it was added via truth maintenance. - */ - INSERTED, - - /** - * The focus statement was in the database before and will not - * be in the database after the commit. When the database has inference - * and truth maintenance enabled, the statement that is the focus of - * this change record was either an explicit statement that was the - * subject of a removeStatements() operation on the connection, or it - * was an inferred statement that was removed as a result of truth - * maintenance. Either way, the statement is no longer provable as an - * inference using other statements still in the database after the - * commit. If it were still provable, the explicit statement would have - * had its type changed to inferred, and the inferred statement would - * have remained untouched by truth maintenance. If an inferred - * statement was the subject of a removeStatement() operation on the - * connection it would have resulted in a no-op, since inferences can - * only be removed via truth maintenance. - */ - REMOVED, - - /** - * This change action can only occur when inference and truth - * maintenance are enabled on the database. Sometimes an attempt at - * statement addition or removal via an addStatement() or - * removeStatements() operation on the connection will result in a type - * change rather than an actual assertion or deletion. When in - * inference mode, statements can have one of three statement types: - * explicit, inferred, or axiom (see {@link StatementEnum}). 
There are - * several reasons why a statement will change type rather than be - * asserted or deleted: - * <p> - * <ul> - * <li> A statement is asserted, but already exists in the database as - * an inference or an axiom. The existing statement will have its type - * changed from inference or axiom to explicit. </li> - * <li> An explicit statement is retracted, but is still provable by - * other means. It will have its type changed from explicit to - * inference. </li> - * <li> An explicit statement is retracted, but is one of the axioms - * needed for inference. It will have its type changed from explicit to - * axiom. </li> - * </ul> - */ - UPDATED, - -// /** -// * This change action can occur for one of two reasons: -// * <p> -// * <ul> -// * <li> A statement is asserted, but already exists in the database as -// * an explicit statement. </li> -// * <li> An inferred statement or an axiom is retracted. Only explicit -// * statements can be retracted via removeStatements() operations. </li> -// * </ul> -// */ -// NO_OP - - } - - /** - * Return the ISPO that is the focus of this change record. - * - * @return - * the {@link ISPO} - */ - ISPO getStatement(); - - /** - * Return the change action for this change record. - * - * @return - * the {@link ChangeAction} - */ - ChangeAction getChangeAction(); - -// /** -// * If the change action is {@link ChangeAction#TYPE_CHANGE}, this method -// * will return the old statement type of the focus statement. The -// * new statement type is available on the focus statement itself. -// * -// * @return -// * the old statement type of the focus statement -// */ -// StatementEnum getOldStatementType(); - -} Copied: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeRecord.java (from rev 4059, trunk/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeRecord.java) =================================================================== --- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeRecord.java (rev 0) +++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/IChangeRecord.java 2011-01-06 20:13:43 UTC (rev 4061) @@ -0,0 +1,120 @@ +package com.bigdata.rdf.changesets; + +import com.bigdata.rdf.model.BigdataStatement; +import com.bigdata.rdf.model.StatementEnum; +import com.bigdata.rdf.spo.ISPO; + +/** + * Provides detailed information on changes made to statements in the database. + * Change records are generated for any statements that are used in + * addStatement() or removeStatements() operations on the SAIL connection, as + * well as any inferred statements that are added or removed as a result of + * truth maintenance when the database has inference enabled. + * <p> + * See {@link IChangeLog}. + */ +public interface IChangeRecord { + + /** + * Attempting to add or remove statements can have a number of different + * effects. This enum captures the different actions that can take place as + * a result of trying to add or remove a statement from the database. + */ + public enum ChangeAction { + + /** + * The focus statement was not in the database before and will be + * in the database after the commit. This can be the result of either + * explicit addStatement() operations on the SAIL connection, or from + * new inferences being generated via truth maintenance when the + * database has inference enabled. If the focus statement has a + * statement type of explicit then it was added via an addStatement() + * operation. 
If the focus statement has a statement type of inferred
+     * then it was added via truth maintenance.
+     */
+    INSERTED,
+
+    /**
+     * The focus statement was in the database before and will not
+     * be in the database after the commit. When the database has inference
+     * and truth maintenance enabled, the statement that is the focus of
+     * this change record was either an explicit statement that was the
+     * subject of a removeStatements() operation on the connection, or it
+     * was an inferred statement that was removed as a result of truth
+     * maintenance. Either way, the statement is no longer provable as an
+     * inference using other statements still in the database after the
+     * commit. If it were still provable, the explicit statement would have
+     * had its type changed to inferred, and the inferred statement would
+     * have remained untouched by truth maintenance. If an inferred
+     * statement was the subject of a removeStatements() operation on the
+     * connection it would have resulted in a no-op, since inferences can
+     * only be removed via truth maintenance.
+     */
+    REMOVED,
+
+    /**
+     * This change action can only occur when inference and truth
+     * maintenance are enabled on the database. Sometimes an attempt at
+     * statement addition or removal via an addStatement() or
+     * removeStatements() operation on the connection will result in a type
+     * change rather than an actual assertion or deletion. When in
+     * inference mode, statements can have one of three statement types:
+     * explicit, inferred, or axiom (see {@link StatementEnum}). There are
+     * several reasons why a statement will change type rather than be
+     * asserted or deleted:
+     * <p>
+     * <ul>
+     * <li> A statement is asserted, but already exists in the database as
+     * an inference or an axiom. The existing statement will have its type
+     * changed from inference or axiom to explicit. </li>
+     * <li> An explicit statement is retracted, but is still provable by
+     * other means. It will have its type changed from explicit to
+     * inference. </li>
+     * <li> An explicit statement is retracted, but is one of the axioms
+     * needed for inference. It will have its type changed from explicit to
+     * axiom. </li>
+     * </ul>
+     */
+    UPDATED,
+
+//    /**
+//     * This change action can occur for one of two reasons:
+//     * <p>
+//     * <ul>
+//     * <li> A statement is asserted, but already exists in the database as
+//     * an explicit statement. </li>
+//     * <li> An inferred statement or an axiom is retracted. Only explicit
+//     * statements can be retracted via removeStatements() operations. </li>
+//     * </ul>
+//     */
+//    NO_OP
+
+    }
+
+    /**
+     * Return the ISPO that is the focus of this change record.
+     *
+     * @return
+     *          the {@link ISPO}
+     */
+    ISPO getStatement();
+
+    /**
+     * Return the change action for this change record.
+     *
+     * @return
+     *          the {@link ChangeAction}
+     */
+    ChangeAction getChangeAction();
+
+//    /**
+//     * If the change action is {@link ChangeAction#TYPE_CHANGE}, this method
+//     * will return the old statement type of the focus statement. The
+//     * new statement type is available on the focus statement itself.
+//     *
+//     * @return
+//     *          the old statement type of the focus statement
+//     */
+//    StatementEnum getOldStatementType();
+
+}

Deleted: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/InMemChangeLog.java
===================================================================
--- trunk/bigdata-rdf/src/java/com/bigdata/rdf/changesets/InMemChangeLog.java	2011-01-05 22:42:01 UTC (rev 4059)
+++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/InMemChangeLog.java	2011-01-06 20:13:43 UTC (rev 4061)
@@ -1,163 +0,0 @@
-package com.bigdata.rdf.changesets;
-
-import java.util.Collection;
-import java.util.HashMap;
-import java.util.LinkedList;
-import java.util.Map;
-import org.apache.log4j.Logger;
-import com.bigdata.rdf.model.BigdataStatement;
-import com.bigdata.rdf.spo.ISPO;
-import com.bigdata.rdf.store.AbstractTripleStore;
-import com.bigdata.rdf.store.BigdataStatementIterator;
-import com.bigdata.striterator.ChunkedArrayIterator;
-
-/**
- * This is a very simple implementation of a change log. NOTE: This is not
- * a particularly great implementation. First of all it ends up storing
- * two copies of the change set. Secondly it needs to be smarter about
- * concurrency, or maybe we can be smart about it when we do the
- * implementation on the other side (the SAIL connection can just write
- * change events to a buffer and then the buffer can be drained by
- * another thread that doesn't block the actual read/write operations,
- * although then we need to be careful not to issue the committed()
- * notification before the buffer is drained).
- *
- * @author mike
- *
- */
-public class InMemChangeLog implements IChangeLog {
-
-    protected static final Logger log = Logger.getLogger(InMemChangeLog.class);
-
-    /**
-     * Running tally of new changes since the last commit notification.
-     */
-    private final Map<ISPO,IChangeRecord> changeSet =
-        new HashMap<ISPO, IChangeRecord>();
-
-    /**
-     * Keep a record of the change set as of the last commit.
-     */
-    private final Map<ISPO,IChangeRecord> committed =
-        new HashMap<ISPO, IChangeRecord>();
-
-    /**
-     * See {@link IChangeLog#changeEvent(IChangeRecord)}.
-     */
-    public synchronized void changeEvent(final IChangeRecord record) {
-
-        if (log.isInfoEnabled())
-            log.info(record);
-
-        changeSet.put(record.getStatement(), record);
-
-    }
-
-    /**
-     * See {@link IChangeLog#transactionCommited()}.
-     */
-    public synchronized void transactionCommited() {
-
-        if (log.isInfoEnabled())
-            log.info("transaction committed");
-
-        committed.clear();
-
-        committed.putAll(changeSet);
-
-        changeSet.clear();
-
-    }
-
-    /**
-     * See {@link IChangeLog#transactionAborted()}.
-     */
-    public synchronized void transactionAborted() {
-
-        if (log.isInfoEnabled())
-            log.info("transaction aborted");
-
-        changeSet.clear();
-
-    }
-
-    /**
-     * Return the change set as of the last commit point.
-     *
-     * @return
-     *          a collection of {@link IChangeRecord}s as of the last commit
-     *          point
-     */
-    public Collection<IChangeRecord> getLastCommit() {
-
-        return committed.values();
-
-    }
-
-    /**
-     * Return the change set as of the last commit point, using the supplied
-     * database to resolve ISPOs to BigdataStatements.
-     *
-     * @return
-     *          a collection of {@link IChangeRecord}s as of the last commit
-     *          point
-     */
-    public Collection<IChangeRecord> getLastCommit(final AbstractTripleStore db) {
-
-        return resolve(db, committed.values());
-
-    }
-
-    /**
-     * Use the supplied database to turn a set of ISPO change records into
-     * BigdataStatement change records. BigdataStatements also implement
-     * ISPO, the difference being that BigdataStatements also contain
-     * materialized RDF terms for the 3 (or 4) positions, in addition to just
-     * the internal identifiers (IVs) for those terms.
-     *
-     * @param db
-     *          the database containing the lexicon needed to materialize
-     *          the BigdataStatement objects
-     * @param unresolved
-     *          the ISPO change records that came from IChangeLog notification
-     *          events
-     * @return
-     *          the fully resolved BigdataStatement change records
-     */
-    private Collection<IChangeRecord> resolve(final AbstractTripleStore db,
-            final Collection<IChangeRecord> unresolved) {
-
-        final Collection<IChangeRecord> resolved =
-            new LinkedList<IChangeRecord>();
-
-        // collect up the ISPOs out of the unresolved change records
-        final ISPO[] spos = new ISPO[unresolved.size()];
-        int i = 0;
-        for (IChangeRecord rec : unresolved) {
-            spos[i++] = rec.getStatement();
-        }
-
-        // use the database to resolve them into BigdataStatements
-        final BigdataStatementIterator it =
-            db.asStatementIterator(
-                new ChunkedArrayIterator<ISPO>(i, spos, null/* keyOrder */));
-
-        /*
-         * the BigdataStatementIterator will produce BigdataStatement objects
-         * in the same order as the original ISPO array
-         */
-        for (IChangeRecord rec : unresolved) {
-
-            final BigdataStatement stmt = it.next();
-
-            resolved.add(new ChangeRecord(stmt, rec.getChangeAction()));
-
-        }
-
-        return resolved;
-
-    }
-
-
-
-}

Copied: branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/InMemChangeLog.java (from rev 4059, trunk/bigdata-rdf/src/java/com/bigdata/rdf/changesets/InMemChangeLog.java)
===================================================================
--- branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/InMemChangeLog.java	(rev 0)
+++ branches/JOURNAL_HA_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/InMemChangeLog.java	2011-01-06 20:13:43 UTC (rev 4061)
@@ -0,0 +1,163 @@
+package com.bigdata.rdf.changesets;
+
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.Map;
+import org.apache.log4j.Logger;
+import com.bigdata.rdf.model.BigdataStatement;
+import com.bigdata.rdf.spo.ISPO;
+import com.bigdata.rdf.store.AbstractTripleStore;
+import com.bigdata.rdf.store.BigdataStatementIterator;
+import com.bigdata.striterator.ChunkedArrayIterator;
+
+/**
+ * This is a very simple implementation of a change log. NOTE: This is not
+ * a particularly great implementation. First of all it ends up storing
+ * two copies of the change set. Secondly it needs to be smarter about
+ * concurrency, or maybe we can be smart about it when we do the
+ * implementation on the other side (the SAIL connection can just write
+ * change events to a buffer and then the buffer can be drained by
+ * another thread that doesn't block the actual read/write operations,
+ * although then we need to be careful not to issue the committed()
+ * notification before the buffer is drained).
+ *
+ * @author mike
+ *
+ */
+public class InMemChangeLog implements IChangeLog {
+
+    protected static final Logger log = Logger.getLogger(InMemChangeLog.class);
+
+    /**
+     * Running tally of new changes since the last commit notification.
+     */
+    private final Map<ISPO,IChangeRecord> changeSet =
+        new HashMap<ISPO, IChangeRecord>();
+
+    /**
+     * Keep a record of the change set as of the last commit.
+     */
+    private final Map<ISPO,IChangeRecord> committed =
+        new HashMap<ISPO, IChangeRecord>();
+
+    /**
+     * See {@link IChangeLog#changeEvent(IChangeRecord)}.
+     */
+    public synchronized void changeEvent(final IChangeRecord record) {
+
+        if (log.isInfoEnabled())
+            log.info(record);
+
+        changeSet.put(record.getStatement(), record);
+
+    }
+
+    /**
+     * See {@link IChangeLog#transactionCommited()}.
+     */
+    public synchronized void transactionCommited() {
+
+        if (log.isInfoEnabled())
+            log.info("transaction committed");
+
+        committed.clear();
+
+        committed.putAll(changeSet);
+
+        changeSet.clear();
+
+    }
+
+    /**
+     * See {@link IChangeLog#transactionAborted()}.
+     */
+    public synchronized void transactionAborted() {
+
+        if (log.isInfoEnabled())
+            log.info("transaction aborted");
+
+        changeSet.clear();
+
+    }
+
+    /**
+     * Return the change set as of the last commit point.
+     *
+     * @return
+     *          a collection of {@link IChangeRecord}s as of the last commit
+     *          point
+     */
+    public Collection<IChangeRecord> getLastCommit() {
+
+        return committed.values();
+
+    }
+
+    /**
+     * Return the change set as of the last commit point, using the supplied
+     * database to resolve ISPOs to BigdataStatements.
+     *
+     * @return
+     *          a collection of {@link IChangeRecord}s as of the last commit
+     *          point
+     */
+    public Collection<IChangeRecord> getLastCommit(final AbstractTripleStore db) {

+        return resolve(db, committed.values());
+
+    }
+
+    /**
+     * Use the supplied database to turn a set of ISPO change records into
+     * BigdataStatement change records. BigdataStatements also implement
+     * ISPO, the difference being that BigdataStatements also contain
+     * materialized RDF terms for the 3 (or 4) positions, in addition to just
+     * the internal identifiers (IVs) for those terms.
+     *
+     * @param db
+     *          the database containing the lexicon needed to materialize
+     *          the BigdataStatement objects
+     * @param unresolved
+     *          the ISP... [truncated message content] |
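An editorial aside on the change-set API quoted above: it is small enough to exercise directly. The sketch below is illustrative only and is not part of the commit. It shows how a client might fold the committed change set returned by InMemChangeLog.getLastCommit() into per-ChangeAction counts, using only the IChangeRecord methods visible in the diff. The class name ChangeSetSummary is hypothetical, ChangeAction is assumed to be the enum nested inside IChangeRecord (as the brace structure of the diff suggests), and the step that registers the listener on the SAIL connection is not shown in this commit.

    import java.util.Collection;
    import java.util.EnumMap;
    import java.util.Map;

    import com.bigdata.rdf.changesets.IChangeRecord;
    import com.bigdata.rdf.changesets.IChangeRecord.ChangeAction;
    import com.bigdata.rdf.changesets.InMemChangeLog;

    /**
     * Sketch: fold a committed change set into per-ChangeAction counts,
     * assuming the interfaces quoted in the diff above.
     */
    public class ChangeSetSummary {

        /**
         * Count how many records in the change set carry each
         * ChangeAction (INSERTED, REMOVED, UPDATED).
         */
        public static Map<ChangeAction, Integer> summarize(
                final Collection<IChangeRecord> changeSet) {

            // EnumMap gives one counter slot per ChangeAction value.
            final Map<ChangeAction, Integer> counts =
                new EnumMap<ChangeAction, Integer>(ChangeAction.class);

            for (IChangeRecord rec : changeSet) {
                final ChangeAction action = rec.getChangeAction();
                final Integer n = counts.get(action);
                counts.put(action, n == null ? 1 : n + 1);
            }

            return counts;
        }

        public static void main(final String[] args) {
            final InMemChangeLog changeLog = new InMemChangeLog();
            // Hypothetical flow: the hook that attaches the listener to
            // the SAIL connection is not part of this commit. Assume the
            // log was registered, writes were made, and the transaction
            // committed before this point.
            System.out.println(summarize(changeLog.getLastCommit()));
        }
    }

When materialized RDF values are needed rather than internal identifiers, the getLastCommit(AbstractTripleStore) overload in the diff performs that resolution pass, and the same tally works unchanged over the resolved records, since ChangeRecord preserves the original ChangeAction.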
From: <tho...@us...> - 2011-01-07 14:15:22
|
Revision: 4066
          http://bigdata.svn.sourceforge.net/bigdata/?rev=4066&view=rev
Author:   thompsonbry
Date:     2011-01-07 14:15:16 +0000 (Fri, 07 Jan 2011)

Log Message:
-----------
Removed some dependencies that had been introduced by Martyn for the RESTful RDF repository support that MikeP developed. The Apache httpclient libraries were already present in the bigdata-sails/lib directory.

Modified Paths:
--------------
    branches/JOURNAL_HA_BRANCH/.classpath

Removed Paths:
-------------
    branches/JOURNAL_HA_BRANCH/bigdata/lib/apache/commons-httpclient-3.1/
    branches/JOURNAL_HA_BRANCH/bigdata/lib/apache/commons-httpclient-3.1.jar
    branches/JOURNAL_HA_BRANCH/bigdata/lib/apache/commons-httpclient-3.1.tar.gz
    branches/JOURNAL_HA_BRANCH/bigdata/lib/apache/httpclient-4.0.1.jar
    branches/JOURNAL_HA_BRANCH/bigdata/lib/apache/httpmime-4.0.1.jar

Modified: branches/JOURNAL_HA_BRANCH/.classpath
===================================================================
--- branches/JOURNAL_HA_BRANCH/.classpath	2011-01-06 21:02:44 UTC (rev 4065)
+++ branches/JOURNAL_HA_BRANCH/.classpath	2011-01-07 14:15:16 UTC (rev 4066)
@@ -16,11 +16,11 @@
 	<classpathentry kind="src" path="bigdata/src/samples"/>
 	<classpathentry kind="src" path="dsi-utils/src/test"/>
 	<classpathentry kind="lib" path="bigdata-jini/lib/apache/zookeeper-3.2.1.jar"/>
-	<classpathentry kind="lib" path="bigdata-sails/lib/commons-httpclient.jar"/>
 	<classpathentry kind="lib" path="bigdata-sails/lib/servlet-api.jar"/>
 	<classpathentry kind="lib" path="bigdata/lib/dsi-utils-1.0.6-020610.jar"/>
 	<classpathentry kind="lib" path="bigdata/lib/lgpl-utils-1.0.6-020610.jar"/>
 	<classpathentry kind="lib" path="bigdata-rdf/lib/nxparser-6-22-2010.jar"/>
+	<classpathentry kind="lib" path="bigdata-sails/lib/commons-httpclient.jar"/>
 	<classpathentry kind="src" path="lgpl-utils/src/java"/>
 	<classpathentry kind="src" path="lgpl-utils/src/test"/>
 	<classpathentry exported="true" kind="lib" path="bigdata/lib/icu/icu4j-3_6.jar"/>

Deleted: branches/JOURNAL_HA_BRANCH/bigdata/lib/apache/commons-httpclient-3.1.jar
===================================================================
(Binary files differ)

Deleted: branches/JOURNAL_HA_BRANCH/bigdata/lib/apache/commons-httpclient-3.1.tar.gz
===================================================================
(Binary files differ)

Deleted: branches/JOURNAL_HA_BRANCH/bigdata/lib/apache/httpclient-4.0.1.jar
===================================================================
(Binary files differ)

Deleted: branches/JOURNAL_HA_BRANCH/bigdata/lib/apache/httpmime-4.0.1.jar
===================================================================
(Binary files differ)